Tags: ext4, mdadm, data-recovery, raid5
Kernel 2.6.38-8-server. Five SATA drives (4 Samsung, 1 Western Digital, 500 GB each) hanging off an LSI SAS 9201-16i host bus adapter, managed with mdadm. The other two arrays on the box (/dev/md1, /dev/md2) are fine. So, my RAID is toast. At this point I'm pretty well out of my depth, and I'm hoping someone here can point me in a good direction. As I mention below, I've been at this for roughly 16 hours now (with breaks to clear my head!) and I've been reading everything I can find here and elsewhere. Most of the advice is the same and not encouraging, but I'm hoping to catch the attention of someone more knowledgeable than I am.
So... yesterday I set out to add an extra drive to my RAID 5 array. To do that, I powered the box down, slotted in the new drive, and powered the machine back up. So far so good.
Then I unmounted the array
% sudo umount /dev/md0
and followed up with a filesystem check.
% sudo e2fsck -f /dev/md0
Everything was fine.
I created a primary partition on the new drive, /dev/sdh1, and set its type to Linux raid autodetect. Wrote the changes to disk and exited.
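(For reference, that partitioning step is roughly equivalent to the non-interactive sketch below. I actually did it interactively in fdisk, so treat the exact invocation as illustrative only.)

# one primary partition spanning the whole disk, type fd (Linux raid autodetect)
% echo ',,fd' | sudo sfdisk /dev/sdh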
I added the new drive to the array
% sudo mdadm --add /dev/md0 /dev/sdh1
and followed that up with
% sudo mdadm --grow --raid-devices=5 --backup-file=/home/foundation/grow_md0.bak /dev/md0
(If the backup file has you feeling hopeful at this point, don't be: that file does not exist on my filesystem. I do remember typing it, though, and it's there in my bash history.)
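(For anyone following along: while a reshape like this runs, its progress can be watched with something along these lines. This is a generic sketch, not output I kept.)

% watch cat /proc/mdstat
% sudo mdadm --detail /dev/md0    # normally shows a "Reshape Status : NN% complete" line while it runs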
Again, everything seemed fine. I left it alone while it did its thing. Once it finished, without any errors, I ran e2fsck -f /dev/md0 again. Still nothing out of the ordinary. At that point I was confident enough to resize it.
% sudo resize2fs /dev/md0
That completed without a peep. For completeness, I shut the box down and waited for it to come back up.
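(For what it's worth, a resize can be sanity-checked afterwards with something like the line below, comparing the reported block count against the new array size. I'm including it as a hypothetical check, not something I have saved output from.)

% sudo dumpe2fs -h /dev/md0 | grep -i 'block count'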
During boot, mounting the partition failed. Assembly of the array had apparently gone fine, but the mount failed because no EXT4 filesystem could be found.
Part of the dmesg output is below:
# [ 9.237762] md: bind<sdh1>
# [ 9.246063] md: bind<sdo>
# [ 9.248308] md: bind<sdn>
# [ 9.249661] bio: create slab <bio-1> at 1
# [ 9.249668] md/raid0:md2: looking at sdn
# [ 9.249669] md/raid0:md2: comparing sdn(1953524992) with sdn(1953524992)
# [ 9.249671] md/raid0:md2: END
# [ 9.249672] md/raid0:md2: ==> UNIQUE
# [ 9.249673] md/raid0:md2: 1 zones
# [ 9.249674] md/raid0:md2: looking at sdo
# [ 9.249675] md/raid0:md2: comparing sdo(1953524992) with sdn(1953524992)
# [ 9.249676] md/raid0:md2: EQUAL
# [ 9.249677] md/raid0:md2: FINAL 1 zones
# [ 9.249679] md/raid0:md2: done.
# [ 9.249680] md/raid0:md2: md_size is 3907049984 sectors.
# [ 9.249681] md2 configuration
# [ 9.249682] zone0=[sdn/sdo/]
# [ 9.249683] zone offset=0kb device offset=0kb size=1953524992kb
# [ 9.249684]
# [ 9.249685]
# [ 9.249690] md2: detected capacity change from 0 to 2000409591808
# [ 9.250162] sd 2:0:7:0: [sdk] Write Protect is off
# [ 9.250164] sd 2:0:7:0: [sdk] Mode Sense: 73 00 00 08
# [ 9.250331] md2: unknown partition table
# [ 9.252371] sd 2:0:7:0: [sdk] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
# [ 9.252642] sd 2:0:9:0: [sdm] Write Protect is off
# [ 9.252644] sd 2:0:9:0: [sdm] Mode Sense: 73 00 00 08
# [ 9.254798] sd 2:0:9:0: [sdm] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
# [ 9.256555] sdg: sdg1
# [ 9.261439] sd 2:0:8:0: [sdl] Write Protect is off
# [ 9.261441] sd 2:0:8:0: [sdl] Mode Sense: 73 00 00 08
# [ 9.263594] sd 2:0:8:0: [sdl] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
# [ 9.302372] sdf: sdf1
# [ 9.310770] md: bind<sdd1>
# [ 9.317153] sdj: sdj1
# [ 9.327325] sdi: sdi1
# [ 9.327686] md: bind<sde1>
# [ 9.372897] sd 2:0:3:0: [sdg] Attached SCSI disk
# [ 9.391630] sdm: sdm1
# [ 9.397435] sdk: sdk1
# [ 9.400372] sdl: sdl1
# [ 9.424751] sd 2:0:6:0: [sdj] Attached SCSI disk
# [ 9.439342] sd 2:0:5:0: [sdi] Attached SCSI disk
# [ 9.450533] sd 2:0:2:0: [sdf] Attached SCSI disk
# [ 9.464315] md: bind<sdg1>
# [ 9.534946] md: bind<sdj1>
# [ 9.541004] md: bind<sdf1>
[ 9.542537] md/raid:md0: device sdf1 operational as raid disk 2
[ 9.542538] md/raid:md0: device sdg1 operational as raid disk 3
[ 9.542540] md/raid:md0: device sde1 operational as raid disk 1
[ 9.542541] md/raid:md0: device sdd1 operational as raid disk 0
[ 9.542879] md/raid:md0: allocated 5334kB
[ 9.542918] md/raid:md0: raid level 5 active with 4 out of 5 devices, algorithm 2
[ 9.542923] RAID conf printout:
[ 9.542924] --- level:5 rd:5 wd:4
[ 9.542925] disk 0, o:1, dev:sdd1
[ 9.542926] disk 1, o:1, dev:sde1
[ 9.542927] disk 2, o:1, dev:sdf1
[ 9.542927] disk 3, o:1, dev:sdg1
[ 9.542928] disk 4, o:1, dev:sdh1
[ 9.542944] md0: detected capacity change from 0 to 2000415883264
[ 9.542959] RAID conf printout:
[ 9.542962] --- level:5 rd:5 wd:4
[ 9.542963] disk 0, o:1, dev:sdd1
[ 9.542964] disk 1, o:1, dev:sde1
[ 9.542965] disk 2, o:1, dev:sdf1
[ 9.542966] disk 3, o:1, dev:sdg1
[ 9.542967] disk 4, o:1, dev:sdh1
[ 9.542968] RAID conf printout:
[ 9.542969] --- level:5 rd:5 wd:4
[ 9.542970] disk 0, o:1, dev:sdd1
[ 9.542971] disk 1, o:1, dev:sde1
[ 9.542972] disk 2, o:1, dev:sdf1
[ 9.542972] disk 3, o:1, dev:sdg1
[ 9.542973] disk 4, o:1, dev:sdh1
[ 9.543005] md: recovery of RAID array md0
[ 9.543007] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 9.543008] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 9.543013] md: using 128k window, over a total of 488382784 blocks.
[ 9.543014] md: resuming recovery of md0 from checkpoint.
# [ 9.549495] sd 2:0:9:0: [sdm] Attached SCSI disk
# [ 9.555022] sd 2:0:8:0: [sdl] Attached SCSI disk
# [ 9.555612] sd 2:0:7:0: [sdk] Attached SCSI disk
# [ 9.561410] md: bind<sdi1>
[ 9.565538] md0: unknown partition table
# [ 9.639444] md: bind<sdm1>
# [ 9.642729] md: bind<sdk1>
# [ 9.650048] md: bind<sdl1>
# [ 9.652342] md/raid:md1: device sdl1 operational as raid disk 3
# [ 9.652343] md/raid:md1: device sdk1 operational as raid disk 2
# [ 9.652345] md/raid:md1: device sdm1 operational as raid disk 4
# [ 9.652346] md/raid:md1: device sdi1 operational as raid disk 0
# [ 9.652347] md/raid:md1: device sdj1 operational as raid disk 1
# [ 9.652627] md/raid:md1: allocated 5334kB
# [ 9.652654] md/raid:md1: raid level 5 active with 5 out of 5 devices, algorithm 2
# [ 9.652655] RAID conf printout:
# [ 9.652656] --- level:5 rd:5 wd:5
# [ 9.652657] disk 0, o:1, dev:sdi1
# [ 9.652658] disk 1, o:1, dev:sdj1
# [ 9.652658] disk 2, o:1, dev:sdk1
# [ 9.652659] disk 3, o:1, dev:sdl1
# [ 9.652660] disk 4, o:1, dev:sdm1
# [ 9.652676] md1: detected capacity change from 0 to 3000614518784
# [ 9.654507] md1: unknown partition table
# [ 11.093897] vesafb: framebuffer at 0xfd000000, mapped to 0xffffc90014200000, using 1536k, total 1536k
# [ 11.093899] vesafb: mode is 1024x768x16, linelength=2048, pages=0
# [ 11.093901] vesafb: scrolling: redraw
# [ 11.093903] vesafb: Truecolor: size=0:5:6:5, shift=0:11:5:0
# [ 11.094010] Console: switching to colour frame buffer device 128x48
# [ 11.206677] fb0: VESA VGA frame buffer device
# [ 11.301061] EXT4-fs (sda1): re-mounted. Opts: user_xattr,errors=remount-ro
# [ 11.428472] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 11.896204] EXT4-fs (sdc6): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 12.262728] r8169 0000:01:00.0: eth0: link up
# [ 12.263975] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
# [ 13.528097] EXT4-fs (sdc1): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 13.681339] EXT4-fs (md2): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 14.310098] EXT4-fs (md1): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 14.357675] EXT4-fs (sdc5): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 16.933348] audit_printk_skb: 9 callbacks suppressed
# [ 22.350011] eth0: no IPv6 routers present
# [ 27.094760] ppdev: user-space parallel port driver
# [ 27.168812] kvm: Nested Virtualization enabled
# [ 27.168814] kvm: Nested Paging enabled
# [ 30.383664] EXT4-fs (sda1): re-mounted. Opts: user_xattr,errors=remount-ro,commit=0
# [ 30.385125] EXT4-fs (sdb1): re-mounted. Opts: user_xattr,commit=0
# [ 32.105044] EXT4-fs (sdc6): re-mounted. Opts: user_xattr,commit=0
# [ 33.078017] EXT4-fs (sdc1): re-mounted. Opts: user_xattr,commit=0
# [ 33.079491] EXT4-fs (md2): re-mounted. Opts: user_xattr,commit=0
# [ 33.082411] EXT4-fs (md1): re-mounted. Opts: user_xattr,commit=0
# [ 35.369796] EXT4-fs (sdc5): re-mounted. Opts: user_xattr,commit=0
# [ 35.674390] CE: hpet increased min_delta_ns to 20113 nsec
# [ 35.676242] CE: hpet increased min_delta_ns to 30169 nsec
# [ 35.677808] CE: hpet increased min_delta_ns to 45253 nsec
# [ 35.679349] CE: hpet increased min_delta_ns to 67879 nsec
# [ 35.680312] CE: hpet increased min_delta_ns to 101818 nsec
# [ 35.680312] CE: hpet increased min_delta_ns to 152727 nsec
# [ 35.680312] CE: hpet increased min_delta_ns to 229090 nsec
# [ 35.680312] CE: hpet increased min_delta_ns to 343635 nsec
# [ 35.681590] CE: hpet increased min_delta_ns to 515452 nsec
# [ 436.595366] EXT4-fs (md2): mounted filesystem with ordered data mode. Opts: user_xattr
# [ 607.364501] exe (14663): /proc/14663/oom_adj is deprecated, please use /proc/14663/oom_score_adj instead.
[ 2016.476772] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[ 2246.923154] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[ 2293.383934] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[ 2337.292080] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[ 2364.812150] EXT4-fs (md0): VFS: Can't find ext4 filesystem
[ 2392.624988] EXT4-fs (md0): VFS: Can't find ext4 filesystem
# [ 3098.003646] CE: hpet increased min_delta_ns to 773178 nsec
[ 4208.380943] md: md0: recovery done.
[ 4208.470356] RAID conf printout:
[ 4208.470363] --- level:5 rd:5 wd:5
[ 4208.470369] disk 0, o:1, dev:sdd1
[ 4208.470374] disk 1, o:1, dev:sde1
[ 4208.470378] disk 2, o:1, dev:sdf1
[ 4208.470382] disk 3, o:1, dev:sdg1
[ 4208.470385] disk 4, o:1, dev:sdh1
[ 7982.600595] EXT4-fs (md0): VFS: Can't find ext4 filesystem
During boot it asked me what I wanted to do about it. I told it to carry on, and started digging into it once the machine was back up. The first thing I did was check /proc/mdstat...
# Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
# md1 : active raid5 sdl1[3] sdk1[2] sdm1[4] sdi1[0] sdj1[1]
# 2930287616 blocks level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]
#
# md2 : active raid0 sdn[0] sdo[1]
# 1953524992 blocks 64k chunks
md0 : active raid5 sdf1[2] sdg1[3] sde1[1] sdd1[0] sdh1[5]
1953531136 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
#
# unused devices: <none>
...and /etc/mdadm/mdadm.conf:
ARRAY /dev/md0 level=raid5 num-devices=5 UUID=98941898:e5652fdb:c82496ec:0ebe2003
# ARRAY /dev/md1 level=raid5 num-devices=5 UUID=67d5a3ed:f2890ea4:004365b1:3a430a78
# ARRAY /dev/md2 level=raid0 num-devices=2 UUID=d1ea9162:cb637b4b:004365b1:3a430a78
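(To cross-check that config file against what the kernel actually has assembled right now, the ARRAY lines can be regenerated and compared line by line:)

% sudo mdadm --detail --scan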
Then I checked fdisk:
foundation@foundation:~$ sudo fdisk -l /dev/sd[defgh]
Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000821e5
Device Boot Start End Blocks Id System
/dev/sdd1 1 60801 488384001 fd Linux raid autodetect
Disk /dev/sde: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00004a72
Device Boot Start End Blocks Id System
/dev/sde1 1 60801 488384001 fd Linux raid autodetect
Disk /dev/sdf: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000443c2
Device Boot Start End Blocks Id System
/dev/sdf1 1 60801 488384001 fd Linux raid autodetect
Disk /dev/sdg: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000e428
Device Boot Start End Blocks Id System
/dev/sdg1 1 60801 488384001 fd Linux raid autodetect
Disk /dev/sdh: 500.1 GB, 500107862016 bytes
81 heads, 63 sectors/track, 191411 cylinders
Units = cylinders of 5103 * 512 = 2612736 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8c4d0ecf
Device Boot Start End Blocks Id System
/dev/sdh1 1 191412 488385560 fd Linux raid autodetect
Everything seemed to be in order, so I looked at the array's details and examined its component devices.
foundation@foundation:~$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri May 13 00:57:15 2011
Raid Level : raid5
Array Size : 1953531136 (1863.03 GiB 2000.42 GB)
Used Dev Size : 488382784 (465.76 GiB 500.10 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Fri May 13 04:43:10 2011
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : foundation:0 (local to host foundation)
UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
Events : 32
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 65 1 active sync /dev/sde1
2 8 81 2 active sync /dev/sdf1
3 8 97 3 active sync /dev/sdg1
5 8 113 4 active sync /dev/sdh1
foundation@foundation:~$ sudo mdadm --examine /dev/sd[defgh]1
/dev/sdd1: (samsung)
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
Name : foundation:0 (local to host foundation)
Creation Time : Fri May 13 00:57:15 2011
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
Array Size : 3907062272 (1863.03 GiB 2000.42 GB)
Used Dev Size : 976765568 (465.76 GiB 500.10 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 6e6422de:f39c618a:2cab1161:b36c8341
Update Time : Fri May 13 15:53:06 2011
Checksum : 679bf575 - correct
Events : 32
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAAA ('A' == active, '.' == missing)
/dev/sde1: (samsung)
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
Name : foundation:0 (local to host foundation)
Creation Time : Fri May 13 00:57:15 2011
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
Array Size : 3907062272 (1863.03 GiB 2000.42 GB)
Used Dev Size : 976765568 (465.76 GiB 500.10 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : bd02892c:a346ec88:7ffcf757:c18eee12
Update Time : Fri May 13 15:53:06 2011
Checksum : 7cdeb0d5 - correct
Events : 32
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdf1: (samsung)
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
Name : foundation:0 (local to host foundation)
Creation Time : Fri May 13 00:57:15 2011
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
Array Size : 3907062272 (1863.03 GiB 2000.42 GB)
Used Dev Size : 976765568 (465.76 GiB 500.10 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : acd3d576:54c09121:0636980e:0a490f59
Update Time : Fri May 13 15:53:06 2011
Checksum : 5c91ef46 - correct
Events : 32
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdg1: (samsung)
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
Name : foundation:0 (local to host foundation)
Creation Time : Fri May 13 00:57:15 2011
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 976765954 (465.76 GiB 500.10 GB)
Array Size : 3907062272 (1863.03 GiB 2000.42 GB)
Used Dev Size : 976765568 (465.76 GiB 500.10 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 5f923d06:993ac9f3:a41ffcde:73876130
Update Time : Fri May 13 15:53:06 2011
Checksum : 65e75047 - correct
Events : 32
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdh1: (western digital)
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : a81ad850:3ce5e5a5:38de6ac7:9699b3dd
Name : foundation:0 (local to host foundation)
Creation Time : Fri May 13 00:57:15 2011
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 976769072 (465.76 GiB 500.11 GB)
Array Size : 3907062272 (1863.03 GiB 2000.42 GB)
Used Dev Size : 976765568 (465.76 GiB 500.10 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 622c546d:41fe9683:42ecf909:cebcf6a4
Update Time : Fri May 13 15:53:06 2011
Checksum : fc5ebc1a - correct
Events : 32
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 4
Array State : AAAAA ('A' == active, '.' == missing)
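(A quick way to eyeball just the fields that have to agree across all five members -- event count, offsets, chunk size, device role -- instead of scrolling the full --examine output; same information as above, only filtered:)

% sudo mdadm --examine /dev/sd[defgh]1 | egrep 'Event|Offset|Chunk|Role'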
I tried mounting it myself:
foundation@foundation:~$ sudo mount -t ext4 -o defaults,rw /dev/md0 mnt
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
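(Following the hint in that error, dmesg | tail presumably just repeats the line already visible in the excerpt further up:)

% dmesg | tail
# [ 7982.600595] EXT4-fs (md0): VFS: Can't find ext4 filesystem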
No go. So at this point I started trying some of the things that various posts here and elsewhere suggest. The first was an e2fsck.
foundation@foundation:~$ sudo e2fsck -f /dev/md0
e2fsck 1.41.14 (22-Dec-2010)
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md0
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
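(A note on that suggestion: -b 8193 assumes a 1 KiB block size. An array this size would almost certainly have been formatted with 4 KiB blocks -- the mke2fs run below assumes the same -- in which case the first backup superblock normally sits at block 32768. So the usual next move is something along these lines, with the block number ideally taken from mke2fs -n rather than guessed:)

% sudo e2fsck -b 32768 -B 4096 /dev/md0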
Since that suggestion echoed what I'd been reading, I gave it a try, using mke2fs -n to find out where the backup superblocks ought to be.
foundation@foundation:~$ sudo mke2fs -n /dev/md0
mke2fs 1.41.14 (22-Dec-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=16 blocks, Stripe width=64 blocks
122101760 inodes, 488382784 blocks
24419139 blocks (5.00%) rese