RAID rebuild seems to have stopped

Asked by Tho*_*ang (tags: raid, mdadm)

My server runs a two-disk RAID 1 setup. One of the disks failed today and was replaced.

I copied the GPT partition table to the new drive (sda) with:

sgdisk -R /dev/sda /dev/sdb

and randomized the disk and partition GUIDs:

sgdisk -G /dev/sda
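
(A quick sanity check at this step, for anyone following along, is to print both partition tables and confirm the partitions match; this uses nothing beyond sgdisk itself:)

sgdisk -p /dev/sdb
sgdisk -p /dev/sda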

Then I added the two partitions to the RAID arrays:

mdadm /dev/md4 -a /dev/sda4

mdadm /dev/md5 -a /dev/sda5

/dev/md4 rebuilt correctly, but /dev/md5 did not.

When I ran cat /proc/mdstat shortly after these commands, it showed this:

Personalities : [raid1]
md5 : active raid1 sda5[2] sdb5[1]
      2820667711 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.0% (2109952/2820667711) finish=423.0min speed=111050K/sec

md4 : active raid1 sda4[2] sdb4[0]
      15727544 blocks super 1.2 [2/2] [UU]

unused devices: <none>

That looked right; it was rebuilding md5. But after a few minutes it stopped, and now cat /proc/mdstat returns:

Personalities : [raid1]
md5 : active raid1 sda5[2](S) sdb5[1]
      2820667711 blocks super 1.2 [2/1] [_U]

md4 : active raid1 sda4[2] sdb4[0]
      15727544 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Why did it stop rebuilding onto the new disk? Here is what I get when I run mdadm --detail /dev/md5:

    /dev/md5:
        Version : 1.2
  Creation Time : Sun Sep 16 15:26:58 2012
     Raid Level : raid1
     Array Size : 2820667711 (2690.00 GiB 2888.36 GB)
  Used Dev Size : 2820667711 (2690.00 GiB 2888.36 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Dec 27 04:01:26 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

           Name : rescue:5  (local to host rescue)
           UUID : 29868a4d:f63c6b43:ee926581:fd775604
         Events : 5237753

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       21        1      active sync   /dev/sdb5

       2       8        5        -      spare   /dev/sda5

Update: thanks to @Michael Hampton for his answer. After a night's sleep I'm back :-) I checked dmesg and got this:

[Sat Dec 27 04:01:04 2014] md: recovery of RAID array md5
[Sat Dec 27 04:01:04 2014] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[Sat Dec 27 04:01:04 2014] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[Sat Dec 27 04:01:04 2014] md: using 128k window, over a total of 2820667711k.
[Sat Dec 27 04:01:04 2014] RAID1 conf printout:
[Sat Dec 27 04:01:04 2014]  --- wd:2 rd:2
[Sat Dec 27 04:01:04 2014]  disk 0, wo:0, o:1, dev:sdb4
[Sat Dec 27 04:01:04 2014]  disk 1, wo:0, o:1, dev:sda4
[Sat Dec 27 04:01:21 2014] ata2.00: exception Emask 0x0 SAct 0x1e000 SErr 0x0 action 0x0
[Sat Dec 27 04:01:21 2014] ata2.00: irq_stat 0x40000008
[Sat Dec 27 04:01:21 2014] ata2.00: cmd 60/80:68:00:12:51/03:00:0d:00:00/40 tag 13 ncq 458752 in
[Sat Dec 27 04:01:21 2014]          res 41/40:80:68:14:51/00:03:0d:00:00/00 Emask 0x409 (media error) <F>
[Sat Dec 27 04:01:21 2014] ata2.00: configured for UDMA/133
[Sat Dec 27 04:01:21 2014] sd 1:0:0:0: [sdb] Unhandled sense code
[Sat Dec 27 04:01:21 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:21 2014] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Sat Dec 27 04:01:21 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:21 2014] Sense Key : Medium Error [current] [descriptor]
[Sat Dec 27 04:01:21 2014] Descriptor sense data with sense descriptors (in hex):
[Sat Dec 27 04:01:21 2014]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 
[Sat Dec 27 04:01:21 2014]         0d 51 14 68 
[Sat Dec 27 04:01:21 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:21 2014] Add. Sense: Unrecovered read error - auto reallocate failed
[Sat Dec 27 04:01:21 2014] sd 1:0:0:0: [sdb] CDB: 
[Sat Dec 27 04:01:21 2014] Read(16): 88 00 00 00 00 00 0d 51 12 00 00 00 03 80 00 00
[Sat Dec 27 04:01:21 2014] end_request: I/O error, dev sdb, sector 223417448
[Sat Dec 27 04:01:21 2014] ata2: EH complete
[Sat Dec 27 04:01:24 2014] ata2.00: exception Emask 0x0 SAct 0x8 SErr 0x0 action 0x0
[Sat Dec 27 04:01:24 2014] ata2.00: irq_stat 0x40000008
[Sat Dec 27 04:01:24 2014] ata2.00: cmd 60/08:18:68:14:51/00:00:0d:00:00/40 tag 3 ncq 4096 in
[Sat Dec 27 04:01:24 2014]          res 41/40:08:68:14:51/00:00:0d:00:00/00 Emask 0x409 (media error) <F>
[Sat Dec 27 04:01:24 2014] ata2.00: configured for UDMA/133
[Sat Dec 27 04:01:24 2014] sd 1:0:0:0: [sdb] Unhandled sense code
[Sat Dec 27 04:01:24 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:24 2014] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Sat Dec 27 04:01:24 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:24 2014] Sense Key : Medium Error [current] [descriptor]
[Sat Dec 27 04:01:24 2014] Descriptor sense data with sense descriptors (in hex):
[Sat Dec 27 04:01:24 2014]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 
[Sat Dec 27 04:01:24 2014]         0d 51 14 68 
[Sat Dec 27 04:01:24 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:24 2014] Add. Sense: Unrecovered read error - auto reallocate failed
[Sat Dec 27 04:01:24 2014] sd 1:0:0:0: [sdb] CDB: 
[Sat Dec 27 04:01:24 2014] Read(16): 88 00 00 00 00 00 0d 51 14 68 00 00 00 08 00 00
[Sat Dec 27 04:01:24 2014] end_request: I/O error, dev sdb, sector 223417448
[Sat Dec 27 04:01:24 2014] ata2: EH complete
[Sat Dec 27 04:01:24 2014] md/raid1:md5: sdb: unrecoverable I/O read error for block 4219904
[Sat Dec 27 04:01:24 2014] md: md5: recovery interrupted.
[Sat Dec 27 04:01:24 2014] RAID1 conf printout:
[Sat Dec 27 04:01:24 2014]  --- wd:1 rd:2
[Sat Dec 27 04:01:24 2014]  disk 0, wo:1, o:1, dev:sda5
[Sat Dec 27 04:01:24 2014]  disk 1, wo:0, o:1, dev:sdb5
[Sat Dec 27 04:01:24 2014] RAID1 conf printout:
[Sat Dec 27 04:01:24 2014]  --- wd:1 rd:2
[Sat Dec 27 04:01:24 2014]  disk 1, wo:0, o:1, dev:sdb5

So it really does seem to be a read error. But SMART doesn't look too bad (if I'm reading it correctly):

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   088   087   006    Pre-fail  Always       -       154455820
  3 Spin_Up_Time            0x0003   096   096   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       5
  5 Reallocated_Sector_Ct   0x0033   084   084   036    Pre-fail  Always       -       21664
  7 Seek_Error_Rate         0x000f   072   060   030    Pre-fail  Always       -       38808769144
  9 Power_On_Hours          0x0032   071   071   000    Old_age   Always       -       26073
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       5
183 Runtime_Bad_Block       0x0032   099   099   000    Old_age   Always       -       1
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   001   001   000    Old_age   Always       -       721
188 Command_Timeout         0x0032   100   099   000    Old_age   Always       -       4295032833
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   063   061   045    Old_age   Always       -       37 (Min/Max 33/37)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       3
193 Load_Cycle_Count        0x0032   095   095   000    Old_age   Always       -       10183
194 Temperature_Celsius     0x0022   037   040   000    Old_age   Always       -       37 (0 21 0 0)
197 Current_Pending_Sector  0x0012   088   088   000    Old_age   Always       -       2072
198 Offline_Uncorrectable   0x0010   088   088   000    Old_age   Offline      -       2072
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       157045479198210
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       4435703883570
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       5487937263078

SMART Error Log Version: 1
ATA Error Count: 6 (device log contains only the most recent five errors)
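
For reference, output like the above comes from smartmontools; a minimal sketch of the commands involved, assuming the old disk is still /dev/sdb:

smartctl -A /dev/sdb        # the vendor-specific attribute table shown above
smartctl -l error /dev/sdb  # the ATA error log summarized at the bottom

(For what it's worth, the non-zero Reallocated_Sector_Ct, Reported_Uncorrect and Current_Pending_Sector values here are consistent with the unrecovered read errors seen in dmesg.)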

Thanks anyway for your answer. And yes, if I were setting the server up again, I would definitely not use multiple partitions for my RAID arrays (in this case md5 actually carries LVM on top of it anyway).

Thanks,

Answer by Michael Hampton:

It looks like you physically removed the failed disk without Linux being fully aware of it, so when you added the new disk it got marked as a spare (and the system was still waiting for you to put the old disk back). Most likely /dev/md4 failed in a way Linux detected, but since /dev/md5 is a separate array (which had not itself failed), Linux still considered it good.

To recover from this situation, you need to tell the system to start using the spare and to forget about the removed disk.

First, grow the RAID array to three devices so that it can make use of the spare:

mdadm --grow /dev/md5 --raid-devices=3

At this point it should start syncing to the spare, which will be listed as spare rebuilding in mdadm --detail, and you should see the sync operation in /proc/mdstat.
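
To watch the progress, something like the following works (a plain cat in a loop is equally fine):

watch -n 5 cat /proc/mdstat
mdadm --detail /dev/md5 | grep -E 'State|Rebuild'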

Once the sync completes, tell mdadm to forget the device that is no longer present:

mdadm --remove /dev/md5 detached

Finally, set the number of devices back to 2:

mdadm --grow /dev/md5 --raid-devices=2
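
Once done, both members should be active again; a quick check:

mdadm --detail /dev/md5    # expect "State : clean" and "Active Devices : 2"
cat /proc/mdstat           # expect [2/2] [UU] on the md5 line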

How your system got into this state in the first place, I can't say for certain. But it may be that your other disk hit a read error, which stopped the resync and left things in this failed state. If so, you will see log entries to that effect in dmesg from the time the sync operation aborted. Should that turn out to be the case, you will need some deeper magic (update your question if it happens), and you will probably want to have your backups handy.
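
A quick way to check for that, assuming the kernel log has not rotated away, is something like:

dmesg | grep -iE 'md5|recovery|I/O error'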


You may also want to read this nearly identical question on Super User, as it contains some other possible solutions.


Finally, best practice is to use whole disks as RAID array members, or at most a single partition per disk, and then divide the RAID block device with LVM as needed. That configuration would have prevented this problem.
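
For illustration only, a sketch of that layout on a fresh pair of disks (device names and sizes here are hypothetical; do not run this against disks holding data):

# One RAID 1 array spanning a single partition per disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Divide the mirrored block device with LVM instead of separate md arrays
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 15G -n root vg0
lvcreate -l 100%FREE -n data vg0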