RAID array does not reassemble after a reboot

use*_*027 9 raid

The RAID array does not assemble after a reboot.

I have an SSD that the system boots from and three HDDs that are part of the array. The system is Ubuntu 16.04.

The steps I followed are based mostly on this guide:

https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04#creating-a-raid-5-array

  1. Verify that I'm good to go.

    lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
    

The output shows the sda, sdb, and sdc devices in addition to the SSD partitions. I verified that these actually correspond to the HDDs by looking at the output of:

hwinfo --disk

Everything matches.

  2. Create the array.

    sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
    

I verified that it was working by typing: cat /proc/mdstat

The output looks like this:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
      7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=======>.............]  recovery = 37.1% (1449842680/3906887168) finish=273.8min speed=149549K/sec
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>

I waited until the recovery process finished.

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sdc[3] sdb[1] sda[0]
      209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
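As an aside, not something taken from the guide: instead of polling cat /proc/mdstat by hand, mdadm can block until the rebuild is done and then report the final state.

    # Wait for any resync/recovery on /dev/md0 to complete,
    # then print the array details to confirm a clean [UUU] state.
    sudo mdadm --wait /dev/md0
    sudo mdadm --detail /dev/md0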
  3. Create and mount the filesystem.

    sudo mkfs.ext4 -F /dev/md0
    
    sudo mkdir -p /mnt/md0
    
    sudo mount /dev/md0 /mnt/md0
    
    df -h -x devtmpfs -x tmpfs
    

I put some data on it, and the output looks like this:

Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p2  406G  191G  196G  50% /
/dev/nvme0n1p1  511M  3.6M  508M   1% /boot/efi
/dev/md0        7.3T  904G  6.0T  13% /mnt/md0
  4. Save the array layout (a related check follows the commands below).

    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    
    sudo update-initramfs -u
    
    echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
    
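One sanity check I did not run at the time, but which seems relevant here, is confirming that the ARRAY definition actually landed in /etc/mdadm/mdadm.conf and was copied into the rebuilt initramfs, since the initramfs copy is what assembles the array at boot:

    # Show the ARRAY line(s) that --detail --scan appended to the config
    grep ^ARRAY /etc/mdadm/mdadm.conf

    # Check that the mdadm config is present inside the initramfs
    # for the currently running kernel
    lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm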
  5. Reboot and verify that everything works.

After rebooting I tried: cat /proc/mdstat
It does not show any active raid devices.

ls /mnt/md0 

is empty.

The following command prints nothing and doesn't work either:

mdadm --assemble --scan -v
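Since --assemble --scan finds nothing, either the md superblocks on the member disks did not survive the reboot, or mdadm is not looking at those disks at all. A quick way to tell the two apart (not a step from the guide) is:

    # Examine the raw members for md metadata; mdadm reports any member
    # without metadata as having no md superblock detected
    sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc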

The only thing that brings back the array with its data is:

sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

What should be done?

Additional, possibly useful information:

sudo dpkg-reconfigure mdadm

The output shows:

update-initramfs: deferring update (trigger activated)
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.4.0-51-generic
Found initrd image: /boot/initrd.img-4.4.0-51-generic
Found linux image: /boot/vmlinuz-4.4.0-31-generic
Found initrd image: /boot/initrd.img-4.4.0-31-generic
Adding boot menu entry for EFI firmware configuration
done
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Processing triggers for initramfs-tools (0.122ubuntu8.5) ...
update-initramfs: Generating /boot/initrd.img-4.4.0-51-generic

The interesting part to me is "start and stop actions are no longer supported; falling back to defaults".

Also, the output of /usr/share/mdadm/mkconf does not print any arrays at the end:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR craftinity@craftinity.com

# definitions of existing MD arrays

The output of cat /etc/mdadm/mdadm.conf, on the other hand, is:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# DEVICE /dev/sda /dev/sdb /dev/sdc

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR craftinity@craftinity.com

# definitions of existing MD arrays

# This file was auto-generated on Sun, 04 Dec 2016 18:56:42 +0100
# by mkconf $Id$

ARRAY /dev/md0 metadata=1.2 spares=1 name=hinton:0 UUID=616991f1:dc03795b:8d09b1d4:8393060a

What is the solution? I've been through half the internet and nobody seems to have run into the same problem.

I also posted the exact same question on Server Fault a few days ago (no answers). My apologies if doing that violates Stack Exchange community rules.

小智 6

I ran into the same problem. I'm not sure why it happens, but the workaround I found was to create new partitions of type Linux RAID on the RAID members and then use those partitions instead of the whole devices when creating the array, along the lines of the sketch below.
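A minimal sketch of that workaround, assuming GPT labels and the same three-disk RAID 5 layout as in the question (note that mdadm --create destroys whatever is on the disks, so this only applies to a fresh array or after the data has been backed up):

    # Give each member a GPT label and a single full-size partition
    # flagged as a Linux RAID member.
    for d in /dev/sda /dev/sdb /dev/sdc; do
        sudo parted -s "$d" mklabel gpt mkpart primary 0% 100% set 1 raid on
    done

    # Build the array from the partitions instead of the bare devices.
    sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 \
        /dev/sda1 /dev/sdb1 /dev/sdc1

    # Record the layout and rebuild the initramfs so the array is
    # assembled at boot.
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u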