I created the RAID arrays with:
sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdb2 /dev/sdc2
sudo mdadm --detail --scan
returns:
ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
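As an aside, the `name=ion:1` field is the `<homehost>:<name>` pair that mdadm recorded in each v1.2 superblock at create time (`ion` being the hostname here). A small sketch pulling that pair and the UUID out of a scan line (it operates on sample text, not a live `mdadm` query):

```shell
# Extract the name and UUID fields from a sample `mdadm --detail --scan` line.
line='ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e'
name=$(printf '%s\n' "$line" | sed -n 's/.*name=\([^ ]*\).*/\1/p')
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID=\([^ ]*\).*/\1/p')
echo "name=$name uuid=$uuid"
# → name=ion:1 uuid=aa1f85b0:a2391657:cfd38029:772c560e
```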
What I added to /etc/mdadm/mdadm.conf is shown below:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This file was auto-generated on Mon, 29 Oct 2012 16:06:12 -0500
# by mkconf $Id$
ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
cat /proc/mdstat
returns:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[0] sdc2[1]
208629632 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb1[0] sdc1[1]
767868736 blocks super 1.2 [2/2] [UU]
unused devices: <none>
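For reference, the `[2/2] [UU]` column is the health summary: both members of the mirror are present and up, while a failed member would show as `_` (e.g. `[2/1] [U_]`). A minimal check over that field (sample text, assuming the usual /proc/mdstat layout):

```shell
# Flag a degraded array: any "_" in the member-status brackets means a
# missing or failed mirror half.
status='[2/2] [UU]'
case $status in
  *_*) echo "degraded" ;;
  *)   echo "healthy" ;;
esac
# → healthy
```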
ls -la /dev | grep md
returns:
brw-rw---- 1 root disk 9, 1 Oct 30 11:06 md1
brw-rw---- 1 root disk 9, 2 Oct 30 11:06 md2
So, thinking everything is good, I reboot.
After the reboot, /dev/md1 is now /dev/md126 and /dev/md2 is now /dev/md127?????
sudo mdadm --detail --scan
returns:
ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
cat /proc/mdstat
returns:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sdc2[1] sdb2[0]
208629632 blocks super 1.2 [2/2] [UU]
md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
767868736 blocks super 1.2 [2/2] [UU]
unused devices: <none>
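Note the `(auto-read-only)` on md127: the array was assembled but nothing has written to it yet, so md holds it read-only until the first write arrives (or until `mdadm --readwrite /dev/md127` is issued). A sketch that spots the flag in mdstat text (sample line assumed, not a live read of /proc/mdstat):

```shell
# Detect the auto-read-only state from an mdstat status line.
line='md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]'
case $line in
  *'(auto-read-only)'*) echo "read-only until first write" ;;
  *)                    echo "read-write" ;;
esac
# → read-only until first write
```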
ls -la /dev | grep md
returns:
drwxr-xr-x 2 root root 80 Oct 30 11:18 md
brw-rw---- 1 root disk 9, 126 Oct 30 11:18 md126
brw-rw---- 1 root disk 9, 127 Oct 30 11:18 md127
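The device numbers tell the same story: major 9 is the md driver, and minors 126 and 127 are the fallback unit numbers mdadm hands out, counting down from 127, when it cannot bind an array to its preferred name at assembly time. Parsing the minor out of an `ls -la` line (sample text, whitespace-separated fields assumed):

```shell
# Pull the minor device number from a sample `ls -la` line for a block
# device (field 5 is "major," with a trailing comma; field 6 is the minor).
line='brw-rw---- 1 root disk 9, 127 Oct 30 11:18 md127'
minor=$(printf '%s\n' "$line" | awk '{print $6}')
echo "minor=$minor"
# → minor=127
```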
All is not lost, though. I run:
sudo mdadm --stop /dev/md126
sudo mdadm --stop /dev/md127
sudo mdadm --assemble --verbose /dev/md1 /dev/sdb1 /dev/sdc1
sudo mdadm --assemble --verbose /dev/md2 /dev/sdb2 /dev/sdc2
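A hedged aside: `--assemble` also accepts `--uuid`, which identifies the array by its superblock UUID instead of the member paths, so the command keeps working even if sdb/sdc get renumbered. Composed as a string here since no live devices are assumed; on the real system it would be run under sudo:

```shell
# Compose an assemble-by-UUID command (sketch only; needs root and the real
# member devices to actually run).
uuid='aa1f85b0:a2391657:cfd38029:772c560e'
cmd="mdadm --assemble /dev/md1 --uuid=$uuid"
echo "$cmd"
# → mdadm --assemble /dev/md1 --uuid=aa1f85b0:a2391657:cfd38029:772c560e
```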
Verify everything:
sudo mdadm --detail --scan
returns:
ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
cat /proc/mdstat
returns:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[0] sdc2[1]
208629632 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb1[0] sdc1[1]
767868736 blocks super 1.2 [2/2] [UU]
unused devices: <none>
ls -la /dev | grep md
returns:
brw-rw---- 1 root disk 9, 1 Oct 30 11:26 md1
brw-rw---- 1 root disk 9, 2 Oct 30 11:26 md2
Once again, thinking everything is good, I reboot.
Again, after the reboot, /dev/md1 is /dev/md126 and /dev/md2 is /dev/md127?????
sudo mdadm --detail --scan
returns:
ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
cat /proc/mdstat
returns:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sdc2[1] sdb2[0]
208629632 blocks super 1.2 [2/2] [UU]
md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
767868736 blocks super 1.2 [2/2] [UU]
unused devices: <none>
ls -la /dev | grep md
returns:
drwxr-xr-x 2 root root 80 Oct 30 11:42 md
brw-rw---- 1 root disk 9, 126 Oct 30 11:42 md126
brw-rw---- 1 root disk 9, 127 Oct 30 11:42 md127
What am I missing here?