Running an apt-get update on one of my dedicated servers left behind a relatively scary warning:
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.26-2-686-bigmem
W: mdadm: the array /dev/md/1 with UUID c622dd79:496607cf:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/2 with UUID 24120323:8c54087c:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/6 with UUID eef74de5:9267b2a1:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/5 with UUID 5d45b20c:04d8138f:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
As instructed, I inspected the output of /usr/share/mdadm/mkconf and compared it with /etc/mdadm/mdadm.conf, and the two are quite different.
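To see the differences at a glance, the two can be compared directly (a sketch, assuming a bash shell for the process substitution; /usr/share/mdadm/mkconf is the generator script the warning points at):

# Compare the config currently on disk against the freshly generated one
diff -u /etc/mdadm/mdadm.conf <(/usr/share/mdadm/mkconf)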
Here are the contents of /etc/mdadm/mdadm.conf:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b93b0b87:5f7c2c46:0043fca9:4026c400
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c0fa8842:e214fb1a:fad8a3a2:28f2aabc
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=cdc2a9a9:63bbda21:f55e806c:a5371897
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=eca75495:9c9ce18c:d2bac587:f1e79d80
# This file was auto-generated on Wed, 04 Nov 2009 11:32:16 +0100
# by mkconf $Id$
And here is the output of /usr/share/mdadm/mkconf:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md1 UUID=c622dd79:496607cf:c230666b:5103eba0
ARRAY /dev/md2 UUID=24120323:8c54087c:c230666b:5103eba0
ARRAY /dev/md5 UUID=5d45b20c:04d8138f:c230666b:5103eba0
ARRAY /dev/md6 UUID=eef74de5:9267b2a1:c230666b:5103eba0
# This configuration was auto-generated on Sat, 25 Feb 2012 13:10:00 +1030
# by mkconf 3.1.4-1+8efb9d1+squeeze1
As I understand it, I need to replace the four lines starting with 'ARRAY' in /etc/mdadm/mdadm.conf with the four different 'ARRAY' lines from the /usr/share/mdadm/mkconf output.
When I did this and then ran update-initramfs -u, the warnings went away.
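For completeness, the whole procedure boils down to something like this (a sketch; mdadm --detail --scan is another way to print ARRAY lines for the arrays that are actually running, and the backup filename is just my choice):

# Back up the current config before touching it
cp -a /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
# Print ARRAY lines for the running arrays, to paste into mdadm.conf
mdadm --detail --scan
# After replacing the stale ARRAY lines, rebuild the initramfs
update-initramfs -u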
Is what I did above correct? I'm now scared to reboot the server for fear that it won't come back up; with a remote dedicated server, that means downtime and could be expensive to get running again.
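Before rebooting, the one check I can think of is to confirm that the new mdadm.conf actually made it into the generated initramfs (a sketch; the initrd filename is taken from the warning above, and inside the image the config lives at etc/mdadm/mdadm.conf with no leading slash):

# List the initramfs contents and look for the mdadm config
zcat /boot/initrd.img-2.6.26-2-686-bigmem | cpio -t 2>/dev/null | grep mdadm
# Print the embedded config so the UUIDs can be compared by eye
zcat /boot/initrd.img-2.6.26-2-686-bigmem | cpio -i --to-stdout etc/mdadm/mdadm.conf 2>/dev/null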
Follow-up (answers to questions from the comments):
Output of mount:
/dev/md1 on / type ext3 (rw,usrquota,grpquota)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md2 on /boot type ext2 (rw)
/dev/md5 on /tmp type ext3 (rw)
/dev/md6 on /home type ext3 (rw,usrquota,grpquota)
mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Sun Aug 14 09:43:08 2011
Raid Level : raid1
Array Size : 31463232 (30.01 GiB 32.22 GB)
Used Dev Size : 31463232 (30.01 GiB 32.22 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Sat Feb 25 14:03:47 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : c622dd79:496607cf:c230666b:5103eba0
Events : 0.24
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
mdadm --detail /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Sun Aug 14 09:43:09 2011
Raid Level : raid1
Array Size : 104320 (101.89 MiB 106.82 MB)
Used Dev Size : 104320 (101.89 MiB 106.82 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Sat Feb 25 13:20:20 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 24120323:8c54087c:c230666b:5103eba0
Events : 0.30
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
mdadm --detail /dev/md3
mdadm: md device /dev/md3 does not appear to be active.
mdadm --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Sun Aug 14 09:43:09 2011
Raid Level : raid1
Array Size : 2104448 (2.01 GiB 2.15 GB)
Used Dev Size : 2104448 (2.01 GiB 2.15 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5
Persistence : Superblock is persistent
Update Time : Sat Feb 25 14:09:03 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 5d45b20c:04d8138f:c230666b:5103eba0
Events : 0.30
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
1 8 21 1 active sync /dev/sdb5
mdadm --detail /dev/md6
/dev/md6:
Version : 0.90
Creation Time : Sun Aug 14 09:43:09 2011
Raid Level : raid1
Array Size : 453659456 (432.64 GiB 464.55 GB)
Used Dev Size : 453659456 (432.64 GiB 464.55 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 6
Persistence : Superblock is persistent
Update Time : Sat Feb 25 14:10:00 2012
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : eef74de5:9267b2a1:c230666b:5103eba0
Events : 0.31
Number Major Minor RaidDevice State
0 8 6 0 active sync /dev/sda6
1 8 22 1 active sync /dev/sdb6
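The UUIDs reported above match the new ARRAY lines from mkconf, which can be cross-checked in one go (a sketch over the devices shown above):

# UUIDs reported by the running arrays...
for md in /dev/md1 /dev/md2 /dev/md5 /dev/md6; do
    mdadm --detail "$md" | grep UUID
done
# ...versus the ARRAY lines now in the config
grep ^ARRAY /etc/mdadm/mdadm.conf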
Follow-up 2 (answers to questions from the comments):
Contents of /etc/fstab:
/dev/md1 / ext3 defaults,usrquota,grpquota 1 1
devpts /dev/pts devpts mode=0620,gid=5 0 0
proc /proc proc defaults 0 0
#usbdevfs /proc/bus/usb usbdevfs noauto 0 0
/dev/cdrom /media/cdrom auto ro,noauto,user,exec 0 0
/dev/dvd /media/dvd auto ro,noauto,user,exec 0 0
#
#
#
/dev/md2 /boot ext2 defaults 1 2
/dev/sda3 swap swap pri=42 0 0
/dev/sdb3 swap swap pri=42 0 0
/dev/md5 /tmp ext3 defaults 0 0
/dev/md6 /home ext3 defaults,usrquota,grpquota 1 2
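Note that /etc/fstab only references md1, md2, md5 and md6; nothing mounts the old md0 or md3. A quick way to confirm which arrays the kernel has actually assembled, and that no stale references remain (a sketch):

# Show the arrays currently assembled by the kernel
cat /proc/mdstat
# Make sure nothing still refers to the inactive md0/md3 arrays
grep -E 'md[03]' /etc/fstab /etc/mdadm/mdadm.conf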
(The follow-ups above were in response to a comment asking: "Can you provide the output of the mount and mdadm --detail commands?")