apt-get update mdadm scary warnings
Posted by user568829 on Server Fault, 2012-02-24
I just ran an apt-get update on one of my dedicated servers, only to be left with a relatively scary warning:
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.26-2-686-bigmem
W: mdadm: the array /dev/md/1 with UUID c622dd79:496607cf:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/2 with UUID 24120323:8c54087c:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/6 with UUID eef74de5:9267b2a1:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/5 with UUID 5d45b20c:04d8138f:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
As instructed, I inspected the output of /usr/share/mdadm/mkconf and compared it with /etc/mdadm/mdadm.conf, and they are quite different.
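(For anyone reproducing the comparison: mkconf writes the generated configuration to stdout, so it can be diffed directly against the live file. A minimal sketch:
/usr/share/mdadm/mkconf | diff -u /etc/mdadm/mdadm.conf -
)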
Here are the contents of /etc/mdadm/mdadm.conf:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b93b0b87:5f7c2c46:0043fca9:4026c400
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c0fa8842:e214fb1a:fad8a3a2:28f2aabc
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=cdc2a9a9:63bbda21:f55e806c:a5371897
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=eca75495:9c9ce18c:d2bac587:f1e79d80
# This file was auto-generated on Wed, 04 Nov 2009 11:32:16 +0100
# by mkconf $Id$
And here is the output from /usr/share/mdadm/mkconf:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md1 UUID=c622dd79:496607cf:c230666b:5103eba0
ARRAY /dev/md2 UUID=24120323:8c54087c:c230666b:5103eba0
ARRAY /dev/md5 UUID=5d45b20c:04d8138f:c230666b:5103eba0
ARRAY /dev/md6 UUID=eef74de5:9267b2a1:c230666b:5103eba0
# This configuration was auto-generated on Sat, 25 Feb 2012 13:10:00 +1030
# by mkconf 3.1.4-1+8efb9d1+squeeze1
As I understand it, I need to replace the four lines that start with 'ARRAY' in /etc/mdadm/mdadm.conf with the four different 'ARRAY' lines from the /usr/share/mdadm/mkconf output.
When I did this and then ran update-initramfs -u, there were no more warnings.
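(For reference, the steps boil down to something like the following; keeping a backup first is my own precaution, not something the warning asks for:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak   # keep the old file in case of trouble
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf      # regenerate, or edit just the four ARRAY lines by hand as I did
update-initramfs -u                                  # rebuild the initramfs with the new config
)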
Is what I have done above correct? I am now terrified of rebooting the server for fear it will not come back up; as it is a remote dedicated server, that would certainly mean downtime and could be expensive to put right.
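(One check that seems worth running before a reboot is confirming that the freshly built initramfs really contains the updated mdadm.conf. A sketch; the leading wildcard is needed because initramfs-tools stores paths with a leading './':
zcat /boot/initrd.img-2.6.26-2-686-bigmem | cpio -i --quiet --to-stdout '*etc/mdadm/mdadm.conf'
)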
FOLLOW-UP (response to a question):
The output from mount:
/dev/md1 on / type ext3 (rw,usrquota,grpquota)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md2 on /boot type ext2 (rw)
/dev/md5 on /tmp type ext3 (rw)
/dev/md6 on /home type ext3 (rw,usrquota,grpquota)
mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Sun Aug 14 09:43:08 2011
Raid Level : raid1
Array Size : 31463232 (30.01 GiB 32.22 GB)
Used Dev Size : 31463232 (30.01 GiB 32.22 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Sat Feb 25 14:03:47 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : c622dd79:496607cf:c230666b:5103eba0
Events : 0.24
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
mdadm --detail /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Sun Aug 14 09:43:09 2011
Raid Level : raid1
Array Size : 104320 (101.89 MiB 106.82 MB)
Used Dev Size : 104320 (101.89 MiB 106.82 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Sat Feb 25 13:20:20 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 24120323:8c54087c:c230666b:5103eba0
Events : 0.30
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
mdadm --detail /dev/md3
mdadm: md device /dev/md3 does not appear to be active.
mdadm --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Sun Aug 14 09:43:09 2011
Raid Level : raid1
Array Size : 2104448 (2.01 GiB 2.15 GB)
Used Dev Size : 2104448 (2.01 GiB 2.15 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 5
Persistence : Superblock is persistent
Update Time : Sat Feb 25 14:09:03 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 5d45b20c:04d8138f:c230666b:5103eba0
Events : 0.30
    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       21        1      active sync   /dev/sdb5
mdadm --detail /dev/md6
/dev/md6:
Version : 0.90
Creation Time : Sun Aug 14 09:43:09 2011
Raid Level : raid1
Array Size : 453659456 (432.64 GiB 464.55 GB)
Used Dev Size : 453659456 (432.64 GiB 464.55 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 6
Persistence : Superblock is persistent
Update Time : Sat Feb 25 14:10:00 2012
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : eef74de5:9267b2a1:c230666b:5103eba0
Events : 0.31
    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6
FOLLOW-UP 2 (response to a question):
The contents of /etc/fstab:
/dev/md1 / ext3 defaults,usrquota,grpquota 1 1
devpts /dev/pts devpts mode=0620,gid=5 0 0
proc /proc proc defaults 0 0
#usbdevfs /proc/bus/usb usbdevfs noauto 0 0
/dev/cdrom /media/cdrom auto ro,noauto,user,exec 0 0
/dev/dvd /media/dvd auto ro,noauto,user,exec 0 0
#
#
#
/dev/md2 /boot ext2 defaults 1 2
/dev/sda3 swap swap pri=42 0 0
/dev/sdb3 swap swap pri=42 0 0
/dev/md5 /tmp ext3 defaults 0 0
/dev/md6 /home ext3 defaults,usrquota,grpquota 1 2