How do I tell mdadm to start using a missing disk in my RAID5 array again?
- by Jon Cage
I have a 3-disk RAID 5 array running on my Ubuntu server. It had been running flawlessly for over a year, but I was recently forced to strip down, move and rebuild the machine.
Once I had it all back together and booted Ubuntu, I had some problems with disks not being detected. A couple of reboots later, I'd solved that issue. The problem now is that the 3-disk array shows up as degraded every time I boot. For some reason, Ubuntu seems to have created a new array and added the missing disk to it.
I've tried stopping the new 1-disk array and adding the missing disk back (roughly the commands sketched after the output below), but I'm struggling. On startup I get this:
root@uberserver:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md_d1 : inactive sdf1[2](S)
1953511936 blocks
md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
I have two RAID arrays, and the one that normally pops up as md1 isn't appearing.
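Roughly, what I've been trying is this (a sketch rather than the exact history; the partition name changes between boots, so /dev/sdd1 here stands for whichever partition ends up in the stray array):
mdadm --stop /dev/md_d1          # stop the stray single-disk array
mdadm /dev/md1 --add /dev/sdd1   # try to add its partition back into md1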
I read somewhere that calling mdadm --assemble --scan would re-assemble the missing array, so I first tried stopping the existing array that Ubuntu had started:
root@uberserver:~# mdadm --stop /dev/md_d1
mdadm: stopped /dev/md_d1
...and then tried to tell Ubuntu to pick the disks up again:
root@uberserver:~# mdadm --assemble --scan
mdadm: /dev/md/1 has been started with 2 drives (out of 3).
So that's started md1 again, but it's not picking up the disk from md_d1:
root@uberserver:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sde1[1] sdf1[2]
3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
md_d1 : inactive sdd1[0](S)
1953511936 blocks
md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
What's going wrong here? Why is Ubuntu trying to put sdd into a different array?
How do I get that missing disk back home again?
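If it helps, I can also dump the superblock from the stray partition, which should show which array UUID it thinks it belongs to; something like (again, /dev/sdd1 being whatever the stray partition is currently called):
mdadm --examine /dev/sdd1    # print the md superblock, including the Array UUID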
[Edit] - After adding md1 to /etc/mdadm/mdadm.conf (the line is sketched after the output below), it now tries to assemble the array on startup, but it's still missing the disk. If I tell it to assemble automatically, I get the impression it knows it needs sdd but can't use it:
root@uberserver:~# mdadm --assemble --scan
/dev/md1: File exists
mdadm: /dev/md/1 already active, cannot restart it!
mdadm: /dev/md/1 needed for /dev/sdd1...
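For reference, the line I added to /etc/mdadm/mdadm.conf looks something like this (the UUID here is just a placeholder, not the real one):
ARRAY /dev/md1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx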
What am I missing?