How to re-add a RAID-10 failed drive on Ubuntu?

I have a problem that I can't seem to solve. We have an Ubuntu server set up with RAID-10, and two of the drives dropped out of the array. When I try to re-add them using the following command:

mdadm --manage --re-add /dev/md2 /dev/sdc1

I get the following error message:

mdadm: Cannot open /dev/sdc1: Device or resource busy
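
For reference, this should show what is currently claiming the partition (assuming sysfs is mounted at /sys):

    # any block device (for example another md array or a dm device) currently
    # holding sdc1 open shows up as a symlink in the partition's holders directory
    ls /sys/block/sdc/sdc1/holders/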

When I do a "cat /proc/mdstat" I get the following:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid10 sdb1[0] sdd1[3]
    1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U]

md1 : active raid1 sda2[0] sdc2[1]
    468853696 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdc1[1]
    19530688 blocks [2/2] [UU]

unused devices: <none>
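
To double-check which array a given partition's superblock was last a member of, the superblock can be read directly (mdadm --examine prints the on-disk metadata, including the array UUID):

    /sbin/mdadm --examine /dev/sdc1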

When I run "/sbin/mdadm --detail /dev/md2" I get the following:

/dev/md2:
        Version : 00.90
  Creation Time : Mon Sep  5 23:41:13 2011
     Raid Level : raid10
     Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 4
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Thu Oct 25 09:25:08 2012
          State : active, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2, far=1
     Chunk Size : 64K

           UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb
         Events : 0.6688691

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
       2       0        0        2      removed
       3       8       49        3      active sync   /dev/sdd1
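
For completeness, once a partition that is genuinely free (not claimed by another array or by the LVM layer) has been identified, the usual way to return it to a degraded array is a plain --add; --re-add generally only succeeds while the member's superblock still matches the array (for example when a write-intent bitmap is in place), otherwise a full resync via --add is needed. This is only a sketch, and /dev/sdX1 is a placeholder rather than one of my actual devices:

    # add the partition back into the degraded array; mdadm rewrites the
    # superblock and starts a full resync
    /sbin/mdadm --manage /dev/md2 --add /dev/sdX1
    # watch rebuild progress
    watch cat /proc/mdstat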

Output of df -h is:

Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              441G  2.0G  416G   1% /
none                   32G  236K   32G   1% /dev
tmpfs                  32G     0   32G   0% /dev/shm
none                   32G  112K   32G   1% /var/run
none                   32G     0   32G   0% /var/lock
none                   32G     0   32G   0% /lib/init/rw
tmpfs                  64G  215M   63G   1% /mnt/vmware
none                  441G  2.0G  416G   1% /var/lib/ureadahead/debugfs
/dev/mapper/RAID10VG-RAID10LV
                  1.8T  139G  1.6T   8% /mnt/RAID10

When I do a "fdisk -l" I can see all the drives needed for the RAID-10.

The RAID-10 array sits underneath the LVM volume in /dev/mapper; could that be the reason the device is coming back as busy? Does anyone have suggestions on what I can try to get the drives back into the array?
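
In case it is relevant, these commands should show which physical volumes back the volume group and what device-mapper has stacked on top of the array (assuming the lvm2 and dmsetup tools are installed):

    # physical volumes and the volume group each one belongs to
    pvs
    # the device-mapper tree, showing which block devices RAID10VG-RAID10LV sits on
    dmsetup ls --tree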

Any help would be greatly appreciated.

Thanks!
