Unable to access Intel fake RAID 1 array in Fedora 14 after reboot
- by Sim
Hello everyone,
First, I am relatively new to Linux (but not to *nix). I have four disks assembled into the following Intel AHCI BIOS fake RAID arrays:
2x320GB RAID1 - used for operating systems (md126)
2x1TB RAID1 - used for data (md125)
I used the 320 GB array to install my operating system; the second array I didn't even select during the Fedora 14 installation. After successfully partitioning and installing Fedora, I tried to make the second array available. I was able to make it visible in Linux with mdadm --assemble --scan, after which I created one maximum-size partition and one maximum-size ext4 filesystem on it, mounted it, and used it. After a restart I got a few I/O errors during boot regarding md125, the filesystem on it could not be mounted, and I was dropped into a repair shell. I commented out the filesystem in fstab and the system booted. To my surprise, the array was marked as "auto-read-only":
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md125 : active (auto-read-only) raid1 sdc[1] sdd[0]
      976759808 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sdc[1](S) sdd[0](S)
      4514 blocks super external:imsm

md126 : active raid1 sda[1] sdb[0]
      312566784 blocks super external:/md1/0 [2/2] [UU]

md1 : inactive sdb[1](S) sda[0](S)
      4514 blocks super external:imsm

unused devices: <none>
[root@localhost ~]#
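From my browsing of the man page, I would have expected the auto-read-only state to be clearable by hand with something like this (I may well be misreading it):

mdadm --readwrite /dev/md125    # switch the array from auto-read-only back to read-write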
Also, the partition on it was not available as a device special file in /dev:
[root@localhost ~]# ls -l /dev/md125*
brw-rw---- 1 root disk 9, 125 Jan 6 15:50 /dev/md125
[root@localhost ~]#
But the partition is there according to fdisk:
[root@localhost ~]# fdisk -l /dev/md125
Disk /dev/md125: 1000.2 GB, 1000202043392 bytes
19 heads, 10 sectors/track, 10281682 cylinders, total 1953519616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1b238ea9
       Device Boot      Start         End      Blocks   Id  System
/dev/md125p1             2048  1953519615   976758784   83  Linux
[root@localhost ~]#
I tried to "activate" the array in different ways (I'm not experienced with mdadm, and the man page is gigantic, so I was only browsing it looking for my answer), but it was impossible: the array would stay in "auto-read-only" and the device special file for the partition would not appear in /dev. It was only after I recreated the partition via fdisk that it reappeared in /dev... until the next reboot.
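I assume recreating the partition simply forces the kernel to re-read the partition table on md125; if that is right, something non-destructive along these lines might have the same effect (I have not verified any of this):

blockdev --rereadpt /dev/md125    # ask the kernel to re-read the partition table
partprobe /dev/md125              # alternative, from the parted package
kpartx -a /dev/md125              # alternative: create partition mappings via device-mapper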
So, my question is:
How do I make the array automatically available after reboot?
Here is some additional information:
First, I am able to see the UUID of the array in blkid:
[root@localhost ~]# blkid
/dev/sdc: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdd: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
/dev/md126p1: UUID="60C8D9A7C8D97C2A" TYPE="ntfs"
/dev/md126p2: UUID="3d1b38a3-b469-4b7c-b016-8abfb26a5d7d" TYPE="ext4"
/dev/md126p3: UUID="1Msqqr-AAF8-k0wi-VYnq-uWJU-y0OD-uIFBHL" TYPE="LVM2_member"
/dev/mapper/vg00-rootlv: LABEL="_Fedora-14-x86_6" UUID="34cc1cf5-6845-4489-8303-7a90c7663f0a" TYPE="ext4"
/dev/mapper/vg00-swaplv: UUID="4644d857-e13b-456c-ac03-6f26299c1046" TYPE="swap"
/dev/mapper/vg00-homelv: UUID="82bd58b2-edab-4b4b-aec4-b79595ecd0e3" TYPE="ext4"
/dev/mapper/vg00-varlv: UUID="1b001444-5fdd-41b6-a59a-9712ec6def33" TYPE="ext4"
/dev/mapper/vg00-tmplv: UUID="bf7d2459-2b35-4a1c-9b81-d4c4f24a9842" TYPE="ext4"
/dev/md125: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda: TYPE="isw_raid_member"
/dev/md125p1: UUID="420adfdd-6c4e-4552-93f0-2608938a4059" TYPE="ext4"
[root@localhost ~]#
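In case it is relevant: blkid reports the same ext3 UUID on the raw members sdc and sdd as on /dev/md125 itself, while sda shows up as isw_raid_member. The RAID metadata actually stored on a member disk can be dumped with something like (output omitted here):

mdadm --examine /dev/sdc    # show the IMSM metadata recorded on this member disk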
Here is what /etc/mdadm.conf looks like:
[root@localhost ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md1 UUID=89f60dee:e46a251f:7475814b:d4cc19a9
ARRAY /dev/md126 UUID=a8775c90:cee66376:5310fc13:63bcba5b
ARRAY /dev/md125 UUID=b9a1149f:ae114fc8:a6000d77:354dc42a
[root@localhost ~]#
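I also wondered whether these anaconda-written ARRAY lines are complete for IMSM containers; if I read the man page correctly, mdadm can print its own view of the arrays in mdadm.conf format:

mdadm --examine --scan    # ARRAY lines for everything mdadm can find on disk
mdadm --detail --scan     # ARRAY lines for the currently running arrays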
Here is what /proc/mdstat looks like after I recreate the partition on the array so that it becomes available:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md125 : active raid1 sdc[1] sdd[0]
      976759808 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sdc[1](S) sdd[0](S)
      4514 blocks super external:imsm

md126 : active raid1 sda[1] sdb[0]
      312566784 blocks super external:/md1/0 [2/2] [UU]

md1 : inactive sdb[1](S) sda[0](S)
      4514 blocks super external:imsm

unused devices: <none>
[root@localhost ~]#
Detailed output for the array in question:
[root@localhost ~]# mdadm --detail /dev/md125
/dev/md125:
      Container : /dev/md127, member 0
     Raid Level : raid1
     Array Size : 976759808 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759940 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2

    Update Time : Fri Jan 7 00:38:00 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 30ebc3c2:b6a64751:4758d05c:fa8ff782

    Number   Major   Minor   RaidDevice State
       1       8       32        0      active sync   /dev/sdc
       0       8       48        1      active sync   /dev/sdd
[root@localhost ~]#
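The container itself can be inspected the same way, in case that output would help (not pasted here):

mdadm --detail /dev/md127    # details of the IMSM container that holds md125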
And /etc/fstab, with /data commented out (the filesystem that is on this array):
#
# /etc/fstab
# Created by anaconda on Thu Jan 6 03:32:40 2011
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg00-rootlv                     /          ext4     defaults         1 1
UUID=3d1b38a3-b469-4b7c-b016-8abfb26a5d7d   /boot      ext4     defaults         1 2
#UUID=420adfdd-6c4e-4552-93f0-2608938a4059  /data      ext4     defaults         0 1
/dev/mapper/vg00-homelv                     /home      ext4     defaults         1 2
/dev/mapper/vg00-tmplv                      /tmp       ext4     defaults         1 2
/dev/mapper/vg00-varlv                      /var       ext4     defaults         1 2
/dev/mapper/vg00-swaplv                     swap       swap     defaults         0 0
tmpfs                                       /dev/shm   tmpfs    defaults         0 0
devpts                                      /dev/pts   devpts   gid=5,mode=620   0 0
sysfs                                       /sys       sysfs    defaults         0 0
proc                                        /proc      proc     defaults         0 0
[root@localhost ~]#
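As a stopgap, instead of commenting the /data line out entirely, it could probably be marked noauto so that a failed assembly no longer drops the boot into the repair shell, and then be mounted by hand (just an idea, not a fix for the real problem):

UUID=420adfdd-6c4e-4552-93f0-2608938a4059   /data      ext4     defaults,noauto  0 0

and after boot:

mount /data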
Thanks in advance to everyone who even read this whole issue :-)