I have seen similar questions, but none of them solved my problem.
After a power cut, one of my RAID10 arrays (it had 4 disks) appears to be malfunctioning. I managed to bring the array back to an active (degraded) state, but I cannot mount it. I always get the same error:
mount: you must specify the filesystem type
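As a sanity check, the device can be probed for a filesystem signature instead of guessing a type; a minimal sketch, assuming the standard blkid and file utilities are installed:

# Ask libblkid what signature, if any, it sees on the array
blkid /dev/md0
# file -s reads the superblock area of the block device directly
file -s /dev/md0

If blkid prints nothing at all, the start of /dev/md0 simply carries no recognisable superblock.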
So, here is what I get when I type:
mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Tue Sep 1 11:00:40 2009
Raid Level : raid10
Array Size : 1465148928 (1397.27 GiB 1500.31 GB)
Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Jun 11 09:54:27 2012
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : near=2, far=1
Chunk Size : 64K
UUID : 1a02e789:c34377a1:2e29483d:f114274d
Events : 0.166
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 0 0 1 removed
2 8 48 2 active sync /dev/sdd
3 8 64 3 active sync /dev/sde
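Device 1 is shown as removed; judging from the --examine output further down, the missing member is probably /dev/sdc, though that is only my assumption. A sketch of how the member might be returned to the array once its superblock checks out:

# Verify the disk really carries md0's superblock before touching it
mdadm --examine /dev/sdc
# Try a gentle re-add first
mdadm /dev/md0 --re-add /dev/sdc
# If --re-add is refused (stale superblock), a plain --add re-adds the
# disk and triggers a full resync of that member
mdadm /dev/md0 --add /dev/sdc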
In /etc/mdadm/mdadm.conf I have:
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d

# This file was auto-generated...
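For what it is worth, those ARRAY lines can be cross-checked against the superblocks actually on the disks; a small sketch (keeping a backup first, since appending duplicate lines is easy):

# Print ARRAY lines derived from the on-disk superblocks
mdadm --examine --scan
# If they differ from mdadm.conf, keep a backup and append the fresh lines
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
mdadm --examine --scan >> /etc/mdadm/mdadm.conf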
So, my question is: how can I mount the md0 array (md1 mounted without any problem) in a way that preserves the existing data? One more thing: the fdisk -l command gives the following result:
Disk /dev/sdb: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x660a6799
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 88217 708603021 83 Linux
/dev/sdb2 88218 91201 23968980 5 Extended
/dev/sdb5 88218 91201 23968948+ 82 Linux swap / Solaris
Disk /dev/sdc: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0008f8ae
Device Boot Start End Blocks Id System
/dev/sdc1 1 88217 708603021 83 Linux
/dev/sdc2 88218 91201 23968980 5 Extended
/dev/sdc5 88218 91201 23968948+ 82 Linux swap / Solaris
Disk /dev/sdd: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x4be1abdb
Device Boot Start End Blocks Id System
Disk /dev/sde: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xa4d5632e
Device Boot Start End Blocks Id System
Disk /dev/sdf: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xdacb141c
Device Boot Start End Blocks Id System
Disk /dev/sdg: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xdacb141c
Device Boot Start End Blocks Id System
Disk /dev/md1: 750.1 GB, 750156251136 bytes
2 heads, 4 sectors/track, 183143616 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0xdacb141c
Device Boot Start End Blocks Id System
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: invalid flag 0x7b6e of partition table 5 will be corrected by w(rite)
Disk /dev/md0: 1500.3 GB, 1500312502272 bytes
255 heads, 63 sectors/track, 182402 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x660a6799
Device Boot Start End Blocks Id System
/dev/md0p1 * 1 88217 708603021 83 Linux
/dev/md0p2 88218 91201 23968980 5 Extended
/dev/md0p5 ? 121767 155317 269488144 20 Unknown
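Note that fdisk sees a partition table on /dev/md0 itself (md0p1, md0p2, ...), so presumably the filesystem lives on the first partition rather than on the bare device. A sketch of mounting it read-only to avoid any writes; the mount point is my assumption:

# Mount the array's first partition read-only
mount -o ro /dev/md0p1 /mnt
# If the md0p1 node does not exist, kpartx can create device-mapper
# entries for the partitions it finds on the array
kpartx -a /dev/md0
mount -o ro /dev/mapper/md0p1 /mnt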
And one more thing: when using the mdadm --examine command, here is the result:
mdadm -v --examine --scan /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sd
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d
devices=/dev/sdf
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d
devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
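To see each member's own view of the array (a stale member's Events counter would lag behind the others), something like this should work:

# Print each member's identity and event counter side by side
mdadm --examine /dev/sd[bcde] | grep -E '/dev/sd|Events'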
md0 has 3 active devices. Can someone instruct me how to solve this issue? If possible, I would like to avoid removing the faulty HDD. Please advise.
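In case it matters, a non-destructive way to check the filesystem once the right device node is known (fsck -n answers "no" to every repair prompt, so nothing is written; the partition name is my assumption):

# Read-only filesystem check; -n guarantees no changes are made
fsck -n /dev/md0p1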