RAID-Z inaccessible after taking one disk offline
- by varesa
I have installed FreeNAS on a test server with 3x 1 TB drives, set up in RAID-Z. I took one of the disks offline (from the FreeNAS web UI), and the array became degraded, as I think it should.
The problem is that the array became inaccessible after that. I thought a RAID like this should keep running fine with one disk missing. At least, very soon after I offlined and pulled out the disk, the iSCSI share disappeared from an ESXi host's datastores. I also SSH'd into the FreeNAS server and tried simply running ls /mnt/raid (/mnt/raid/ being the mount point). The whole terminal froze, not accepting ^C or anything.
# zpool status -v
  pool: raid
 state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        raid                                            DEGRADED     1    30     0
          raidz1                                        DEGRADED     4    56     0
            gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    60     0
            gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    63     0
            gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  OFFLINE      0     0     0

errors: Permanent errors have been detected in the following files:

        /mnt/raid/
        raid/iscsivol:<0x0>
        raid/iscsivol:<0x1>
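One detail worth noticing in the table above: the READ and WRITE counters are nonzero on the two devices that are still ONLINE, not just on the pool as a whole. A minimal sketch of pulling that out of the config table programmatically (the table text is copied from the output above; the parsing is my own, not a FreeNAS or ZFS tool):

```python
# Sketch: flag rows of a `zpool status` config table that have nonzero
# READ/WRITE/CKSUM counters. Table text copied from the output above
# (header row omitted so every remaining row is name/state/3 counters).

STATUS_TABLE = """\
raid                                            DEGRADED     1    30     0
  raidz1                                        DEGRADED     4    56     0
    gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    60     0
    gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    63     0
    gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  OFFLINE      0     0     0
"""

def devices_with_errors(table):
    """Return (name, state, read, write, cksum) for rows with any nonzero counter."""
    flagged = []
    for line in table.splitlines():
        parts = line.split()
        if len(parts) != 5:
            continue  # skip blank or malformed rows
        name, state = parts[0], parts[1]
        read, write, cksum = (int(p) for p in parts[2:])
        if read or write or cksum:
            flagged.append((name, state, read, write, cksum))
    return flagged

for row in devices_with_errors(STATUS_TABLE):
    print(row)
```

This flags the pool, the raidz1 vdev, and both ONLINE disks; only the deliberately OFFLINE disk is clean.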
Have I misunderstood how RAID-Z works, or is there something else going on? It would not be nice to have the same thing happen on a production system...