mkfs Operation Takes Very Long on Linux Software RAID 5
- by Elmar Weber
I've set up a Linux software RAID level 5 consisting of four 2 TB disks. The array was created with a 64 KiB chunk size and no other configuration parameters.
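For reference, the ext3 alignment hints that would match this geometry can be derived from the chunk size and disk count; a small sketch (the 4 KiB filesystem block size is an assumption, since mkfs was run with defaults):

```shell
# Derive ext3 alignment hints from the array geometry.
# Values below come from this setup; the 4 KiB filesystem
# block size is an assumption (mke2fs default).
CHUNK_KB=64                  # mdadm chunk size in KiB
BLOCK_KB=4                   # assumed filesystem block size in KiB
DISKS=4                      # total disks in the RAID 5 array
DATA_DISKS=$((DISKS - 1))    # RAID 5 loses one disk's worth to parity

STRIDE=$((CHUNK_KB / BLOCK_KB))        # filesystem blocks per chunk
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))  # filesystem blocks per full data stripe

echo "stride=$STRIDE stripe-width=$STRIPE_WIDTH"
# These would be passed as, e.g.:
#   mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0
```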
After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried both XFS and ext3; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:
- Writing the inode tables runs fast until it reaches table 1053 (after roughly one second); then it writes about 50 tables, pauses for two seconds, writes the next 50, and so on (according to the console display).
- When I try to cancel the operation with Control+C, it hangs for half a minute before it is really cancelled.
The performance of the individual disks is very good: I've run bonnie++ on each one separately and got write/read values of around 95 / 110 MB/s. Even when I run bonnie++ on all drives in parallel, the values drop by only about 10 MB/s. So I'm excluding the hardware and I/O scheduling in general as the problem source.
I tried different values for stripe_cache_size and the readahead size without success, but I don't think they are very relevant for the filesystem creation operation.
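For completeness, this is roughly how I changed those tunables (the device name /dev/md0 and the specific values are just examples, not a recommendation):

```shell
# md stripe cache: size is in pages (4 KiB each) per member device.
# /dev/md0 and the value 8192 are example assumptions.
echo 8192 > /sys/block/md0/md/stripe_cache_size

# Readahead for the array device, in 512-byte sectors (here: 16 MiB).
blockdev --setra 32768 /dev/md0
blockdev --getra /dev/md0   # verify the new setting
```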
The server details:
Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
mdadm - v2.6.7.1
Does anyone have a suggestion on how to debug this further?
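In case it helps, this is the kind of monitoring I could run while mkfs is going (device names are assumptions for my setup; a sketch, not a fixed recipe):

```shell
# Watch per-disk utilization while mkfs runs in another terminal;
# one member pegged at 100% %util would point at a single slow disk.
iostat -x 2 sda sdb sdc sdd md0

# Make sure no resync/check is still running in the background.
cat /proc/mdstat
cat /sys/block/md0/md/sync_action

# Dump stack traces of blocked (uninterruptible) tasks to see
# where mkfs is actually stuck (requires sysrq enabled).
echo w > /proc/sysrq-trigger
dmesg | tail -n 40
```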