I am currently testing ZFS (OpenSolaris 2009.06) on an older file server to evaluate its use for our needs. Our current setup is as follows:
Dual core (2.4 GHz) with 4 GB RAM
3x SATA controllers with 11 HDDs (250 GB each) and one SSD (OCZ Vertex 2, 100 GB)
We want to evaluate the use of a L2ARC, so the current ZPOOL is:
$ zpool status
  pool: afstank
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        afstank      ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c11t0d0  ONLINE       0     0     0
            c11t1d0  ONLINE       0     0     0
            c11t2d0  ONLINE       0     0     0
            c11t3d0  ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c13t0d0  ONLINE       0     0     0
            c13t1d0  ONLINE       0     0     0
            c13t2d0  ONLINE       0     0     0
            c13t3d0  ONLINE       0     0     0
        cache
          c14t3d0    ONLINE       0     0     0
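For reference, a pool with this layout can be assembled like so (a sketch, not the exact commands from our setup notes):

$ zpool create afstank raidz1 c11t0d0 c11t1d0 c11t2d0 c11t3d0 \
                       raidz1 c13t0d0 c13t1d0 c13t2d0 c13t3d0
$ zpool add afstank cache c14t3d0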
Here, c14t3d0 is the SSD (of course). We run I/O tests with bonnie++ 1.03d; the file size is set to 200 GB (-s 200g) so that the test sample never fits completely into the ARC/L2ARC.
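The invocation looks roughly like this (the test directory and user are placeholders, not our actual paths):

$ bonnie++ -d /afstank/bonnie -s 200g -u root

The results without the SSD (average values over several runs, which showed no significant differences):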
write_chr       write_blk       rewrite         read_chr        read_blk        random seeks
101,998 kB/s    214,258 kB/s    96,673 kB/s     77,702 kB/s     254,695 kB/s    900 /s
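(For completeness: a cache device can be removed and re-added on the fly, e.g. to switch between the two configurations:

$ zpool remove afstank c14t3d0        # detach the L2ARC device
$ zpool add afstank cache c14t3d0     # re-attach it
)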
With the SSD it becomes interesting. My assumption was that, in the worst case, the results should be at least the same. While the write/read/rewrite rates do not differ, the random seek rate varies significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far); the average is 548 ± 200 /s, i.e. below the value without the SSD.
So, my questions are mainly:
Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So even if the SSD were impairing performance, the impairment should be the same in every bonnie++ run.
Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data ends up in the L2ARC, that random seeks on this data perform better, and that random seeks on the remaining data perform about the same as before.
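To get more data on this, I plan to watch the cache device and the L2ARC counters during the next runs, e.g. (the l2_* counter names in arcstats are what I expect on OpenSolaris; consider them an assumption):

$ zpool iostat -v afstank 5                 # per-vdev I/O, including the cache device
$ kstat -m zfs -n arcstats | grep l2_       # L2ARC size/hit/miss counters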