Hoping someone can point me in the right direction with some iSCSI performance issues I'm having. I'm running Openfiler 2.99 on an older ProLiant DL360 G5: dual Xeon processors, 6GB ECC RAM, an Intel Gigabit Server NIC, and a SAS controller with three 10K SAS drives in RAID 5. When I run a simple write test directly on the box, the performance is very good:
[root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 4.64468 s, 226 MB/s
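I realize a plain dd run like this can be flattered by the write cache, so if it helps I can rerun it with conv=fdatasync (or oflag=direct) to force the data to disk before dd reports a rate, something like:

[root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000 conv=fdatasync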
So I created a LUN, attached it to another box I have running ESXi 5.1 (Core i7 2600K, 16GB RAM, Intel Gigabit Server NIC), and created a new datastore. Once the datastore was created I was able to build and start a VM running CentOS with 2GB of RAM and 16GB of disk space. The OS installed fine and I'm able to use it, but when I run the same test inside the VM I get dramatically different results:
[root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 26.8786 s, 39.0 MB/s
[root@localhost ~]#
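To rule out the network itself, my plan is to run a quick iperf test between the CentOS VM and the Openfiler box (assuming I can get iperf installed on both ends), using the same 10.0.0.1 address as in the vmkping test below:

On the Openfiler box:
[root@localhost ~]# iperf -s
From the CentOS VM:
[root@localhost ~]# iperf -c 10.0.0.1 -t 30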
Both servers have brand new Intel Server NICs, and I have jumbo frames enabled on the switch, on the Openfiler box, and on the VMkernel adapter on the ESXi host. I can confirm this is set up properly by using the vmkping command from the ESXi host:
~ # vmkping 10.0.0.1 -s 9000
PING 10.0.0.1 (10.0.0.1): 9000 data bytes
9008 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=0.533 ms
9008 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.736 ms
9008 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.570 ms
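If it matters, I could also repeat that with the don't-fragment flag set and the payload sized to leave room for the ICMP/IP headers, which I understand is the stricter way to verify jumbo frames end to end:

~ # vmkping -d -s 8972 10.0.0.1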
The only thing I haven't tried as far as networking goes is bonding two interfaces together. I'm open to trying that down the road but for now I am trying to keep things simple.
I know this is a pretty modest setup and I'm not expecting top-notch performance, but I would like to see 90-100 MB/s. Any ideas?