MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

Posted by Igor Polishchuk on Server Fault
I have two R710 servers with similar configurations. One in my office has an MD1220 attached. Another one, in the datacenter of my hosting services vendor, has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

Seq. writes on MD1220 in my office: 1.1 GB/s - bonnie++, 1.3GB/s - dd

Seq. writes on MD3200 at my vendor's: 240MB/s - bonnie++, 310MB/s - dd
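
The tests were along these lines (representative invocations; paths and sizes are examples, not the exact commands I ran):

  # sequential write with dd, bypassing the page cache
  dd if=/dev/zero of=/data/ddtest bs=1M count=32768 oflag=direct
  # bonnie++ with a file size well above RAM to avoid cache effects
  bonnie++ -d /data -s 200g -n 0 -u root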

Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, the well-performing environment is cheaper than the poorly performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully, somebody more familiar with DAS performance can look at this and tell me whether I'm missing something or my expectations are too high. To summarize, the question is: is it reasonable to expect about 100MB/s of sequential write throughput per mirrored pair of drives in RAID10 on the MD3200?
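
To spell out the arithmetic behind that expectation (assuming roughly 100-130MB/s of streaming writes per 10K/15K SAS spindle, which is what such drives typically sustain):

  20 drives in RAID10 = 10 mirrored pairs writing in parallel
  10 pairs x ~100-130MB/s per pair = ~1.0-1.3GB/s aggregate sequential writes

That matches what I actually see on the MD1220, but is 3-4x more than the MD3200 delivers.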

Is there any trick to enabling such performance on the MD3200 with its dual controllers, as opposed to the simpler MD1220 with a single H800 adapter?

More details about the configurations:

  1. A good one in my office:

Dell R710, 2x X5650 CPUs @ 2.67GHz (12 cores), 96GB DDR3; OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64

20x300GB 2.5" SAS 10K in a single RAID10 1MB chunk size on MD1220 + Dell H800 I/O controller with 1GB cache in the host

  2. The not-so-good one at my vendor's:

Dell R710, 2x L5520 CPUs @ 2.27GHz (8 cores), 144GB DDR3; OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64

20x146GB 2.5" SAS 15K in a single RAID10 512KB chunk size, Dell MD3200, 2 I/O controllers in array with 1GB cache each

Additional information.

I've also run the same tests on the same vendor's host, but with different storage: two RAID10 arrays of 14x146GB 15K RPM drives each, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200 despite having more drives.
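
For reference, striping two array LUNs at the OS level amounts to something like this (md software RAID0 shown as one way to do it; device names are placeholders):

  # stripe the two RAID10 LUNs into a single block device
  mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=512 /dev/sdb /dev/sdc
  mkfs.ext3 /dev/md0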

When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives in RAID1, PERC 6/i), I got about 128MB/s sequential writes. Just two internal drives gave me about half the throughput of the 20 drives on the MD3200.

The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput.

Thank you for looking into it.

Regards

Igor
