I am running two Dell R410 servers in the same rack of a data center. Both have the same hardware configuration, run Ubuntu 10.04, have the same packages installed, and run the same Java web servers. There is no other load.
One of them is consistently 20-30% faster than the other. I used dstat to check whether the slower machine has more context switches, more I/O, swapping, or anything else, but I see no reason for the difference. Under the same workload (no swapping, virtually no I/O), CPU usage and load are higher on one server.
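The same counters dstat reports can be cross-checked with plain vmstat from procps (installed by default on Ubuntu), in case the two tools disagree; something like:

```shell
# Sample system counters once per second, two samples.
# cs = context switches, si/so = swap in/out, wa = I/O wait.
vmstat 1 2
```

Running this side by side on both machines under load makes it easy to spot a counter that diverges.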
So the difference appears to be mainly CPU-bound. A simple CPU benchmark using sysbench (with all other load turned off) did show a difference, but it was only 6%, so maybe it is not only CPU but also memory performance.
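To separate the two, sysbench can benchmark memory throughput as well as CPU. Something along these lines (the prime limit and transfer size are illustrative, not tuned values; syntax is for the sysbench 0.4 series packaged on Ubuntu 10.04):

```shell
# CPU: time to compute primes up to a limit; compare total time between boxes.
sysbench --test=cpu --cpu-max-prime=20000 run

# Memory: sequential write throughput; compare the MB/sec figure between boxes.
sysbench --test=memory --memory-block-size=1M --memory-total-size=4G run
```

A gap in the memory figure that is larger than the 6% CPU gap would point at RAM configuration (speed, channel population) rather than the CPUs themselves.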
I tried to figure out whether the BIOS settings differ in some parameter and did a dump using dmidecode, but that showed no difference.
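The comparison boils down to diffing the two DMI dumps; the file names below are placeholders, and the last three lines only simulate the diff on fabricated data to show what a mismatch would look like:

```shell
# On each server (as root):  dmidecode > /tmp/dmi-$(hostname).txt
# Then copy both dumps to one machine and compare:
#   diff /tmp/dmi-serverA.txt /tmp/dmi-serverB.txt
# Simulated run on fabricated data:
printf 'BIOS Version: 1.9.0\nMemory Speed: 1333 MHz\n' > /tmp/dmi-a.txt
printf 'BIOS Version: 1.9.0\nMemory Speed: 1066 MHz\n' > /tmp/dmi-b.txt
diff /tmp/dmi-a.txt /tmp/dmi-b.txt || true   # diff exits 1 when files differ
```

Memory speed as reported in the "Memory Device" sections is worth a close look even when the BIOS sections match, since mismatched DIMM population can slow one box down.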
I compared /proc/cpuinfo: no difference. I compared the output of cpufreq-info: no difference.
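Beyond cpufreq-info, the per-core governor and current frequency can also be read straight from sysfs, which makes the state of every core easy to dump and diff between the two machines (standard sysfs paths; prints n/a where cpufreq is not available):

```shell
# Print scaling governor and current frequency for every core.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  gov=$(cat "$cpu/cpufreq/scaling_governor" 2>/dev/null || echo n/a)
  freq=$(cat "$cpu/cpufreq/scaling_cur_freq" 2>/dev/null || echo n/a)
  echo "${cpu##*/}: governor=$gov cur_freq=$freq"
done
```

If one box sits in "ondemand" at a low frequency while the other runs at full clock, that alone could explain a 20-30% gap.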
I am lost. What can I do to figure out what is going on?