Merge sort versus quick sort performance

Posted by Giorgio on Programmers
Published on 2012-04-26T20:21:46Z

I have implemented merge sort and quick sort in C (GCC 4.4.3 on Ubuntu 10.04, on a laptop with 4 GB of RAM and an Intel Duo CPU at 2 GHz), and I wanted to compare the performance of the two algorithms.

The prototypes of the sorting functions are:

void merge_sort(const char **lines, int start, int end);

void quick_sort(const char **lines, int start, int end);

i.e. both take an array of pointers to strings and sort the elements with index i in the range start <= i <= end (both bounds inclusive).
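For reference, a minimal driver along these lines (a sketch only; details such as the maximum line length and the input source are placeholders, not necessarily what I used) would be:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void merge_sort(const char **lines, int start, int end);
void quick_sort(const char **lines, int start, int end);

/* Hypothetical driver: read lines from stdin into an array of string
   pointers and sort them. */
int main(void)
{
    enum { MAX_LINES = 10000000, MAX_LEN = 256 };
    const char **lines = malloc(MAX_LINES * sizeof *lines);
    char buffer[MAX_LEN];
    int n = 0, i;

    while (n < MAX_LINES && fgets(buffer, sizeof buffer, stdin) != NULL) {
        buffer[strcspn(buffer, "\n")] = '\0';   /* strip the trailing newline */
        lines[n++] = strdup(buffer);
    }

    merge_sort(lines, 0, n - 1);    /* or quick_sort(lines, 0, n - 1); */

    for (i = 0; i < n; i++)
        puts(lines[i]);
    return 0;
}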

I have produced some files containing random strings with an average length of 4.5 characters. The test files range from 100 lines to 10000000 lines.
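For illustration, strings with that average length can be produced roughly like this (a sketch, not necessarily the exact generator I used; lengths are drawn uniformly from 1 to 8, which gives an average of 4.5):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv)
{
    int n = (argc > 1) ? atoi(argv[1]) : 10000;   /* number of lines */
    int i, j, len;

    srand((unsigned) time(NULL));
    for (i = 0; i < n; i++) {
        len = 1 + rand() % 8;          /* uniform in 1..8, average 4.5 */
        for (j = 0; j < len; j++)
            putchar('a' + rand() % 26);
        putchar('\n');
    }
    return 0;
}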

I was a bit surprised by the results because, even though I know that merge sort runs in O(n log(n)) while quick sort's worst case is O(n^2), I have often read that on average quick sort should be as fast as merge sort. However, my results are the following.

  • Up to 10000 strings, both algorithms perform equally well. For 10000 strings, both require about 0.007 s.
  • For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s.
  • For 1000000 strings merge sort takes 1.287 s against 5.233 s of quick sort.
  • For 5000000 strings merge sort takes 7.582 s against 118.240 s of quick sort.
  • For 10000000 strings merge sort takes 16.305 s against 1202.918 s of quick sort.

So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, its quadratic worst-case complexity becomes evident?

Here is a sketch of what I did. In the merge sort implementation, the partitioning consists of calling merge sort recursively on the two halves, i.e.

merge_sort(lines, start, (start + end) / 2);
merge_sort(lines, 1 + (start + end) / 2, end);

Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings themselves are stored only once, but I need twice as much memory for the pointers.
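A minimal sketch of that merge step (the names tmp and merge are placeholders, not necessarily those used in my code):

#include <string.h>

/* Global scratch array of pointers, assumed to be allocated once with
   the same length as the input array. */
static const char **tmp;

static void merge(const char **lines, int start, int mid, int end)
{
    int i = start, j = mid + 1, k = start;

    /* Merge the sorted runs lines[start..mid] and lines[mid+1..end]
       into tmp, comparing the strings the pointers refer to. */
    while (i <= mid && j <= end)
        tmp[k++] = (strcmp(lines[i], lines[j]) <= 0) ? lines[i++] : lines[j++];
    while (i <= mid)
        tmp[k++] = lines[i++];
    while (j <= end)
        tmp[k++] = lines[j++];

    /* Copy the merged pointers back to the original array. */
    for (k = start; k <= end; k++)
        lines[k] = tmp[k];
}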

For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the preceding elements in a single loop. After it has produced a partition of the type

start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

it calls itself recursively:

quick_sort(lines, start,          pivotIndex - 1);
quick_sort(lines, pivotIndex + 1, end);

Note that this quick sort implementation sorts the array in place and does not require additional memory, so it is more memory-efficient than the merge sort implementation.
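For reference, a partition function of the kind described above (last element as pivot, one scan over the preceding elements) could look like the following Lomuto-style sketch; it is an illustration, not necessarily identical to my code:

#include <string.h>

static void swap_ptr(const char **a, const char **b)
{
    const char *t = *a;
    *a = *b;
    *b = t;
}

/* Lomuto-style partition: use lines[end] as the pivot, scan the
   preceding elements once, and return the pivot's final index. */
static int partition(const char **lines, int start, int end)
{
    const char *pivot = lines[end];
    int i = start - 1, j;

    for (j = start; j < end; j++)
        if (strcmp(lines[j], pivot) <= 0)
            swap_ptr(&lines[++i], &lines[j]);
    swap_ptr(&lines[i + 1], &lines[end]);
    return i + 1;                       /* pivotIndex */
}

void quick_sort(const char **lines, int start, int end)
{
    int pivotIndex;
    if (start >= end)
        return;
    pivotIndex = partition(lines, start, end);
    quick_sort(lines, start, pivotIndex - 1);
    quick_sort(lines, pivotIndex + 1, end);
}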

So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect better performance of quick sort with respect to merge sort?
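For example, one variation I could try is choosing a random pivot instead of always taking the last element, which should avoid the quadratic behaviour on sorted or nearly-sorted inputs. A sketch of what that might look like (it reuses the partition function sketched above and is only an illustration):

#include <stdlib.h>

void quick_sort_random(const char **lines, int start, int end)
{
    int r, pivotIndex;
    const char *t;

    if (start >= end)
        return;

    /* Move a randomly chosen element into the last position so that the
       last-element partition effectively uses a random pivot. */
    r = start + rand() % (end - start + 1);
    t = lines[r];
    lines[r] = lines[end];
    lines[end] = t;

    pivotIndex = partition(lines, start, end);
    quick_sort_random(lines, start, pivotIndex - 1);
    quick_sort_random(lines, pivotIndex + 1, end);
}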

EDIT

Thank you for your answers.

My implementation is in-place and is based on the pseudo-code I found on Wikipedia, in the section "In-place version":

function partition(array, 'left', 'right', 'pivotIndex')

where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. To rule out the possibility that I am using a wrong implementation, I have uploaded the source code to GitHub (in case you would like to take a look at it).

Your answers seem to suggest that I am using the wrong test data. I will look into it, try out different test data sets, and report back as soon as I have some results.
