Worse is better. Is there an example?
- by J.F. Sebastian
Is there a widely used algorithm whose time complexity is worse than that of another known algorithm, yet which is the better choice in all practical situations (worse complexity, but better otherwise)?
An acceptable answer might take this form:

There are algorithms A and B that have O(N**2) and O(N) time complexity respectively, but B has such a large constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe.
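To make the hypothetical concrete, here is a minimal sketch of the crossover point; the constant 1e80 is an assumption standing in for "the number of atoms in the Universe", not a measured property of any real algorithm:

    # Sketch: an O(N) algorithm whose hidden constant is ~1e80 (purely illustrative)
    # never beats a plain O(N**2) algorithm for any input that could ever be stored.

    def cost_a(n):           # algorithm A: ~n**2 steps, constant factor ~1
        return n * n

    def cost_b(n, c=1e80):   # algorithm B: ~c*n steps, enormous constant factor
        return c * n

    # A is cheaper whenever n*n < c*n, i.e. for every n below the constant c itself.
    for n in (10**3, 10**6, 10**9, 10**12):
        print(n, cost_a(n) < cost_b(n))   # prints True for all of these sizes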
Highlights from the answers:
The simplex algorithm -- worst-case exponential time -- vs. known polynomial-time algorithms for convex optimization problems.
A naive median-finding algorithm -- worst-case O(N**2) -- vs. the known worst-case O(N) median-of-medians algorithm.
Backtracking regex engines -- worst-case exponential time -- vs. O(N) Thompson NFA-based engines (a timing sketch follows this list).
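As a quick illustration of the regex point (assuming CPython's backtracking re module; the pattern and input sizes are illustrative only), a pathological pattern makes the running time grow exponentially with input length, whereas a Thompson-NFA engine would stay linear:

    import re
    import time

    # Each 'a' can be matched by either branch of (a|a), so a failing match
    # forces the backtracking engine to try ~2**n possibilities.
    pattern = re.compile(r'(a|a)*b')

    for n in range(16, 25, 2):
        text = 'a' * n                  # contains no 'b', so the match must fail
        start = time.perf_counter()
        pattern.match(text)             # returns None, but only after exhaustive backtracking
        print(n, round(time.perf_counter() - start, 3), 'seconds')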
All these examples exploit the gap between worst-case and average-case behavior.
Are there examples that do not rely on the difference between worst-case and average-case scenarios?
Related:
The Rise of "Worse is Better". (For the purpose of this question, the phrase "Worse is Better" is used in a narrower sense -- namely, algorithmic time complexity -- than in the article.)
Python's Design Philosophy:
The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections).
This example would be the answer if there were no computers capable of storing such large collections (in other words, "large" is not large enough in this case).
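To illustrate the small-collection effect the ABC quote alludes to, here is a minimal sketch comparing an O(n) linear scan with an O(log n) binary search; the sizes are arbitrary and the timings machine-dependent, so this only hints at the constant-factor story and is not the ABC implementation:

    import bisect
    import timeit

    def linear_contains(seq, x):
        return x in seq                     # O(n), but a very cheap scan in C

    def binary_contains(seq, x):
        i = bisect.bisect_left(seq, x)      # O(log n), but more overhead per step
        return i < len(seq) and seq[i] == x

    for n in (4, 16, 64, 1024):
        data = list(range(n))
        probe = n - 1                       # worst case for the linear scan
        t_lin = timeit.timeit(lambda: linear_contains(data, probe), number=100000)
        t_bin = timeit.timeit(lambda: binary_contains(data, probe), number=100000)
        print(n, round(t_lin, 4), round(t_bin, 4))

On typical hardware the linear scan wins at the smallest sizes and loses at the largest one, which is exactly the trade-off described in the quote.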
The Coppersmith–Winograd algorithm for square matrix multiplication is a good example: it has the best known asymptotic complexity (as of 2008), yet it is inferior in practice to asymptotically worse algorithms. Any others?
From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."
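A back-of-the-envelope sketch of that remark, using a purely hypothetical constant factor C on the O(n**2.376) bound (the real constant is not pinned down here, and cache behavior matters too):

    C = 1e6          # hypothetical constant hidden in the O(n**2.376) bound
    OMEGA = 2.376    # Coppersmith-Winograd exponent

    def schoolbook_ops(n):
        return n ** 3

    def cw_ops(n):
        return C * n ** OMEGA

    # C*n**OMEGA drops below n**3 only when n > C**(1/(3 - OMEGA)).
    crossover = C ** (1 / (3 - OMEGA))
    print('crossover around n = %.1e' % crossover)    # roughly 4e9 for this C

    for n in (10**3, 10**4, 10**5):
        print(n, schoolbook_ops(n) < cw_ops(n))       # schoolbook is cheaper at these sizes

Even with this modest assumed constant, the crossover lands at matrices far too large to fit in memory, which is the point of the Wikipedia remark.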