Are there any concrete examples where a parallelizing compiler would provide real value?
- by jamie
Paul Graham argues that:
It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible?
Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes (a sketch follows below). Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we made it easier!). The holy grail is to be able to accelerate any program, be it MySQL, Garage Band, or Quicken. I'm looking for a middle ground:
is there a real-world problem that you have experienced where a "smart-enough" compiler would have provided a real benefit, i.e. one that someone would pay for?
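To make the web-app point concrete, here is a minimal sketch using Node's built-in cluster module (the port number and response text are arbitrary, chosen just for illustration): the primary process forks one worker per core, and each worker stays an ordinary single-threaded server, so this kind of workload scales out with no help from the compiler.

  import cluster from "node:cluster";
  import http from "node:http";
  import os from "node:os";

  if (cluster.isPrimary) {
    // Primary process: fork one worker per core; the OS schedules them across CPUs.
    for (let i = 0; i < os.cpus().length; i++) {
      cluster.fork();
    }
  } else {
    // Each worker is an ordinary single-threaded HTTP server.
    http.createServer((_req, res) => {
      res.end(`handled by worker ${process.pid}\n`);
    }).listen(8000); // port 8000 is arbitrary
  }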
A good answer describes a process that pegs a single core at 100% CPU for a painful period of time. That time might be 10 seconds if the task is meant to be quick, 500ms if the task is meant to be interactive, or 10 hours. Please describe such a problem.
Really, that's all I'm looking for: candidate areas for further investigation. (Hence, raytracing is off the list because all the low-hanging fruit have been feasted upon.)
I am not interested in why it cannot be done. There are a million people willing to point to the sound reasons why it cannot be done. Such answers are not useful.