Is it too early to start designing for Task Parallel Library?
Posted by Joe Erickson on Stack Overflow, 2010-01-28.
I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it.
There is no doubt in my mind that we will eventually adopt TPL. What I am questioning is whether it makes sense to start using it as soon as Visual Studio 2010 and .NET 4.0 are released, or to wait a while longer.
Why Start Now?
- The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests show that it works well on today's multi-core CPUs (see the sketch after this list).
- I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache and RAM (there was no shared cache back then).
- Competitive advantage - some of our customers can never get enough performance and there is no doubt that we can build a faster product using TPL today.
- It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.
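For context, the kind of simple test I mean looks roughly like the sketch below (the workload and array size here are made up for illustration): a CPU-bound loop is handed to Parallel.For, which partitions the index range across the available cores.

    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;

    class TplSmokeTest
    {
        static void Main()
        {
            const int n = 10000000;          // hypothetical workload size
            var data = new double[n];

            var sw = Stopwatch.StartNew();

            // Parallel.For splits the index range into chunks and runs the
            // body delegate on worker threads drawn from the thread pool.
            Parallel.For(0, n, i =>
            {
                data[i] = Math.Sqrt(i) * Math.Sin(i);
            });

            sw.Stop();
            Console.WriteLine("Parallel.For over {0:N0} items took {1} ms",
                              n, sw.ElapsedMilliseconds);
        }
    }

On a quad-core machine a loop like this generally runs noticeably faster than its sequential equivalent, provided the per-iteration work is large enough to outweigh the scheduling overhead.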
Why Wait?
- Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with four cores that share a single level 3 cache today, and most likely a six-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously the number of cores will go up over time, but what about the architecture? As the number of cores grows, will they still share a cache? One issue with Nehalem is that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA?
- Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?
Limitations
- Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.