CUDA: When to use shared memory and when to rely on L1 caching?
Posted by Roger Dahl on Stack Overflow. Published on 2012-06-30T16:31:18Z.
Since Compute Capability 2.0 (Fermi) was released, I've been wondering whether there are any use cases left for shared memory. That is, when is it better to use shared memory than to just let L1 perform its magic in the background?
Is shared memory simply there to let algorithms designed for CC < 2.0 run efficiently without modifications?
To collaborate via shared memory, threads in a block write to shared memory and synchronize with __syncthreads(). Why not simply write to global memory (through L1) and synchronize with __threadfence_block()? The latter option should be easier to implement, since it doesn't have to keep track of values in two different locations, and it should be faster because there is no explicit copying from global to shared memory. Since the data gets cached in L1, threads don't have to wait for it to actually make it all the way out to global memory.
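For concreteness, here is a minimal sketch of the shared-memory collaboration pattern the question describes: each thread stages a value in shared memory, the block synchronizes with __syncthreads(), and threads then read values written by other threads. The kernel and array names are illustrative, not from the original post.

```cuda
// Illustrative sketch: block-wide collaboration through shared memory.
// Each thread writes its own slot, the barrier makes all writes visible,
// and then each thread reads a slot written by a *different* thread.
__global__ void reverse_block(const float *in, float *out)
{
    __shared__ float tile[256];            // one element per thread in the block

    int t = threadIdx.x;
    int i = blockIdx.x * blockDim.x + t;

    tile[t] = in[i];                       // each thread fills its own slot
    __syncthreads();                       // barrier: all writes now visible

    out[i] = tile[blockDim.x - 1 - t];     // read a value another thread wrote
}
```

Note that replacing this with global memory and __threadfence_block() is not a drop-in swap: __threadfence_block() only orders memory writes, it is not an execution barrier, so a __syncthreads() would still be needed before other threads can safely read the values.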
With shared memory, one is guaranteed that a value placed there remains there for the duration of the block. This is in contrast to values in L1, which get evicted if they are not used often enough. Are there any cases where it's better to cache such rarely used data in shared memory than to let L1 manage it based on the usage pattern the algorithm actually has?
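As an illustration of the guaranteed-residence point, here is a hypothetical sketch (the kernel and table are my own example, not from the post) in which a small, frequently consulted lookup table is pinned in shared memory so that it cannot be evicted, regardless of how L1 would treat those cache lines:

```cuda
// Illustrative sketch: staging a small table in shared memory so it is
// guaranteed resident for the lifetime of the block, unlike L1 lines,
// which may be evicted by other traffic.
__global__ void lookup_kernel(const float *table,          // 256-entry table in global memory
                              const unsigned char *keys,
                              float *out, int n)
{
    __shared__ float s_table[256];

    // Cooperative copy: the whole block fills the table once.
    for (int i = threadIdx.x; i < 256; i += blockDim.x)
        s_table[i] = table[i];
    __syncthreads();

    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
        out[idx] = s_table[keys[idx]];   // always served from shared memory
}
```

With L1 alone, the table entries would compete for cache lines with the streaming accesses to keys and out, and scattered, rarely repeated indices might never stay cached.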