Optimizing Solaris 11 SHA-1 on Intel Processors
- by danx
SHA-1 is a "hash" or "digest" operation that produces a 160-bit (20-byte) checksum value from arbitrary data, such as a file.
It is intended to uniquely identify the data and to verify it hasn't been modified.
Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques.
This code has been incorporated into Solaris 11
and is available in the Solaris Crypto Framework
via the libmd(3LIB) library, the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1.
The optimized code is used automatically on systems with an x86 CPU that supports
SSSE3 (Supplemental SSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere, and Sandy Bridge families.
Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge).
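Applications don't see any of this directly; they just call the library and the right code is selected underneath. For example, here is a minimal libmd(3LIB) consumer, sketched under the assumption that libmd provides the SHA1Init/SHA1Update/SHA1Final interfaces declared in <sha1.h> (link with -lmd):

#include <stdio.h>
#include <sha1.h>       /* SHA1_CTX, SHA1Init(), SHA1Update(), SHA1Final() */

int
main(void)
{
    SHA1_CTX ctx;
    unsigned char digest[20];       /* 160-bit (20-byte) result */
    unsigned char msg[] = "abc";
    int i;

    SHA1Init(&ctx);
    SHA1Update(&ctx, msg, sizeof (msg) - 1);
    SHA1Final(digest, &ctx);

    /* should print a9993e364706816aba3e25717850c26c9cd0d89d, the FIPS test vector for "abc" */
    for (i = 0; i < 20; i++)
        (void) printf("%02x", digest[i]);
    (void) printf("\n");
    return (0);
}

Nothing in a program like this changes when the optimized code is used; the speedup happens entirely inside the library.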
Although SHA-1 is considered obsolete because of weaknesses found in the SHA-1 algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while longer.
Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort.
SHA-1 is still used heavily in SSL/TLS, for example.
And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS.
Optimizations
Review
SHA-1 operates by reading an arbitrary amount of data.
The data is read in 512-bit (64-byte) blocks
(the last block is padded in a specific way to ensure it's a full 64 bytes).
Each 64-byte block goes through 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block.
Each round has a 32-bit intermediate value, called W[i].
Here's how each round gets its W[i] and what it does with it:
1. The first 16 rounds, rounds 0 to 15, read the 512-bit block 32 bits at a time. Those 32 bits become the round's input, W[i].
2. The remaining rounds, rounds 16 to 79, use the results of previous rounds as input. Specifically, round i XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates that result left 1 bit; this becomes W[i].
3. The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operations that mix the 32-bit input W[i] and some constants into a 160-bit running state.
After the last 64-byte block has been processed, that 160-bit running state is the SHA-1 checksum.
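In C, the W[i] calculation described in steps 1 and 2 looks roughly like this (a sketch with my own function name, showing just the W[i] expansion and not the rest of the round):

#include <stdint.h>

#define ROTL32(x, n)    (((x) << (n)) | ((x) >> (32 - (n))))

/* Expand one 64-byte block into the 80 W[i] values described above. */
static void
sha1_schedule(const uint8_t block[64], uint32_t W[80])
{
    int i;

    /* Rounds 0-15: read the 512-bit block 32 bits at a time (big-endian). */
    for (i = 0; i < 16; i++) {
        W[i] = ((uint32_t)block[4 * i] << 24) |
            ((uint32_t)block[4 * i + 1] << 16) |
            ((uint32_t)block[4 * i + 2] << 8) |
            (uint32_t)block[4 * i + 3];
    }

    /* Rounds 16-79: XOR four earlier values and rotate left 1 bit. */
    for (i = 16; i < 80; i++)
        W[i] = ROTL32(W[i - 3] ^ W[i - 8] ^ W[i - 14] ^ W[i - 16], 1);
}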
Optimization: Vectorization
The W values for the first 16 rounds can be vectorized (computed in parallel)
because they don't depend on the output of a previous round.
As for the remaining rounds, because of step 2 above, computing W[i] depends on W[i-3], the result from round i-3, so at most 3 rounds can be vectorized at a time.
Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with:
W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1
W[i] can instead be initialized, for rounds 32 and later (where 32 previous results exist), as follows:
W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2
That means that 6 rounds could be vectorized at once,
with no additional calculations, instead of just 3!
This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it), and it exploits one of the weaknesses of SHA-1.
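The equivalence is easy to sanity-check in C. This sketch (function name mine) fills one array using the standard recurrence and another using the refactored recurrence, which needs 32 earlier values and so applies from round 32 on, and asserts that they agree:

#include <assert.h>
#include <stdint.h>

#define ROTL32(x, n)    (((x) << (n)) | ((x) >> (32 - (n))))

/* w0[] holds W[0..15], read directly from the message block. */
static void
check_schedules(const uint32_t w0[16])
{
    uint32_t a[80], b[80];
    int i;

    for (i = 0; i < 16; i++)
        a[i] = b[i] = w0[i];

    /* Standard SHA-1 recurrence for all of rounds 16-79. */
    for (i = 16; i < 80; i++)
        a[i] = ROTL32(a[i - 3] ^ a[i - 8] ^ a[i - 14] ^ a[i - 16], 1);

    /*
     * Refactored recurrence: standard form for rounds 16-31 (the new
     * form needs 32 earlier values), then the i-6/i-16/i-28/i-32 form
     * with a rotate of 2 from round 32 on.
     */
    for (i = 16; i < 32; i++)
        b[i] = ROTL32(b[i - 3] ^ b[i - 8] ^ b[i - 14] ^ b[i - 16], 1);
    for (i = 32; i < 80; i++)
        b[i] = ROTL32(b[i - 6] ^ b[i - 16] ^ b[i - 28] ^ b[i - 32], 2);

    for (i = 0; i < 80; i++)
        assert(a[i] == b[i]);
}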
Optimization: SSSE3
Intel SSSE3 makes use of the %xmm registers, each 128 bits wide (16 of them are available in 64-bit mode).
The four 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], and W[i-32], together fit in one %xmm register.
The following code snippet, from Max Locktyukhin's article and converted to AT&T assembly syntax, computes the W values for 4 rounds in parallel with just a dozen or so SSSE3 instructions:
movdqa W_minus_04, W_TMP
pxor W_minus_28, W // W equals W[i-32:i-29] before XOR
// W = W[i-32:i-29] ^ W[i-28:i-25]
palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from
// W[i-4:i-1] and W[i-8:i-5] vectors
pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3])
movdqa W, W_TMP // 4 dwords in W are rotated left by 2
psrld $30, W // rotate left by 2 W = (W >> 30) | (W << 2)
pslld $2, W_TMP
por W, W_TMP
movdqa W_TMP, W // four new W values W[i:i+3] are now calculated
paddd (K_XMM), W_TMP // adding 4 current round's values of K
movdqa W_TMP, (WK(i)) // storing for downstream GPR instructions to read
A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack.
This is best illustrated with a chart. Without vectorization, computing the rounds is like this
(each "R" represents 1 round of SHA-1 computation):
RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR
With vectorization, 4 rounds can be computed in parallel:
RRRRRRRRRRRRRRRRRRRR
RRRRRRRRRRRRRRRRRRRR
RRRRRRRRRRRRRRRRRRRR
RRRRRRRRRRRRRRRRRRRR
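For readers who find compiler intrinsics easier to follow than assembly, here is a rough C equivalent of the SSSE3 snippet above. The function name and argument layout are my own for illustration; the actual Solaris code keeps this in assembly:

#include <tmmintrin.h>  /* SSSE3 intrinsics (pulls in SSE2 as well) */

/*
 * Compute the four new values W[i:i+3] from earlier 4-wide windows of
 * W, add the round constant K, and store W+K for the scalar
 * (general-purpose register) round code to consume.
 */
static __m128i
sha1_sched_4rounds(__m128i w_minus_32, __m128i w_minus_28,
    __m128i w_minus_16, __m128i w_minus_08, __m128i w_minus_04,
    __m128i k_xmm, __m128i *wk_out)
{
    /* W[i-6:i-3], spliced from the W[i-4:i-1] and W[i-8:i-5] vectors */
    __m128i w_minus_06 = _mm_alignr_epi8(w_minus_04, w_minus_08, 8);

    /* W = W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13] ^ W[i-6:i-3] */
    __m128i w = _mm_xor_si128(w_minus_32, w_minus_28);
    w = _mm_xor_si128(w, w_minus_16);
    w = _mm_xor_si128(w, w_minus_06);

    /* rotate each of the 4 dwords left by 2 */
    w = _mm_or_si128(_mm_slli_epi32(w, 2), _mm_srli_epi32(w, 30));

    /* add the current round's K and store for the scalar rounds to read */
    *wk_out = _mm_add_epi32(w, k_xmm);

    return (w);     /* the four new values W[i:i+3] */
}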
Optimization: AVX
The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization.
SSSE3 instructions have two operands, an input and an output.
AVX allows three operands, two inputs and an output.
In many cases two SSSE3 instructions can be combined into one AVX instruction.
The difference is best illustrated with an example.
Consider these two instructions from the snippet above, which copy W_minus_04 into a temporary register before combining it with W_minus_08:
movdqa W_minus_04, W_TMP
palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3]
With AVX, the separate register copy isn't needed, because the result can be written to a third register; the pair becomes one instruction:
vpalignr $8, W_minus_08, W_minus_04, W_TMP // W_TMP = W[i-6:i-3]
This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet.
As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (supersets of the 128-bit %xmm0 - %xmm15 registers).
Can the %ymm registers be used to parallelize the code even more?
Optimization: Solaris-specific
In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code:
Increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well.
Optimized the encode functions, which byte-swap the input and output data, to copy and byte-swap 4 or 8 bytes at a time instead of 1 byte at a time (see the C sketch after this list).
Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-).
Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes, using 64-bit lengths.
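To illustrate the encode optimization mentioned above, here is the general idea in C (the helper name encode32 and the use of htonl() are mine; the actual Solaris routine differs in detail and also has an 8-bytes-at-a-time variant):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htonl() */

/*
 * Convert n 32-bit words from host byte order to the big-endian order
 * SHA-1 requires, 4 bytes at a time instead of 1 byte at a time.
 */
static void
encode32(uint8_t *out, const uint32_t *in, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++) {
        uint32_t be = htonl(in[i]);     /* byte swap on little-endian x86 */
        memcpy(out + 4 * i, &be, sizeof (be));
    }
}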
Performance
I measured performance on a Sun Ultra 27 (which has a Nehalem-class Intel Xeon W3570 microprocessor @ 3.2 GHz). Turbo mode was disabled for consistent performance measurement. Graphs are better than words and numbers, so here they are:
The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB).
I ran the digest command on a half GByte file in swapfs (/tmp) and execution time decreased from 1.35 seconds to 0.98 seconds.
The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128-byte buffer with 10,000 iterations. The results show throughput increased from 320,000 to 416,000 operations per second.
Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module.
The operations are on a 64-Kbyte buffer with 100 iterations.
The results show that for 1 kernel thread, throughput increased from 410 to 600 MBytes/second.
For 8 kernel threads, throughput increased from 1540 to 1940 MBytes/second.
Availability
This code is in Solaris 11 FCS.
It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel.
You must be running hardware that supports Intel's SSSE3 instructions
(for example, Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures).
The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example,
nehalem $ isainfo -v
64-bit amd64 applications
sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov
amd_sysc cx8 tsc fpu
32-bit i386 applications
sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov
sep cx8 tsc fpu
If the output also shows "avx", Solaris executes the even more optimized 3-operand AVX instructions for SHA-1 mentioned above:
sandybridge $ isainfo -v
64-bit amd64 applications
avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3
sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3
sse2 sse fxsr mmx cmov sep cx8 tsc fpu
No special configuration or setup is needed to take advantage of this code.
The Solaris libraries and kernel automatically determine whether they are running on an SSSE3- or AVX-capable machine
and execute the code correctly tuned for that microprocessor.
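For the curious, a user program can make a similar check itself. This is a minimal sketch assuming the getisax(2) interface and the AV_386_SSSE3 and AV_386_AVX bits from <sys/auxv.h>:

#include <stdio.h>
#include <sys/types.h>
#include <sys/auxv.h>   /* getisax(), AV_386_SSSE3, AV_386_AVX */

int
main(void)
{
    uint32_t isa = 0;

    (void) getisax(&isa, 1);

    if (isa & AV_386_AVX)
        (void) printf("AVX available\n");
    else if (isa & AV_386_SSSE3)
        (void) printf("SSSE3 available\n");
    else
        (void) printf("neither SSSE3 nor AVX; the generic SHA-1 code is used\n");
    return (0);
}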
Summary
The Solaris 11 Crypto Framework,
via the sha1 kernel module and libmd(3LIB) and libpkcs11(3LIB) libraries,
incorporated a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors.
As with other Solaris optimizations, it comes automatically "under the hood"
with the current Solaris release.
References
"Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010).
The source of the SHA-1 optimizations used in Solaris.
"SHA-1", Wikipedia
A good overview of SHA-1.
FIPS 180-1, the SHA-1 standard (NIST, 1995).
NIST Comments on Cryptanalytic Attacks on SHA-1
(2005, revised 2006)