author | Ard Biesheuvel <ardb@kernel.org> | 2020-07-08 11:11:18 +0200 |
---|---|---|
committer | Herbert Xu <herbert@gondor.apana.org.au> | 2020-07-16 13:49:04 +0200 |
commit | e79a31715193686e92dadb4caedfbb1f5de3659c | |
tree | fe9d960b52083a72242d8c1a5ecbd0db8b90ff3d /include | |
parent | crypto: lib/chacha20poly1305 - Add missing function declaration | |
crypto: x86/chacha-sse3 - use unaligned loads for state array
Because the x86 port does not support allocating objects on the stack
with an alignment that exceeds 8 bytes, we have a rather ugly hack in
the x86 ChaCha code to ensure that the state array is aligned to 16
bytes, allowing the SSE3 implementation of the algorithm to use aligned
loads.
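To make the hack described above concrete, here is a simplified userspace sketch (my own illustration, not the actual arch/x86 glue code): the state buffer is over-allocated by a few words, and a 16-byte-aligned 16-word view is carved out of it, much like the kernel does with PTR_ALIGN().

```c
/*
 * Simplified userspace illustration of the alignment hack (assumed
 * shape, not the actual kernel code): over-allocate the state so a
 * 16-byte-aligned, 16-word view can be carved out of a buffer that is
 * only guaranteed 8-byte stack alignment.
 */
#include <stdint.h>
#include <stdio.h>

#define CHACHA_BLOCK_SIZE	64
/* the extra 12 bytes leave room to round the pointer up to 16 */
#define CHACHA_STATE_WORDS	((CHACHA_BLOCK_SIZE + 12) / sizeof(uint32_t))

int main(void)
{
	uint32_t state_buf[CHACHA_STATE_WORDS];
	/* round up to the next 16-byte boundary, like PTR_ALIGN() would */
	uint32_t *state = (uint32_t *)(((uintptr_t)state_buf + 15) & ~(uintptr_t)15);

	printf("buffer at %p, aligned state at %p\n",
	       (void *)state_buf, (void *)state);
	return 0;
}
```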
Given that the performance benefit of using aligned loads appears to be
limited (~0.25% for 1k blocks using tcrypt on a Core i7-8650U), and that
this hack has leaked into the generic ChaCha code, let's just remove it.
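The "unaligned loads" half of the change lands in the x86 assembly, which is outside the hunk shown below. As a rough intrinsics-level analogy (a sketch of mine, not kernel code), the difference is the one between _mm_load_si128() and _mm_loadu_si128():

```c
/*
 * Intrinsics analogy for the assembly change (sketch only): with the
 * alignment hack gone, the state rows must be fetched with unaligned
 * loads.  _mm_loadu_si128() (movdqu) accepts any address, whereas
 * _mm_load_si128() (movdqa) requires a 16-byte-aligned pointer.
 */
#include <emmintrin.h>
#include <stdint.h>

static inline __m128i load_state_row(const uint32_t *state)
{
	/* safe whether or not 'state' happens to be 16-byte aligned */
	return _mm_loadu_si128((const __m128i *)state);
}
```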
Cc: Martin Willi <martin@strongswan.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Martin Willi <martin@strongswan.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Diffstat (limited to 'include')
-rw-r--r-- | include/crypto/chacha.h | 4 |
1 file changed, 0 insertions, 4 deletions
diff --git a/include/crypto/chacha.h b/include/crypto/chacha.h
index 2676f4fbd4c1..3a1c72fdb7cf 100644
--- a/include/crypto/chacha.h
+++ b/include/crypto/chacha.h
@@ -25,11 +25,7 @@
 #define CHACHA_BLOCK_SIZE 64
 #define CHACHAPOLY_IV_SIZE 12
 
-#ifdef CONFIG_X86_64
-#define CHACHA_STATE_WORDS ((CHACHA_BLOCK_SIZE + 12) / sizeof(u32))
-#else
 #define CHACHA_STATE_WORDS (CHACHA_BLOCK_SIZE / sizeof(u32))
-#endif
 
 /* 192-bit nonce, then 64-bit stream position */
 #define XCHACHA_IV_SIZE 32
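With the #ifdef gone, CHACHA_STATE_WORDS is 16 words (64 bytes) on every architecture, so callers can declare the state without padding or pointer games. A minimal standalone sketch of filling such a state (hypothetical helper, not code from the tree; the constants are the standard "expand 32-byte k" words):

```c
#include <stdint.h>
#include <string.h>

#define CHACHA_BLOCK_SIZE	64
#define CHACHA_STATE_WORDS	(CHACHA_BLOCK_SIZE / sizeof(uint32_t))

/* layout: 4 constant words, 8 key words, then counter/nonce words */
static void chacha_state_init_sketch(uint32_t state[CHACHA_STATE_WORDS],
				     const uint32_t key[8],
				     const uint32_t iv[4])
{
	static const uint32_t sigma[4] = {
		0x61707865, 0x3320646e, 0x79622d32, 0x6b206574
	};

	memcpy(&state[0], sigma, sizeof(sigma));
	memcpy(&state[4], key, 8 * sizeof(uint32_t));
	memcpy(&state[12], iv, 4 * sizeof(uint32_t));
}
```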