author    Ard Biesheuvel <ard.biesheuvel@linaro.org>  2015-04-24 08:37:09 +0200
committer Herbert Xu <herbert@gondor.apana.org.au>  2015-04-24 14:09:01 +0200
commit    00425bb181c204c8f250fec122e2817a930e0286 (patch)
tree      80589d0267e418abe1ba3100536d0bfe39d3feb4 /arch/x86/crypto/sha512-avx2-asm.S
parent    crypto: fix broken crypto_register_instance() module handling (diff)
crypto: x86/sha512_ssse3 - fixup for asm function prototype change
Patch e68410ebf626 ("crypto: x86/sha512_ssse3 - move SHA-384/512 SSSE3 implementation to base layer") changed the prototypes of the core asm SHA-512 implementations so that they are compatible with the prototype used by the base layer.

However, in one instance, the register that was used for passing the input buffer was reused as a scratch register later on in the code. Since the input buffer param changed places with the digest param (which needs to be written back before the function returns), this resulted in the scratch register being dereferenced in a memory write operation, causing a GPF.

Fix this by changing the scratch register to use the same register as the input buffer param again.

Fixes: e68410ebf626 ("crypto: x86/sha512_ssse3 - move SHA-384/512 SSSE3 implementation to base layer")
Reported-by: Bobby Powers <bobbypowers@gmail.com>
Tested-by: Bobby Powers <bobbypowers@gmail.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
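For context, a minimal sketch of the register hazard described above. This is an illustration only: it assumes the System V AMD64 calling convention (first argument in %rdi, second in %rsi, third in %rdx), and the CTX/INP alias names are hypothetical, written in the same style the file uses for its own register aliases:

    # Sketch, GNU as syntax. After the prototype change, the digest/state
    # pointer is the 1st argument and the input buffer is the 2nd:
    CTX = %rdi    # digest pointer; must survive until the final writeback
    INP = %rsi    # input buffer; free to reuse once the blocks are loaded

    # BUG: y3 aliased %rdi, so every use of y3 as a scratch register
    # clobbered the digest pointer before the writeback:
    # y3 = %rdi
    # FIX: alias the input-buffer register instead, as the code did before
    # the prototype change, back when %rdi carried the input buffer:
    y3 = %rsi

With y3 = %rdi, the final stores of the digest go through a register holding leftover scratch values; that is the faulting memory write (GPF) the commit message describes, and why the one-line fix simply moves the y3 scratch alias back onto the input-buffer register.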
Diffstat (limited to 'arch/x86/crypto/sha512-avx2-asm.S')
-rw-r--r--  arch/x86/crypto/sha512-avx2-asm.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/crypto/sha512-avx2-asm.S b/arch/x86/crypto/sha512-avx2-asm.S
index a4771dcd1fcf..1f20b35d8573 100644
--- a/arch/x86/crypto/sha512-avx2-asm.S
+++ b/arch/x86/crypto/sha512-avx2-asm.S
@@ -79,7 +79,7 @@ NUM_BLKS = %rdx
 c = %rcx
 d = %r8
 e = %rdx
-y3 = %rdi
+y3 = %rsi
 TBL = %rbp