path: root/arch/arm64/crypto
Commit message (Author, Date; files changed, lines removed/added)
* crypto: arm64/gcm - use inline helper to suppress indirect calls (Ard Biesheuvel, 2020-07-09; 1 file, -39/+46)

  Introduce an inline wrapper for ghash_do_update() that incorporates the
  indirect call to the asm routine that is passed as an argument, and keep
  the non-SIMD fallback code out of line. This ensures that all references
  to the function pointer are inlined where the address is taken, removing
  the need for any indirect calls to begin with.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
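  The mechanism is roughly the following (a sketch, not the driver's exact
  signatures): because the wrapper is __always_inline and every caller
  passes the asm routine as a compile-time constant, the compiler turns
  each use of the function pointer into a direct call.

      static __always_inline
      void ghash_do_simd_update(int blocks, u64 dg[], const char *src,
                                struct ghash_key *key, const char *head,
                                void (*simd_update)(int blocks, u64 dg[],
                                                    const char *src,
                                                    u64 const h[][2],
                                                    const char *head))
      {
              if (likely(crypto_simd_usable())) {
                      kernel_neon_begin();
                      /* inlined at each call site -> direct call */
                      simd_update(blocks, dg, src, key->h, head);
                      kernel_neon_end();
              } else {
                      /* non-SIMD fallback stays out of line */
                      ghash_do_update(blocks, dg, src, key, head);
              }
      }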
* crypto: arm64/gcm - use variably sized key struct (Ard Biesheuvel, 2020-07-09; 1 file, -28/+21)

  Now that the ghash and gcm drivers are split, we no longer need to
  allocate a key struct for the former that carries powers of H that are
  only used by the latter. Also, take this opportunity to clean up the
  code a little bit.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/gcm - disentangle ghash and gcm setkey() routines (Ard Biesheuvel, 2020-07-09; 1 file, -25/+22)

  The remaining ghash implementation does not support aggregation, and so
  there is no point in including the precomputed powers of H in the key
  struct. So move that into the GCM setkey routine, and get rid of the
  shared sub-routine entirely.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/ghash - drop PMULL based shash (Ard Biesheuvel, 2020-07-09; 1 file, -78/+12)

  There are two ways to implement SIMD accelerated GCM on arm64:
  - using the PMULL instructions for carryless 64x64->128 multiplication,
    in which case the architecture guarantees that the AES instructions
    are available as well, and so we can use the AEAD implementation that
    combines both,
  - using the PMULL instructions for carryless 8x8->16 bit multiplication,
    which is implemented as a shash, and can be combined with any ctr(aes)
    implementation by the generic GCM AEAD template driver.

  So let's drop the 64x64->128 shash driver, which is never needed for
  GCM, and not suitable for use anywhere else.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2020-06-01; 4 files, -6/+2)

  Pull crypto updates from Herbert Xu:
   "API:
     - Introduce crypto_shash_tfm_digest() and use it wherever possible.
     - Fix use-after-free and race in crypto_spawn_alg.
     - Add support for parallel and batch requests to crypto_engine.

    Algorithms:
     - Update jitter RNG for SP800-90B compliance.
     - Always use jitter RNG as seed in drbg.

    Drivers:
     - Add Arm CryptoCell driver cctrng.
     - Add support for SEV-ES to the PSP driver in ccp"

  * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (114 commits)
      crypto: hisilicon - fix driver compatibility issue with different versions of devices
      crypto: engine - do not requeue in case of fatal error
      crypto: cavium/nitrox - Fix a typo in a comment
      crypto: hisilicon/qm - change debugfs file name from qm_regs to regs
      crypto: hisilicon/qm - add DebugFS for xQC and xQE dump
      crypto: hisilicon/zip - add debugfs for Hisilicon ZIP
      crypto: hisilicon/hpre - add debugfs for Hisilicon HPRE
      crypto: hisilicon/sec2 - add debugfs for Hisilicon SEC
      crypto: hisilicon/qm - add debugfs to the QM state machine
      crypto: hisilicon/qm - add debugfs for QM
      crypto: stm32/crc32 - protect from concurrent accesses
      crypto: stm32/crc32 - don't sleep in runtime pm
      crypto: stm32/crc32 - fix multi-instance
      crypto: stm32/crc32 - fix run-time self test issue.
      crypto: stm32/crc32 - fix ext4 chksum BUG_ON()
      crypto: hisilicon/zip - Use temporary sqe when doing work
      crypto: hisilicon - add device error report through abnormal irq
      crypto: hisilicon - remove codes of directly report device errors through MSI
      crypto: hisilicon - QM memory management optimization
      crypto: hisilicon - unify initial value assignment into QM
      ...
| * crypto: lib/sha1 - remove unnecessary includes of linux/cryptohash.h (Eric Biggers, 2020-05-08; 2 files, -2/+0)

  <linux/cryptohash.h> sounds very generic and important, like it's the
  header to include if you're doing cryptographic hashing in the kernel.
  But actually it only includes the library implementation of the SHA-1
  compression function (not even the full SHA-1). This should basically
  never be used anymore; SHA-1 is no longer considered secure, and there
  are much better ways to do cryptographic hashing in the kernel.

  Most files that include this header don't actually need it. So in
  preparation for removing it, remove all these unneeded includes of it.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: arm64/aes-glue - use crypto_shash_tfm_digest() (Eric Biggers, 2020-05-08; 1 file, -3/+1)

  Instead of manually allocating a 'struct shash_desc' on the stack and
  calling crypto_shash_digest(), switch to using the new helper function
  crypto_shash_tfm_digest() which does this for us.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
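  Roughly, per call site (hash, data, len and out stand in for the
  surrounding glue code's variables; a sketch, not the exact diff):

      /* before: descriptor set up by hand on the stack */
      {
              SHASH_DESC_ON_STACK(desc, hash);
              int err;

              desc->tfm = hash;
              err = crypto_shash_digest(desc, data, len, out);
      }

      /* after: the helper manages the descriptor internally */
      err = crypto_shash_tfm_digest(hash, data, len, out);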
| * crypto: arm64 - Consistently enable extension (Mark Brown, 2020-04-24; 1 file, -1/+1)

  Currently most of the crypto files enable the crypto extension using
  the .arch directive but crct10dif-ce-core.S uses .cpu instead. Move
  that over to .arch for consistency.

  Signed-off-by: Mark Brown <broonie@kernel.org>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: arch/nhpoly1305 - process in explicit 4k chunks (Jason A. Donenfeld, 2020-04-30; 1 file, -1/+1)

  Rather than chunking via PAGE_SIZE, this commit changes the arch
  implementations to chunk in explicit 4k parts, so that calculations on
  maximum acceptable latency don't suddenly become invalid on platforms
  where PAGE_SIZE isn't 4k, such as arm64.

  Fixes: 0f961f9f670e ("crypto: x86/nhpoly1305 - add AVX2 accelerated NHPoly1305")
  Fixes: 012c82388c03 ("crypto: x86/nhpoly1305 - add SSE2 accelerated NHPoly1305")
  Fixes: a00fa0c88774 ("crypto: arm64/nhpoly1305 - add NEON-accelerated NHPoly1305")
  Fixes: 16aae3595a9d ("crypto: arm/nhpoly1305 - add NEON-accelerated NHPoly1305")
  Cc: stable@vger.kernel.org
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Reviewed-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: arch/lib - limit simd usage to 4k chunks (Jason A. Donenfeld, 2020-04-30; 2 files, -7/+22)

  The initial Zinc patchset, after some mailing list discussion, contained
  code to ensure that kernel_fpu_enable would not be kept on for more than
  a 4k chunk, since it disables preemption. The choice of 4k isn't totally
  scientific, but it's not a bad guess either, and it's what's used in
  both the x86 poly1305, blake2s, and nhpoly1305 code already (in the form
  of PAGE_SIZE, which this commit corrects to be explicitly 4k for the
  former two).

  Ard did some back of the envelope calculations and found that at
  5 cycles/byte (overestimate) on a 1ghz processor (pretty slow), 4k means
  we have a maximum preemption disabling of 20us, which Sebastian
  confirmed was probably a good limit.

  Unfortunately the chunking appears to have been left out of the final
  patchset that added the glue code. So, this commit adds it back in.

  Fixes: 84e03fa39fbe ("crypto: x86/chacha - expose SIMD ChaCha routine as library function")
  Fixes: b3aad5bad26a ("crypto: arm64/chacha - expose arm64 ChaCha routine as library function")
  Fixes: a44a3430d71b ("crypto: arm/chacha - expose ARM ChaCha routine as library function")
  Fixes: d7d7b8535662 ("crypto: x86/poly1305 - wire up faster implementations for kernel")
  Fixes: f569ca164751 ("crypto: arm64/poly1305 - incorporate OpenSSL/CRYPTOGAMS NEON implementation")
  Fixes: a6b803b3ddc7 ("crypto: arm/poly1305 - incorporate OpenSSL/CRYPTOGAMS NEON implementation")
  Fixes: ed0356eda153 ("crypto: blake2s - x86_64 SIMD implementation")
  Cc: Eric Biggers <ebiggers@google.com>
  Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Cc: stable@vger.kernel.org
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
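  The shape of the restored chunking, using the arm64 ChaCha glue as the
  example (simplified sketch based on the description above):

      do {
              unsigned int todo = min_t(unsigned int, bytes, SZ_4K);

              /* preemption is disabled for at most 4k of input */
              kernel_neon_begin();
              chacha_doneon(state, dst, src, todo, nrounds);
              kernel_neon_end();

              bytes -= todo;
              src += todo;
              dst += todo;
      } while (bytes);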
* Merge tag 'spdx-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx (Linus Torvalds, 2020-04-03; 1 file, -0/+1)

  Pull SPDX updates from Greg KH:
   "Here are three SPDX patches for 5.7-rc1.

    One fixes up the SPDX tag for a single driver, while the other two go
    through the tree and add SPDX tags for all of the .gitignore files as
    needed.

    Nothing too complex, but you will get a merge conflict with your
    current tree, that should be trivial to handle (one file modified by
    two things, one file deleted.)

    All three of these have been in linux-next for a while, with no
    reported issues other than the merge conflict"

  * tag 'spdx-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx:
      ASoC: MT6660: make spdxcheck.py happy
      .gitignore: add SPDX License Identifier
      .gitignore: remove too obvious comments
| * .gitignore: add SPDX License Identifier (Masahiro Yamada, 2020-03-25; 1 file, -0/+1)

  Add SPDX License Identifier to all .gitignore files.

  Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* | Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2020-04-01; 4 files, -0/+45)

  Pull crypto updates from Herbert Xu:
   "API:
     - Fix out-of-sync IVs in self-test for IPsec AEAD algorithms

    Algorithms:
     - Use formally verified implementation of x86/curve25519

    Drivers:
     - Enhance hwrng support in caam
     - Use crypto_engine for skcipher/aead/rsa/hash in caam
     - Add Xilinx AES driver
     - Add uacce driver
     - Register zip engine to uacce in hisilicon
     - Add support for OCTEON TX CPT engine in marvell"

  * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (162 commits)
      crypto: af_alg - bool type cosmetics
      crypto: arm[64]/poly1305 - add artifact to .gitignore files
      crypto: caam - limit single JD RNG output to maximum of 16 bytes
      crypto: caam - enable prediction resistance in HRWNG
      bus: fsl-mc: add api to retrieve mc version
      crypto: caam - invalidate entropy register during RNG initialization
      crypto: caam - check if RNG job failed
      crypto: caam - simplify RNG implementation
      crypto: caam - drop global context pointer and init_done
      crypto: caam - use struct hwrng's .init for initialization
      crypto: caam - allocate RNG instantiation descriptor with GFP_DMA
      crypto: ccree - remove duplicated include from cc_aead.c
      crypto: chelsio - remove set but not used variable 'adap'
      crypto: marvell - enable OcteonTX cpt options for build
      crypto: marvell - add the Virtual Function driver for CPT
      crypto: marvell - add support for OCTEON TX CPT engine
      crypto: marvell - create common Kconfig and Makefile for Marvell
      crypto: arm/neon - memzero_explicit aes-cbc key
      crypto: bcm - Use scnprintf() for avoiding potential buffer overflow
      crypto: atmel-i2c - Fix wakeup fail
      ...
| * | crypto: arm/neon - memzero_explicit aes-cbc key (Torsten Duwe, 2020-03-20; 1 file, -0/+1)

  At function exit, do not leave the expanded key in the rk struct which
  got allocated on the stack.

  Signed-off-by: Torsten Duwe <duwe@suse.de>
  Acked-by: Will Deacon <will@kernel.org>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
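  The idiom, roughly (the rk name comes from the commit; the surrounding
  code and the use_round_keys() consumer are illustrative):

      struct crypto_aes_ctx rk;
      int err;

      err = aes_expandkey(&rk, in_key, key_len);
      if (!err)
              use_round_keys(&rk);    /* hypothetical consumer */

      /* wipe the expanded round keys before the stack frame is reused;
       * unlike memset(), memzero_explicit() cannot be optimized away */
      memzero_explicit(&rk, sizeof(rk));
      return err;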
| * | crypto: arm64/sha-ce - implement export/import (Corentin Labbe, 2020-03-06; 2 files, -0/+43)

  When an ahash algorithm falls back to another ahash and that fallback is
  shaXXX-CE, doing export/import leads to errors like this:

      alg: ahash: sha1-sun8i-ce export() overran state buffer on test
      vector 0, cfg="import/export"

  This is because the descsize of shaxxx-ce is larger than struct
  shaxxx_state by one u32. To fix this, implement export/import routines
  that strip the extra 'finalize' field, instead of relying on the
  generic export/import.

  Fixes: 6ba6c74dfc6b ("arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions")
  Fixes: 2c98833a42cd ("arm64/crypto: SHA-1 using ARMv8 Crypto Extensions")
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
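  A sketch of the shape of the fix for the SHA-256 case (matching the
  description above; details in the driver may differ):

      static int sha256_ce_export(struct shash_desc *desc, void *out)
      {
              struct sha256_ce_state *sctx = shash_desc_ctx(desc);

              /* export only the generic sha256_state, not 'finalize' */
              memcpy(out, &sctx->sst, sizeof(struct sha256_state));
              return 0;
      }

      static int sha256_ce_import(struct shash_desc *desc, const void *in)
      {
              struct sha256_ce_state *sctx = shash_desc_ctx(desc);

              memcpy(&sctx->sst, in, sizeof(struct sha256_state));
              sctx->finalize = 0;
              return 0;
      }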
| * | crypto: arm64/poly1305 - ignore build files (Matteo Croce, 2020-02-13; 1 file, -0/+1)

  Add arch/arm64/crypto/poly1305-core.S to .gitignore as it's built from
  poly1305-core.S_shipped.

  Fixes: f569ca164751 ("crypto: arm64/poly1305 - incorporate OpenSSL/CRYPTOGAMS NEON implementation")
  Signed-off-by: Matteo Croce <mcroce@redhat.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2020-03-31; 4 files, -36/+36)

  Pull arm64 updates from Catalin Marinas:
   "The bulk is in-kernel pointer authentication, activity monitors and
    lots of asm symbol annotations. I also queued the sys_mremap() patch
    commenting the asymmetry in the address untagging.

    Summary:
     - In-kernel Pointer Authentication support (previously only offered
       to user space).
     - ARM Activity Monitors (AMU) extension support allowing better CPU
       utilisation numbers for the scheduler (frequency invariance).
     - Memory hot-remove support for arm64.
     - Lots of asm annotations (SYM_*) in preparation for the in-kernel
       Branch Target Identification (BTI) support.
     - arm64 perf updates: ARMv8.5-PMU 64-bit counters, refactoring the
       PMU init callbacks, support for new DT compatibles.
     - IPv6 header checksum optimisation.
     - Fixes: SDEI (software delegated exception interface) double-lock
       on hibernate with shared events.
     - Minor clean-ups and refactoring: cpu_ops accessor,
       cpu_do_switch_mm() converted to C, cpufeature finalisation helper.
     - sys_mremap() comment explaining the asymmetric address untagging
       behaviour"

  * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (81 commits)
      mm/mremap: Add comment explaining the untagging behaviour of mremap()
      arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
      arm64: Introduce get_cpu_ops() helper function
      arm64: Rename cpu_read_ops() to init_cpu_ops()
      arm64: Declare ACPI parking protocol CPU operation if needed
      arm64: move kimage_vaddr to .rodata
      arm64: use mov_q instead of literal ldr
      arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
      lkdtm: arm64: test kernel pointer authentication
      arm64: compile the kernel with ptrauth return address signing
      kconfig: Add support for 'as-option'
      arm64: suspend: restore the kernel ptrauth keys
      arm64: __show_regs: strip PAC from lr in printk
      arm64: unwind: strip PAC from kernel addresses
      arm64: mask PAC bits of __builtin_return_address
      arm64: initialize ptrauth keys for kernel booting task
      arm64: initialize and switch ptrauth kernel keys
      arm64: enable ptrauth earlier
      arm64: cpufeature: handle conflicts based on capability
      arm64: cpufeature: Move cpu capability helpers inside C file
      ...
| * | arm64: crypto: Modernize names for AES function macros (Mark Brown, 2020-03-09; 3 files, -28/+28)

  Now that the rest of the code has been converted to the modern START/END
  macros the AES_ENTRY() and AES_ENDPROC() macros look out of place and
  like they need updating. Rename them to AES_FUNC_START() and
  AES_FUNC_END() to line up with the modern style assembly macros.

  Signed-off-by: Mark Brown <broonie@kernel.org>
  Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| * | arm64: crypto: Modernize some extra assembly annotations (Mark Brown, 2020-03-09; 1 file, -8/+8)

  A couple of functions were missed in the modernisation of assembly
  macros, update them too.

  Signed-off-by: Mark Brown <broonie@kernel.org>
  Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2020-03-23; 1 file, -4/+4)

  Pull crypto fix from Herbert Xu:
   "This fixes a correctness bug in the ARM64 version of ChaCha for
    lib/crypto used by WireGuard"

  * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
      crypto: arm64/chacha - correctly walk through blocks
| * crypto: arm64/chacha - correctly walk through blocks (Jason A. Donenfeld, 2020-03-20; 1 file, -4/+4)

  Prior, passing in chunks of 2, 3, or 4, followed by any additional
  chunks would result in the chacha state counter getting out of sync,
  resulting in incorrect encryption/decryption, which is a pretty nasty
  crypto vuln: "why do images look weird on webpages?"

  WireGuard users never experienced this prior, because we have always,
  out of tree, used a different crypto library, until the recent
  Frankenzinc addition. This commit fixes the issue by advancing the
  pointers and state counter by the actual size processed. It also fixes
  up a bug in the (optional, costly) stride test that prevented it from
  running on arm64.

  Fixes: b3aad5bad26a ("crypto: arm64/chacha - expose arm64 ChaCha routine as library function")
  Reported-and-tested-by: Emil Renner Berthing <kernel@esmil.dk>
  Cc: Ard Biesheuvel <ardb@kernel.org>
  Cc: stable@vger.kernel.org # v5.5+
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Reviewed-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2020-01-29; 27 files, -195/+172)

  Pull crypto updates from Herbert Xu:
   "API:
     - Removed CRYPTO_TFM_RES flags
     - Extended spawn grabbing to all algorithm types
     - Moved hash descsize verification into API code

    Algorithms:
     - Fixed recursive pcrypt dead-lock
     - Added new 32 and 64-bit generic versions of poly1305
     - Added cryptogams implementation of x86/poly1305

    Drivers:
     - Added support for i.MX8M Mini in caam
     - Added support for i.MX8M Nano in caam
     - Added support for i.MX8M Plus in caam
     - Added support for A33 variant of SS in sun4i-ss
     - Added TEE support for Raven Ridge in ccp
     - Added in-kernel API to submit TEE commands in ccp
     - Added AMD-TEE driver
     - Added support for BCM2711 in iproc-rng200
     - Added support for AES256-GCM based ciphers for chtls
     - Added aead support on SEC2 in hisilicon"

  * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (244 commits)
      crypto: arm/chacha - fix build failured when kernel mode NEON is disabled
      crypto: caam - add support for i.MX8M Plus
      crypto: x86/poly1305 - emit does base conversion itself
      crypto: hisilicon - fix spelling mistake "disgest" -> "digest"
      crypto: chacha20poly1305 - add back missing test vectors and test chunking
      crypto: x86/poly1305 - fix .gitignore typo
      tee: fix memory allocation failure checks on drv_data and amdtee
      crypto: ccree - erase unneeded inline funcs
      crypto: ccree - make cc_pm_put_suspend() void
      crypto: ccree - split overloaded usage of irq field
      crypto: ccree - fix PM race condition
      crypto: ccree - fix FDE descriptor sequence
      crypto: ccree - cc_do_send_request() is void func
      crypto: ccree - fix pm wrongful error reporting
      crypto: ccree - turn errors to debug msgs
      crypto: ccree - fix AEAD decrypt auth fail
      crypto: ccree - fix typo in comment
      crypto: ccree - fix typos in error msgs
      crypto: atmel-{aes,sha,tdes} - Retire crypto_platform_data
      crypto: x86/sha - Eliminate casts on asm implementations
      ...
| * crypto: {arm,arm64,mips}/poly1305 - remove redundant non-reduction from emit (Jason A. Donenfeld, 2020-01-16; 1 file, -16/+2)

  This appears to be some kind of copy and paste error, and is actually
  dead code.

      Pre:  f = 0 ⇒ (f >> 32) = 0
          f = (f >> 32) + le32_to_cpu(digest[0]);
      Post: 0 ≤ f < 2³²
          put_unaligned_le32(f, dst);

      Pre:  0 ≤ f < 2³² ⇒ (f >> 32) = 0
          f = (f >> 32) + le32_to_cpu(digest[1]);
      Post: 0 ≤ f < 2³²
          put_unaligned_le32(f, dst + 4);

      Pre:  0 ≤ f < 2³² ⇒ (f >> 32) = 0
          f = (f >> 32) + le32_to_cpu(digest[2]);
      Post: 0 ≤ f < 2³²
          put_unaligned_le32(f, dst + 8);

      Pre:  0 ≤ f < 2³² ⇒ (f >> 32) = 0
          f = (f >> 32) + le32_to_cpu(digest[3]);
      Post: 0 ≤ f < 2³²
          put_unaligned_le32(f, dst + 12);

  Therefore this sequence is redundant. And Andy's code appears to handle
  misalignment acceptably.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Tested-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: remove CRYPTO_TFM_RES_BAD_KEY_LEN (Eric Biggers, 2020-01-09; 4 files, -46/+9)

  The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to
  make the ->setkey() functions provide more information about errors.
  However, no one actually checks for this flag, which makes it pointless.

  Also, many algorithms fail to set this flag when given a bad length key.
  Reviewing just the generic implementations, this is the case for
  aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309,
  rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably
  many more in arch/*/crypto/ and drivers/crypto/.

  Some algorithms can even set this flag when the key is the correct
  length. For example, authenc and authencesn set it when the key payload
  is malformed in any way (not just a bad length), the atmel-sha and ccree
  drivers can set it if a memory allocation fails, and the chelsio driver
  sets it for bad auth tag lengths, not just bad key lengths.

  So even if someone actually wanted to start checking this flag (which
  seems unlikely, since it's been unused for a long time), there would be
  a lot of work needed to get it working correctly. But it would probably
  be much better to go back to the drawing board and just define different
  return values, like -EINVAL if the key is invalid for the algorithm vs.
  -EKEYREJECTED if the key was rejected by a policy like "no weak keys".
  That would be much simpler, less error-prone, and easier to test.

  So just remove this flag.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: arm64 - Use modern annotations for assembly functions (Mark Brown, 2019-12-20; 17 files, -84/+84)

  In an effort to clarify and simplify the annotation of assembly
  functions in the kernel new macros have been introduced. These replace
  ENTRY and ENDPROC and also add a new annotation for static functions
  which previously had no ENTRY equivalent. Update the annotations in the
  crypto code to the new macros.

  There are a small number of files imported from OpenSSL where the
  assembly is generated using perl programs, these are not currently
  annotated at all and have not been modified.

  Signed-off-by: Mark Brown <broonie@kernel.org>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: arm64/ghash-neon - bump priority to 150 (Ard Biesheuvel, 2019-12-11; 1 file, -1/+1)

  The SIMD based GHASH implementation for arm64 is typically much faster
  than the generic one, and doesn't use any lookup tables, so it is
  clearly preferred when available. So bump the priority to reflect that.

  Fixes: 5a22b198cd527447 ("crypto: arm64/ghash - register PMULL variants ...")
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: arm64/sha - fix function types (Sami Tolvanen, 2019-12-11; 5 files, -48/+76)

  Instead of casting pointers to callback functions, add C wrappers to
  avoid type mismatch failures with Control-Flow Integrity (CFI) checking.

  Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
  Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Eric Biggers <ebiggers@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
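  The pattern, sketched for SHA-1 (the container_of() step reflects the
  commit's approach of wrapping the asm routine, which operates on the
  CE-specific state struct; exact names in the driver may differ):

      asmlinkage void sha1_ce_transform(struct sha1_ce_state *sst,
                                        u8 const *src, int blocks);

      /* wrapper with the exact prototype the shash helpers expect, so
       * no function pointer cast is needed and CFI checks pass */
      static void __sha1_ce_transform(struct sha1_state *sst,
                                      u8 const *src, int blocks)
      {
              sha1_ce_transform(container_of(sst, struct sha1_ce_state,
                                             sst),
                                src, blocks);
      }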
* | sched/rt, arm64: Use CONFIG_PREEMPTION (Thomas Gleixner, 2019-12-08; 1 file, -1/+1)

  CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by
  CONFIG_PREEMPT_RT. Both PREEMPT and PREEMPT_RT require the same
  functionality which today depends on CONFIG_PREEMPT. Switch the Kconfig
  dependency, entry code and preemption handling over to use
  CONFIG_PREEMPTION. Add PREEMPT_RT output in show_stack().

  [bigeasy: +traps.c, Kconfig]

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Cc: Catalin Marinas <catalin.marinas@arm.com>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Will Deacon <will@kernel.org>
  Cc: linux-arm-kernel@lists.infradead.org
  Link: https://lore.kernel.org/r/20191015191821.11479-3-bigeasy@linutronix.de
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
* crypto: arch - conditionalize crypto api in arch glue for lib code (Jason A. Donenfeld, 2019-11-27; 2 files, -4/+6)

  For glue code that's used by Zinc, the actual Crypto API functions might
  not necessarily exist, and don't need to exist either. Before this
  patch, there were valid build configurations that led to an unbuildable
  kernel. This fixes it by conditionalizing those symbols on the existence
  of the proper config entry.

  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
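  A sketch of the approach, using a ChaCha-style glue module as the
  example (the init function and algs array shown are illustrative):

      static int __init chacha_simd_mod_init(void)
      {
              /* library users always get the SIMD code; the Crypto API
               * skciphers are only registered when that API is built */
              if (!IS_REACHABLE(CONFIG_CRYPTO_SKCIPHER))
                      return 0;

              return crypto_register_skciphers(algs, ARRAY_SIZE(algs));
      }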
* crypto: arm64/poly1305 - incorporate OpenSSL/CRYPTOGAMS NEON implementation (Ard Biesheuvel, 2019-11-17; 5 files, -1/+2000)

  This is a straight import of the OpenSSL/CRYPTOGAMS Poly1305
  implementation for NEON authored by Andy Polyakov, and contributed by
  him to the OpenSSL project. The file 'poly1305-armv8.pl' is taken
  straight from this upstream GitHub repository [0] at commit
  ec55a08dc0244ce570c4fc7cade330c60798952f, and already contains all the
  changes required to build it as part of a Linux kernel module.

  [0] https://github.com/dot-asm/cryptogams

  Co-developed-by: Andy Polyakov <appro@cryptogams.org>
  Signed-off-by: Andy Polyakov <appro@cryptogams.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/chacha - remove dependency on generic ChaCha driver (Ard Biesheuvel, 2019-11-17; 1 file, -1/+1)

  Instead of falling back to the generic ChaCha skcipher driver for
  non-SIMD cases, use a fast scalar implementation for ARM authored by
  Eric Biggers. This removes the module dependency on chacha-generic
  altogether, which also simplifies things when we expose the ChaCha
  library interface from this module.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/chacha - expose arm64 ChaCha routine as library function (Ard Biesheuvel, 2019-11-17; 2 files, -11/+43)

  Expose the accelerated NEON ChaCha routine directly as a symbol export
  so that users of the ChaCha library API can use it directly. Given that
  calls into the library API will always go through the routines in this
  module if it is enabled, switch to static keys to select the optimal
  implementation available (which may be none at all, in which case we
  defer to the generic implementation for all invocations).

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
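  The static-key selection roughly works like this (a sketch following
  the description above; the surrounding function bodies are omitted):

      static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);

      /* at module init: patch the branch once, based on CPU features */
      if (cpu_have_named_feature(ASIMD))
              static_branch_enable(&have_neon);

      /* in the exported library routine: defer to the generic code when
       * NEON is absent or may not be used in the current context */
      if (!static_branch_likely(&have_neon) || !crypto_simd_usable())
              return chacha_crypt_generic(state, dst, src, bytes, nrounds);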
* crypto: arm64/chacha - depend on generic chacha library instead of crypto driver (Ard Biesheuvel, 2019-11-17; 2 files, -19/+23)

  Depend on the generic ChaCha library routines instead of pulling in the
  generic ChaCha skcipher driver, which is more than we need, and makes
  managing the dependencies between the generic library, generic driver,
  accelerated library and driver more complicated.

  While at it, drop the logic to prefer the scalar code on short inputs.
  Turning the NEON on and off is cheap these days, and one major use case
  for ChaCha20 is ChaCha20-Poly1305, which is guaranteed to hit the
  scalar path upon every invocation (when doing the Poly1305 nonce
  generation).

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chacha - move existing library code into lib/crypto (Ard Biesheuvel, 2019-11-17; 1 file, -1/+1)

  Currently, our generic ChaCha implementation consists of a permute
  function in lib/chacha.c that operates on the 64-byte ChaCha state
  directly [and which is always included into the core kernel since it is
  used by the /dev/random driver], and the crypto API plumbing to expose
  it as a skcipher.

  In order to support in-kernel users that need the ChaCha streamcipher
  but have no need [or tolerance] for going through the abstractions of
  the crypto API, let's expose the streamcipher bits via a library API as
  well, in a way that permits the implementation to be superseded by an
  architecture specific one if provided.

  So move the streamcipher code into a separate module in lib/crypto, and
  expose the init() and crypt() routines to users of the library.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: skcipher - rename the crypto_blkcipher module and kconfig option (Eric Biggers, 2019-11-01; 1 file, -4/+4)

  Now that the blkcipher algorithm type has been removed in favor of
  skcipher, rename the crypto_blkcipher kernel module to crypto_skcipher,
  and rename the config options accordingly:

      CONFIG_CRYPTO_BLKCIPHER  => CONFIG_CRYPTO_SKCIPHER
      CONFIG_CRYPTO_BLKCIPHER2 => CONFIG_CRYPTO_SKCIPHER2

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-neonbs - add return value of skcipher_walk_done() in __xts_crypt() (Yunfeng Ye, 2019-11-01; 1 file, -1/+1)

  A warning is found by the static code analysis tool:

      "Identical condition 'err', second condition is always false"

  Fix this by assigning the return value of skcipher_walk_done() to err.

  Fixes: 67cfa5d3b721 ("crypto: arm64/aes-neonbs - implement ciphertext stealing for XTS")
  Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
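  The one-line shape of the fix (sketched from the description; the
  variable names follow the usual skcipher walk idiom):

      /* before: the result was dropped, so a later check of 'err' could
       * only ever see the value from an earlier call */
      skcipher_walk_done(&walk, walk.nbytes - nbytes);

      /* after */
      err = skcipher_walk_done(&walk, walk.nbytes - nbytes);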
* crypto: arm64/gcm-ce - implement 4 way interleave (Ard Biesheuvel, 2019-10-04; 2 files, -327/+467)

  To improve performance on cores with deep pipelines such as ThunderX2,
  reimplement gcm(aes) using a 4-way interleave rather than the 2-way
  interleave we use currently.

  This comes down to a complete rewrite of the GCM part of the combined
  GCM/GHASH driver, and instead of interleaving two invocations of AES
  with the GHASH handling at the instruction level, the new version uses
  a more coarse grained approach where each chunk of 64 bytes is
  encrypted first and then ghashed (or ghashed and then decrypted in the
  converse case).

  The core NEON routine is now able to consume inputs of any size, and
  tail blocks of less than 64 bytes are handled using overlapping loads
  and stores, and processed by the same 4-way encryption and hashing
  routines. This gets rid of most of the branches, and avoids having to
  return to the C code to handle the tail block using a stack buffer.

  The table below compares the performance of the old driver and the new
  one on various micro-architectures and running in various modes.

             |     AES-128      |     AES-192      |     AES-256      |
      #bytes | 512 | 1500 |  4k | 512 | 1500 |  4k | 512 | 1500 |  4k |
      -------+-----+------+-----+-----+------+-----+-----+------+-----+
      TX2    | 35% |  23% | 11% | 34% |  20% |  9% | 38% |  25% | 16% |
      EMAG   | 11% |   6% |  3% | 12% |   4% |  2% | 11% |   4% |  2% |
      A72    |  8% |   5% | -4% |  9% |   4% | -5% |  7% |   4% | -5% |
      A53    | 11% |   6% | -1% | 10% |   8% | -1% | 10% |   8% | -2% |

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-neonbs - implement ciphertext stealing for XTS (Ard Biesheuvel, 2019-09-09; 5 files, -14/+110)

  Update the AES-XTS implementation based on NEON instructions so that it
  can deal with inputs whose size is not a multiple of the cipher block
  size. This is part of the original XTS specification, but was never
  implemented before in the Linux kernel.

  Since the bit slicing driver is only faster if it can operate on at
  least 7 blocks of input at the same time, let's reuse the alternate
  path we are adding for CTS to process any data tail whose size is not
  a multiple of 128 bytes.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes - implement support for XTS ciphertext stealing (Ard Biesheuvel, 2019-09-09; 2 files, -30/+195)

  Add the missing support for ciphertext stealing in the implementation
  of AES-XTS, which is part of the XTS specification but was omitted up
  until now due to lack of a need for it.

  The asm helpers are updated so they can deal with any input size, as
  long as the last full block and the final partial block are presented
  at the same time. The glue code is updated so that the common case of
  operating on a sector or page is mostly as before. When CTS is needed,
  the walk is split up into two pieces, unless the entire input is
  covered by a single step.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-cts-cbc - move request context data to the stack (Ard Biesheuvel, 2019-09-09; 1 file, -35/+26)

  Since the CTS-CBC code completes synchronously, there is no point in
  keeping part of the scratch data it uses in the request context, so
  move it to the stack instead.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-cts-cbc-ce - performance tweak (Ard Biesheuvel, 2019-09-09; 1 file, -3/+2)

  Optimize away one of the tbl instructions in the decryption path, which
  turns out to be unnecessary.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-neon - limit exposed routines if faster driver is enabled (Ard Biesheuvel, 2019-09-09; 1 file, -53/+59)

  The pure NEON AES implementation predates the bit-slicing one, and is
  generally slower, unless the algorithm in question can only execute
  sequentially. So advertising the skciphers that the bit-slicing driver
  implements as well serves no real purpose, and we can just disable
  them. Note that the bit-slicing driver also has a link time dependency
  on the pure NEON driver, for CBC encryption and for XTS tweak
  calculation, so we still need both drivers on systems that do not
  implement the Crypto Extensions.

  At the same time, expose those modaliases for the AES instruction based
  driver. This is necessary since otherwise, we may end up loading the
  wrong driver when any of the skciphers are instantiated before the CPU
  capability based module loading has completed.

  Finally, add the missing modalias for cts(cbc(aes)) so requests for
  this algorithm will autoload the correct module.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
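  Crypto API modaliases of this kind are declared with
  MODULE_ALIAS_CRYPTO(); an illustrative subset covering the case the
  commit mentions (not the driver's full list):

      MODULE_ALIAS_CRYPTO("ecb(aes)");
      MODULE_ALIAS_CRYPTO("cbc(aes)");
      MODULE_ALIAS_CRYPTO("cts(cbc(aes))");   /* previously missing */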
* crypto: arm64/aes-neonbs - replace tweak mask literal with composition (Ard Biesheuvel, 2019-09-09; 1 file, -6/+3)

  Replace the vector load from memory sequence with a simple instruction
  sequence to compose the tweak vector directly.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes - Use PTR_ERR_OR_ZERO rather than its implementation (zhong jiang, 2019-09-09; 1 file, -3/+1)

  PTR_ERR_OR_ZERO already contains the if (IS_ERR(...)) + PTR_ERR
  combination, so it is better to use it directly; just replace the
  open-coded version.

  Signed-off-by: zhong jiang <zhongjiang@huawei.com>
  Acked-by: Will Deacon <will@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
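  The substitution, sketched (the ctx->hash pointer is illustrative):

      /* before */
      if (IS_ERR(ctx->hash))
              return PTR_ERR(ctx->hash);
      return 0;

      /* after */
      return PTR_ERR_OR_ZERO(ctx->hash);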
* crypto: arm64 - Rename functions to avoid conflict with crypto/sha256.h (Hans de Goede, 2019-09-05; 1 file, -12/+12)

  Rename static / file-local functions so that they do not conflict with
  the functions declared in crypto/sha256.h. This is a preparation patch
  for folding crypto/sha256.h into crypto/sha.h.

  Signed-off-by: Hans de Goede <hdegoede@redhat.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes - implement accelerated ESSIV/CBC mode (Ard Biesheuvel, 2019-08-30; 2 files, -0/+152)

  Add an accelerated version of the 'essiv(cbc(aes),sha256)' skcipher,
  which is used by fscrypt or dm-crypt on systems where CBC mode is
  significantly more performant than XTS mode (e.g., when using a h/w
  accelerator which supports the former but not the latter). This avoids
  a separate call into the AES cipher for every invocation.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
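  A user instantiates it by name through the skcipher API, e.g. (a
  minimal sketch; error handling beyond the IS_ERR check omitted):

      struct crypto_skcipher *tfm;

      /* resolves to the accelerated driver when it is loaded */
      tfm = crypto_alloc_skcipher("essiv(cbc(aes),sha256)", 0, 0);
      if (IS_ERR(tfm))
              return PTR_ERR(tfm);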
* crypto: arm64/aes-cts-cbc - factor out CBC en/decryption of a walk (Ard Biesheuvel, 2019-08-30; 1 file, -42/+40)

  The plain CBC driver and the CTS one share some code that iterates over
  a scatterwalk and invokes the CBC asm code to do the processing. The
  upcoming ESSIV/CBC mode will clone that pattern for the third time, so
  let's factor it out first.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-cipher - switch to shared AES inverse Sbox (Ard Biesheuvel, 2019-07-26; 1 file, -39/+1)

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-neon - switch to shared AES Sboxes (Ard Biesheuvel, 2019-07-26; 1 file, -71/+3)

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-ce-cipher - use AES library as fallback (Ard Biesheuvel, 2019-07-26; 3 files, -9/+3)

  Instead of calling into the table based scalar AES code in situations
  where the SIMD unit may not be used, use the generic AES code, which is
  more appropriate since it is less likely to be susceptible to timing
  attacks.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>