Include module.h in arch/arm64/crypto/sha256-glue.c as it uses
various macros (such as MODULE_AUTHOR) that are defined there.
Also fix the ordering of types.h.
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202305191953.PIB1w80W-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
aesbs_ecb_encrypt(), aesbs_ecb_decrypt(), aesbs_xts_encrypt(), and
aesbs_xts_decrypt() are called via indirect function calls. Therefore
they need to use SYM_TYPED_FUNC_START instead of SYM_FUNC_START to cause
their type hashes to be emitted when the kernel is built with
CONFIG_CFI_CLANG=y. Otherwise, the code crashes with a CFI failure if
the compiler doesn't happen to optimize out the indirect calls.
Fixes: c50d32859e70 ("arm64: Add types to indirect called assembly functions")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
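For reference, a minimal C-side sketch of the pattern involved. The
prototypes follow the style of aes-neonbs-glue.c, but the exact
signatures shown here are assumptions:

	asmlinkage void aesbs_ecb_encrypt(u8 out[], u8 const in[], u8 const rk[],
					  int rounds, int blocks);
	asmlinkage void aesbs_ecb_decrypt(u8 out[], u8 const in[], u8 const rk[],
					  int rounds, int blocks);

	static void do_ecb(u8 *dst, const u8 *src, const u8 *rk, int rounds,
			   int blocks, bool enc)
	{
		void (*fn)(u8[], u8 const[], u8 const[], int, int) =
			enc ? aesbs_ecb_encrypt : aesbs_ecb_decrypt;

		/*
		 * With CONFIG_CFI_CLANG=y this indirect call verifies the
		 * callee's type hash, so the asm side must emit one via
		 * SYM_TYPED_FUNC_START or the call faults at runtime.
		 */
		fn(dst, src, rk, rounds, blocks);
	}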
An often overlooked aspect of the skcipher walker API is that an
error is not just indicated by a non-zero return value, but by the
fact that walk->nbytes is zero.
Thus it is an error to call skcipher_walk_done after getting back
walk->nbytes == 0 from the previous interaction with the walker.
This is because when walk->nbytes is zero the walker is left in
an undefined state and any further calls to it may try to free
uninitialised stack memory.
The sm4 arm64 gcm code gets this wrong and ends up calling
skcipher_walk_done even when walk->nbytes is zero.
This patch rewrites the loop in a form that resembles other callers.
Reported-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
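For reference, a sketch of the usual, safe shape of an skcipher walker
loop; this shows the general convention, not the exact sm4 code:

	struct skcipher_walk walk;
	unsigned int nbytes;
	int err;

	err = skcipher_walk_virt(&walk, req, false);

	while (walk.nbytes) {
		nbytes = walk.nbytes;
		if (nbytes < walk.total)
			nbytes = round_down(nbytes, walk.stride);

		/* process 'nbytes' bytes from walk.src.virt.addr
		 * into walk.dst.virt.addr here */

		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
	}

	/* walk.nbytes == 0 here: the walk has ended (or failed) and
	 * skcipher_walk_done() must not be called again */
	return err;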
In the skcipher walker API, an error is indicated not only by a
non-zero return value but also by walk->nbytes being zero. This makes
the layout of the skcipher walker loop sufficiently different from the
usual one, which is not a problem in itself, but is likely to cause
confusion and make the code harder to maintain.
This patch rewrites the skcipher walker loop and separates the
encryption of the last chunk from the loop, to avoid incorrect calls
into the skcipher walker API. Besides following the usual convention of
checking walk->nbytes, this also makes the loop's logic clearer and
easier to understand.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
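A sketch of the resulting loop shape, with the details of the sm4 code
elided:

	/* bulk: every chunk except the final one */
	while (walk.nbytes && walk.nbytes != walk.total) {
		/* en/decrypt walk.nbytes bytes and update the MAC state */
		err = skcipher_walk_done(&walk, 0);
		if (err)
			return err;
	}

	/* the final chunk, handled outside the loop: may be partial */
	if (walk.nbytes) {
		/* en/decrypt the last walk.nbytes bytes */
		err = skcipher_walk_done(&walk, 0);
	}

	/* finalize and emit the authentication tag */
	return err;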
An often overlooked aspect of the skcipher walker API is that an
error is not just indicated by a non-zero return value, but by the
fact that walk->nbytes is zero.
Thus it is an error to call skcipher_walk_done after getting back
walk->nbytes == 0 from the previous interaction with the walker.
This is because when walk->nbytes is zero the walker is left in
an undefined state and any further calls to it may try to free
uninitialised stack memory.
The arm64 ccm code has to deal with zero-length messages, and
it needs to process data even when walk->nbytes == 0 is returned.
It doesn't have this bug because there is an explicit check for
walk->nbytes != 0 prior to the skcipher_walk_done call.
However, the loop is still sufficiently different from the usual
layout and it appears to have been copied into other code which
then ended up with this bug. This patch rewrites it to follow the
usual convention of checking walk->nbytes.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support for RFC4106 ESP encapsulation to the accelerated GCM
implementation. This results in a ~10% speedup for IPsec frames of
typical size (~1420 bytes) on Cortex-A53.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The SM4 CCM/GCM assembly functions for encryption and decryption are
called via indirect function calls. Therefore they need to use
SYM_TYPED_FUNC_START instead of SYM_FUNC_START so that their type
hashes are emitted when the kernel is built with CONFIG_CFI_CLANG=y.
Otherwise, the code crashes with a CFI failure (if the compiler didn't
happen to optimize out the indirect calls).
Fixes: 67fa3a7fdf80 ("crypto: arm64/sm4 - add CE implementation for CCM mode")
Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the frame_push and frame_pop macros to set up the stack frame so
that return address protections will be enabled automatically when
configured.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the frame_push and frame_pop macros to set up the stack frame so
that return address protections will be enabled automatically when
configured.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the frame_push and frame_pop macros to create the stack frames in
the AES chaining mode wrappers so that they will get PAC and/or shadow
call stack protection when configured.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the frame_push and frame_pop macros consistently to create the stack
frame, so that we will get PAC and/or shadow call stack handling as well
when enabled.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The helper crypto_tfm_ctx is only used by the Crypto API algorithm
code and should really be in algapi.h. However, for historical
reasons many files relied on it being in crypto.h. This patch
changes those files to use algapi.h instead, in preparation for the
move.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
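The affected files follow this pattern; a minimal sketch with
hypothetical names (previously such files picked up crypto_tfm_ctx()
implicitly via <linux/crypto.h>):

	#include <crypto/algapi.h>	/* now provides crypto_tfm_ctx() */

	struct my_ctx {			/* hypothetical context type */
		u32 rkey[32];
	};

	static int my_setkey(struct crypto_tfm *tfm, const u8 *key,
			     unsigned int keylen)
	{
		struct my_ctx *ctx = crypto_tfm_ctx(tfm);

		/* expand 'key' into ctx->rkey here */
		return 0;
	}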
sm3_neon_transform() is called via indirect function calls. Therefore
it needs to use SYM_TYPED_FUNC_START instead of SYM_FUNC_START to cause
its type hash to be emitted when the kernel is built with
CONFIG_CFI_CLANG=y. Otherwise, the code crashes with a CFI failure (if
the compiler didn't happen to optimize out the indirect call).
Fixes: c50d32859e70 ("arm64: Add types to indirect called assembly functions")
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since the CFI implementation now supports indirect calls to assembly
functions, take advantage of that rather than use a wrapper function.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cpu feature table defined by MODULE_DEVICE_TABLE is only
referenced when the code is built as a module, so an unused-variable
warning is emitted for in-tree (built-in) builds. The warning can be
silenced by adding the __maybe_unused annotation.
Fixes: 03c9a333fef1 ("crypto: arm64/ghash - add NEON accelerated fallback for 64-bit PMULL")
Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
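The affected pattern looks roughly like this; the table name here is
illustrative:

	static const struct cpu_feature __maybe_unused sm4_cpu_feature[] = {
		{ cpu_feature(SM4) },
		{}
	};
	MODULE_DEVICE_TABLE(cpu, sm4_cpu_feature);

When the code is built in, MODULE_DEVICE_TABLE() expands to nothing, so
without __maybe_unused the table triggers an unused-variable warning.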
The gf128mul library does not depend on the crypto API at all, so it can
be moved into lib/crypto. This will allow us to use it in other library
code in a subsequent patch without having to depend on CONFIG_CRYPTO.
While at it, change the Kconfig symbol name to align with other crypto
library implementations. However, the source file name is retained, as
it is reflected in the module .ko filename, and changing this might
break things for users.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a CE-optimized assembly implementation of GCM mode.
Benchmarked on a T-Head Yitian-710 at 2.75 GHz, using the 224 mode of
tcrypt and comparing performance before and after this patch (the
driver used before this patch is gcm_base(ctr-sm4-ce,ghash-generic)).
Column headers are block lengths in bytes; throughput is in Mb/s:
Before (gcm_base(ctr-sm4-ce,ghash-generic)):
gcm(sm4) | 16 64 256 512 1024 1420 4096 8192
-------------+---------------------------------------------------------------------
GCM enc | 25.24 64.65 104.66 116.69 123.81 125.12 129.67 130.62
GCM dec | 25.40 64.80 104.74 116.70 123.81 125.21 129.68 130.59
GCM mb enc | 24.95 64.06 104.20 116.38 123.55 124.97 129.63 130.61
GCM mb dec | 24.92 64.00 104.13 116.34 123.55 124.98 129.56 130.48
After:
gcm-sm4-ce | 16 64 256 512 1024 1420 4096 8192
-------------+---------------------------------------------------------------------
GCM enc | 108.62 397.18 971.60 1283.92 1522.77 1513.39 1777.00 1806.96
GCM dec | 116.36 398.14 1004.27 1319.11 1624.21 1635.43 1932.54 1974.20
GCM mb enc | 107.13 391.79 962.05 1274.94 1514.76 1508.57 1769.07 1801.58
GCM mb dec | 113.40 389.36 988.51 1307.68 1619.10 1631.55 1931.70 1970.86
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a CE-optimized assembly implementation of CCM mode.
Benchmarked on a T-Head Yitian-710 at 2.75 GHz, using the 223 and 225
modes of tcrypt and comparing performance before and after this patch
(the driver used before this patch is ccm_base(ctr-sm4-ce,cbcmac-sm4-ce)).
Column headers are block lengths in bytes; throughput is in Mb/s:
Before (rfc4309(ccm_base(ctr-sm4-ce,cbcmac-sm4-ce))):
ccm(sm4) | 16 64 256 512 1024 1420 4096 8192
-------------+---------------------------------------------------------------
CCM enc | 35.07 125.40 336.47 468.17 581.97 619.18 712.56 736.01
CCM dec | 34.87 124.40 335.08 466.75 581.04 618.81 712.25 735.89
CCM mb enc | 34.71 123.96 333.92 465.39 579.91 617.49 711.45 734.92
CCM mb dec | 34.42 122.80 331.02 462.81 578.28 616.42 709.88 734.19
After (rfc4309(ccm-sm4-ce)):
ccm-sm4-ce | 16 64 256 512 1024 1420 4096 8192
-------------+---------------------------------------------------------------
CCM enc | 77.12 249.82 569.94 725.17 839.27 867.71 952.87 969.89
CCM dec | 75.90 247.26 566.29 722.12 836.90 865.95 951.74 968.57
CCM mb enc | 75.98 245.25 562.91 718.99 834.76 864.70 950.17 967.90
CCM mb dec | 75.06 243.78 560.58 717.13 833.68 862.70 949.35 967.11
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a CE-optimized assembly implementation of
cmac/xcbc/cbcmac. Benchmarked on a T-Head Yitian-710 at 2.75 GHz, using
the 300 mode of tcrypt and comparing performance before and after this
patch (the drivers used before this patch are XXXmac(sm4-ce)). Column
headers are update sizes in bytes; throughput is in Mb/s:
Before:
update-size | 16 64 256 1024 2048 4096 8192
---------------+--------------------------------------------------------
cmac(sm4-ce) | 293.33 403.69 503.76 527.78 531.10 535.46 535.81
xcbc(sm4-ce) | 292.83 402.50 504.02 529.08 529.87 536.55 538.24
cbcmac(sm4-ce) | 318.42 415.79 497.12 515.05 523.15 521.19 523.01
After:
update-size | 16 64 256 1024 2048 4096 8192
---------------+--------------------------------------------------------
cmac-sm4-ce | 371.99 675.28 903.56 971.65 980.57 990.40 991.04
xcbc-sm4-ce | 372.11 674.55 903.47 971.61 980.96 990.42 991.10
cbcmac-sm4-ce | 371.63 675.33 903.23 972.07 981.42 990.93 991.45
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a CE-optimized assembly implementation of XTS mode.
Benchmarked on a T-Head Yitian-710 at 2.75 GHz, using the 218 mode of
tcrypt and comparing performance before and after this patch (the
driver used before this patch is xts(ecb-sm4-ce)). Column headers are
block lengths in bytes; throughput is in Mb/s:
Before:
xts(ecb-sm4-ce) | 16 64 128 256 1024 1420 4096
----------------+--------------------------------------------------------------
XTS enc | 117.17 430.56 732.92 1134.98 2007.03 2136.23 2347.20
XTS dec | 116.89 429.02 733.40 1132.96 2006.13 2130.50 2347.92
After:
xts-sm4-ce | 16 64 128 256 1024 1420 4096
----------------+--------------------------------------------------------------
XTS enc | 224.68 798.91 1248.08 1714.60 2413.73 2467.84 2612.62
XTS dec | 229.85 791.34 1237.79 1720.00 2413.30 2473.84 2611.95
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a CE-optimized assembly implementation of CTS-CBC mode.
Benchmarked on a T-Head Yitian-710 at 2.75 GHz, using the 218 mode of
tcrypt and comparing performance before and after this patch (the
driver used before this patch is cts(cbc-sm4-ce)). Column headers are
block lengths in bytes; throughput is in Mb/s:
Before:
cts(cbc-sm4-ce) | 16 64 128 256 1024 1420 4096
----------------+--------------------------------------------------------------
CTS-CBC enc | 286.09 297.17 457.97 627.75 868.58 900.80 957.69
CTS-CBC dec | 286.67 285.63 538.35 947.08 2241.03 2577.32 3391.14
After:
cts-cbc-sm4-ce | 16 64 128 256 1024 1420 4096
----------------+--------------------------------------------------------------
CTS-CBC enc | 288.19 428.80 593.57 741.04 911.73 931.80 950.00
CTS-CBC dec | 292.22 468.99 838.23 1380.76 2741.17 3036.42 3409.62
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the accelerated implementation of the SM4 algorithm using the
Crypto Extension instructions, there are some functions that can be
reused in the upcoming accelerated implementation of the GCM/CCM modes,
and the CBC/CFB encryption paths are reused in the optimized SVE SM4
implementation.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use a 128-bit swap mask and the tbl instruction to simplify the
generation of the SM4 decryption round keys (rkey_dec).
Also fix an issue where calls to the sm4_ce_expand_key() function were
not wrapped in kernel_neon_begin()/kernel_neon_end().
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
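For reference, the setkey path should look roughly like this; the
sm4_ce_expand_key() signature shown is an assumption based on the glue
code:

	static int sm4_setkey(struct crypto_skcipher *tfm, const u8 *key,
			      unsigned int key_len)
	{
		struct sm4_ctx *ctx = crypto_skcipher_ctx(tfm);

		if (key_len != SM4_KEY_SIZE)
			return -EINVAL;

		/* the CE instructions use NEON state, so wrap the call */
		kernel_neon_begin();
		sm4_ce_expand_key(key, ctx->rkey_enc, ctx->rkey_dec,
				  crypto_sm4_fk, crypto_sm4_ck);
		kernel_neon_end();

		return 0;
	}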
This patch does not add new features; it only refactors and simplifies
the Crypto Extension acceleration of the SM4 algorithm:
Extract the SM4 Crypto Extension macros so they can be reused in the
subsequent optimization of the CCM/GCM modes.
Encryption in CBC and CFB modes processes four blocks at a time instead
of one, allowing the ld1 instruction to load 64 bytes of data at a
time, which reduces unnecessary memory accesses.
CBC/CFB/CTR make full use of free registers to reduce redundant memory
accesses, and some instructions are rearranged to improve out-of-order
execution.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch does not add new features. The main work is to refactor and
simplify the SM4 NEON implementation, in the following respects:
The accelerated implementation now supports an arbitrary number of
blocks, not just multiples of 8, which simplifies the implementation
and speeds up the handling of data whose length is not a multiple of 8
blocks.
When loading the input data, the ld4 instruction is used in place of
the original ld1 instruction wherever possible, which saves the cost of
matrix-transposing the input data.
8-block parallelism is used whenever possible to speed up the matrix
transpose and rotation operations, instead of at most 4-block
parallelism.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a NEON-accelerated implementation of the SM3 hash
algorithm. The main algorithm is based on the SM3 NEON accelerated
work from the libgcrypt project.
Benchmarked on a T-Head Yitian-710 at 2.75 GHz, using the 326 mode of
tcrypt and comparing the performance of sm3-generic, sm3-neon, and
sm3-ce. Column headers are update sizes in bytes; throughput is in
Mb/s:
update-size | 16 64 256 1024 2048 4096 8192
---------------+--------------------------------------------------------
sm3-generic | 185.24 221.28 301.26 307.43 300.83 308.82 308.91
sm3-neon | 171.81 220.20 322.94 339.28 334.09 343.61 343.87
sm3-ce | 227.48 333.48 502.62 527.87 520.45 534.91 535.40
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Raise the priority of the sm3-ce algorithm from 200 to 400 to make
room for the sm3-neon implementation.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.1-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Feed untrusted RNGs into /dev/random
- Allow HWRNG sleeping to be more interruptible
- Create lib/utils module
- Setting private keys no longer required for akcipher
- Remove tcrypt mode=1000
- Reorganised Kconfig entries
Algorithms:
- Load x86/sha512 based on CPU features
- Add AES-NI/AVX/x86_64/GFNI assembler implementation of aria cipher
Drivers:
- Add HACE crypto driver aspeed"
* tag 'v6.1-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (124 commits)
crypto: aspeed - Remove redundant dev_err call
crypto: scatterwalk - Remove unused inline function scatterwalk_aligned()
crypto: aead - Remove unused inline functions from aead
crypto: bcm - Simplify obtain the name for cipher
crypto: marvell/octeontx - use sysfs_emit() to instead of scnprintf()
hwrng: core - start hwrng kthread also for untrusted sources
crypto: zip - remove the unneeded result variable
crypto: qat - add limit to linked list parsing
crypto: octeontx2 - Remove the unneeded result variable
crypto: ccp - Remove the unneeded result variable
crypto: aspeed - Fix check for platform_get_irq() errors
crypto: virtio - fix memory-leak
crypto: cavium - prevent integer overflow loading firmware
crypto: marvell/octeontx - prevent integer overflows
crypto: aspeed - fix build error when only CRYPTO_DEV_ASPEED is enabled
crypto: hisilicon/qm - fix the qos value initialization
crypto: sun4i-ss - use DEFINE_SHOW_ATTRIBUTE to simplify sun4i_ss_debugfs
crypto: tcrypt - add async speed test for aria cipher
crypto: aria-avx - add AES-NI/AVX/x86_64/GFNI assembler implementation of aria cipher
crypto: aria - prepare generic module for optimized implementations
...
Commit 3f342a23257d ("crypto: Kconfig - simplify hash entries") makes
various changes to the config descriptions as part of some consolidation
and clean-up, but among all those changes, it also accidentally renames
CRYPTO_SHA1_ARM64_CE to CRYPTO_SHA1_ARM64.
Revert this unintended config name change.
See the Link below for the author's confirmation that this happened
accidentally.
Fixes: 3f342a23257d ("crypto: Kconfig - simplify hash entries")
Link: https://lore.kernel.org/all/MW5PR84MB18424AB8C095BFC041AE33FDAB479@MW5PR84MB1842.NAMPRD84.PROD.OUTLOOK.COM/
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Shorten menu titles and make them consistent:
- acronym
- name
- architecture features in parentheses
- no suffixes like "<something> algorithm", "support",
  "hardware acceleration", or "optimized"
Simplify help text descriptions, update references, and ensure that
https references are still valid.
Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Shorten menu titles and make them consistent:
- acronym
- name
- architecture features in parentheses
- no suffixes like "<something> algorithm", "support",
  "hardware acceleration", or "optimized"
Simplify help text descriptions, update references, and ensure that
https references are still valid.
Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Shorten menu titles and make them consistent:
- acronym
- name
- architecture features in parentheses
- no suffixes like "<something> algorithm", "support",
  "hardware acceleration", or "optimized"
Simplify help text descriptions, update references, and ensure that
https references are still valid.
Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Sort the arm64 entries so all like entries are together.
Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move ARM- and ARM64-accelerated menus into a submenu under
the Crypto API menu (paralleling all the architectures).
Make each submenu always appear if the corresponding architecture
is supported. Get rid of the ARM_CRYPTO and ARM64_CRYPTO symbols.
The "ARM Accelerated" or "ARM64 Accelerated" entry disappears from:
General setup --->
Platform selection --->
Kernel Features --->
Boot options --->
Power management options --->
CPU Power Management --->
[*] ACPI (Advanced Configuration and Power Interface) Support --->
[*] Virtualization --->
[*] ARM Accelerated Cryptographic Algorithms --->
(or)
[*] ARM64 Accelerated Cryptographic Algorithms --->
...
-*- Cryptographic API --->
Library routines --->
Kernel hacking --->
and moves into the Cryptographic API menu, which now contains:
...
Accelerated Cryptographic Algorithms for CPU (arm) --->
(or)
Accelerated Cryptographic Algorithms for CPU (arm64) --->
[*] Hardware crypto devices --->
...
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With CONFIG_CFI_CLANG, assembly functions indirectly called from C
code must be annotated with type identifiers to pass CFI checking. Use
SYM_TYPED_FUNC_START for the indirectly called functions, and ensure
we emit `bti c` also with SYM_TYPED_FUNC_START.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220908215504.3686827-10-samitolvanen@google.com
A kasan error was reported during fuzzing:
BUG: KASAN: slab-out-of-bounds in neon_poly1305_blocks.constprop.0+0x1b4/0x250 [poly1305_neon]
Read of size 4 at addr ffff0010e293f010 by task syz-executor.5/1646715
CPU: 4 PID: 1646715 Comm: syz-executor.5 Kdump: loaded Not tainted 5.10.0.aarch64 #1
Hardware name: Huawei TaiShan 2280 /BC11SPCD, BIOS 1.59 01/31/2019
Call trace:
dump_backtrace+0x0/0x394
show_stack+0x34/0x4c arch/arm64/kernel/stacktrace.c:196
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x158/0x1e4 lib/dump_stack.c:118
print_address_description.constprop.0+0x68/0x204 mm/kasan/report.c:387
__kasan_report+0xe0/0x140 mm/kasan/report.c:547
kasan_report+0x44/0xe0 mm/kasan/report.c:564
check_memory_region_inline mm/kasan/generic.c:187 [inline]
__asan_load4+0x94/0xd0 mm/kasan/generic.c:252
neon_poly1305_blocks.constprop.0+0x1b4/0x250 [poly1305_neon]
neon_poly1305_do_update+0x6c/0x15c [poly1305_neon]
neon_poly1305_update+0x9c/0x1c4 [poly1305_neon]
crypto_shash_update crypto/shash.c:131 [inline]
shash_finup_unaligned+0x84/0x15c crypto/shash.c:179
crypto_shash_finup+0x8c/0x140 crypto/shash.c:193
shash_digest_unaligned+0xb8/0xe4 crypto/shash.c:201
crypto_shash_digest+0xa4/0xfc crypto/shash.c:217
crypto_shash_tfm_digest+0xb4/0x150 crypto/shash.c:229
essiv_skcipher_setkey+0x164/0x200 [essiv]
crypto_skcipher_setkey+0xb0/0x160 crypto/skcipher.c:612
skcipher_setkey+0x3c/0x50 crypto/algif_skcipher.c:305
alg_setkey+0x114/0x2a0 crypto/af_alg.c:220
alg_setsockopt+0x19c/0x210 crypto/af_alg.c:253
__sys_setsockopt+0x190/0x2e0 net/socket.c:2123
__do_sys_setsockopt net/socket.c:2134 [inline]
__se_sys_setsockopt net/socket.c:2131 [inline]
__arm64_sys_setsockopt+0x78/0x94 net/socket.c:2131
__invoke_syscall arch/arm64/kernel/syscall.c:36 [inline]
invoke_syscall+0x64/0x100 arch/arm64/kernel/syscall.c:48
el0_svc_common.constprop.0+0x220/0x230 arch/arm64/kernel/syscall.c:155
do_el0_svc+0xb4/0xd4 arch/arm64/kernel/syscall.c:217
el0_svc+0x24/0x3c arch/arm64/kernel/entry-common.c:353
el0_sync_handler+0x160/0x164 arch/arm64/kernel/entry-common.c:369
el0_sync+0x160/0x180 arch/arm64/kernel/entry.S:683
This error can be reproduced by the following code, compiled as a
module (.ko), on a system with KASAN enabled:
	#include <linux/module.h>
	#include <linux/crypto.h>
	#include <crypto/hash.h>
	#include <crypto/poly1305.h>

	char test_data[] = "\x00\x01\x02\x03\x04\x05\x06\x07"
			   "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
			   "\x10\x11\x12\x13\x14\x15\x16\x17"
			   "\x18\x19\x1a\x1b\x1c\x1d\x1e";

	int init(void)
	{
		struct crypto_shash *tfm = NULL;
		char *data = NULL, *out = NULL;

		tfm = crypto_alloc_shash("poly1305", 0, 0);
		data = kmalloc(POLY1305_KEY_SIZE - 1, GFP_KERNEL);
		out = kmalloc(POLY1305_DIGEST_SIZE, GFP_KERNEL);
		memcpy(data, test_data, POLY1305_KEY_SIZE - 1);
		crypto_shash_tfm_digest(tfm, data, POLY1305_KEY_SIZE - 1, out);

		kfree(data);
		kfree(out);
		return 0;
	}

	void deinit(void)
	{
	}

	module_init(init)
	module_exit(deinit)
	MODULE_LICENSE("GPL");
The root cause of the bug is in neon_poly1305_blocks(). If it is called
with both s[] and r[] uninitialized, it first tries to initialize them
with data from the first "block", which it assumes to be 32 bytes long:
the first 16 bytes are used as the key r[] and the next 16 bytes for
s[]. This leads to the out-of-bounds read above. However, after calling
poly1305_init_arch(), only 16 bytes are deducted from the input, and
s[] is initialized yet again from the following 16 bytes. The second
initialization of s[] is certainly redundant, which indicates that the
first initialization should cover r[] only.
This patch fixes the issue by calling poly1305_init_arm64() instead of
poly1305_init_arch(), matching what the arm implementation of the same
algorithm does.
Fixes: f569ca164751 ("crypto: arm64/poly1305 - incorporate OpenSSL/CRYPTOGAMS NEON implementation")
Cc: stable@vger.kernel.org
Signed-off-by: GUO Zihua <guozihua@huawei.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
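The shape of the fix, roughly; this is a sketch of the first-block key
setup in neon_poly1305_blocks(), not the exact diff:

	if (unlikely(!dctx->sset)) {
		if (!dctx->rset) {
			/*
			 * Was poly1305_init_arch(dctx, src), which reads a
			 * full 32-byte key; only the 16 bytes of r[] are
			 * meant to be consumed here.
			 */
			poly1305_init_arm64(&dctx->h, src);
			src += POLY1305_BLOCK_SIZE;
			len -= POLY1305_BLOCK_SIZE;
			dctx->rset = 1;
		}
		/* the next 16 bytes, when present, initialise s[] as before */
	}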
Otherwise, we could fail to compile.
ld: arch/arm64/crypto/ghash-ce-glue.o: in function 'ghash_ce_mod_exit':
ghash-ce-glue.c:(.exit.text+0x24): undefined reference to 'crypto_unregister_aead'
ld: arch/arm64/crypto/ghash-ce-glue.o: in function 'ghash_ce_mod_init':
ghash-ce-glue.c:(.init.text+0x34): undefined reference to 'crypto_register_aead'
Fixes: 537c1445ab0b ("crypto: arm64/gcm - implement native driver using v8 Crypto Extensions")
Signed-off-by: Qian Cai <quic_qiancai@quicinc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Delete the redundant word 'the'.
Signed-off-by: Jilin Yuan <yuanjilin@cdjrlc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add hardware accelerated version of POLYVAL for ARM64 CPUs with
Crypto Extensions support.
This implementation is accelerated using PMULL instructions to perform
the finite field computations. For added efficiency, 8 blocks of the
message are processed simultaneously by precomputing the first 8
powers of the key.
Karatsuba multiplication is used instead of Schoolbook multiplication
because it was found to be slightly faster on ARM64 CPUs. Montgomery
reduction must be used instead of Barrett reduction due to the
difference in modulus between POLYVAL's field and other finite fields.
More information on POLYVAL can be found in the HCTR2 paper:
"Length-preserving encryption with HCTR2":
https://eprint.iacr.org/2021/1441.pdf
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
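For reference, POLYVAL as defined in RFC 8452 and the HCTR2 paper; the
built-in factor of x^{-128} in its multiplication is what makes
Montgomery reduction the natural fit:

	\mathrm{POLYVAL}(H, X_1, \dots, X_n) = \sum_{i=1}^{n} X_i \otimes H^{\,n-i+1},
	\qquad a \otimes b := a \cdot b \cdot x^{-128} \bmod
	\left( x^{128} + x^{127} + x^{126} + x^{121} + 1 \right)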
Added some clarifying comments, changed the register allocations to make
the code clearer, and added register aliases.
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add hardware accelerated version of XCTR for ARM64 CPUs with ARMv8
Crypto Extension support. This XCTR implementation is based on the CTR
implementation in aes-modes.S.
More information on XCTR can be found in
the HCTR2 paper: "Length-preserving encryption with HCTR2":
https://eprint.iacr.org/2021/1441.pdf
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
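For reference, the XCTR construction as defined in the HCTR2 paper:
unlike CTR, the block counter is XORed into the nonce rather than
concatenated with it, and is encoded little-endian starting from 1:

	C_i = M_i \oplus E_K\!\left( N \oplus \mathrm{le}_{128}(i) \right),
	\qquad i = 1, \dots, \lceil |M| / 16 \rceil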
Commit d2825fa9365d ("crypto: sm3,sm4 - move into crypto directory")
moved the sm4 library implementation from the lib/crypto directory to
the crypto directory and renamed its Kconfig symbol to CRYPTO_SM4. The
arm64 SM4 NEON/CE implementations depend on this symbol and need to be
updated to match.
Fixes: 4f1aef9b806f ("crypto: arm64/sm4 - add ARMv8 NEON implementation")
Fixes: 5b33e0ec881c ("crypto: arm64/sm4 - add ARMv8 Crypto Extensions implementation")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds ARMv8 implementations of SM4 in ECB, CBC, CFB and CTR modes
using Crypto Extensions, including the key expansion operations,
because the Crypto Extensions instructions are much faster than
software implementations.
The Crypto Extensions for SM4 can only run on ARMv8 implementations
that have support for these optional extensions.
Benchmarked on a T-Head Yitian-710 at 2.75 GHz, using the 218 mode of
tcrypt. Column headers are block lengths in bytes; throughput is in
Mb/s:
sm4-generic | 16 64 128 256 1024 1420 4096
ECB enc | 80.05 91.42 93.66 94.77 95.69 95.77 95.86
ECB dec | 79.98 91.41 93.64 94.76 95.66 95.77 95.85
CBC enc | 78.55 86.50 88.02 88.77 89.36 89.42 89.48
CBC dec | 76.82 89.06 91.52 92.77 93.75 93.83 93.96
CFB enc | 77.64 86.13 87.62 88.42 89.08 88.83 89.18
CFB dec | 77.57 88.34 90.36 91.45 92.34 92.00 92.44
CTR enc | 77.80 88.28 90.23 91.22 92.11 91.81 92.25
CTR dec | 77.83 88.22 90.22 91.22 92.04 91.82 92.28
sm4-neon
ECB enc | 28.31 112.77 203.03 209.89 215.49 202.11 210.59
ECB dec | 28.36 113.45 203.23 210.00 215.52 202.13 210.65
CBC enc | 79.32 87.02 88.51 89.28 89.85 89.89 89.97
CBC dec | 28.29 112.20 203.30 209.82 214.99 201.51 209.95
CFB enc | 79.59 87.16 88.54 89.30 89.83 89.62 89.92
CFB dec | 28.12 111.05 202.47 209.02 214.21 210.90 209.12
CTR enc | 28.04 108.81 200.62 206.65 211.78 208.78 206.74
CTR dec | 28.02 108.82 200.45 206.62 211.78 208.74 206.70
sm4-ce-cipher
ECB enc | 336.79 587.13 682.70 747.37 803.75 811.52 818.06
ECB dec | 339.18 584.52 679.72 743.68 798.82 803.83 811.54
CBC enc | 316.63 521.47 597.00 647.14 690.82 695.21 700.55
CBC dec | 291.80 503.79 585.66 640.82 689.86 695.16 701.72
CFB enc | 294.79 482.31 552.13 594.71 631.60 628.91 638.92
CFB dec | 293.09 466.44 526.56 563.17 594.41 592.26 601.97
CTR enc | 309.61 506.13 576.86 620.47 656.38 654.51 665.10
CTR dec | 306.69 505.57 576.84 620.18 657.09 654.52 665.32
sm4-ce
ECB enc | 366.96 1329.81 2024.29 2755.50 3790.07 3861.91 4051.40
ECB dec | 367.30 1323.93 2018.72 2747.43 3787.39 3862.55 4052.62
CBC enc | 358.09 682.68 807.24 885.35 958.29 963.60 973.73
CBC dec | 366.51 1303.63 1978.64 2667.93 3624.53 3683.41 3856.08
CFB enc | 351.51 681.26 807.81 893.10 968.54 969.17 985.83
CFB dec | 354.98 1266.61 1929.63 2634.81 3614.23 3611.59 3841.68
CTR enc | 324.23 1121.25 1689.44 2256.70 2981.90 3007.79 3060.74
CTR dec | 324.18 1120.44 1694.31 2258.32 2982.01 3010.09 3060.99
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds ARMv8 NEON implementations of SM4 in ECB, CBC, CFB and CTR
modes. The implementation uses the plain NEON instruction set; all
S-box substitutions use the ARMv8 tbl/tbx instructions, and combined
with out-of-order execution in the CPU, this optimization supports
encrypting up to 8 blocks at the same time.
The performance of encrypting a single block is not as good as the
software implementation, so the encryption operations of CBC and CFB
still use the pure software algorithms.
Benchmarked on a T-Head Yitian-710 at 2.75 GHz, using the 218 mode of
tcrypt. Column headers are block lengths in bytes; throughput is in
Mb/s:
sm4-generic | 16 64 128 256 1024 1420 4096
ECB enc | 80.05 91.42 93.66 94.77 95.69 95.77 95.86
ECB dec | 79.98 91.41 93.64 94.76 95.66 95.77 95.85
CBC enc | 78.55 86.50 88.02 88.77 89.36 89.42 89.48
CBC dec | 76.82 89.06 91.52 92.77 93.75 93.83 93.96
CFB enc | 77.64 86.13 87.62 88.42 89.08 88.83 89.18
CFB dec | 77.57 88.34 90.36 91.45 92.34 92.00 92.44
CTR enc | 77.80 88.28 90.23 91.22 92.11 91.81 92.25
CTR dec | 77.83 88.22 90.22 91.22 92.04 91.82 92.28
sm4-neon
ECB enc | 28.31 112.77 203.03 209.89 215.49 202.11 210.59
ECB dec | 28.36 113.45 203.23 210.00 215.52 202.13 210.65
CBC enc | 79.32 87.02 88.51 89.28 89.85 89.89 89.97
CBC dec | 28.29 112.20 203.30 209.82 214.99 201.51 209.95
CFB enc | 79.59 87.16 88.54 89.30 89.83 89.62 89.92
CFB dec | 28.12 111.05 202.47 209.02 214.21 210.90 209.12
CTR enc | 28.04 108.81 200.62 206.65 211.78 208.78 206.74
CTR dec | 28.02 108.82 200.45 206.62 211.78 208.74 206.70
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Subsequent patches in this series will add an implementation of
SM4-ECB/CBC/CFB/CTR accelerated by the CE instruction set, which
conflicts with the current module name. To keep the naming consistent
with the AES algorithms, the sm4-ce algorithm is renamed to
sm4-ce-cipher.
In addition, sm4-ce-cipher is faster than SM4 NEON, so its priority is
adjusted to 300, leaving room below it for the priority of SM4 NEON.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The lib/crypto libraries live in lib because they are used by various
drivers of the kernel. In contrast, the various helper functions in
crypto are there because they're used exclusively by the crypto API.
The SM3 and SM4 helper functions were erroneously moved into
lib/crypto/ instead of crypto/, even though those functions have no
in-kernel users outside of the crypto API. This commit moves them into
crypto/.
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For SPDX license identifiers, use the // comment style in *.c files.
Also fix a typo: significanty to significantly.
Signed-off-by: Tom Rix <trix@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Even though the kernel's implementations of AES-XTS were updated to
implement ciphertext stealing and can operate on inputs of any size
larger than or equal to the AES block size, this feature is rarely used
in practice.
In fact, in the kernel, AES-XTS is only used on 4096 or 512 byte
blocks, which means that not only is the ciphertext stealing
effectively dead code, but the logic in the bit sliced NEON
implementation for dealing with fewer than 8 blocks at a time is never
used either.
Since the bit-sliced NEON driver already depends on the plain NEON
version, which is slower but can operate on smaller data quantities
more straightforwardly, let's fall back to the plain NEON
implementation of XTS for any residual inputs that are not multiples
of 128 bytes. This allows us to remove a lot of complicated logic that
rarely gets exercised in practice.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
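The resulting dispatch looks roughly like this; the helper names are
hypothetical, not the actual symbols:

	int blocks = req->cryptlen / AES_BLOCK_SIZE;
	int bulk = round_down(blocks, 8);	/* bit-sliced path: 8-block strides */

	if (bulk)
		bitsliced_xts_crypt(req, bulk);		/* hypothetical bulk helper */
	if (blocks > bulk || req->cryptlen % AES_BLOCK_SIZE)
		plain_neon_xts_crypt(req, bulk);	/* hypothetical: tail + ciphertext stealing */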
Instead of processing the entire input with the 8-way bit sliced
algorithm, which is sub-optimal for inputs that are not a multiple of
128 bytes in size, invoke the plain NEON version of CTR for the
remainder of the input after processing the bulk using 128 byte strides.
This allows us to greatly simplify the asm code that implements CTR, and
get rid of all the branches and special code paths. It also gains us a
couple of percent of performance.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of falling back to C code to do a memcpy of the output of the
last block, handle this in the asm code directly if possible, which is
the case if the entire input is longer than 16 bytes.
Cc: Nathan Huckleberry <nhuck@google.com>
Cc: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>