* crypto: atmel-tdes - Remove unused header includes
  Tudor Ambarus, 2019-12-11, 1 file changed, -3/+0

  Hash headers are not used.

  Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: atmel-{sha,tdes} - Change algorithm priorities
  Tudor Ambarus, 2019-12-11, 2 files changed, -20/+24

  Increase the algorithm priorities so that hardware acceleration is
  preferred over the software implementation (the generic drivers use
  priority 100).

  Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: atmel-tdes - Constify value to write to hw
  Tudor Ambarus, 2019-12-11, 1 file changed, -1/+1

  atmel_tdes_write_n() should not modify its value argument.

  Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: af_alg - Use bh_lock_sock in sk_destruct
  Herbert Xu, 2019-12-11, 1 file changed, -2/+4

  As af_alg_release_parent may be called from BH context (most notably
  due to an async request that only completes after socket closure, or
  as reported here because of an RCU-delayed sk_destruct call), we must
  use bh_lock_sock instead of lock_sock.

  Reported-by: syzbot+c2f1558d49e25cc36e5e@syzkaller.appspotmail.com
  Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
  Fixes: c840ac6af3f8 ("crypto: af_alg - Disallow bind/setkey/...")
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

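  A minimal sketch of the kind of change this describes, assuming a
  release path that can run in softirq context (the function name and
  field use below are illustrative, not the exact crypto/af_alg.c code):

      #include <net/sock.h>
      #include <crypto/if_alg.h>

      /* Sketch only: because this can run in BH context, the sleeping
       * lock_sock()/release_sock() pair is replaced by the non-sleeping
       * bh_lock_sock()/bh_unlock_sock() spinlock variant. */
      static void sketch_release_parent(struct sock *sk)
      {
              struct alg_sock *ask = alg_sk(sk);

              bh_lock_sock(sk);        /* safe from softirq context */
              ask->refcnt--;           /* illustrative bookkeeping only */
              bh_unlock_sock(sk);
      }
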
* padata: update documentation
  Daniel Jordan, 2019-12-11, 5 files changed, -161/+198

  Remove references to unused functions, standardize language, update to
  reflect new functionality, migrate to rst format, and fix all
  kernel-doc warnings.

  Fixes: 815613da6a67 ("kernel/padata.c: removed unused code")
  Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
  Cc: Eric Biggers <ebiggers@kernel.org>
  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Cc: Jonathan Corbet <corbet@lwn.net>
  Cc: Steffen Klassert <steffen.klassert@secunet.com>
  Cc: linux-crypto@vger.kernel.org
  Cc: linux-doc@vger.kernel.org
  Cc: linux-kernel@vger.kernel.org
  Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* padata: remove reorder_objects
  Daniel Jordan, 2019-12-11, 2 files changed, -5/+0

  reorder_objects is unused since the rework of padata's flushing, so
  remove it.

  Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
  Cc: Eric Biggers <ebiggers@kernel.org>
  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Cc: Steffen Klassert <steffen.klassert@secunet.com>
  Cc: linux-crypto@vger.kernel.org
  Cc: linux-kernel@vger.kernel.org
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* padata: remove cpumask change notifier
  Daniel Jordan, 2019-12-11, 4 files changed, -87/+1

  Since commit 63d3578892dc ("crypto: pcrypt - remove padata cpumask
  notifier") this feature is unused, so get rid of it.

  Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
  Cc: Eric Biggers <ebiggers@kernel.org>
  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Cc: Jonathan Corbet <corbet@lwn.net>
  Cc: Steffen Klassert <steffen.klassert@secunet.com>
  Cc: linux-crypto@vger.kernel.org
  Cc: linux-doc@vger.kernel.org
  Cc: linux-kernel@vger.kernel.org
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* padata: always acquire cpu_hotplug_lock before pinst->lock
  Daniel Jordan, 2019-12-11, 1 file changed, -2/+2

  lockdep complains when padata's paths to update cpumasks via CPU
  hotplug and sysfs are both taken:

      # echo 0 > /sys/devices/system/cpu/cpu1/online
      # echo ff > /sys/kernel/pcrypt/pencrypt/parallel_cpumask

      ======================================================
      WARNING: possible circular locking dependency detected
      5.4.0-rc8-padata-cpuhp-v3+ #1 Not tainted
      ------------------------------------------------------
      bash/205 is trying to acquire lock:
      ffffffff8286bcd0 (cpu_hotplug_lock.rw_sem){++++}, at: padata_set_cpumask+0x2b/0x120

      but task is already holding lock:
      ffff8880001abfa0 (&pinst->lock){+.+.}, at: padata_set_cpumask+0x26/0x120

      which lock already depends on the new lock.

  padata doesn't take cpu_hotplug_lock and pinst->lock in a consistent
  order. Which should be first? CPU hotplug calls into padata with
  cpu_hotplug_lock already held, so it should have priority.

  Fixes: 6751fb3c0e0c ("padata: Use get_online_cpus/put_online_cpus")
  Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
  Cc: Eric Biggers <ebiggers@kernel.org>
  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Cc: Steffen Klassert <steffen.klassert@secunet.com>
  Cc: linux-crypto@vger.kernel.org
  Cc: linux-kernel@vger.kernel.org
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

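  A sketch of the ordering the fix enforces in the sysfs path (pinst
  stands for the struct padata_instance being updated; the exact code in
  kernel/padata.c may differ):

      /* Take the locks in the same order CPU hotplug does:
       * cpu_hotplug_lock first, then the instance mutex. */
      get_online_cpus();              /* acquires cpu_hotplug_lock */
      mutex_lock(&pinst->lock);

      /* ... validate and apply the new cpumask ... */

      mutex_unlock(&pinst->lock);
      put_online_cpus();
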
* padata: validate cpumask without removed CPU during offline
  Daniel Jordan, 2019-12-11, 2 files changed, -12/+19

  Configuring an instance's parallel mask without any online CPUs...

      echo 2 > /sys/kernel/pcrypt/pencrypt/parallel_cpumask
      echo 0 > /sys/devices/system/cpu/cpu1/online

  ...makes tcrypt mode=215 crash like this:

      divide error: 0000 [#1] SMP PTI
      CPU: 4 PID: 283 Comm: modprobe Not tainted 5.4.0-rc8-padata-doc-v2+ #2
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20191013_105130-anatol 04/01/2014
      RIP: 0010:padata_do_parallel+0x114/0x300
      Call Trace:
       pcrypt_aead_encrypt+0xc0/0xd0 [pcrypt]
       crypto_aead_encrypt+0x1f/0x30
       do_mult_aead_op+0x4e/0xdf [tcrypt]
       test_mb_aead_speed.constprop.0.cold+0x226/0x564 [tcrypt]
       do_test+0x28c2/0x4d49 [tcrypt]
       tcrypt_mod_init+0x55/0x1000 [tcrypt]
       ...

  cpumask_weight() in padata_cpu_hash() returns 0 because the mask has no
  CPUs. The problem is that __padata_remove_cpu() checks for valid masks
  too early and so doesn't mark the instance PADATA_INVALID as expected,
  which would have made padata_do_parallel() return an error before doing
  the division.

  Fix by introducing a second padata CPU hotplug state before
  CPUHP_BRINGUP_CPU so that __padata_remove_cpu() sees the online mask
  without @cpu. There is no need for the second argument to
  padata_replace() since @cpu is now already missing from the online
  mask.

  Fixes: 33e54450683c ("padata: Handle empty padata cpumasks")
  Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
  Cc: Eric Biggers <ebiggers@kernel.org>
  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Cc: Steffen Klassert <steffen.klassert@secunet.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: linux-crypto@vger.kernel.org
  Cc: linux-kernel@vger.kernel.org
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

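  A sketch of how such a teardown-only hotplug state is typically
  registered; the state and callback names here are assumptions for
  illustration, not necessarily those used by the patch:

      /* A state placed before CPUHP_BRINGUP_CPU runs its teardown
       * callback after the outgoing CPU has been cleared from the
       * online mask, which is exactly what the validation needs. */
      ret = cpuhp_setup_state_multi(CPUHP_PADATA_DEAD, "padata:dead",
                                    NULL, padata_cpu_dead);
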
* crypto: cipher - remove crt_u.cipher (struct cipher_tfm)
  Eric Biggers, 2019-12-11, 4 files changed, -114/+43

  Of the three fields in crt_u.cipher (struct cipher_tfm), ->cit_setkey()
  is pointless because it always points to setkey() in crypto/cipher.c.

  ->cit_decrypt_one() and ->cit_encrypt_one() are slightly less
  pointless, since if the algorithm doesn't have an alignmask, they are
  set directly to ->cia_encrypt() and ->cia_decrypt(). However, this
  "optimization" isn't worthwhile because:

  - The "cipher" algorithm type is the only algorithm still using crt_u,
    so it's bloating every struct crypto_tfm for every algorithm type.

  - If the algorithm has an alignmask, this "optimization" actually makes
    things slower, as it causes 2 indirect calls per block rather than 1.

  - It adds extra code complexity.

  - Some templates already call ->cia_encrypt()/->cia_decrypt() directly
    instead of going through ->cit_encrypt_one()/->cit_decrypt_one().

  - The "cipher" algorithm type never gives optimal performance anyway.
    For that, a higher-level type such as skcipher needs to be used.

  Therefore, just remove the extra indirection, and make
  crypto_cipher_setkey(), crypto_cipher_encrypt_one(), and
  crypto_cipher_decrypt_one() be direct calls into crypto/cipher.c.
  Also remove the unused function crypto_cipher_cast().

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

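  For context, an illustrative use of the single-block cipher interface
  that remains after this change; the helper below is a sketch (its name
  and error handling are not from the patch), but the crypto_cipher_*
  calls are the existing API, now backed directly by crypto/cipher.c:

      #include <linux/crypto.h>
      #include <linux/err.h>

      /* Encrypt one 16-byte AES block with the low-level cipher API. */
      static int one_block_aes(const u8 *key, unsigned int keylen,
                               u8 *dst, const u8 *src)
      {
              struct crypto_cipher *tfm = crypto_alloc_cipher("aes", 0, 0);
              int err;

              if (IS_ERR(tfm))
                      return PTR_ERR(tfm);

              err = crypto_cipher_setkey(tfm, key, keylen);
              if (!err)
                      crypto_cipher_encrypt_one(tfm, dst, src);

              crypto_free_cipher(tfm);
              return err;
      }
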
* crypto: compress - remove crt_u.compress (struct compress_tfm)
  Eric Biggers, 2019-12-11, 4 files changed, -58/+19

  crt_u.compress (struct compress_tfm) is pointless because its two
  fields, ->cot_compress() and ->cot_decompress(), always point to
  crypto_compress() and crypto_decompress().

  Remove this pointless indirection, and just make crypto_comp_compress()
  and crypto_comp_decompress() be direct calls to what used to be
  crypto_compress() and crypto_decompress(). Also remove the unused
  function crypto_comp_cast().

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: testmgr - generate inauthentic AEAD test vectors
  Eric Biggers, 2019-12-11, 2 files changed, -73/+261

  The whole point of using an AEAD over length-preserving encryption is
  that the data is authenticated. However, currently the fuzz tests don't
  test any inauthentic inputs to verify that the data is actually being
  authenticated. And only two algorithms ("rfc4543(gcm(aes))" and
  "ccm(aes)") even have any inauthentic test vectors at all.

  Therefore, update the AEAD fuzz tests to sometimes generate inauthentic
  test vectors, either by generating a (ciphertext, AAD) pair without
  using the key, or by mutating an authentic pair that was generated.

  To avoid flakiness, only assume this works reliably if the auth tag is
  at least 8 bytes. Also account for the rfc4106, rfc4309, and rfc7539esp
  algorithms intentionally ignoring the last 8 AAD bytes, and for some
  algorithms doing extra checks that result in EINVAL rather than
  EBADMSG.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

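  A minimal sketch of the "mutate an authentic pair" idea; the helper and
  its arguments are illustrative, not the actual testmgr structures:

      #include <linux/random.h>
      #include <linux/types.h>

      /* Flip one byte of either the AAD or the ciphertext so that
       * decryption must fail, typically with -EBADMSG. */
      static void mutate_authentic_pair(u8 *assoc, size_t alen,
                                        u8 *ctext, size_t clen)
      {
              size_t pos = prandom_u32() % (alen + clen);

              if (pos < alen)
                      assoc[pos] ^= 0x01;          /* corrupt the AAD ... */
              else
                      ctext[pos - alen] ^= 0x01;   /* ... or ciphertext/tag */
      }
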
* crypto: testmgr - create struct aead_extra_tests_ctx
  Eric Biggers, 2019-12-11, 1 file changed, -71/+99

  In preparation for adding inauthentic input fuzz tests, which don't
  require that a generic implementation of the algorithm be available,
  refactor test_aead_vs_generic_impl() so that instead there's a
  higher-level function test_aead_extra() which initializes a struct
  aead_extra_tests_ctx and then calls test_aead_vs_generic_impl() with a
  pointer to that struct. As a bonus, this reduces stack usage.

  Also switch from crypto_aead_alg(tfm)->maxauthsize to
  crypto_aead_maxauthsize(), now that the latter is available in
  <crypto/aead.h>.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: testmgr - test setting misaligned keys
  Eric Biggers, 2019-12-11, 1 file changed, -4/+69

  The alignment bug in ghash_setkey() fixed by commit 5c6bc4dfa515
  ("crypto: ghash - fix unaligned memory access in ghash_setkey()")
  wasn't reliably detected by the crypto self-tests on ARM because the
  tests only set the keys directly from the test vectors.

  To improve test coverage, update the tests to sometimes pass misaligned
  keys to setkey(). This applies to shash, ahash, skcipher, and aead.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

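  A sketch of what passing a misaligned key can look like, here for a
  shash; the helper name and buffer handling are illustrative, not the
  testmgr implementation:

      #include <crypto/algapi.h>
      #include <crypto/hash.h>
      #include <linux/slab.h>
      #include <linux/string.h>

      /* Feed setkey() a deliberately misaligned copy of the key so that
       * drivers with hidden alignment assumptions are exercised. */
      static int setkey_misaligned(struct crypto_shash *tfm,
                                   const u8 *key, unsigned int keylen)
      {
              u8 *buf = kmalloc(keylen + MAX_ALGAPI_ALIGNMASK + 1, GFP_KERNEL);
              u8 *misaligned;
              int err;

              if (!buf)
                      return -ENOMEM;

              misaligned = buf + 1;      /* deliberately off alignment */
              memcpy(misaligned, key, keylen);
              err = crypto_shash_setkey(tfm, misaligned, keylen);

              kfree(buf);
              return err;
      }
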
* crypto: testmgr - check skcipher min_keysize
  Eric Biggers, 2019-12-11, 1 file changed, -0/+9

  When checking two implementations of the same skcipher algorithm for
  consistency, require that the minimum key size be the same, not just
  the maximum key size. There's no good reason to allow different minimum
  key sizes.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: testmgr - don't try to decrypt uninitialized buffers
  Eric Biggers, 2019-12-11, 1 file changed, -4/+16

  Currently if the comparison fuzz tests encounter an encryption error
  when generating an skcipher or AEAD test vector, they will still test
  the decryption side (passing it the uninitialized ciphertext buffer)
  and expect it to fail with the same error. This is sort of broken
  because it's not well-defined usage of the API to pass an uninitialized
  buffer, and furthermore in the AEAD case it's acceptable for the
  decryption error to be EBADMSG (meaning "inauthentic input") even if
  the encryption error was something else like EINVAL.

  Fix this for skcipher by explicitly initializing the ciphertext buffer
  on error, and for AEAD by skipping the decryption test on error.

  Reported-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com>
  Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against their generic implementation")
  Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against their generic implementation")
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: skcipher - add crypto_skcipher_min_keysize()
  Eric Biggers, 2019-12-11, 1 file changed, -0/+6

  Add a helper function crypto_skcipher_min_keysize() to mirror
  crypto_skcipher_max_keysize(). This will be used by the self-tests.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

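  A minimal sketch of such a helper, mirroring the max_keysize accessor
  (the actual definition lives in the skcipher headers and may differ in
  detail):

      static inline unsigned int crypto_skcipher_min_keysize(
              struct crypto_skcipher *tfm)
      {
              return crypto_skcipher_alg(tfm)->min_keysize;
      }
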
* crypto: aead - move crypto_aead_maxauthsize() to <crypto/aead.h>
  Eric Biggers, 2019-12-11, 2 files changed, -10/+10

  Move crypto_aead_maxauthsize() to <crypto/aead.h> so that it's
  available to users of the API, not just AEAD implementations. This will
  be used by the self-tests.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

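  For reference, a sketch of what the relocated accessor presumably looks
  like, reading the maximum tag size out of the algorithm definition:

      static inline unsigned int crypto_aead_maxauthsize(struct crypto_aead *aead)
      {
              return crypto_aead_alg(aead)->maxauthsize;
      }
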
* crypto: omap-crypto - copy the temporary data to output buffer properly
  Tero Kristo, 2019-12-11, 1 file changed, -1/+36

  Both source and destination are scatterlists that can contain multiple
  entries under the omap crypto cleanup handling. The current code only
  copies data from the first source scatterlist entry to the target
  scatterlist, potentially omitting any sg entries following the first
  one. Instead, implement a new routine that walks through both source
  and target and copies the data over as it goes.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-des - handle NULL cipher request
  Tero Kristo, 2019-12-11, 1 file changed, -0/+3

  If no data is provided for a DES request, just return immediately. No
  processing is needed in this case.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-des - avoid unnecessary spam with bad cryptlen
  Tero Kristo, 2019-12-11, 1 file changed, -3/+1

  Remove the error print for a bad cryptlen, and just return the error.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-aes-gcm - convert to use crypto engine
  Tero Kristo, 2019-12-11, 3 files changed, -68/+55

  Currently the omap-aes-gcm algorithms use a local implementation of the
  crypto request queuing logic. Instead, implement this via the crypto
  engine, which is already used for the rest of the omap-aes algorithms.
  This also avoids some random conflicts / crashes which can happen if
  aes and aes-gcm are used simultaneously.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-sham - fix unaligned sg list handling
  Tero Kristo, 2019-12-11, 1 file changed, -5/+14

  Currently the offset for unaligned sg lists is not handled properly,
  leading to wrong results with certain testmgr self-tests. Fix the
  handling to account for the proper offset within the current sg list.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-aes-gcm - fix failure with assocdata only
  Tero Kristo, 2019-12-11, 2 files changed, -27/+42

  If an omap-aes-gcm request contains only assocdata, the driver
  currently just completes it directly without passing it over to the
  crypto HW. This produces wrong results. Fix by passing the request down
  to the crypto HW, and fix the DMA support code to accept a case where
  no output data is expected. In the case where only assocdata is
  provided, it just passes through the accelerator and provides
  authentication results, without any encrypted/decrypted buffer via DMA.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-aes-gcm - use the AES library to encrypt the tag
  Ard Biesheuvel, 2019-12-11, 3 files changed, -98/+33

  The OMAP AES-GCM implementation uses a fallback ecb(aes) skcipher to
  produce the keystream to encrypt the output tag. Let's use the new AES
  library instead - this is much simpler, and shouldn't affect
  performance given that it only involves a single block.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Tero Kristo <t-kristo@ti.com>
  Tested-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

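  A sketch of the library-based single-block step that replaces the
  fallback skcipher; the helper and its parameters are illustrative, but
  aes_expandkey()/aes_encrypt() are the AES library interface:

      #include <crypto/aes.h>
      #include <linux/string.h>

      /* Encrypt one counter block to produce the keystream for the tag. */
      static int encrypt_tag_block(const u8 *key, unsigned int keylen,
                                   const u8 *counter, u8 *out)
      {
              struct crypto_aes_ctx ctx;
              int err;

              err = aes_expandkey(&ctx, key, keylen);
              if (!err)
                      aes_encrypt(&ctx, out, counter);   /* one ECB block */

              memzero_explicit(&ctx, sizeof(ctx));       /* wipe round keys */
              return err;
      }
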
* crypto: omap-aes-gcm - check length of assocdata in RFC4106 mode
  Ard Biesheuvel, 2019-12-11, 1 file changed, -2/+4

  RFC4106 requires the associated data to be a certain size, so reject
  inputs that are wrong. This also prevents crashes or other problems due
  to assoclen becoming negative after subtracting 8 bytes.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Tero Kristo <t-kristo@ti.com>
  Tested-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

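  A sketch of the kind of validation this describes, mirroring the check
  used by the generic rfc4106 template (the exact placement and values in
  the OMAP driver may differ):

      /* RFC 4106 prepends an 8-byte IV to the AAD, so only these total
       * assoclen values are acceptable before the driver subtracts 8. */
      if (req->assoclen != 16 && req->assoclen != 20)
              return -EINVAL;
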
* crypto: omap-aes-gcm - add missing .setauthsize hooks
  Ard Biesheuvel, 2019-12-11, 3 files changed, -0/+16

  GCM only permits certain tag lengths, so populate the .setauthsize
  hooks which ensure that only permitted sizes are accepted by the
  implementation.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Tero Kristo <t-kristo@ti.com>
  Tested-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-aes-gcm - deal with memory allocation failure
  Ard Biesheuvel, 2019-12-11, 1 file changed, -0/+4

  The OMAP gcm(aes) driver invokes omap_crypto_align_sg() without dealing
  with the errors it may return, resulting in a crash if the routine
  fails in a __get_free_pages(GFP_ATOMIC) call. So bail and return the
  error rather than limping on if one occurs.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Tero Kristo <t-kristo@ti.com>
  Tested-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-aes-ctr - set blocksize to 1
  Ard Biesheuvel, 2019-12-11, 1 file changed, -1/+1

  CTR is a streamcipher mode of AES, so set the blocksize accordingly.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Tero Kristo <t-kristo@ti.com>
  Tested-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-aes - reject invalid input sizes for block modes
  Ard Biesheuvel, 2019-12-11, 1 file changed, -0/+3

  Block modes such as ECB and CBC only support input sizes that are a
  round multiple of the block size, so align with the generic code which
  returns -EINVAL when encountering inputs that violate this rule.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Tero Kristo <t-kristo@ti.com>
  Tested-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

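  In essence, the check being aligned with is the following (a sketch of
  the idea, not the exact driver code):

      /* ECB/CBC only accept whole blocks, matching the generic code. */
      if (req->cryptlen % AES_BLOCK_SIZE)
              return -EINVAL;
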
* crypto: omap-aes - fixup aligned data cleanup
  Tero Kristo, 2019-12-11, 1 file changed, -2/+2

  The aligned data cleanup uses the wrong pointers in the cleanup calls.
  Most of the time these happen to be right, but in some cases they cause
  mysterious problems. Fix this by using the same pointers that were used
  in the align call.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-sham - fix split update cases with cryptomgr tests
  Tero Kristo, 2019-12-11, 1 file changed, -69/+33

  The updated crypto manager finds a couple of new bugs in the omap-sham
  driver. Basically, the split update cases fail to calculate the amount
  of data to be sent properly, leading to failed results and hangs with
  the hw accelerator. To fix these, the buffer handling needs to be
  fixed, and some unnecessary code is cut away at the same time to make
  the fix easier.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-aes-gcm - fix corner case with only auth data
  Tero Kristo, 2019-12-11, 1 file changed, -8/+14

  Fix a corner case where only authdata is generated, without any
  provided assocdata / cryptdata. Passing the empty scatterlists to the
  OMAP AES core driver in this case would confuse it and make it fail to
  map the DMAs.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-sham - fix buffer handling for split test cases
  Tero Kristo, 2019-12-11, 1 file changed, -2/+13

  The current buffer handling logic fails in a case where the buffer
  contains existing data from a previous update whose size is divisible
  by the block size. This leaves a block's worth of data missing from the
  sg list going out to the hw accelerator, stalling the crypto
  accelerator driver (the last request never completes fully due to the
  missing data). Fix this by passing the total size of the data instead
  of the data size of the current request, and also by parsing the buffer
  contents within the prepare-request handling.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-aes - add IV output handling
  Tero Kristo, 2019-12-11, 1 file changed, -0/+12

  Currently the omap-aes driver does not copy the end-result IV out at
  all. This is made evident by the additional checks done in the crypto
  test manager. Fix by copying out the IV values from the HW.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-des - add IV output handling
  Tero Kristo, 2019-12-11, 1 file changed, -0/+6

  Currently the omap-des driver does not copy the end-result IV out at
  all. This is made evident by the additional checks done in the crypto
  test manager. Fix by copying out the IV values from the HW.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-aes - remove the sysfs group during driver removal
  Tero Kristo, 2019-12-11, 1 file changed, -1/+2

  The driver removal should also clean up the created sysfs group. If it
  does not, the driver fails the subsequent probe, as the files already
  exist. Also, drop a completely unnecessary pointer assignment from the
  removal function at the same time.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-sham - remove the sysfs group during driver removal
  Tero Kristo, 2019-12-11, 1 file changed, -0/+2

  The driver removal should also clean up the created sysfs group. If it
  does not, the driver fails the subsequent probe, as the files already
  exist.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: omap-sham - split up data to multiple sg elements with huge data
  Tero Kristo, 2019-12-11, 1 file changed, -17/+64

  With huge amounts of data, allocating free pages fails, as the kernel
  isn't able to satisfy get_free_page requests larger than MAX_ORDER.
  The DMA subsystem also has an inherent limitation: data sizes larger
  than about 2 MB can't be handled properly. In these cases, split the
  data up into smaller requests so that the kernel can allocate it, and
  so that the DMA driver can handle the separate SG elements.

  Signed-off-by: Tero Kristo <t-kristo@ti.com>
  Tested-by: Bin Liu <b-liu@ti.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: shash - allow essiv and hmac to use OPTIONAL_KEY algorithms
  Eric Biggers, 2019-12-11, 4 files changed, -5/+10

  The essiv and hmac templates refuse to use any hash algorithm that has
  a ->setkey() function, which includes not just algorithms that always
  need a key, but also algorithms that optionally take a key.

  Previously the only optionally-keyed hash algorithms in the crypto API
  were non-cryptographic algorithms like crc32, so this didn't really
  matter. But that's changed with BLAKE2 support being added. BLAKE2
  should work with essiv and hmac, just like any other cryptographic
  hash.

  Fix this by allowing the use of both algorithms without a ->setkey()
  function and algorithms that have the OPTIONAL_KEY flag set.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

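  A sketch of the relaxed condition; the helper name is illustrative, but
  the flag test is the usual way an optionally-keyed shash is recognized:

      /* Reject a hash only if it has a ->setkey() and does not advertise
       * the key as optional via CRYPTO_ALG_OPTIONAL_KEY. */
      static bool shash_needs_key(struct shash_alg *alg)
      {
              return crypto_shash_alg_has_setkey(alg) &&
                     !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY);
      }
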
* crypto: skcipher - remove crypto_skcipher_extsize()
  Eric Biggers, 2019-12-11, 1 file changed, -6/+1

  Due to the removal of the blkcipher and ablkcipher algorithm types,
  crypto_skcipher_extsize() now simply calls crypto_alg_extsize(). So
  remove it and just use crypto_alg_extsize().

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: skcipher - remove crypto_skcipher::decrypt
  Eric Biggers, 2019-12-11, 2 files changed, -5/+1

  Due to the removal of the blkcipher and ablkcipher algorithm types,
  crypto_skcipher::decrypt is now redundant since it always equals
  crypto_skcipher_alg(tfm)->decrypt.

  Remove it and update crypto_skcipher_decrypt() accordingly.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: skcipher - remove crypto_skcipher::encrypt
  Eric Biggers, 2019-12-11, 2 files changed, -3/+1

  Due to the removal of the blkcipher and ablkcipher algorithm types,
  crypto_skcipher::encrypt is now redundant since it always equals
  crypto_skcipher_alg(tfm)->encrypt.

  Remove it and update crypto_skcipher_encrypt() accordingly.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: skcipher - remove crypto_skcipher::setkey
  Eric Biggers, 2019-12-11, 2 files changed, -9/+4

  Due to the removal of the blkcipher and ablkcipher algorithm types,
  crypto_skcipher::setkey now always points to skcipher_setkey().

  Simplify by removing this function pointer and instead just making
  skcipher_setkey() be crypto_skcipher_setkey() directly.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: skcipher - remove crypto_skcipher::keysize
  Eric Biggers, 2019-12-11, 5 files changed, -12/+12

  Due to the removal of the blkcipher and ablkcipher algorithm types,
  crypto_skcipher::keysize is now redundant since it always equals
  crypto_skcipher_alg(tfm)->max_keysize.

  Remove it and update crypto_skcipher_default_keysize() accordingly.

  Also rename crypto_skcipher_default_keysize() to
  crypto_skcipher_max_keysize() to clarify that it specifically returns
  the maximum key size, not some unspecified "default".

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: skcipher - remove crypto_skcipher::ivsize
  Eric Biggers, 2019-12-11, 2 files changed, -3/+1

  Due to the removal of the blkcipher and ablkcipher algorithm types,
  crypto_skcipher::ivsize is now redundant since it always equals
  crypto_skcipher_alg(tfm)->ivsize.

  Remove it and update crypto_skcipher_ivsize() accordingly.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: api - remove another reference to blkcipher
  Eric Biggers, 2019-12-11, 1 file changed, -1/+1

  Update a comment to refer to crypto_alloc_skcipher() rather than
  crypto_alloc_blkcipher() (the latter having been removed).

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: hisilicon - select CRYPTO_SKCIPHER, not CRYPTO_BLKCIPHER
  Eric Biggers, 2019-12-11, 1 file changed, -1/+1

  Another instance of CRYPTO_BLKCIPHER made it in just after it was
  renamed to CRYPTO_SKCIPHER. Fix it.

  Fixes: 416d82204df4 ("crypto: hisilicon - add HiSilicon SEC V2 driver")
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: pcrypt - Do not clear MAY_SLEEP flag in original request
  Herbert Xu, 2019-12-11, 1 file changed, -1/+0

  We should not be modifying the original request's MAY_SLEEP flag upon
  completion. It makes no sense to do so anyway.

  Reported-by: Eric Biggers <ebiggers@kernel.org>
  Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...")
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Tested-by: Eric Biggers <ebiggers@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: arm64/ghash-neon - bump priority to 150
  Ard Biesheuvel, 2019-12-11, 1 file changed, -1/+1

  The SIMD based GHASH implementation for arm64 is typically much faster
  than the generic one, and doesn't use any lookup tables, so it is
  clearly preferred when available. So bump the priority to reflect that.

  Fixes: 5a22b198cd527447 ("crypto: arm64/ghash - register PMULL variants ...")
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>