path: root/crypto/shash.c
Commit message (Author, Age, Files, Lines)
* treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152 (Thomas Gleixner, 2019-05-30, 1 file, -6/+1)
    Based on 1 normalized pattern(s):

        this program is free software you can redistribute it and or modify
        it under the terms of the gnu general public license as published by
        the free software foundation either version 2 of the license or at
        your option any later version

    extracted by the scancode license scanner the SPDX license identifier
    GPL-2.0-or-later has been chosen to replace the boilerplate/reference in
    3029 file(s).

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Allison Randal <allison@lohutok.net>
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: shash - remove shash_desc::flags (Eric Biggers, 2019-04-25, 1 file, -4/+0)
    The flags field in 'struct shash_desc' never actually does anything. The
    only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP. However, no
    shash algorithm ever sleeps, making this flag a no-op.

    With this being the case, inevitably some users who can't sleep wrongly
    pass MAY_SLEEP. These would all need to be fixed if any shash algorithm
    actually started sleeping. For example, the shash_ahash_*() functions,
    which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP
    from the ahash API to the shash API. However, the shash functions are
    called under kmap_atomic(), so actually they're assumed to never sleep.

    Even if it turns out that some users do need preemption points while
    hashing large buffers, we could easily provide a helper function
    crypto_shash_update_large() which divides the data into smaller chunks
    and calls crypto_shash_update() and cond_resched() for each chunk. It's
    not necessary to have a flag in 'struct shash_desc', nor is it necessary
    to make individual shash algorithms aware of this at all.

    Therefore, remove shash_desc::flags, and document that the
    crypto_shash_*() functions can be called from any context.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
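    A minimal sketch of the helper the message floats; the name
    crypto_shash_update_large() and the chunk size are the message's
    hypothetical, not part of the actual patch:

        static int crypto_shash_update_large(struct shash_desc *desc,
                                             const u8 *data, unsigned int len)
        {
                const unsigned int chunk = PAGE_SIZE;   /* arbitrary chunk size */
                int err;

                while (len) {
                        unsigned int n = min(len, chunk);

                        err = crypto_shash_update(desc, data, n);
                        if (err)
                                return err;

                        data += n;
                        len -= n;
                        cond_resched();         /* preemption point between chunks */
                }
                return 0;
        }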
* crypto: shash - remove useless crypto_yield() in shash_ahash_digest() (Eric Biggers, 2019-04-25, 1 file, -1/+0)
    The crypto_yield() in shash_ahash_digest() occurs after the entire digest
    operation already happened, so there's no real point. Remove it.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - fix missed optimization in shash_ahash_digest() (Eric Biggers, 2019-04-18, 1 file, -1/+1)
    shash_ahash_digest(), which is the ->digest() method for ahash tfms that
    use an shash algorithm, has an optimization where crypto_shash_digest()
    is called if the data is in a single page. But an off-by-one error
    prevented this path from being taken unless the user happened to provide
    extra data in the scatterlist. Fix it.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
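    A simplified sketch of the fast path this re-enables (the "_sketch"
    suffix and local layout are illustrative, not the verbatim diff):

        static int shash_ahash_digest_sketch(struct ahash_request *req,
                                             struct shash_desc *desc)
        {
                unsigned int nbytes = req->nbytes;
                struct scatterlist *sg;
                unsigned int offset;
                int err;

                /* The off-by-one was a '<' instead of '<=' in this bound, so a
                 * buffer exactly filling the mapped region missed the shortcut. */
                if (nbytes &&
                    (sg = req->src, offset = sg->offset,
                     nbytes <= min(sg->length, (unsigned int)PAGE_SIZE - offset))) {
                        void *data = kmap_atomic(sg_page(sg));

                        err = crypto_shash_digest(desc, data + offset, nbytes,
                                                  req->result);
                        kunmap_atomic(data);
                } else {
                        err = crypto_shash_init(desc) ?:
                              shash_ahash_finup(req, desc);
                }
                return err;
        }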
* crypto: shash - remove pointless checks of shash_alg::{export,import} (Eric Biggers, 2019-01-18, 1 file, -4/+2)
    crypto_init_shash_ops_async() only gives the ahash tfm non-NULL
    ->export() and ->import() if the underlying shash alg has these non-NULL.
    This doesn't make sense because when an shash algorithm is registered,
    shash_prepare_alg() sets a default ->export() and ->import() if the
    implementor didn't provide them. And elsewhere it's assumed that all
    shash algs and ahash tfms have non-NULL ->export() and ->import().

    Therefore, remove these unnecessary, always-true conditions.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - require neither or both ->export() and ->import() (Eric Biggers, 2019-01-18, 1 file, -0/+3)
    Prevent registering shash algorithms that implement ->export() but not
    ->import(), or ->import() but not ->export(). Such cases don't make sense
    and could confuse the check that shash_prepare_alg() does for just
    ->export().

    I don't believe this affects any existing algorithms; this is just
    preventing future mistakes.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
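    The pairing rule boils down to one registration-time check; a sketch,
    assumed (per the message) to sit in shash_prepare_alg():

        /* an algorithm may supply both ->export() and ->import(), or neither */
        if ((alg->export && !alg->import) || (alg->import && !alg->export))
                return -EINVAL;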
* crypto: hash - set CRYPTO_TFM_NEED_KEY if ->setkey() fails (Eric Biggers, 2019-01-18, 1 file, -5/+13)
    Some algorithms have a ->setkey() method that is not atomic, in the sense
    that setting a key can fail after changes were already made to the tfm
    context. In this case, if a key was already set the tfm can end up in a
    state that corresponds to neither the old key nor the new key.

    It's not feasible to make all ->setkey() methods atomic, especially ones
    that have to key multiple sub-tfms. Therefore, make the crypto API set
    CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a key,
    to prevent the tfm from being used until a new key is set.

    Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so
    ->setkey() for those must nevertheless be atomic. That's fine for now
    since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's not
    intended that OPTIONAL_KEY be used much.

    [Cc stable mainly because when introducing the NEED_KEY flag I changed
    AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
    previously didn't have this problem. So these "incompletely keyed" states
    became theoretically accessible via AF_ALG -- though, the opportunities
    for causing real mischief seem pretty limited.]

    Fixes: 9fa68f620041 ("crypto: hash - prevent using keyed hashes without setting key")
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
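    Roughly what the shash-side behaviour looks like after the change (a
    sketch, not the verbatim diff; the real code also handles unaligned keys
    and factors the flag handling into a small helper):

        int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
                                unsigned int keylen)
        {
                struct shash_alg *shash = crypto_shash_alg(tfm);
                int err;

                err = shash->setkey(tfm, key, keylen);
                if (unlikely(err)) {
                        /* re-arm NEED_KEY only if the algorithm requires a key */
                        if (crypto_shash_alg_has_setkey(shash) &&
                            !(shash->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
                                crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
                        return err;
                }

                crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
                return 0;
        }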
* crypto: user - clean up report structure copying (Eric Biggers, 2018-11-09, 1 file, -8/+4)
    There have been a pretty ridiculous number of issues with initializing
    the report structures that are copied to userspace by NETLINK_CRYPTO.

    Commit 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
    expansion") replaced some strncpy()s with strlcpy()s, thereby introducing
    information leaks. Later two other people tried to replace other
    strncpy()s with strlcpy() too, which would have introduced even more
    information leaks:

        - https://lore.kernel.org/patchwork/patch/954991/
        - https://patchwork.kernel.org/patch/10434351/

    Commit cac5818c25d0 ("crypto: user - Implement a generic crypto
    statistics") also uses the buggy strlcpy() approach and therefore leaks
    uninitialized memory to userspace. A fix was proposed, but it was
    originally incomplete.

    Seeing as how apparently no one can get this right with the current
    approach, change all the reporting functions to:

        - Start by memsetting the report structure to 0. This guarantees it's
          always initialized, regardless of what happens later.
        - Initialize all strings using strscpy(). This is safe after the
          memset, ensures null termination of long strings, avoids
          unnecessary work, and avoids the -Wstringop-truncation warnings
          from gcc.
        - Use sizeof(var) instead of sizeof(type). This is more robust
          against copy+paste errors.

    For simplicity, also reuse the -EMSGSIZE return value from nla_put().

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
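    Applied to the shash report, the pattern described above looks roughly
    like this (a sketch; field names per struct crypto_report_hash):

        static int crypto_shash_report(struct sk_buff *skb, struct crypto_alg *alg)
        {
                struct crypto_report_hash rhash;
                struct shash_alg *salg = __crypto_shash_alg(alg);

                memset(&rhash, 0, sizeof(rhash));   /* no uninitialized bytes can leak */

                strscpy(rhash.type, "shash", sizeof(rhash.type));

                rhash.blocksize = alg->cra_blocksize;
                rhash.digestsize = salg->digestsize;

                /* nla_put() already returns -EMSGSIZE on failure */
                return nla_put(skb, CRYPTOCFGA_REPORT_HASH, sizeof(rhash), &rhash);
        }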
* crypto: shash - Remove VLA usage in unaligned hashing (Kees Cook, 2018-09-04, 1 file, -11/+16)
    In the quest to remove all stack VLA usage from the kernel[1], this uses
    the newly defined max alignment to perform unaligned hashing to avoid
    VLAs, and drops the helper function while adding sanity checks on the
    resulting buffer sizes. Additionally, the __aligned_largest macro is
    removed since this helper was the only user.

    [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Remove VLA usage (Kees Cook, 2018-09-04, 1 file, -3/+3)
    In the quest to remove all stack VLA usage from the kernel[1], this
    removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize()) by
    using the maximum allowable size (which is now more clearly captured in a
    macro), along with a few other cases. Similar limits are turned into
    macros as well.

    A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the
    largest digest size and that sizeof(struct sha3_state) (360) is the
    largest descriptor size. The corresponding maximums are reduced.

    [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
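    A sketch of the resulting non-VLA shape of SHASH_DESC_ON_STACK and a
    typical caller (constants taken from the message above; the real macro
    lives in include/crypto/hash.h, and example_digest() is just an
    illustrative name):

        #define HASH_MAX_DIGESTSIZE     64      /* SHA512_DIGEST_SIZE */
        #define HASH_MAX_DESCSIZE       360     /* sizeof(struct sha3_state) */

        #define SHASH_DESC_ON_STACK(shash, ctx)                                  \
                char __##shash##_desc[sizeof(struct shash_desc) +                \
                                      HASH_MAX_DESCSIZE] CRYPTO_MINALIGN_ATTR;  \
                struct shash_desc *shash = (struct shash_desc *)__##shash##_desc

        static int example_digest(struct crypto_shash *tfm, const u8 *data,
                                  unsigned int len, u8 *out)
        {
                SHASH_DESC_ON_STACK(desc, tfm); /* fixed size, no VLA */

                desc->tfm = tfm;
                desc->flags = 0;        /* the flags field still existed here; it was
                                           removed by the 2019-04-25 commit above */

                return crypto_shash_digest(desc, data, len, out);
        }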
* crypto: hash - prevent using keyed hashes without setting key (Eric Biggers, 2018-01-12, 1 file, -4/+21)
    Currently, almost none of the keyed hash algorithms check whether a key
    has been set before proceeding. Some algorithms are okay with this and
    will effectively just use a key of all 0's or some other bogus default.
    However, others will severely break, as demonstrated using
    "hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash
    via a (potentially exploitable) stack buffer overflow.

    A while ago, this problem was solved for AF_ALG by pairing each hash
    transform with a 'has_key' bool. However, there are still other places in
    the kernel where userspace can specify an arbitrary hash algorithm by
    name, and the kernel uses it as unkeyed hash without checking whether it
    is really unkeyed. Examples of this include:

        - KEYCTL_DH_COMPUTE, via the KDF extension
        - dm-verity
        - dm-crypt, via the ESSIV support
        - dm-integrity, via the "internal hash" mode with no key given
        - drbd (Distributed Replicated Block Device)

    This bug is especially bad for KEYCTL_DH_COMPUTE as that requires no
    privileges to call.

    Fix the bug for all users by adding a flag CRYPTO_TFM_NEED_KEY to the
    ->crt_flags of each hash transform that indicates whether the transform
    still needs to be keyed or not. Then, make the hash init, import, and
    digest functions return -ENOKEY if the key is still needed.

    The new flag also replaces the 'has_key' bool which algif_hash was
    previously using, thereby simplifying the algif_hash implementation.

    Reported-by: syzbot <syzkaller@googlegroups.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
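    The gate described above amounts to a check like this in the init,
    import and digest entry points (a simplified sketch; the "_sketch"
    suffix is illustrative):

        static int crypto_shash_init_sketch(struct shash_desc *desc)
        {
                struct crypto_shash *tfm = desc->tfm;

                /* keyed algorithm, but no key has been set yet */
                if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
                        return -ENOKEY;

                return crypto_shash_alg(tfm)->init(desc);
        }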
* crypto: hmac - require that the underlying hash algorithm is unkeyed (Eric Biggers, 2017-11-29, 1 file, -2/+3)
    Because the HMAC template didn't check that its underlying hash algorithm
    is unkeyed, trying to use "hmac(hmac(sha3-512-generic))" through AF_ALG
    or through KEYCTL_DH_COMPUTE resulted in the inner HMAC being used
    without having been keyed, resulting in sha3_update() being called
    without sha3_init(), causing a stack buffer overflow.

    This is a very old bug, but it seems to have only started causing real
    problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3)
    because the innermost hash's state is ->import()ed from a zeroed buffer,
    and it just so happens that other hash algorithms are fine with that, but
    SHA-3 is not. However, there could be arch or hardware-dependent hash
    algorithms also affected; I couldn't test everything.

    Fix the bug by introducing a function crypto_shash_alg_has_setkey()
    which tests whether a shash algorithm is keyed. Then update the HMAC
    template to require that its underlying hash algorithm is unkeyed.

    Here is a reproducer:

        #include <linux/if_alg.h>
        #include <sys/socket.h>

        int main()
        {
            int algfd;
            struct sockaddr_alg addr = {
                .salg_type = "hash",
                .salg_name = "hmac(hmac(sha3-512-generic))",
            };
            char key[4096] = { 0 };

            algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(algfd, (const struct sockaddr *)&addr, sizeof(addr));
            setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
        }

    Here was the KASAN report from syzbot:

        BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:341 [inline]
        BUG: KASAN: stack-out-of-bounds in sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
        Write of size 4096 at addr ffff8801cca07c40 by task syzkaller076574/3044

        CPU: 1 PID: 3044 Comm: syzkaller076574 Not tainted 4.14.0-mm1+ #25
        Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
        Call Trace:
         __dump_stack lib/dump_stack.c:17 [inline]
         dump_stack+0x194/0x257 lib/dump_stack.c:53
         print_address_description+0x73/0x250 mm/kasan/report.c:252
         kasan_report_error mm/kasan/report.c:351 [inline]
         kasan_report+0x25b/0x340 mm/kasan/report.c:409
         check_memory_region_inline mm/kasan/kasan.c:260 [inline]
         check_memory_region+0x137/0x190 mm/kasan/kasan.c:267
         memcpy+0x37/0x50 mm/kasan/kasan.c:303
         memcpy include/linux/string.h:341 [inline]
         sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
         crypto_shash_update+0xcb/0x220 crypto/shash.c:109
         shash_finup_unaligned+0x2a/0x60 crypto/shash.c:151
         crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
         hmac_finup+0x182/0x330 crypto/hmac.c:152
         crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
         shash_digest_unaligned+0x9e/0xd0 crypto/shash.c:172
         crypto_shash_digest+0xc4/0x120 crypto/shash.c:186
         hmac_setkey+0x36a/0x690 crypto/hmac.c:66
         crypto_shash_setkey+0xad/0x190 crypto/shash.c:64
         shash_async_setkey+0x47/0x60 crypto/shash.c:207
         crypto_ahash_setkey+0xaf/0x180 crypto/ahash.c:200
         hash_setkey+0x40/0x90 crypto/algif_hash.c:446
         alg_setkey crypto/af_alg.c:221 [inline]
         alg_setsockopt+0x2a1/0x350 crypto/af_alg.c:254
         SYSC_setsockopt net/socket.c:1851 [inline]
         SyS_setsockopt+0x189/0x360 net/socket.c:1830
         entry_SYSCALL_64_fastpath+0x1f/0x96

    Reported-by: syzbot <syzkaller@googlegroups.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
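    The new helper is small; a sketch of its shape (the real function sits in
    crypto/shash.c next to the shared no-op shash_no_setkey() and is
    exported):

        bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
        {
                /* a hash is "keyed" iff it overrides the default no-op setkey */
                return alg->setkey != shash_no_setkey;
        }

    The HMAC template can then refuse to instantiate on top of any algorithm
    for which this returns true, which is what rejects
    "hmac(hmac(sha3-512-generic))".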
* crypto: shash - Fix zero-length shash ahash digest crash (Herbert Xu, 2017-10-10, 1 file, -3/+5)
    The shash ahash digest adaptor function may crash if given a zero-length
    input together with a null SG list. This is because it tries to read the
    SG list before looking at the length. This patch fixes it by checking the
    length first.

    Cc: <stable@vger.kernel.org>
    Reported-by: Stephan Müller <smueller@chronox.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Tested-by: Stephan Müller <smueller@chronox.de>
* crypto: shash - Fix a sleep-in-atomic bug in shash_setkey_unaligned (Jia-Ju Bai, 2017-10-07, 1 file, -1/+1)
    The SCTP code may sleep under a spinlock; the function call path is:

        sctp_generate_t3_rtx_event (acquires the spinlock)
          sctp_do_sm
            sctp_side_effects
              sctp_cmd_interpreter
                sctp_make_init_ack
                  sctp_pack_cookie
                    crypto_shash_setkey
                      shash_setkey_unaligned
                        kmalloc(GFP_KERNEL)

    For the same reason, the orinoco driver may sleep in an interrupt
    handler; the function call path is:

        orinoco_rx_isr_tasklet
          orinoco_rx
            orinoco_mic
              crypto_shash_setkey
                shash_setkey_unaligned
                  kmalloc(GFP_KERNEL)

    To fix it, GFP_KERNEL is replaced with GFP_ATOMIC.

    This bug was found by my static analysis tool and by code review.

    Signed-off-by: Jia-Ju Bai <baijiaju1990@163.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
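    In context, the change is a single allocation flag in the key bounce
    buffer path; a sketch of the surrounding function (close to, but not
    verbatim, the code of that era):

        static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
                                          unsigned int keylen)
        {
                struct shash_alg *shash = crypto_shash_alg(tfm);
                unsigned long alignmask = crypto_shash_alignmask(tfm);
                unsigned long absize = keylen +
                                       (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
                u8 *buffer, *alignbuffer;
                int err;

                buffer = kmalloc(absize, GFP_ATOMIC);   /* was GFP_KERNEL */
                if (!buffer)
                        return -ENOMEM;

                alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
                memcpy(alignbuffer, key, keylen);
                err = shash->setkey(tfm, alignbuffer, keylen);
                kzfree(buffer);         /* kzfree() at the time; later kfree_sensitive() */
                return err;
        }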
* crypto: Replaced gcc specific attributes with macros from compiler.h (Gideon Israel Dsouza, 2017-01-12, 1 file, -4/+5)
    Continuing from this commit: 52f5684c8e1e ("kernel: use macros from
    compiler.h instead of __attribute__((...))")

    I submitted 4 total patches. They are part of a task I've taken up to
    increase compiler portability in the kernel. I've cleaned up the
    subsystems under /kernel /mm /block and /security; this patch targets
    /crypto.

    There is <linux/compiler.h> which provides macros for various gcc
    specific constructs. Eg: __weak for __attribute__((weak)). I've cleaned
    all instances of gcc specific attributes with the right macros for the
    crypto subsystem.

    I had to make one additional change into compiler-gcc.h for the case when
    one wants to use __attribute__((aligned)) without specifying an alignment
    factor. From the gcc docs, this will result in the largest alignment for
    that data type on the target machine, so I've named the macro
    __aligned_largest. Please advise if another name is more appropriate.

    Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2016-03-17, 1 file, -147/+0)
    Pull crypto update from Herbert Xu:
     "Here is the crypto update for 4.6:

      API:
       - Convert remaining crypto_hash users to shash or ahash, also convert
         blkcipher/ablkcipher users to skcipher.
       - Remove crypto_hash interface.
       - Remove crypto_pcomp interface.
       - Add crypto engine for async cipher drivers.
       - Add akcipher documentation.
       - Add skcipher documentation.

      Algorithms:
       - Rename crypto/crc32 to avoid name clash with lib/crc32.
       - Fix bug in keywrap where we zero the wrong pointer.

      Drivers:
       - Support T5/M5, T7/M7 SPARC CPUs in n2 hwrng driver.
       - Add PIC32 hwrng driver.
       - Support BCM6368 in bcm63xx hwrng driver.
       - Pack structs for 32-bit compat users in qat.
       - Use crypto engine in omap-aes.
       - Add support for sama5d2x SoCs in atmel-sha.
       - Make atmel-sha available again.
       - Make sahara hashing available again.
       - Make ccp hashing available again.
       - Make sha1-mb available again.
       - Add support for multiple devices in ccp.
       - Improve DMA performance in caam.
       - Add hashing support to rockchip"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (116 commits)
      crypto: qat - remove redundant arbiter configuration
      crypto: ux500 - fix checks of error code returned by devm_ioremap_resource()
      crypto: atmel - fix checks of error code returned by devm_ioremap_resource()
      crypto: qat - Change the definition of icp_qat_uof_regtype
      hwrng: exynos - use __maybe_unused to hide pm functions
      crypto: ccp - Add abstraction for device-specific calls
      crypto: ccp - CCP versioning support
      crypto: ccp - Support for multiple CCPs
      crypto: ccp - Remove check for x86 family and model
      crypto: ccp - memset request context to zero during import
      lib/mpi: use "static inline" instead of "extern inline"
      lib/mpi: avoid assembler warning
      hwrng: bcm63xx - fix non device tree compatibility
      crypto: testmgr - allow rfc3686 aes-ctr variants in fips mode.
      crypto: qat - The AE id should be less than the maximal AE number
      lib/mpi: Endianness fix
      crypto: rockchip - add hash support for crypto engine in rk3288
      crypto: xts - fix compile errors
      crypto: doc - add skcipher API documentation
      crypto: doc - update AEAD AD handling
      ...
* crypto: hash - Remove crypto_hash interface (Herbert Xu, 2016-02-06, 1 file, -147/+0) [merged via the branch above]
    This patch removes all traces of the crypto_hash interface, now that
    everyone has switched over to shash or ahash.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Fix has_key setting (Herbert Xu, 2016-01-27, 1 file, -4/+3)
    The has_key logic is wrong for shash algorithms as they always have a
    setkey function. So we should instead be testing against
    shash_no_setkey.

    Fixes: a5596d633278 ("crypto: hash - Add crypto_ahash_has_setkey")
    Cc: stable@vger.kernel.org
    Reported-by: Stephan Mueller <smueller@chronox.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Tested-by: Stephan Mueller <smueller@chronox.de>
* crypto: hash - Add crypto_ahash_has_setkey (Herbert Xu, 2016-01-18, 1 file, -1/+3)
    This patch adds a way for ahash users to determine whether a key is
    required by a crypto_ahash transform.

    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Use crypto_alg_extsize helper (Herbert Xu, 2015-04-21, 1 file, -6/+1)
    This patch replaces crypto_shash_extsize function with
    crypto_alg_extsize.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: LLVMLinux: aligned-attribute.patch (Mark Charlebois, 2014-06-07, 1 file, -1/+2)
    __attribute__((aligned)) applies the default alignment for the largest
    scalar type for the target ABI. gcc allows it to be applied inline to a
    defined type. Clang only allows it to be applied to a type definition
    (PR11071).

    Making it into 2 lines makes it more readable and works with both
    compilers.

    Author: Mark Charlebois <charlebm@gmail.com>
    Signed-off-by: Mark Charlebois <charlebm@gmail.com>
    Signed-off-by: Behan Webster <behanw@converseincode.com>
* crypto: user - fix info leaks in report API (Mathias Krause, 2013-02-19, 1 file, -1/+2)
    Three errors resulting in kernel memory disclosure:

    1/ The structures used for the netlink based crypto algorithm report API
       are located on the stack. As snprintf() does not fill the remainder of
       the buffer with null bytes, those stack bytes will be disclosed to
       users of the API. Switch to strncpy() to fix this.

    2/ crypto_report_one() does not initialize all fields of struct
       crypto_user_alg. Fix this to fix the heap info leak.

    3/ For the module name we should copy only as many bytes as
       module_name() returns -- not as much as the destination buffer could
       hold. But the current code does not and therefore copies random data
       from behind the end of the module name, as the module name is always
       shorter than CRYPTO_MAX_ALG_NAME.

    Also switch to use strncpy() to copy the algorithm's name and
    driver_name. They are strings, after all.

    Signed-off-by: Mathias Krause <minipli@googlemail.com>
    Cc: Steffen Klassert <steffen.klassert@secunet.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: add crypto_[un]register_shashes for [un]registering multiple shash entries at once (Jussi Kivilinna, 2012-08-01, 1 file, -0/+36)
    Add crypto_[un]register_shashes() to allow simplifying init/exit code of
    shash crypto modules that register multiple algorithms.

    Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
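    A sketch of what the batch registration helper boils down to (the real
    functions are exported from crypto/shash.c):

        int crypto_register_shashes(struct shash_alg *algs, int count)
        {
                int i, ret;

                for (i = 0; i < count; i++) {
                        ret = crypto_register_shash(&algs[i]);
                        if (ret)
                                goto err;
                }
                return 0;

        err:
                /* unwind the algorithms registered so far */
                for (--i; i >= 0; i--)
                        crypto_unregister_shash(&algs[i]);
                return ret;
        }

    A module's init function can then register its whole array with a single
    call and still unwind cleanly on failure.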
* crypto: Stop using NLA_PUT*(). (David S. Miller, 2012-04-02, 1 file, -3/+3)
    These macros contain a hidden goto, and are thus extremely error prone
    and make code hard to audit.

    Signed-off-by: David S. Miller <davem@davemloft.net>
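    The conversion is mechanical; illustrated on the shash report attribute
    (a sketch of the before/after shape, not the exact diff):

        /* before: the macro hides a 'goto nla_put_failure' on error */
        NLA_PUT(skb, CRYPTOCFGA_REPORT_HASH, sizeof(struct crypto_report_hash),
                &rhash);

        /* after: the control flow is explicit at the call site */
        if (nla_put(skb, CRYPTOCFGA_REPORT_HASH,
                    sizeof(struct crypto_report_hash), &rhash))
                goto nla_put_failure;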
* crypto: remove the second argument of k[un]map_atomic() (Cong Wang, 2012-03-20, 1 file, -4/+4)
    Signed-off-by: Cong Wang <amwang@redhat.com>
* crypto: algapi - Fix build problem with NET disabled (Herbert Xu, 2011-11-10, 1 file, -0/+7)
    The report functions use NLA_PUT so we need to ensure that NET is
    enabled.

    Reported-by: Luis Henriques <henrix@camandro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
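    The shape of the fix, roughly (the report function only builds when the
    netlink infrastructure is available; otherwise a stub is compiled in):

        #ifdef CONFIG_NET
        static int crypto_shash_report(struct sk_buff *skb, struct crypto_alg *alg)
        {
                /* build the crypto_report_hash attribute and nla_put() it,
                 * as in the reporting sketch further up this log */
                return 0;
        }
        #else
        static int crypto_shash_report(struct sk_buff *skb, struct crypto_alg *alg)
        {
                return -ENOSYS;
        }
        #endif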
* crypto: Add userspace report for shash type algorithms (Steffen Klassert, 2011-10-21, 1 file, -0/+21)
    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Fix async import on shash algorithm (Herbert Xu, 2010-11-04, 1 file, -1/+7)
    The function shash_async_import did not initialise the descriptor
    correctly prior to calling the underlying shash import function.

    This patch adds the required initialisation.

    Reported-by: Miloslav Trmac <mitr@redhat.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
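    A sketch of the fixed adaptor (close to, but not guaranteed to be, the
    code of that era; note the descriptor's tfm and flags are filled in
    before calling import):

        static int shash_async_import(struct ahash_request *req, const void *in)
        {
                struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
                struct shash_desc *desc = ahash_request_ctx(req);

                desc->tfm = *ctx;
                desc->flags = req->base.flags;  /* the flags field still existed here */

                return crypto_shash_import(desc, in);
        }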
* crypto: shash - Remove usage of CRYPTO_MINALIGN (Herbert Xu, 2010-05-19, 1 file, -1/+1)
    The macro CRYPTO_MINALIGN is not meant to be used directly. This patch
    replaces it with crypto_tfm_ctx_alignment.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Test for the algorithm's import function before exporting it (Steffen Klassert, 2009-07-24, 1 file, -1/+1)
    crypto_init_shash_ops_async() tests for setkey and not for import before
    exporting the algorithm's import function to ahash. This patch fixes
    that.

    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Require all algorithms to support export/import (Herbert Xu, 2009-07-22, 1 file, -8/+11)
    This patch provides a default export/import function for all shash
    algorithms. It simply copies the descriptor context as is done by
    sha1_generic.

    This in essence means that all existing shash algorithms now support
    export/import. This is something that will be depended upon in
    implementations such as hmac.

    Therefore all new shash and ahash implementations must support
    export/import. For those that cannot obtain a partial result,
    padlock-sha's fallback model should be used so that a partial result is
    always available.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
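    The defaults described above amount to copying the raw descriptor
    context; a sketch (the real functions live in crypto/shash.c):

        static int shash_default_export(struct shash_desc *desc, void *out)
        {
                memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(desc->tfm));
                return 0;
        }

        static int shash_default_import(struct shash_desc *desc, const void *in)
        {
                memcpy(shash_desc_ctx(desc), in, crypto_shash_descsize(desc->tfm));
                return 0;
        }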
* crypto: shash - Fix async finup handling of null digest (Herbert Xu, 2009-07-15, 1 file, -2/+7)
    When shash_ahash_finup encounters a null request, we end up not calling
    the underlying final function. This patch fixes that.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ahash - Add unaligned handling and default operations (Herbert Xu, 2009-07-15, 1 file, -4/+48)
    This patch exports the finup operation where available and adds a default
    finup operation for ahash. The operations final, finup and digest also
    will now deal with unaligned result pointers by copying it.

    Finally, export/import operations will now be exported too.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Fix alignment in unaligned operations (Herbert Xu, 2009-07-14, 1 file, -2/+4)
    When we encounter an unaligned pointer we are supposed to copy it to a
    temporary aligned location. However the temporary buffer isn't aligned
    properly. This patch fixes that.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Zap unaligned buffers (Herbert Xu, 2009-07-14, 1 file, -3/+11)
    Some unaligned buffers on the stack weren't zapped properly which may
    cause secret data to be leaked. This patch fixes them by doing a zero
    memset.

    It is also possible for us to place random kernel stack contents in the
    digest buffer if a digest operation fails. This is fixed by only copying
    if the operation succeeded.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ahash - Remove old_ahash_alg (Herbert Xu, 2009-07-14, 1 file, -2/+0)
    Now that all ahash implementations have been converted to the new ahash
    type, we can remove old_ahash_alg and its associated support.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ahash - Convert to new style algorithms (Herbert Xu, 2009-07-14, 1 file, -6/+2)
    This patch converts crypto_ahash to the new style. The old ahash
    algorithm type is retained until the existing ahash implementations are
    also converted. All ahash users will automatically get the new
    crypto_ahash type.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Remove frontend argument from extsize/init_tfm (Herbert Xu, 2009-07-14, 1 file, -4/+2)
    As the extsize and init_tfm functions belong to the frontend, the
    frontend argument is superfluous.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Export async functions (Herbert Xu, 2009-07-14, 1 file, -20/+22)
    This patch exports the async functions so that they can be reused by
    cryptd when it switches over to using shash.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Make descsize a run-time attribute (Herbert Xu, 2009-07-14, 1 file, -11/+28)
    This patch changes descsize to a run-time attribute so that
    implementations can change it in their init functions.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Move null setkey check to registration time (Herbert Xu, 2009-07-12, 1 file, -3/+8)
    This patch moves the run-time null setkey check to shash_prepare_alg just
    like we did for finup/digest.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Move finup/digest null checks to registration time (Herbert Xu, 2009-07-11, 1 file, -4/+6)
    This patch moves the run-time null finup/digest checks to the
    shash_prepare_alg function which is run at registration time.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Export/import hash state only (Herbert Xu, 2009-07-11, 1 file, -11/+14)
    This patch replaces the full descriptor export with an export of the
    partial hash state. This allows the use of a consistent export format
    across all implementations of a given algorithm.

    This is useful because a number of cases require the use of the partial
    hash state, e.g., PadLock can use the SHA1 hash state to get around the
    fact that it can only hash contiguous data chunks.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Propagate reinit return value (Herbert Xu, 2009-07-08, 1 file, -1/+1)
    This patch fixes crypto_shash_import to propagate the value returned by
    reinit.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Use finup in default digest (Herbert Xu, 2009-07-08, 1 file, -2/+1)
    This patch simplifies the default digest function by using finup.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
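    The simplification, roughly (a sketch; the "_sketch" suffix is
    illustrative): the default digest becomes init followed by finup.

        static int shash_digest_sketch(struct shash_desc *desc, const u8 *data,
                                       unsigned int len, u8 *out)
        {
                return crypto_shash_init(desc) ?:
                       crypto_shash_finup(desc, data, len, out);
        }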
* crypto: shash - Add shash_register_instance (Herbert Xu, 2009-07-08, 1 file, -1/+25)
    This patch adds shash_register_instance so that shash instances can be
    registered without bypassing the shash checks applied to normal
    algorithms.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Add shash_attr_alg2 helper (Herbert Xu, 2009-07-08, 1 file, -0/+10)
    This patch adds the helper shash_attr_alg2 which locates a shash
    algorithm based on the information in the given attribute.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Add spawn support (Herbert Xu, 2009-07-08, 1 file, -0/+9)
    This patch adds the functions needed to create and use shash spawns,
    i.e., to use shash algorithms in a template.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Add shash_instance (Herbert Xu, 2009-07-08, 1 file, -0/+7)
    This patch adds shash_instance and the associated alloc/free functions.
    This is meant to be an instance with a shash algorithm under it. Note
    that the instance itself doesn't have to be shash.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Fix unaligned calculation with short length (Yehuda Sadeh, 2009-03-27, 1 file, -0/+3)
    When the total length is shorter than the calculated number of unaligned
    bytes, the call to shash->update breaks. For example, calling crc32c on
    an unaligned buffer with a length of 1 can result in a system crash.

    Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
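    The essence of the fix is to clamp the unaligned head to the total
    length; a sketch (the helper name is illustrative, not from the patch):

        static unsigned int shash_unaligned_head_len(const u8 *data,
                                                     unsigned int len,
                                                     unsigned long alignmask)
        {
                unsigned int unaligned_len = alignmask + 1 -
                                             ((unsigned long)data & alignmask);

                /* e.g. a 1-byte crc32c update over an unaligned buffer must not
                 * process more bytes than the caller passed in */
                return unaligned_len > len ? len : unaligned_len;
        }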