path: root/include/crypto/algapi.h
Commit message (Author, Date; Files, Lines changed)
* crypto: api - Add crypto_requires_off helper (Herbert Xu, 2017-02-27; 1 file, -1/+6)
  This patch adds crypto_requires_off which is an extension of crypto_requires_sync for similar bits such as NEED_FALLBACK.
  Cc: stable@vger.kernel.org #4.10
  Suggested-by: Marcelo Cerri <marcelo.cerri@canonical.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
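The helper itself is a one-line bit test. A userspace sketch of the idea (flag values and function names below are ours, chosen for illustration; the real constants and helpers live in the kernel headers):

#include <stdint.h>
#include <stdio.h>

/* Flag values chosen for illustration; the real CRYPTO_ALG_* constants live
 * in the kernel headers. */
#define ALG_ASYNC         0x00000080u
#define ALG_NEED_FALLBACK 0x00000100u

/* Sketch of the idea behind crypto_requires_off(): a caller "requires" a flag
 * to be off when it asked for the bit to be clear (clear in type) and it
 * actually cares about the bit (set in mask). */
static uint32_t requires_off(uint32_t type, uint32_t mask, uint32_t off)
{
    return (type ^ off) & mask & off;
}

/* crypto_requires_sync() is then just the ASYNC special case. */
static uint32_t requires_sync(uint32_t type, uint32_t mask)
{
    return requires_off(type, mask, ALG_ASYNC);
}

int main(void)
{
    /* Caller asks for a synchronous algorithm: ASYNC clear in type, set in mask. */
    printf("requires sync: %d\n", requires_sync(0, ALG_ASYNC) != 0);
    /* Caller does not mention NEED_FALLBACK at all: nothing is required. */
    printf("requires no-fallback: %d\n", requires_off(0, 0, ALG_NEED_FALLBACK) != 0);
    return 0;
}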
* crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic (Ard Biesheuvel, 2017-02-11; 1 file, -2/+18)
  Instead of unconditionally forcing 4 byte alignment for all generic chaining modes that rely on crypto_xor() or crypto_inc() (which may result in unnecessary copying of data when the underlying hardware can perform unaligned accesses efficiently), make those functions deal with unaligned input explicitly, but only if the Kconfig symbol HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.
  For crypto_inc(), this simply involves making the 4-byte stride conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that it typically operates on 16 byte buffers.
  For crypto_xor(), an algorithm is implemented that simply runs through the input using the largest strides possible if unaligned accesses are allowed. If they are not, an optimal sequence of memory accesses is emitted that takes the relative alignment of the input buffers into account, e.g., if the relative misalignment of dst and src is 4 bytes, the entire xor operation will be completed using 4 byte loads and stores (modulo unaligned bits at the start and end). Note that all expressions involving misalign are simply eliminated by the compiler when HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
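The strategy described above boils down to "use word-wide accesses where you can, bytes where you cannot". A minimal userspace sketch of that idea (illustrative only; see the comment in the code for what the kernel version does differently):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative userspace sketch of the strategy described above, not the
 * kernel code: xor src into dst using word-sized strides where possible and
 * plain bytes elsewhere.  The kernel version keys its fast path off
 * CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS and also exploits the relative
 * misalignment of dst and src, which this sketch does not.
 */
void xor_buf(uint8_t *dst, const uint8_t *src, size_t len)
{
    /* Byte loop until dst is word aligned (or we run out of data). */
    while (len && ((uintptr_t)dst & (sizeof(unsigned long) - 1))) {
        *dst++ ^= *src++;
        len--;
    }

    /*
     * Word loop: memcpy() keeps the accesses well-defined even if src is
     * unaligned; compilers lower it to plain loads/stores on machines with
     * efficient unaligned access.
     */
    while (len >= sizeof(unsigned long)) {
        unsigned long a, b;

        memcpy(&a, dst, sizeof(a));
        memcpy(&b, src, sizeof(b));
        a ^= b;
        memcpy(dst, &a, sizeof(a));
        dst += sizeof(unsigned long);
        src += sizeof(unsigned long);
        len -= sizeof(unsigned long);
    }

    /* Tail bytes. */
    while (len--)
        *dst++ ^= *src++;
}

int main(void)
{
    uint8_t dst[5] = { 0x00, 0xff, 0x0f, 0xf0, 0xaa };
    uint8_t src[5] = { 0xff, 0xff, 0xf0, 0x0f, 0xaa };

    xor_buf(dst, src, sizeof(dst));
    for (size_t i = 0; i < sizeof(dst); i++)
        printf("%02x ", dst[i]);
    printf("\n");    /* prints: ff 00 ff ff 00 */
    return 0;
}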
* crypto: engine - move crypto engine to its own header (Corentin LABBE, 2016-09-07; 1 file, -70/+0)
  This patch moves the whole crypto engine API to its own header, crypto/engine.h.
  Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Optimise away crypto_yield when hard preemption is on (Herbert Xu, 2016-07-18; 1 file, -0/+2)
  When hard preemption is enabled there is no need to explicitly call crypto_yield. This patch eliminates it if that is the case.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Add crypto_inst_setname (Herbert Xu, 2016-07-01; 1 file, -0/+2)
  This patch adds the helper crypto_inst_setname because the current helper crypto_alloc_instance2 is no longer useful given that we now look up the algorithm after we allocate the instance object.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
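Composing an instance name is mostly string formatting with a bounds check. A simplified userspace sketch under that assumption (names in the code are ours, not the kernel's):

#include <errno.h>
#include <stdio.h>

/*
 * Sketch of what a "set instance name" helper has to do; a simplified
 * userspace version, not the kernel's crypto_inst_setname.  MAX_ALG_NAME and
 * set_inst_name are our own names; the kernel bounds names with
 * CRYPTO_MAX_ALG_NAME in the same spirit.
 */
#define MAX_ALG_NAME 128

static int set_inst_name(char *dst, const char *tmpl, const char *alg)
{
    /* Compose "template(algorithm)" and fail cleanly if it does not fit. */
    if (snprintf(dst, MAX_ALG_NAME, "%s(%s)", tmpl, alg) >= MAX_ALG_NAME)
        return -ENAMETOOLONG;
    return 0;
}

int main(void)
{
    char name[MAX_ALG_NAME];

    if (set_inst_name(name, "hmac", "sha256") == 0)
        printf("%s\n", name);    /* prints: hmac(sha256) */
    return 0;
}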
* crypto: hash - Remove crypto_hash interface (Herbert Xu, 2016-02-06; 1 file, -18/+0)
  This patch removes all traces of the crypto_hash interface, now that everyone has switched over to shash or ahash.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: engine - Introduce the block request crypto engine framework (Baolin Wang, 2016-02-01; 1 file, -0/+70)
  Currently block cipher engines need to implement and maintain their own queue/thread for processing requests. Helpers are provided only for the queue itself (crypto_enqueue_request() and crypto_dequeue_request()); they do not help with the mechanics of driving the hardware (things like running a request immediately, DMA-mapping it, or providing a thread to process the queue), even though a lot of that code really shouldn't vary much from device to device. This patch therefore provides a mechanism that drivers can use to push requests to the hardware as it becomes free. The framework is patterned on the SPI code, where it has worked out well (https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/spi/spi.c?id=ffbbdd21329f3e15eeca6df2d4bc11c04d9d91c0).
  Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Introduce crypto_queue_len() helper function (Baolin Wang, 2016-02-01; 1 file, -0/+4)
  This patch introduces the crypto_queue_len() helper function for obtaining the number of requests currently in a crypto queue.
  Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Add instance free function to crypto_type (Herbert Xu, 2015-07-14; 1 file, -0/+2)
  Currently the task of freeing an instance is given to the crypto template. However, it has no type information on the instance so we have to resort to checking type information at runtime. This patch introduces a free function to crypto_type that will be used to free an instance. This can then be used to free an instance in a type-safe manner.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Remove unused __crypto_dequeue_request (Herbert Xu, 2015-07-14; 1 file, -1/+0)
  The function __crypto_dequeue_request is completely unused.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aead - Convert top level interface to new style (Herbert Xu, 2015-05-13; 1 file, -32/+1)
  This patch converts the top-level aead interface to the new style. All user-level AEAD interface code has been moved into crypto/aead.h. The allocation/free functions have switched over to the new way of allocating tfms. This patch also removes the double indirection on setkey so the indirection now exists only at the alg level. Apart from these there are no user-visible changes.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Add crypto_grab_spawn primitive (Herbert Xu, 2015-05-13; 1 file, -0/+2)
  This patch adds a new primitive crypto_grab_spawn which is meant to replace crypto_init_spawn and crypto_init_spawn2. Under the new scheme the user no longer has to worry about reference counting the alg object before it is subsumed by the spawn. It is pretty much an exact copy of crypto_grab_aead. Prior to calling this function spawn->frontend and spawn->inst must have been set.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Change crypto_unregister_instance argument type (Herbert Xu, 2015-04-03; 1 file, -1/+1)
  This patch makes crypto_unregister_instance take a crypto_instance instead of a crypto_alg. This allows us to remove a duplicate CRYPTO_ALG_INSTANCE check in crypto_unregister_instance.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Move crypto_yield() to algapi.h (Marek Vasut, 2014-06-20; 1 file, -0/+6)
  It makes no sense for crypto_yield() to be defined in scatterwalk.h, so move it into algapi.h, as it is a function internal to the crypto API.
  Signed-off-by: Marek Vasut <marex@denx.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: allow blkcipher walks over AEAD data (Ard Biesheuvel, 2014-03-10; 1 file, -0/+4)
  This adds the function blkcipher_aead_walk_virt_block, which allows the caller to use the blkcipher walk API to handle the input and output scatterlists.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: remove direct blkcipher_walk dependency on transform (Ard Biesheuvel, 2014-03-10; 1 file, -1/+4)
  In order to allow other uses of the blkcipher walk API than the blkcipher algos themselves, this patch copies some of the transform data members to the walk struct so the transform is only accessed at walk init time.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: crypto_memneq - add equality testing of memory regions w/o timing leaks (James Yonan, 2013-10-07; 1 file, -1/+17)
  When comparing MAC hashes, AEAD authentication tags, or other hash values in the context of authentication or integrity checking, it is important not to leak timing information to a potential attacker, i.e. when communication happens over a network.
  Bytewise memory comparisons (such as memcmp) are usually optimized so that they return a nonzero value as soon as a mismatch is found. E.g., on x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch and up to ~850 cyc for a full match (cold). This early-return behavior can leak timing information as a side channel, allowing an attacker to iteratively guess the correct result.
  This patch adds a new method crypto_memneq ("memory not equal to each other") to the crypto API that compares memory areas of the same length in roughly "constant time" (cache misses could change the timing, but since they don't reveal information about the content of the strings being compared, they are effectively benign). In other words, best and worst case behaviour take the same amount of time to complete (in contrast to memcmp). Note that crypto_memneq (unlike memcmp) can only be used to test for equality or inequality, NOT for lexicographical order. This, however, is not an issue for its use-cases within the crypto API.
  We tried to locate all of the places in the crypto API where memcmp was being used for authentication or integrity checking, and convert them over to crypto_memneq.
  crypto_memneq is declared noinline, placed in its own source file, and compiled with optimizations that might increase code size disabled ("Os") because a smart compiler (or LTO) might notice that the return value is always compared against zero/nonzero, and might then reintroduce the same early-return optimization that we are trying to avoid. Using #pragma or __attribute__ optimization annotations of the code for disabling optimization was avoided as it seems to have been considered broken or unmaintained for a long time in GCC [1]. Therefore, we work around that by specifying the compile flag for memneq.o directly in the Makefile, which we found to be the most appropriate approach.
  As we use ("Os"), this patch also provides a loop-free "fast-path" for frequently used 16 byte digests. Similarly to kernel library string functions, we leave an option for future, even further optimized, architecture-specific assembler implementations.
  This was a joint work of James Yonan and Daniel Borkmann. Thanks also for feedback from Florian Weimer on this and earlier proposals [2].
  [1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
  [2] https://lkml.org/lkml/2013/2/10/131
  Signed-off-by: James Yonan <james@openvpn.net>
  Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
  Cc: Florian Weimer <fw@deneb.enyo.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
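A minimal userspace sketch of the constant-time comparison idea described above (illustrative only; the kernel's crypto_memneq additionally has the loop-free 16-byte fast path mentioned in the message):

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch of the constant-time idea, not the kernel's
 * crypto_memneq: accumulate the XOR of every byte pair and only look at the
 * result once, so the loop does the same work whether or not the buffers
 * match.  Returns nonzero if the regions differ.
 */
unsigned long memneq_sketch(const void *a, const void *b, size_t size)
{
    const uint8_t *pa = a, *pb = b;
    unsigned long neq = 0;

    while (size--)
        neq |= (unsigned long)(*pa++ ^ *pb++);

    return neq;
}

int main(void)
{
    unsigned char tag1[16] = { 0 }, tag2[16] = { 0 };

    tag2[15] = 1;
    printf("same:      %lu\n", memneq_sketch(tag1, tag1, sizeof(tag1)));  /* 0 */
    printf("different: %s\n", memneq_sketch(tag1, tag2, sizeof(tag2)) ? "nonzero" : "0");
    return 0;
}

As the commit message stresses, the hard part is keeping the compiler from reintroducing an early exit, which is why memneq.o is built as its own noinline, Os-compiled unit rather than relying on per-function annotations.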
* crypto: Unlink and free instances when deleted (Steffen Klassert, 2011-11-09; 1 file, -0/+1)
  We leak the crypto instance when we unregister an instance with crypto_del_alg(). Therefore we introduce crypto_unregister_instance() to unlink the crypto instance from the template's instances list and to free the resources of the instance properly.
  Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: Add a report function pointer to crypto_type (Steffen Klassert, 2011-10-21; 1 file, -0/+2)
  We add a report function pointer to struct crypto_type. This function pointer is used from the crypto userspace configuration API to report crypto algorithms to userspace.
  Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: skcipher - Add ablkcipher_walk interfaces (David S. Miller, 2010-05-19; 1 file, -0/+40)
  These are akin to the blkcipher_walk helpers. The main differences in the async variant are:
  1) Only physical walking is supported. We can't hold on to kmap mappings across the async operation to support virtual ablkcipher_walk operations anyway.
  2) Bounce buffers used for async mode need to be persistent and freed at a later point in time when the async op completes. Therefore we maintain a list of writeback buffers and require that the ablkcipher_walk user call the 'complete' operation so we can copy the bounce buffers out to the real buffers and free up the bounce buffer chunks.
  These interfaces will be used by the new Niagara2 crypto driver.
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Remove legacy hash/digest code (Benjamin Gilbert, 2009-10-19; 1 file, -1/+0)
  6941c3a0 disabled compilation of the legacy digest code but didn't actually remove it. Rectify this. Also, remove the crypto_hash_type extern declaration from algapi.h now that the struct is gone.
  Signed-off-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2009-09-11; 1 file, -11/+26)
  * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
      crypto: sha-s390 - Fix warnings in import function
      crypto: vmac - New hash algorithm for intel_txt support
      crypto: api - Do not displace newly registered algorithms
      crypto: ansi_cprng - Fix module initialization
      crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
      crypto: fips - Depend on ansi_cprng
      crypto: blkcipher - Do not use eseqiv on stream ciphers
      crypto: ctr - Use chainiv on raw counter mode
      Revert crypto: fips - Select CPRNG
      crypto: rng - Fix typo
      crypto: talitos - add support for 36 bit addressing
      crypto: talitos - align locks on cache lines
      crypto: talitos - simplify hmac data size calculation
      crypto: mv_cesa - Add support for Orion5X crypto engine
      crypto: cryptd - Add support to access underlaying shash
      crypto: gcm - Use GHASH digest algorithm
      crypto: ghash - Add GHASH digest algorithm for GCM
      crypto: authenc - Convert to ahash
      crypto: api - Fix aligned ctx helper
      crypto: hmac - Prehash ipad/opad
      ...
| * crypto: api - Fix aligned ctx helper (Herbert Xu, 2009-07-24; 1 file, -6/+2)
  The aligned ctx helper was using a bogus alignment value that was off by one from the correct value. Fortunately the current users do not require anything beyond the natural alignment of the platform, so this hasn't caused a problem. This patch fixes that and also removes the unnecessary minimum check, since if the alignment is less than the natural alignment then the subsequent ALIGN operation should be a noop.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
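For context, these helpers work in terms of an alignmask, where a mask of N means N+1-byte alignment; mixing up the mask and the alignment value is exactly the kind of off-by-one this commit describes. A small self-contained illustration (our own code, not the kernel helper):

#include <stdint.h>
#include <stdio.h>

/*
 * Our own illustration of the mask-vs-alignment convention such helpers deal
 * with, not the kernel code: an alignmask of N means "align to N + 1 bytes",
 * so rounding a context pointer up is (addr + mask) & ~mask.  Confusing the
 * mask with the alignment value (mask + 1) gives the kind of off-by-one the
 * commit above describes.
 */
static uintptr_t ctx_align_up(uintptr_t addr, uintptr_t alignmask)
{
    return (addr + alignmask) & ~alignmask;
}

int main(void)
{
    uintptr_t addr = 0x1003;

    /* Correct: mask 15 -> 16-byte alignment. */
    printf("%#lx\n", (unsigned long)ctx_align_up(addr, 15));  /* 0x1010 */
    /* Wrong: passing the alignment (16) where the mask is expected. */
    printf("%#lx\n", (unsigned long)ctx_align_up(addr, 16));  /* 0x1003, not aligned */
    return 0;
}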
| * crypto: cryptd - Switch to template create API (Herbert Xu, 2009-07-14; 1 file, -0/+3)
  This patch changes cryptd to use the template->create function instead of alloc in anticipation for the switch to new style ahash algorithms.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Remove frontend argument from extsize/init_tfm (Herbert Xu, 2009-07-14; 1 file, -4/+2)
  As the extsize and init_tfm functions belong to the frontend the frontend argument is superfluous.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add crypto_attr_alg2 helper (Herbert Xu, 2009-07-08; 1 file, -1/+10)
  This patch adds the helper crypto_attr_alg2 which is similar to crypto_attr_alg but takes an extra frontend argument. This is intended to be used by new style algorithm types such as shash.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add new style spawn support (Herbert Xu, 2009-07-08; 1 file, -0/+6)
  This patch modifies the spawn infrastructure to support new style algorithms like shash. In particular, this means storing the frontend type in the spawn and using crypto_create_tfm to allocate the tfm.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add crypto_alloc_instance2 (Herbert Xu, 2009-07-07; 1 file, -0/+2)
  This patch adds a new argument to crypto_alloc_instance which sets aside some space before the instance for use by algorithms such as shash that place type-specific data before crypto_alg. For compatibility the function has been renamed so that existing users aren't affected.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add new template create function (Herbert Xu, 2009-07-07; 1 file, -0/+1)
  This patch introduces the template->create function intended to replace the existing alloc function. The intention is for create to handle the registration directly, whereas currently the caller of alloc has to handle the registration. This allows type-specific code to be run prior to registration.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: skcipher - Fix skcipher_dequeue_givcrypt NULL test (Herbert Xu, 2009-08-29; 1 file, -0/+1)
  As struct skcipher_givcrypt_request includes struct crypto_request at a non-zero offset, testing for NULL after converting the pointer returned by crypto_dequeue_request does not work. This can result in IPsec crashes when the queue is depleted. This patch fixes it by doing the pointer conversion only when the return value is non-NULL. In particular, we create a new function __crypto_dequeue_request that does the pointer conversion.
  Reported-by: Brad Bosch <bradbosch@comcast.net>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
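The underlying pitfall is generic C rather than crypto-specific: converting a NULL pointer to an embedded member at a non-zero offset back to its container does not yield NULL. A standalone illustration (types and the simplified container_of are ours):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified container_of(), going through uintptr_t so the NULL case below
 * stays well-defined enough to demonstrate the point. */
#define container_of(ptr, type, member) \
    ((type *)(void *)((uintptr_t)(ptr) - offsetof(type, member)))

struct inner { int x; };
struct outer { long pad; struct inner member; };   /* member at non-zero offset */

int main(void)
{
    struct inner *in = NULL;
    struct outer *out = container_of(in, struct outer, member);

    /* The converted pointer is NOT NULL, so a NULL test done after the
     * conversion, as in the buggy skcipher_dequeue_givcrypt, never fires. */
    printf("offset = %zu, out == NULL? %s\n",
           offsetof(struct outer, member), out ? "no" : "yes");
    return 0;
}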
* crypto: hash - Export shash through hash (Herbert Xu, 2008-12-25; 1 file, -0/+5)
  This patch allows shash algorithms to be used through the old hash interface. This is a transitional measure so we can convert the underlying algorithms to shash before converting the users across.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Rebirth of crypto_alloc_tfm (Herbert Xu, 2008-12-25; 1 file, -0/+10)
  This patch reintroduces a completely revamped crypto_alloc_tfm. The biggest change is that we now take two crypto_type objects when allocating a tfm, a frontend and a backend. In fact this simply formalises what we've been doing behind the API's back. For example, as it stands crypto_alloc_ahash may use an actual ahash algorithm or a crypto_hash algorithm. Putting this in the API allows us to do this much more cleanly. The existing types will be converted across gradually.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Move type exit function into crypto_tfm (Herbert Xu, 2008-12-25; 1 file, -1/+0)
  The type exit function needs to undo any allocations done by the type init function. However, the type init function may differ depending on the upper-level type of the transform (e.g., a crypto_blkcipher instantiated as a crypto_ablkcipher). So we need to move the exit function out of the lower-level structure and into crypto_tfm itself.
  As it stands this is a no-op since nobody uses exit functions at all. However, all cases where a lower-level type is instantiated as a different upper-level type (such as blkcipher as ablkcipher) will be converted such that they allocate the underlying transform and use that instead of casting (e.g., crypto_ablkcipher casted into crypto_blkcipher). That will need to use a different exit function depending on the upper-level type.
  This patch also allows the type init/exit functions to call (or not) cra_init/cra_exit instead of always calling them from the top level.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] skcipher: Remove crypto_spawn_ablkcipher (Herbert Xu, 2008-01-10; 1 file, -8/+0)
  Now that gcm and authenc have been converted to crypto_spawn_skcipher, this patch removes the obsolete crypto_spawn_ablkcipher function.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] skcipher: Add crypto_grab_skcipher interface (Herbert Xu, 2008-01-10; 1 file, -4/+18)
  Note: From now on the collective of ablkcipher/blkcipher/givcipher will be known as skcipher, i.e., symmetric key cipher. The name blkcipher has always been something of a misnomer since it supports stream ciphers too.
  This patch adds the function crypto_grab_skcipher as a new way of getting an ablkcipher spawn. The problem is that previously we did this in two steps, first getting the algorithm and then calling crypto_init_spawn. This meant that each spawn user had to be aware of what type and mask to use for these two steps. This is difficult and also presents a problem when the type/mask changes, as they're about to for IV generators. The new interface does both steps together, just like crypto_alloc_ablkcipher.
  As a side-effect this also allows us to be stronger on type enforcement for spawns. For now this is only done for ablkcipher but it's trivial to extend for other types.
  This patch also moves the type/mask logic for skcipher into the helpers crypto_skcipher_type and crypto_skcipher_mask. Finally this patch introduces the function crypto_requires_sync to determine whether the user is specifically requesting a sync algorithm.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add crypto_attr_alg_name (Herbert Xu, 2008-01-10; 1 file, -0/+1)
  This patch adds a new helper crypto_attr_alg_name which is basically the first half of crypto_attr_alg. That is, it returns an algorithm name parameter as a string without looking it up. The caller can then look it up immediately or defer it until later.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add crypto_inc and crypto_xor (Herbert Xu, 2008-01-10; 1 file, -0/+4)
  With the addition of more stream ciphers we need to curb the proliferation of ad-hoc xor functions. This patch creates a generic pair of functions, crypto_inc and crypto_xor, which do big-endian increment and exclusive or, respectively. For optimum performance, they both use u32 operations, so the alignment must be that of u32 even though the arguments are of type u8 *.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
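A userspace sketch of the big-endian increment that crypto_inc provides (illustrative only; per the note above, the kernel version works in u32 chunks for speed):

#include <stdint.h>
#include <stdio.h>

/* Sketch of a big-endian increment: add one to the last byte and ride the
 * carry toward the front of the buffer, as counter modes such as CTR expect. */
void inc_be(uint8_t *a, size_t size)
{
    while (size--) {
        if (++a[size] != 0)    /* no carry: done */
            break;
    }
}

int main(void)
{
    uint8_t ctr[4] = { 0x00, 0x00, 0x00, 0xff };

    inc_be(ctr, sizeof(ctr));
    printf("%02x %02x %02x %02x\n", ctr[0], ctr[1], ctr[2], ctr[3]);
    /* prints: 00 00 01 00 */
    return 0;
}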
* [CRYPTO] ablkcipher: Add distinct ABLKCIPHER type (Herbert Xu, 2008-01-10; 1 file, -2/+2)
  Up until now, ablkcipher algorithms have been identified as type BLKCIPHER with the ASYNC bit set. This is suboptimal because ablkcipher refers to two things. On the one hand it refers to the top-level ablkcipher interface with requests. On the other hand it refers to an algorithm type underneath.
  As it is, you cannot request a synchronous block cipher algorithm with the ablkcipher interface on top. This is a problem because we want to be able to eventually phase out the blkcipher top-level interface.
  This patch fixes this by making ABLKCIPHER its own type, just as we have distinct types for HASH and DIGEST. The type is associated with the algorithm implementation only. Which top-level interface is used for synchronous block ciphers is then determined by the mask that's used. If it's a specific mask, then the old blkcipher interface is given; otherwise we go with the new ablkcipher interface.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] blkcipher: Added blkcipher_walk_virt_block (Herbert Xu, 2007-10-11; 1 file, -0/+4)
  This patch adds the helper blkcipher_walk_virt_block which is similar to blkcipher_walk_virt but uses a supplied block size instead of the block size of the block cipher. This is useful for CTR where the block size is 1 but we still want to walk by the block size of the underlying cipher.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] aead: Add authenc (Herbert Xu, 2007-10-11; 1 file, -1/+43)
  This patch adds the authenc algorithm which constructs an AEAD algorithm from an asynchronous block cipher and a hash. The construction is done by concatenating the encrypted result from the cipher with the output from the hash, as is used by the IPsec ESP protocol.
  The authenc algorithm exists as a template with four parameters: authenc(auth, authsize, enc, enckeylen). The authentication algorithm, the authentication size (i.e., truncating the output of the authentication algorithm), the encryption algorithm, and the encryption key length. Both the size field and the key length field are in bytes. For example, AES-128 with SHA1-HMAC would be represented by authenc(hmac(sha1), 12, cbc(aes), 16)
  The key for the authenc algorithm is the concatenation of the keys for the authentication algorithm with the encryption algorithm. For the above example, if a key of length 36 bytes is given, then hmac(sha1) would receive the first 20 bytes while the last 16 would be given to cbc(aes).
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
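Using the numbers from the example above, the setkey blob for authenc(hmac(sha1), 12, cbc(aes), 16) is just the two keys back to back. A trivial illustration of that layout (buffer names and fill values are ours):

#include <stdio.h>
#include <string.h>

/*
 * Illustration of the key layout described above: the key handed to the
 * template is the authentication key followed by the encryption key.  The
 * sizes (20 + 16 = 36 bytes) come from the example in the commit message.
 */
int main(void)
{
    unsigned char key[36];               /* 20 bytes hmac(sha1) + 16 bytes cbc(aes) */
    unsigned char auth_key[20], enc_key[16];

    memset(key, 0xAA, 20);               /* stand-in authentication key */
    memset(key + 20, 0xBB, 16);          /* stand-in encryption key */

    memcpy(auth_key, key, sizeof(auth_key));
    memcpy(enc_key, key + sizeof(auth_key), sizeof(enc_key));

    printf("hmac(sha1) gets %zu bytes, cbc(aes) gets %zu bytes\n",
           sizeof(auth_key), sizeof(enc_key));
    return 0;
}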
* [CRYPTO] ablkcipher: Remove queue pointer from common alg object (Herbert Xu, 2007-10-11; 1 file, -7/+7)
  Since not everyone needs a queue pointer, and those who need it can always get it from the context anyway, the queue pointer in the common alg object is redundant.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add aead crypto type (Herbert Xu, 2007-10-11; 1 file, -0/+6)
  This patch adds crypto_aead which is the interface for AEAD (Authenticated Encryption with Associated Data) algorithms. AEAD algorithms perform authentication and encryption in one step. Traditionally users (such as IPsec) would use two different crypto algorithms to perform these. With AEAD this comes down to one algorithm and one operation. Of course if traditional algorithms were used we'd still be doing two operations underneath. However, real AEAD algorithms may allow the underlying operations to be optimised as well.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add crypto_ablkcipher_ctx_aligned (Sebastian Siewior, 2007-10-11; 1 file, -0/+5)
  This function does the same thing for ablkcipher that is done for blkcipher by crypto_blkcipher_ctx_aligned(): it returns an aligned address of the private ctx.
  Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] cryptd: Add software async crypto daemon (Herbert Xu, 2007-05-02; 1 file, -0/+15)
  This patch adds the cryptd module which is a template that takes a synchronous software crypto algorithm and converts it to an asynchronous one by executing it in a kernel thread.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Do not remove users unless new algorithm matches (Herbert Xu, 2007-05-02; 1 file, -1/+2)
  As it is, whenever a new algorithm with the same name is registered, users of the old algorithm will be removed so that they can take advantage of the new algorithm. This presents a problem when the new algorithm is not equivalent to the old algorithm. In particular, the new algorithm might only function on top of the existing one. Hence we should not remove users unless they can make use of the new algorithm.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add async blkcipher type (Herbert Xu, 2007-05-02; 1 file, -0/+58)
  This patch adds the mid-level interface for asynchronous block ciphers. It also includes a generic queueing mechanism that can be used by other asynchronous crypto operations in future.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
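The generic queueing mechanism mentioned here is essentially a bounded FIFO of requests. A toy userspace model of that contract (struct and function names are ours, not the kernel's crypto_queue API, and backlog handling is deliberately left out):

#include <errno.h>
#include <stdio.h>

/*
 * Toy model of a bounded request queue: enqueue fails with -EBUSY once a
 * maximum depth is reached, and dequeue hands requests back in FIFO order.
 */
#define MAX_QLEN 8

struct req_queue {
    void *slots[MAX_QLEN];
    unsigned int head, tail, qlen;
};

static int rq_enqueue(struct req_queue *q, void *req)
{
    if (q->qlen >= MAX_QLEN)
        return -EBUSY;           /* queue full: caller must retry later */
    q->slots[q->tail] = req;
    q->tail = (q->tail + 1) % MAX_QLEN;
    q->qlen++;
    return 0;
}

static void *rq_dequeue(struct req_queue *q)
{
    void *req;

    if (!q->qlen)
        return NULL;
    req = q->slots[q->head];
    q->head = (q->head + 1) % MAX_QLEN;
    q->qlen--;
    return req;
}

int main(void)
{
    struct req_queue q = { { 0 }, 0, 0, 0 };
    int dummy = 42;

    if (rq_enqueue(&q, &dummy) == 0)
        printf("dequeued %d\n", *(int *)rq_dequeue(&q));
    return 0;
}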
* [CRYPTO] templates: Pass type/mask when creating instances (Herbert Xu, 2007-05-02; 1 file, -3/+5)
  This patch passes the type/mask along when constructing instances of templates. This is in preparation for templates that may support multiple types of instances depending on what is requested. For example, the planned software async crypto driver will use this construct. For the moment this allows us to check whether the instance constructed is of the correct type and avoid returning success if the type does not match.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Allow multiple frontends per backend (Herbert Xu, 2007-02-06; 1 file, -2/+2)
  This patch adds support for multiple frontend types for each backend algorithm by passing the type and mask through to the backend type init function.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add type-safe spawns (Herbert Xu, 2007-02-06; 1 file, -1/+19)
  This patch allows spawns of specific types (e.g., cipher) to be allocated.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] digest: Added user API for new hash type (Herbert Xu, 2006-09-21; 1 file, -0/+6)
  The existing digest user interface is inadequate for supporting asynchronous operations. For one, it doesn't return a value to indicate success or failure, nor does it take a per-operation descriptor, which is essential for the issuing of requests while other requests are still outstanding.
  This patch is the first in a series of steps to remodel the interface for asynchronous operations. For the ease of transition the new interface will be known as "hash" while the old one will remain as "digest".
  This patch also changes sg_next to allow chaining.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>