path: root/crypto
Commit log (most recent first); each entry lists the author, date, number of files changed, and -/+ line counts.
* [CRYPTO] des: Rename des to des-generic (Sebastian Siewior, 2007-10-11; 2 files, -1/+2)
    Loading the crypto algorithm by the alias instead of by module directly has the advantage that all possible implementations of this algorithm are loaded automatically and the crypto API can choose the best one depending on its priority.
    Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
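    A minimal sketch of the idea, assuming the 2007-era struct crypto_alg fields; the priority value is illustrative:

        /* des_generic.c registers under the generic name so a request for
         * "des" can be satisfied by any provider; the crypto API then picks
         * the implementation with the highest cra_priority. */
        static struct crypto_alg des_alg = {
        	.cra_name        = "des",          /* what users ask for */
        	.cra_driver_name = "des-generic",  /* this implementation */
        	.cra_priority    = 100,            /* illustrative value */
        	/* ... block size, context size, cia_setkey/encrypt/decrypt ... */
        };

        /* lets request_module("des") load this module even though it is
         * now built as des_generic */
        MODULE_ALIAS("des");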
* [CRYPTO] blkcipher: Added blkcipher_walk_virt_block (Herbert Xu, 2007-10-11; 1 file, -10/+24)
    This patch adds the helper blkcipher_walk_virt_block which is similar to blkcipher_walk_virt but uses a supplied block size instead of the block size of the block cipher. This is useful for CTR where the block size is 1 but we still want to walk by the block size of the underlying cipher.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
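    A sketch of the intended use in a CTR-style mode, assuming the blkcipher_walk helpers named above; the surrounding skeleton and the keystream step are illustrative:

        static int ctr_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                             struct scatterlist *src, unsigned int nbytes)
        {
                unsigned int bsize = 16;  /* underlying cipher's block size; real
                                             code reads it from the instance context */
                struct blkcipher_walk walk;
                int err;

                blkcipher_walk_init(&walk, dst, src, nbytes);
                /* walk in multiples of the underlying cipher's block size even
                 * though CTR itself has a block size of 1 */
                err = blkcipher_walk_virt_block(desc, &walk, bsize);

                while (walk.nbytes) {
                        /* ... XOR the keystream into walk.src.virt.addr and store
                         *     the result at walk.dst.virt.addr, walk.nbytes bytes ... */
                        err = blkcipher_walk_done(desc, &walk, 0);
                }
                return err;
        }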
* [CRYPTO] blkcipher: Increase kmalloc amount to aligned block size (Herbert Xu, 2007-10-11; 1 file, -1/+1)
    Now that the block size is no longer a multiple of the alignment, we need to increase the kmalloc amount in blkcipher_next_slow to use the aligned block size.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Explain the comparison on larval cra_name (Herbert Xu, 2007-10-11; 1 file, -0/+5)
    This patch adds a comment to explain why we compare the cra_driver_name of the algorithm being registered against the cra_name of a larval as opposed to the cra_driver_name of the larval. In fact larvals have only one name, cra_name, which is the name that was requested by the user. The test here is simply trying to find out whether the algorithm being registered can or cannot satisfy the larval.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] blkcipher: Remove alignment restriction on block size (Herbert Xu, 2007-10-11; 2 files, -8/+8)
    Previously we assumed for convenience that the block size is a multiple of the algorithm's required alignment. With the pending addition of CTR this will no longer be the case as the block size will be 1 due to it being a stream cipher. However, the alignment requirement will be that of the underlying implementation which will most likely be greater than 1.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] authenc: Kill spaces in algorithm names (Herbert Xu, 2007-10-11; 1 file, -2/+2)
    We do not allow spaces in algorithm names or parameters. Thanks to Joy Latten for pointing this out.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] cryptomgr: Fix parsing of recursive algorithms (Herbert Xu, 2007-10-11; 1 file, -1/+2)
    As Joy Latten points out, inner algorithm parameters will miss the closing bracket, which will also cause the outer algorithm to terminate prematurely. This patch fixes that. It also kills the WARN_ON when the number of parameters exceeds the maximum, as that is a user error.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] xts: XTS blockcipher mode implementation without partial blocks (Rik Snel, 2007-10-11; 6 files, -0/+744)
    XTS is currently considered to be the successor of the LRW mode by the IEEE 1619 workgroup. LRW was discarded because it was not secure if the encryption key itself is encrypted with LRW. XTS does not have this problem. The implementation is pretty straightforward: a new function was added to gf128mul to handle GF(128) elements in the ble format. Four test vectors from the specification http://grouper.ieee.org/groups/1619/email/pdf00086.pdf were added, and they verify on my system.
    Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
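    A usage sketch once the template is registered; the key handling is assumed from the XTS scheme itself (two concatenated keys of equal size, one for data and one for tweak computation):

        struct crypto_blkcipher *tfm;
        u8 key[32];   /* two concatenated AES-128 keys: data key || tweak key */

        tfm = crypto_alloc_blkcipher("xts(aes)", 0, CRYPTO_ALG_ASYNC); /* mask out async impls */
        crypto_blkcipher_setkey(tfm, key, sizeof(key));  /* twice the AES key length */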
* [CRYPTO] blkcipher: Use max() in blkcipher_get_spot() to state the intention (Ingo Oeser, 2007-10-11; 1 file, -1/+1)
    Use max() in blkcipher_get_spot() instead of open coding it.
    Signed-off-by: Ingo Oeser <ioe-lkml@rameria.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
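    With this change the helper reads roughly as follows (sketch reconstructed from this commit and the page-straddling fixes further down the log):

        /* Return a position for a buffer of len bytes starting at start that
         * does not straddle a page; if start..start+len already fits in its
         * page, max() simply returns start unchanged. */
        static inline u8 *blkcipher_get_spot(u8 *start, unsigned int len)
        {
                u8 *end_page = (u8 *)(((unsigned long)(start + len - 1)) & PAGE_MASK);
                return max(start, end_page);
        }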
* [CRYPTO] api: Kill crypto_km_types (Herbert Xu, 2007-10-11; 2 files, -11/+8)
    When scatterwalk is built as a module, digest.c was broken because it requires the crypto_km_types structure, which is in scatterwalk. This patch removes the crypto_km_types structure by encoding the logic into crypto_kmap_type directly. In fact, this even saves a few bytes of code (not to mention the data structure itself) on i386, which is about the only place where it's needed.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] aead: Add authenc (Herbert Xu, 2007-10-11; 4 files, -3/+429)
    This patch adds the authenc algorithm which constructs an AEAD algorithm from an asynchronous block cipher and a hash. The construction is done by concatenating the encrypted result from the cipher with the output from the hash, as is used by the IPsec ESP protocol.
    The authenc algorithm exists as a template with four parameters:

        authenc(auth, authsize, enc, enckeylen)

    The authentication algorithm, the authentication size (i.e., truncating the output of the authentication algorithm), the encryption algorithm, and the encryption key length. Both the size field and the key length field are in bytes. For example, AES-128 with SHA1-HMAC would be represented by

        authenc(hmac(sha1), 12, cbc(aes), 16)

    The key for the authenc algorithm is the concatenation of the keys for the authentication algorithm with the encryption algorithm. For the above example, if a key of length 36 bytes is given, then hmac(sha1) would receive the first 20 bytes while the last 16 would be given to cbc(aes).
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
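    A small sketch of the key layout described above, assuming the 36-byte example key split as 20 bytes of HMAC-SHA1 key followed by 16 bytes of AES key (buffer names are illustrative):

        u8 authenc_key[20 + 16];

        memcpy(authenc_key, hmac_sha1_key, 20);    /* consumed by hmac(sha1) */
        memcpy(authenc_key + 20, aes_key, 16);     /* consumed by cbc(aes)   */

        /* the instance would have been allocated as
         *   crypto_alloc_aead("authenc(hmac(sha1),12,cbc(aes),16)", 0, 0)
         * and the whole 36-byte blob handed to crypto_aead_setkey() */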
* [CRYPTO] scatterwalk: Add scatterwalk_map_and_copy (Herbert Xu, 2007-10-11; 2 files, -0/+25)
    This patch adds the function scatterwalk_map_and_copy which reads or writes a chunk of data from a scatterlist at a given offset. It will be used by authenc which would read/write the authentication data at the end of the cipher/plain text.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
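    A usage sketch based on the description above; the cryptlen/authsize names are illustrative and the final argument selects the direction:

        /* write the authentication tag to the end of the ciphertext ... */
        scatterwalk_map_and_copy(hash, dst_sg, cryptlen, authsize, 1);

        /* ... or read a received tag back out of the scatterlist for comparison */
        scatterwalk_map_and_copy(hash, src_sg, cryptlen, authsize, 0);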
* [CRYPTO] api: Move scatterwalk into algapi (Herbert Xu, 2007-10-11; 1 file, -2/+2)
    The scatterwalk code is only used by algorithms that can be built as a module. Therefore we can move it into algapi.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] ablkcipher: Remove queue pointer from common alg object (Herbert Xu, 2007-10-11; 2 files, -8/+3)
    Since not everyone needs a queue pointer, and those who need it can always get it from the context anyway, the queue pointer in the common alg object is redundant.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add missing headers for setkey_unaligned (Herbert Xu, 2007-10-11; 4 files, -7/+12)
    This patch ensures that kernel.h and slab.h are included for the setkey_unaligned function. It also breaks a couple of long lines.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add support for multiple template parameters (Herbert Xu, 2007-10-11; 2 files, -27/+76)
    This patch adds support for having multiple parameters to a template, separated by a comma. It also adds support for integer parameters in addition to the current algorithm parameter type. This will be used by the authenc template which will have four parameters: the authentication algorithm, the encryption algorithm, the authentication size and the encryption key length.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add aead crypto type (Herbert Xu, 2007-10-11; 3 files, -0/+106)
    This patch adds crypto_aead which is the interface for AEAD (Authenticated Encryption with Associated Data) algorithms. AEAD algorithms perform authentication and encryption in one step. Traditionally users (such as IPsec) would use two different crypto algorithms to perform these. With AEAD this comes down to one algorithm and one operation. Of course if traditional algorithms were used we'd still be doing two operations underneath. However, real AEAD algorithms may allow the underlying operations to be optimised as well.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
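    A rough usage sketch of the new type, assuming the request helpers that accompany crypto_aead in this era of the API; the callback, variable names and the authenc instance name are illustrative:

        struct crypto_aead *tfm;
        struct aead_request *req;
        struct completion done;
        int err;

        tfm = crypto_alloc_aead("authenc(hmac(sha1),12,cbc(aes),16)", 0, 0);
        crypto_aead_setkey(tfm, key, keylen);

        req = aead_request_alloc(tfm, GFP_KERNEL);
        init_completion(&done);
        /* my_done() is an illustrative callback that calls complete(&done) */
        aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, my_done, &done);
        aead_request_set_assoc(req, assoc_sg, assoclen);            /* associated data */
        aead_request_set_crypt(req, src_sg, dst_sg, cryptlen, iv);  /* payload */

        err = crypto_aead_encrypt(req);          /* one operation: encrypt + authenticate */
        if (err == -EINPROGRESS || err == -EBUSY)
                wait_for_completion(&done);      /* completed asynchronously */

        aead_request_free(req);
        crypto_free_aead(tfm);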
* [CRYPTO] seed: New cipher algorithm (Hye-Shik Chang, 2007-10-11; 5 files, -1/+591)
    This patch adds support for the SEED cipher (RFC 4269). SEED has been used by a few VPN appliance vendors in Korea for several years, and it was verified by KISA, who developed the algorithm itself. Given its importance in the Korean banking industry, it would be great if Linux incorporated support for it.
    Signed-off-by: Hye-Shik Chang <perky@FreeBSD.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] Kconfig: Remove "default m"s (Adrian Bunk, 2007-10-11; 1 file, -3/+0)
    Other options requiring specific block cipher algorithms already have the appropriate select's.
    Signed-off-by: Adrian Bunk <bunk@stusta.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* async_tx: fix dma_wait_for_async_tx (Dan Williams, 2007-09-24; 1 file, -2/+10)
    Fix dma_wait_for_async_tx to not loop forever in the case where a dependency chain is longer than two entries. This condition will not happen with current in-kernel drivers, but fix it for future drivers.
    Found-by: Saeed Bishara <saeed.bishara@gmail.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* [CRYPTO] blkcipher: Fix inverted test in blkcipher_get_spot (Herbert Xu, 2007-09-10; 1 file, -1/+1)
    The previous patch had the conditional inverted. This patch fixes it so that we return the original position if it does not straddle a page. Thanks to Bob Gilligan for spotting this.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] blkcipher: Fix handling of kmalloc page straddling (Herbert Xu, 2007-09-09; 1 file, -4/+7)
    The function blkcipher_get_spot tries to return a buffer of the specified length that does not straddle a page. It has an off-by-one bug so it may advance a page unnecessarily. What's worse, one of its callers doesn't provide a buffer that's sufficiently long for this operation. This patch fixes both problems. Thanks to Bob Gilligan for diagnosing this problem and providing a fix.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: fix writing into unallocated memory in setkey_unaligned (Sebastian Siewior, 2007-08-06; 4 files, -4/+4)
    setkey_unaligned() committed in ca7c39385ce1a7b44894a4b225a4608624e90730 overwrites unallocated memory in the following memset() because I used the wrong buffer length.
    Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
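    The helper then looks roughly like this (sketch; do_setkey stands in for the algorithm's real aligned setkey handler and is hypothetical):

        static int setkey_unaligned(struct crypto_tfm *tfm, const u8 *key,
                                    unsigned int keylen)
        {
                unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
                unsigned long absize = keylen + alignmask;
                u8 *buffer, *alignbuffer;
                int ret;

                buffer = kmalloc(absize, GFP_ATOMIC);
                if (!buffer)
                        return -ENOMEM;

                alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
                memcpy(alignbuffer, key, keylen);
                ret = do_setkey(tfm, alignbuffer, keylen);
                memset(alignbuffer, 0, keylen);   /* keylen, not absize: the fix above */
                kfree(buffer);
                return ret;
        }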
* async_tx: fix kmap_atomic usage in async_memcpy (Dan Williams, 2007-07-20; 1 file, -15/+4)
    Andrew Morton: [async_memcpy] is very wrong if both ASYNC_TX_KMAP_DST and ASYNC_TX_KMAP_SRC can ever be set. We'll end up using the same kmap slot for both src and dest and we get either corrupted data or a BUG.
    Evgeniy Polyakov: Btw, shouldn't it always be kmap_atomic() even if the flag is not set? Those pages are usual ones returned by alloc_page().
    So fix the usage of kmap_atomic and kill the ASYNC_TX_KMAP_DST and ASYNC_TX_KMAP_SRC flags.
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
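    The fixed mapping then amounts to roughly the following (sketch, assuming the 2007-era kmap_atomic()/kunmap_atomic() that take an explicit KM_* slot):

        void *dest_buf, *src_buf;

        dest_buf = kmap_atomic(dest, KM_USER0) + dest_offset;  /* distinct slots,  */
        src_buf  = kmap_atomic(src, KM_USER1) + src_offset;    /* no flag checking */

        memcpy(dest_buf, src_buf, len);

        kunmap_atomic(dest_buf, KM_USER0);
        kunmap_atomic(src_buf, KM_USER1);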
* Make crypto API use seq_list_xxx helpers (Pavel Emelianov, 2007-07-16; 1 file, -14/+3)
    Simple and stupid - just use the same code from another place in the kernel.
    Signed-off-by: Pavel Emelianov <xemul@openvz.org>
    Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
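    For crypto/proc.c this reduces the seq_file iterator callbacks to roughly the following (sketch using the crypto_alg_list/crypto_alg_sem globals of the crypto API):

        static void *c_start(struct seq_file *m, loff_t *pos)
        {
                down_read(&crypto_alg_sem);
                return seq_list_start(&crypto_alg_list, *pos);
        }

        static void *c_next(struct seq_file *m, void *p, loff_t *pos)
        {
                return seq_list_next(p, &crypto_alg_list, pos);
        }

        static void c_stop(struct seq_file *m, void *p)
        {
                up_read(&crypto_alg_sem);
        }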
* Merge master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6 (David S. Miller, 2007-07-15; 7 files, -14/+125)
    Conflicts:
        crypto/Kconfig
| * [CRYPTO] api: Allow ablkcipher with no queues (Sebastian Siewior, 2007-07-11; 1 file, -2/+4)
    Evgeniy's hifn driver and probably mine don't use ablkcipher->queue at all. The show method of ablkcipher will access this field without checking if it is valid.
    Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] api: Handle unaligned keys in setkey (Sebastian Siewior, 2007-07-11; 4 files, -4/+117)
    setkey() in {cipher,blkcipher,ablkcipher,hash}.c does not respect the requested alignment by the algorithm. This patch fixes it. The extra memory is allocated by kmalloc() with the GFP_ATOMIC flag.
    Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * [CRYPTO] api: Wake up all waiters when larval completes (Herbert Xu, 2007-07-11; 2 files, -3/+3)
    Right now when a larval matures or when it dies of an error we only wake up one waiter. This would cause other waiters to time out unnecessarily. This patch changes it to use complete_all to wake up all waiters.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
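    The change amounts to something like the following when a larval is resolved (sketch; the adult/completion field names follow the crypto larval structure):

        larval->adult = alg;               /* or an error result when it dies */
        complete_all(&larval->completion); /* wake every waiter, not just one */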
| * [CRYPTO] Kconfig: Use menuconfig objects (Jan Engelhardt, 2007-07-11; 1 file, -5/+1)
    Use menuconfigs instead of menus, so the whole menu can be disabled at once instead of going through all options.
    Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | async_tx: add the async_tx api (Dan Williams, 2007-07-13; 9 files, -17/+1104)
    The async_tx api provides methods for describing a chain of asynchronous bulk memory transfers/transforms with support for inter-transactional dependencies. It is implemented as a dmaengine client that smooths over the details of different hardware offload engine implementations. Code that is written to the api can optimize for asynchronous operation and the api will fit the chain of operations to the available offload resources.

        I imagine that any piece of ADMA hardware would register with the
        'async_*' subsystem, and a call to async_X would be routed as
        appropriate, or be run in-line. - Neil Brown

    async_tx exploits the capabilities of struct dma_async_tx_descriptor to provide an api of the following general format:

        struct dma_async_tx_descriptor *
        async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
                          dma_async_tx_callback cb_fn, void *cb_param)
        {
                struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
                struct dma_device *device = chan ? chan->device : NULL;
                int int_en = cb_fn ? 1 : 0;
                struct dma_async_tx_descriptor *tx = device ?
                        device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

                if (tx) { /* run <operation> asynchronously */
                        ...
                        tx->tx_set_dest(addr, tx, index);
                        ...
                        tx->tx_set_src(addr, tx, index);
                        ...
                        async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
                } else { /* run <operation> synchronously */
                        ...
                        <operation>
                        ...
                        async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
                }

                return tx;
        }

    async_tx_find_channel() returns a capable channel from its pool. The channel pool is organized as a per-cpu array of channel pointers. The async_tx_rebalance() routine is tasked with managing these arrays. In the uniprocessor case async_tx_rebalance() tries to spread responsibility evenly over channels of similar capabilities. For example if there are two copy+xor channels, one will handle copy operations and the other will handle xor. In the SMP case async_tx_rebalance() attempts to spread the operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor channel0 while cpu1 gets copy channel 1 and xor channel 1. When a dependency is specified async_tx_find_channel defaults to keeping the operation on the same channel. An xor->copy->xor chain will stay on one channel if it supports both operation types, otherwise the transaction will transition between a copy and an xor resource.
    Currently the raid5 implementation in the MD raid456 driver has been converted to the async_tx api. A driver for the offload engines on the Intel Xscale series of I/O processors, iop-adma, is provided in a later commit. With the iop-adma driver and async_tx, raid456 is able to offload copy, xor, and xor-zero-sum operations to hardware engines. On iop342 tiobench showed higher throughput for sequential writes (20 - 30% improvement) and sequential reads to a degraded array (40 - 55% improvement). For the other cases performance was roughly equal, +/- a few percentage points. On an x86-smp platform the performance of the async_tx implementation (in synchronous mode) was also +/- a few percentage points of the original implementation. According to 'top' on iop342, CPU utilization drops from ~50% to ~15% during a 'resync' while the speed according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.
    The tiobench command line used for testing was: tiobench --size 2048 --block 4096 --block 131072 --dir /mnt/raid --numruns 5
    * iop342 had 1GB of memory available
    Details:
    * if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making async_tx_find_channel a static inline routine that always returns NULL
    * when a callback is specified for a given transaction an interrupt will fire at operation completion time and the callback will occur in a tasklet. if the channel does not support interrupts then a live polling wait will be performed
    * the api is written as a dmaengine client that requests all available channels
    * In support of dependencies the api implicitly schedules channel-switch interrupts. The interrupt triggers the cleanup tasklet which causes pending operations to be scheduled on the next channel
    * Xor engines treat an xor destination address differently than a software xor routine. To the software routine the destination address is an implied source, whereas engines treat it as a write-only destination. This patch modifies the xor_blocks routine to take an explicit destination address to mirror the hardware.
    Changelog:
    * fixed a leftover debug print
    * don't allow callbacks in async_interrupt_cond
    * fixed xor_block changes
    * fixed usage of ASYNC_TX_XOR_DROP_DEST
    * drop dma mapping methods, suggested by Chris Leech
    * printk warning fixups from Andrew Morton
    * don't use inline in C files, Adrian Bunk
    * select the API when MD is enabled
    * BUG_ON xor source counts <= 1
    * implicitly handle hardware concerns like channel switching and interrupts, Neil Brown
    * remove the per operation type list, and distribute operation capabilities evenly amongst the available channels
    * simplify async_tx_find_channel to optimize the fast path
    * introduce the channel_table_initialized flag to prevent early calls to the api
    * reorganize the code to mimic crypto
    * include mm.h as not all archs include it in dma-mapping.h
    * make the Kconfig options non-user visible, Adrian Bunk
    * move async_tx under crypto since it is meant as 'core' functionality, and the two may share algorithms in the future
    * move large inline functions into c files
    * checkpatch.pl fixes
    * gpl v2 only correction
    Cc: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Acked-By: NeilBrown <neilb@suse.de>
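    A hypothetical dependency chain written against the generic prototype quoted above; apart from the trailing depend_tx/cb_fn/cb_param arguments, the parameter lists here are placeholders rather than the final driver API:

        struct dma_async_tx_descriptor *tx;

        /* XOR the source pages into xor_dest; no dependency, no callback */
        tx = async_xor(xor_dest, srcs, 0, src_cnt, len, flags, NULL, NULL, NULL);

        /* copy the result only after the XOR completes, then invoke
         * done_callback(ctx) from the completion path */
        tx = async_memcpy(copy_dest, xor_dest, 0, 0, len, flags, tx,
                          done_callback, ctx);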
* | xor: make 'xor_blocks' a library routine for use with async_tx (Dan Williams, 2007-07-13; 3 files, -0/+168)
    The async_tx api tries to use a dma engine for an operation, but will fall back to an optimized software routine otherwise. Xor support is implemented using the raid5 xor routines. For organizational purposes this routine is moved to a common area.
    The following fixes are also made:
    * rename xor_block => xor_blocks, suggested by Adrian Bunk
    * ensure that xor.o initializes before md.o in the built-in case
    * checkpatch.pl fixes
    * mark calibrate_xor_blocks __init, Adrian Bunk
    Cc: Adrian Bunk <bunk@stusta.de>
    Cc: NeilBrown <neilb@suse.de>
    Cc: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
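    A minimal call of the renamed software routine (sketch; the destination still participates as a source, as described in the async_tx entry above):

        void *srcs[2] = { src_a, src_b };

        /* dest ^= src_a ^ src_b over PAGE_SIZE bytes */
        xor_blocks(2, PAGE_SIZE, dest, srcs);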
* [CRYPTO] cryptd: Fix problem with cryptd and the freezer (Rafael J. Wysocki, 2007-05-31; 1 file, -1/+3)
    Make sure that cryptd is marked as nonfreezable and does not hold up the freezer.
    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Read module pointer before freeing algorithm (Herbert Xu, 2007-05-19; 1 file, -1/+3)
    The function crypto_mod_put first frees the algorithm and then drops the reference to its module. Unfortunately we read the module pointer after freeing the algorithm, and that pointer sits inside the object that we just freed. So this patch reads the module pointer out before we free the object. Thanks to Luca Tettamanti for reporting this.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] tcrypt: Add missing error check (Herbert Xu, 2007-05-18; 1 file, -1/+1)
    The return value of crypto_hash_final isn't checked in test_hash_cycles. This patch corrects this. Thanks to Eric Sesterhenn for reporting this.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
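    The missing check amounts to a couple of lines (sketch; desc and output are illustrative names for the hash descriptor and digest buffer):

        ret = crypto_hash_final(&desc, output);
        if (ret)
                goto out;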
* Fix trivial typos in Kconfig* files (David Sterba, 2007-05-09; 1 file, -1/+1)
    Fix several typos in help text in Kconfig* files.
    Signed-off-by: David Sterba <dave@jikos.cz>
    Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [CRYPTO] cryptomgr: Fix use after free (Herbert Xu, 2007-05-09; 1 file, -4/+3)
    By the time kthread_run returns, the param may have already been freed, so writing the returned task_struct pointer to param is wrong. In fact, we don't need it in param anyway, so this patch simply puts it on the stack.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] cryptd: Add software async crypto daemon (Herbert Xu, 2007-05-02; 3 files, -0/+385)
    This patch adds the cryptd module which is a template that takes a synchronous software crypto algorithm and converts it to an asynchronous one by executing it in a kernel thread.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Do not remove users unless new algorithm matches (Herbert Xu, 2007-05-02; 1 file, -26/+39)
    As it is, whenever a new algorithm with the same name is registered, users of the old algorithm will be removed so that they can take advantage of the new algorithm. This presents a problem when the new algorithm is not equivalent to the old algorithm. In particular, the new algorithm might only function on top of the existing one. Hence we should not remove users unless they can make use of the new algorithm.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] cryptomgr: Fix parsing of nested templates (Herbert Xu, 2007-05-02; 1 file, -13/+25)
    This patch allows the use of nested templates by allowing the use of brackets inside a template parameter.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add async blkcipher type (Herbert Xu, 2007-05-02; 4 files, -0/+150)
    This patch adds the mid-level interface for asynchronous block ciphers. It also includes a generic queueing mechanism that can be used by other asynchronous crypto operations in future.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] templates: Pass type/mask when creating instances (Herbert Xu, 2007-05-02; 8 files, -34/+103)
    This patch passes the type/mask along when constructing instances of templates. This is in preparation for templates that may support multiple types of instances depending on what is requested. For example, the planned software async crypto driver will use this construct. For the moment this allows us to check whether the instance constructed is of the correct type and avoid returning success if the type does not match.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] tcrypt: Use async blkcipher interface (Herbert Xu, 2007-05-02; 1 file, -39/+82)
    This patch converts the tcrypt module to use the asynchronous block cipher interface. As all synchronous block ciphers can be used through the async interface, tcrypt is still able to test them.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add async block cipher interface (Herbert Xu, 2007-05-02; 1 file, -5/+65)
    This patch adds the frontend interface for asynchronous block ciphers. In addition to the usual block cipher parameters, there is a callback function pointer and a data pointer. The callback will be invoked only if the encrypt/decrypt handlers return -EINPROGRESS. In other words, if the return value is zero, the completion handler (or the equivalent code) needs to be invoked by the caller. The request structure is allocated and freed by the caller. Its size is determined by calling crypto_ablkcipher_reqsize(). The helpers ablkcipher_request_alloc/ablkcipher_request_free can be used to manage the memory for a request.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
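    A usage sketch of the new frontend, assuming the helpers named above; the local function names, the "cbc(aes)" instance and the error handling are illustrative:

        #include <linux/crypto.h>
        #include <linux/completion.h>
        #include <linux/err.h>
        #include <linux/scatterlist.h>

        static void my_complete(struct crypto_async_request *req, int err)
        {
                if (err == -EINPROGRESS)
                        return;         /* backlogged request started; the real completion follows */
                complete(req->data);    /* a struct completion is passed as the callback data below */
        }

        static int encrypt_one(struct scatterlist *src, struct scatterlist *dst,
                               unsigned int nbytes, u8 *key, u8 *iv)
        {
                struct crypto_ablkcipher *tfm;
                struct ablkcipher_request *req;
                struct completion done;
                int err;

                tfm = crypto_alloc_ablkcipher("cbc(aes)", 0, 0);
                if (IS_ERR(tfm))
                        return PTR_ERR(tfm);
                crypto_ablkcipher_setkey(tfm, key, 16);

                req = ablkcipher_request_alloc(tfm, GFP_KERNEL);
                init_completion(&done);
                ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                                my_complete, &done);
                ablkcipher_request_set_crypt(req, src, dst, nbytes, iv);

                err = crypto_ablkcipher_encrypt(req);
                if (err == -EINPROGRESS || err == -EBUSY) {
                        wait_for_completion(&done);
                        err = 0;        /* a real caller would record the status seen by the callback */
                }

                ablkcipher_request_free(req);
                crypto_free_ablkcipher(tfm);
                return err;
        }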
* [CRYPTO] api: Proc functions should be marked as unused (Herbert Xu, 2007-05-02; 2 files, -2/+2)
    The proc functions were incorrectly marked as used rather than unused. They may be unused if proc is disabled.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [PATCH] Update my email address from jkmaline@cc.hut.fi to j@w1.fi (Jouni Malinen, 2007-04-28; 1 file, -2/+2)
    After 13 years of use, it looks like my email address is finally going to disappear. While this is likely to drop the amount of incoming spam greatly ;-), it may also affect more appropriate messages, so let's update my email address in various places. In addition, Host AP mailing list is subscribers-only and linux-wireless can also be used for discussing issues related to this driver which is now shown in MAINTAINERS.
    Signed-off-by: Jouni Malinen <j@w1.fi>
    Signed-off-by: John W. Linville <linville@tuxdriver.com>
* [CRYPTO] api: Flush the current page rather than the next (Herbert Xu, 2007-03-31; 1 file, -2/+6)
    On platforms where flush_dcache_page is needed we're currently flushing the next page rather than the one we've just processed. This patch fixes the off-by-one error.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Use the right value when advancing scatterwalk_copychunks (Herbert Xu, 2007-03-31; 1 file, -1/+1)
    In the scatterwalk_copychunks loop, we should be advancing by len_this_page and not nbytes. The latter is the total length.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] tcrypt: Fix error checking for comp allocation (Sebastian Siewior, 2007-03-20; 1 file, -1/+1)
    This patch fixes loading the tcrypt module while deflate isn't available at all (isn't built).
    Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: scatterwalk_copychunks() fails to advance through scatterlist (J. Bruce Fields, 2007-03-20; 1 file, -2/+2)
    In the loop in scatterwalk_copychunks(), if walk->offset is zero, then scatterwalk_pagedone rounds that up to the nearest page boundary:

        walk->offset += PAGE_SIZE - 1;
        walk->offset &= PAGE_MASK;

    which is a no-op in this case, so we don't advance to the next element of the scatterlist array:

        if (walk->offset >= walk->sg->offset + walk->sg->length)
                scatterwalk_start(walk, sg_next(walk->sg));

    and we end up copying the same data twice.
    It appears that other callers of scatterwalk_{page}done first advance walk->offset, so I believe that's the correct thing to do here.
    This caused a bug in NFS when run with krb5p security, which would cause some writes to fail with permissions errors--for example, writes of less than 8 bytes (the des blocksize) at the start of a file. A git-bisect shows the bug was originally introduced by 5c64097aa0f6dc4f27718ef47ca9a12538d62860, first in 2.6.19-rc1.
    Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>