author | Eric Biggers <ebiggers@google.com> | 2023-10-27 22:30:17 +0200 |
---|---|---|
committer | Herbert Xu <herbert@gondor.apana.org.au> | 2023-11-01 05:58:42 +0100 |
commit | a312e07a65fb598ed239b940434392721385c722 | |
tree | 1626dbe670d719e911f330a16d06ca869aa2750c /crypto/adiantum.c | |
parent | crypto: testmgr - move pkcs1pad(rsa,sha3-*) to correct place | |
crypto: adiantum - flush destination page before unmapping
Upon additional review, the new fast path in adiantum_finish() is
missing the call to flush_dcache_page() that scatterwalk_map_and_copy()
was doing. It's apparently debatable whether flush_dcache_page() is
actually needed, as per the discussion at
https://lore.kernel.org/lkml/YYP1lAq46NWzhOf0@casper.infradead.org/T/#u.
However, it appears that currently all the helper functions that write
to a page, such as scatterwalk_map_and_copy(), memcpy_to_page(), and
memzero_page(), do the dcache flush. So do it to be consistent.
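
For reference, those helpers follow roughly the pattern sketched below. This is a
simplified illustration modeled on memcpy_to_page() in include/linux/highmem.h;
write_to_page() is a hypothetical name used only for this sketch, not a kernel API:

	/* Illustrative sketch, not kernel source: write into a page, then flush. */
	static inline void write_to_page(struct page *page, size_t offset,
					 const void *from, size_t len)
	{
		char *to = kmap_local_page(page);	/* temporary kernel mapping */

		memcpy(to + offset, from, len);		/* write through the mapping */
		flush_dcache_page(page);		/* keep the dcache coherent with other mappings */
		kunmap_local(to);			/* drop the temporary mapping */
	}

The fix below applies the same ordering to the adiantum fast path: write the final
tweak block through the kmap_local mapping, flush the destination page, then unmap.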
Fixes: dadf5e56c967 ("crypto: adiantum - add fast path for single-page messages")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Diffstat (limited to 'crypto/adiantum.c')
-rw-r--r-- | crypto/adiantum.c | 4 |
1 file changed, 3 insertions, 1 deletion
diff --git a/crypto/adiantum.c b/crypto/adiantum.c
index 9ff3376f9ed3..60f3883b736a 100644
--- a/crypto/adiantum.c
+++ b/crypto/adiantum.c
@@ -300,7 +300,8 @@ static int adiantum_finish(struct skcipher_request *req)
 	le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
 	if (dst_nents == 1 && dst->offset + req->cryptlen <= PAGE_SIZE) {
 		/* Fast path for single-page destination */
-		void *virt = kmap_local_page(sg_page(dst)) + dst->offset;
+		struct page *page = sg_page(dst);
+		void *virt = kmap_local_page(page) + dst->offset;
 
 		err = crypto_shash_digest(&rctx->u.hash_desc, virt, bulk_len,
 					  (u8 *)&digest);
@@ -310,6 +311,7 @@ static int adiantum_finish(struct skcipher_request *req)
 		}
 		le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
 		memcpy(virt + bulk_len, &rctx->rbuf.bignum, sizeof(le128));
+		flush_dcache_page(page);
 		kunmap_local(virt);
 	} else {
 		/* Slow path that works for any destination scatterlist */