author		David Rientjes <rientjes@google.com>	2020-06-11 21:20:30 +0200
committer	Christoph Hellwig <hch@lst.de>		2020-06-17 09:29:38 +0200
commit		56fccf21d1961a06e2a0c96ce446ebf036651062
tree		c47f6ddb7b3afd5e0ffefafad4af257cb200b486 /kernel/dma
parent		dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails
dma-direct: check return value when encrypting or decrypting memory
__change_page_attr() can fail, which will cause set_memory_encrypted() and
set_memory_decrypted() to return non-zero.
If the device requires unencrypted DMA memory and decryption fails, simply
free the memory and fail.
If re-encryption fails in the failure path, there is no alternative other
than to leak the memory.
Fixes: c10f07aa27da ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
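
In condensed form, the error handling this patch adds looks roughly like the
sketch below. It is simplified from the diff that follows, not a standalone
function, and the comment on the leak is an interpretation: freeing pages
whose encryption attribute could not be restored would presumably hand
inconsistently mapped memory back to the page allocator.

        ret = page_address(page);
        if (force_dma_unencrypted(dev)) {
                /* Decryption can fail; free the pages and fail the allocation. */
                err = set_memory_decrypted((unsigned long)ret,
                                           1 << get_order(size));
                if (err)
                        goto out_free_pages;
        }
        ...
out_encrypt_pages:
        if (force_dma_unencrypted(dev)) {
                /*
                 * Re-encryption can also fail; the pages are then leaked,
                 * presumably because returning still-decrypted pages to the
                 * page allocator would be unsafe.
                 */
                err = set_memory_encrypted((unsigned long)page_address(page),
                                           1 << get_order(size));
                if (err)
                        return NULL;
        }
out_free_pages:
        dma_free_contiguous(dev, page, size);
        return NULL;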
Diffstat (limited to 'kernel/dma')
-rw-r--r--	kernel/dma/direct.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 80d33f215a2e..2f69bfdbe315 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -158,6 +158,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 {
 	struct page *page;
 	void *ret;
+	int err;
 
 	size = PAGE_ALIGN(size);
 
@@ -210,8 +211,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	}
 
 	ret = page_address(page);
-	if (force_dma_unencrypted(dev))
-		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
+	if (force_dma_unencrypted(dev)) {
+		err = set_memory_decrypted((unsigned long)ret,
+					   1 << get_order(size));
+		if (err)
+			goto out_free_pages;
+	}
 
 	memset(ret, 0, size);
 
@@ -230,9 +235,13 @@ done:
 	return ret;
 
 out_encrypt_pages:
-	if (force_dma_unencrypted(dev))
-		set_memory_encrypted((unsigned long)page_address(page),
-				     1 << get_order(size));
+	if (force_dma_unencrypted(dev)) {
+		err = set_memory_encrypted((unsigned long)page_address(page),
+					   1 << get_order(size));
+		/* If memory cannot be re-encrypted, it must be leaked */
+		if (err)
+			return NULL;
+	}
 out_free_pages:
 	dma_free_contiguous(dev, page, size);
 	return NULL;
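
From a driver's point of view, the visible effect is that the allocation can
now fail cleanly instead of returning memory whose encryption state was never
changed. A minimal illustrative caller is sketched below; the helper names,
device pointer, and buffer size are made up for this example, while
dma_alloc_coherent() and dma_free_coherent() are the usual DMA API entry
points that eventually reach dma_direct_alloc_pages() on the direct-mapping
path.

        #include <linux/device.h>
        #include <linux/dma-mapping.h>
        #include <linux/errno.h>
        #include <linux/gfp.h>

        /*
         * Illustrative only: allocate a coherent DMA buffer and handle the
         * failure that dma_direct_alloc_pages() can now report when
         * set_memory_decrypted() fails (e.g. in guests that require
         * unencrypted DMA memory).
         */
        static int example_setup_dma_buf(struct device *dev, size_t size,
                                         void **cpu_addr, dma_addr_t *dma_handle)
        {
                *cpu_addr = dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
                if (!*cpu_addr)
                        return -ENOMEM;	/* allocation or decryption failed */
                return 0;
        }

        static void example_teardown_dma_buf(struct device *dev, size_t size,
                                             void *cpu_addr, dma_addr_t dma_handle)
        {
                dma_free_coherent(dev, size, cpu_addr, dma_handle);
        }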