author | Jason Gunthorpe <jgg@nvidia.com> | 2020-09-05 00:41:47 +0200
committer | Jason Gunthorpe <jgg@nvidia.com> | 2020-09-11 15:24:53 +0200
commit | a665aca89a411115e35ea937c2d3fb2ee4f5a701 (patch)
tree | 79c8a3e6ba1b0df6d1cfe1e1067a40b72bc1f942 /include/rdma
parent | RDMA/umem: Replace for_each_sg_dma_page with rdma_umem_for_each_dma_block (diff)
RDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks()
ib_umem_num_pages() should only be used by things working with the SGL in
CPU pages directly.
Drivers building DMA lists should use the new ib_umem_num_dma_blocks(), which
returns the number of blocks rdma_umem_for_each_dma_block() will return.
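As a hedged illustration of such a conversion (not part of this patch; build_page_list(), the pgsz argument, and the error handling are hypothetical, and the sketch assumes <rdma/ib_umem.h>, <rdma/ib_verbs.h> and <linux/slab.h>):

/* Hypothetical driver helper: size the device page list with
 * ib_umem_num_dma_blocks() and fill it with rdma_umem_for_each_dma_block(),
 * so the allocation and the loop agree by construction.
 */
static u64 *build_page_list(struct ib_umem *umem, unsigned long pgsz,
			    size_t *npages)
{
	struct ib_block_iter biter;
	size_t i = 0;
	u64 *pas;

	*npages = ib_umem_num_dma_blocks(umem, pgsz);
	pas = kcalloc(*npages, sizeof(*pas), GFP_KERNEL);
	if (!pas)
		return NULL;

	rdma_umem_for_each_dma_block(umem, &biter, pgsz)
		pas[i++] = rdma_block_iter_dma_address(&biter);

	return pas;
}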
Making this general for DMA drivers requires a different implementation:
computing the DMA block count from umem->address only works if the requested
page size is < PAGE_SIZE and/or the IOVA == umem->address.
Instead, the number of DMA blocks should be computed in the IOVA address
space, not from umem->address. Thus the IOVA has to be stored inside the umem
so it can be used for these calculations.
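A standalone arithmetic sketch of the problem (the numbers and the userspace re-definitions of ALIGN/ALIGN_DOWN are made up for illustration, not taken from the kernel): with a 64KiB block size, an IOVA of 0x10000 and a VA of 0x1234f000, counting in VA space yields 2 blocks while counting in IOVA space correctly yields 1:

/* Standalone userspace demo: why the block count must be computed in
 * IOVA space rather than from the process VA.
 */
#include <stdio.h>
#include <stdint.h>

#define ALIGN_DOWN(x, a)	((x) & ~((uint64_t)(a) - 1))
#define ALIGN(x, a)		ALIGN_DOWN((x) + (a) - 1, (a))

int main(void)
{
	uint64_t address = 0x1234f000;	/* process VA of the umem */
	uint64_t iova    = 0x10000;	/* device IOVA, 64KiB aligned */
	uint64_t length  = 0x10000;	/* 64 KiB */
	uint64_t pgsz    = 0x10000;	/* 64 KiB DMA block size */

	uint64_t by_addr = (ALIGN(address + length, pgsz) -
			    ALIGN_DOWN(address, pgsz)) / pgsz;
	uint64_t by_iova = (ALIGN(iova + length, pgsz) -
			    ALIGN_DOWN(iova, pgsz)) / pgsz;

	/* Prints "blocks by address: 2, by iova: 1": the VA-based count
	 * over-counts by one block.
	 */
	printf("blocks by address: %llu, by iova: %llu\n",
	       (unsigned long long)by_addr, (unsigned long long)by_iova);
	return 0;
}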
For now, set it to umem->address by default and fix it up if
ib_umem_find_best_pgsz() was called. This allows drivers to be converted
to ib_umem_num_dma_blocks() safely.
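The core-side plumbing lives outside this include/rdma diffstat; as a rough, hypothetical sketch of the intent described above (illustrative helper names, not kernel APIs or the literal patch):

/* Sketch only: how the new iova field is meant to be maintained.
 * The real assignments live in the umem core, not in this header.
 */
static void sketch_umem_init_iova(struct ib_umem *umem, unsigned long addr)
{
	umem->address = addr;
	umem->iova = addr;	/* default: assume IOVA == VA */
}

static void sketch_umem_fix_iova(struct ib_umem *umem, u64 virt)
{
	/* once the MR's real IOVA is known (the ib_umem_find_best_pgsz()
	 * path), use it for DMA block calculations instead of the VA
	 */
	umem->iova = virt;
}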
Link: https://lore.kernel.org/r/6-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Diffstat (limited to 'include/rdma')
-rw-r--r-- | include/rdma/ib_umem.h | 15
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
index b880512ba95f..cf001d4e0a4f 100644
--- a/include/rdma/ib_umem.h
+++ b/include/rdma/ib_umem.h
@@ -17,6 +17,7 @@ struct ib_umem_odp;
 struct ib_umem {
 	struct ib_device *ibdev;
 	struct mm_struct *owning_mm;
+	u64 iova;
 	size_t length;
 	unsigned long address;
 	u32 writable : 1;
@@ -33,11 +34,17 @@ static inline int ib_umem_offset(struct ib_umem *umem)
 	return umem->address & ~PAGE_MASK;
 }
 
+static inline size_t ib_umem_num_dma_blocks(struct ib_umem *umem,
+					    unsigned long pgsz)
+{
+	return (size_t)((ALIGN(umem->iova + umem->length, pgsz) -
+			 ALIGN_DOWN(umem->iova, pgsz))) /
+	       pgsz;
+}
+
 static inline size_t ib_umem_num_pages(struct ib_umem *umem)
 {
-	return (ALIGN(umem->address + umem->length, PAGE_SIZE) -
-		ALIGN_DOWN(umem->address, PAGE_SIZE)) >>
-	       PAGE_SHIFT;
+	return ib_umem_num_dma_blocks(umem, PAGE_SIZE);
 }
 
 static inline void __rdma_umem_block_iter_start(struct ib_block_iter *biter,
@@ -55,6 +62,8 @@ static inline void __rdma_umem_block_iter_start(struct ib_block_iter *biter,
  * pgsz must be <= PAGE_SIZE or computed by ib_umem_find_best_pgsz(). The
  * returned DMA blocks will be aligned to pgsz and span the range:
  * ALIGN_DOWN(umem->address, pgsz) to ALIGN(umem->address + umem->length, pgsz)
+ *
+ * Performs exactly ib_umem_num_dma_blocks() iterations.
  */
 #define rdma_umem_for_each_dma_block(umem, biter, pgsz)                        \
 	for (__rdma_umem_block_iter_start(biter, umem, pgsz);                  \
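As a usage note (hypothetical driver code, not from this patch), the guarantee documented in the new comment means a driver can pre-size its output and sanity-check the iteration count:

/* Hypothetical sanity check relying on the guarantee documented above:
 * rdma_umem_for_each_dma_block() performs exactly
 * ib_umem_num_dma_blocks() iterations.
 */
static int check_block_count(struct ib_umem *umem, unsigned long pgsz)
{
	struct ib_block_iter biter;
	size_t expected = ib_umem_num_dma_blocks(umem, pgsz);
	size_t seen = 0;

	rdma_umem_for_each_dma_block(umem, &biter, pgsz)
		seen++;

	return WARN_ON(seen != expected) ? -EINVAL : 0;
}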