author | Shiraz Saleem <shiraz.saleem@intel.com> | 2019-03-28 17:49:44 +0100
---|---|---
committer | Jason Gunthorpe <jgg@mellanox.com> | 2019-03-28 18:13:27 +0100
commit | 5f818d676ac455bbc812ffaaf5bf780be5465114 | (patch)
tree | 771a34951dad757965e4d609398f5eff95a1e99c | drivers/infiniband/hw/cxgb4
parent | RDMA/bnxt_re: Use correct sizing on buffers holding page DMA addresses | (diff)
RDMA/cxbg: Use correct sizing on buffers holding page DMA addresses
The PBL array that holds the page DMA addresses is sized off umem->nmap.
This can potentially cause out-of-bounds accesses on the PBL array when
iterating the umem DMA-mapped SGL, because if umem pages are combined,
umem->nmap can be much lower than the number of system pages in umem.
Use ib_umem_num_pages() to size this array.
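To make the mismatch concrete, here is a minimal user-space sketch (the struct and values are hypothetical, not the kernel's types, and num_pages() only approximates what ib_umem_num_pages() computes): with page combining, nmap counts coalesced SGL entries, while the PBL must hold one entry per system page.

```c
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Hypothetical stand-in for the ib_umem fields involved. */
struct fake_umem {
	unsigned long address;	/* user VA of the registered region */
	unsigned long length;	/* registration length in bytes */
	int nmap;		/* DMA-mapped SGL entries after coalescing */
};

/* Roughly what ib_umem_num_pages() computes: system pages spanned. */
static int num_pages(const struct fake_umem *u)
{
	unsigned long first = u->address & ~(PAGE_SIZE - 1);
	unsigned long last  = (u->address + u->length + PAGE_SIZE - 1) &
			      ~(PAGE_SIZE - 1);
	return (int)((last - first) / PAGE_SIZE);
}

int main(void)
{
	/* 16 pages of user memory that the DMA layer mapped as 2 SGL entries. */
	struct fake_umem u = {
		.address = 0x10000, .length = 16 * PAGE_SIZE, .nmap = 2
	};

	/* Sizing the PBL off nmap gives 2 slots, but the fill path writes one
	 * DMA address per page, i.e. 16 entries: an out-of-bounds write. */
	printf("PBL slots from nmap: %d, entries actually needed: %d\n",
	       u.nmap, num_pages(&u));
	return 0;
}
```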
Cc: Potnuri Bharat Teja <bharat@chelsio.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Diffstat (limited to 'drivers/infiniband/hw/cxgb4')
-rw-r--r-- | drivers/infiniband/hw/cxgb4/mem.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
index de6697fdffa7..81f5b5b026b1 100644
--- a/drivers/infiniband/hw/cxgb4/mem.c
+++ b/drivers/infiniband/hw/cxgb4/mem.c
@@ -542,7 +542,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 
 	shift = PAGE_SHIFT;
 
-	n = mhp->umem->nmap;
+	n = ib_umem_num_pages(mhp->umem);
 	err = alloc_pbl(mhp, n);
 	if (err)
 		goto err_umem_release;
```
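As a follow-up illustration of why that count matters: the array allocated here is later filled with one DMA address per system page while walking the coalesced SGL. A minimal, self-contained sketch of that expansion (hypothetical types and values, not the driver's code):

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

/* Hypothetical coalesced DMA-mapped SGL entry: one contiguous DMA run. */
struct fake_sge {
	uint64_t dma_addr;	/* start of the contiguous DMA range */
	unsigned long length;	/* may cover many system pages */
};

int main(void)
{
	/* Two coalesced entries (nmap == 2) that together cover 16 pages. */
	struct fake_sge sgl[] = {
		{ .dma_addr = 0x100000000ULL, .length = 12 * PAGE_SIZE },
		{ .dma_addr = 0x200000000ULL, .length =  4 * PAGE_SIZE },
	};
	int nmap = 2;
	uint64_t pbl[16];	/* sized by the page count, as the fix does */
	int i = 0;

	/* A per-page walk emits one address per PAGE_SIZE chunk, producing
	 * 16 PBL entries even though the SGL holds only 2 entries. */
	for (int s = 0; s < nmap; s++)
		for (unsigned long off = 0; off < sgl[s].length; off += PAGE_SIZE)
			pbl[i++] = sgl[s].dma_addr + off;

	printf("SGL entries (nmap): %d, PBL entries written: %d\n", nmap, i);
	printf("last PBL entry: 0x%llx\n", (unsigned long long)pbl[i - 1]);
	return 0;
}
```

An array with only nmap slots would overflow on the third iteration of the inner loop above, which is the out-of-bounds access the commit message describes.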