author		Dennis Dalessandro <dennis.dalessandro@intel.com>	2016-10-10 15:14:45 +0200
committer	Doug Ledford <dledford@redhat.com>			2016-11-15 22:16:40 +0100
commit		e1fafdcbe0e3e769c6a83317dd845bc99b4fe61d (patch)
tree		63e6e64282e79d02feafcd956e2573bdea7d5208 /drivers
parent		Linux 4.9-rc3 (diff)
IB/rdmavt: rdmavt can handle non aligned page maps
The initial code for rdmavt carried with it a restriction that was a vestige from the qib driver: to DMA map a page, the mapping had to be smaller than a page. This is not the case on modern hardware; both qib and hfi1 handle unaligned map requests just fine.

This fixes a 4.8 regression whereby an IPoIB transfer of more than PAGE_SIZE will hang because the dma map page call always fails. It was introduced after commit 5faba5469522 ("IB/ipoib: Report SG feature regardless of HW UD CSUM capability") added the capability to use SG by default. Rather than override this, allow SG, since the hardware supports it.

Cc: Stable <stable@vger.kernel.org> # 4.8
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Diffstat (limited to 'drivers')
-rw-r--r--	drivers/infiniband/sw/rdmavt/dma.c	3
1 file changed, 0 insertions, 3 deletions
diff --git a/drivers/infiniband/sw/rdmavt/dma.c b/drivers/infiniband/sw/rdmavt/dma.c
index 01f71caa3ac4..f2cefb0d9180 100644
--- a/drivers/infiniband/sw/rdmavt/dma.c
+++ b/drivers/infiniband/sw/rdmavt/dma.c
@@ -90,9 +90,6 @@ static u64 rvt_dma_map_page(struct ib_device *dev, struct page *page,
 	if (WARN_ON(!valid_dma_direction(direction)))
 		return BAD_DMA_ADDRESS;
 
-	if (offset + size > PAGE_SIZE)
-		return BAD_DMA_ADDRESS;
-
 	addr = (u64)page_address(page);
 	if (addr)
 		addr += offset;
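
For context, a minimal sketch of what rvt_dma_map_page() reads like with this hunk applied, reconstructed from the context lines above: the signature is taken from the hunk header, and the trailing return is inferred rather than copied from the driver source.

static u64 rvt_dma_map_page(struct ib_device *dev, struct page *page,
			    unsigned long offset, size_t size,
			    enum dma_data_direction direction)
{
	u64 addr;

	if (WARN_ON(!valid_dma_direction(direction)))
		return BAD_DMA_ADDRESS;

	/*
	 * The old "offset + size > PAGE_SIZE" bailout is gone: qib and
	 * hfi1 handle maps that span more than one page, so IPoIB SG
	 * transfers larger than PAGE_SIZE no longer fail here.
	 */
	addr = (u64)page_address(page);
	if (addr)
		addr += offset;

	return addr;	/* inferred; not shown in the hunk context */
}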