author    Chris Wright <chrisw@sous-sol.org>          2011-05-28 20:15:04 +0200
committer David Woodhouse <David.Woodhouse@intel.com> 2011-06-01 13:47:40 +0200
commit    1c9fc3d11b84fbd0c4f4aa7855702c2a1f098ebb
tree      c06f2bcc3b5aa5d0cb0869815e6247afc736ea7a /drivers/pci
parent    intel-iommu: Speed up processing of the identity_mapping function
intel-iommu: Don't cache iova above 32bit
Mike Travis and Mike Habeck reported an issue where iova allocation would
return a range that was larger than a device's dma mask:

  https://lkml.org/lkml/2011/3/29/423

The dmar initialization code reserves all PCI MMIO regions and copies those
reservations into a domain-specific iova tree. It is possible for one of
those regions to be above the dma mask of a device. It is typical to
allocate iovas with a 32bit mask (despite the device's dma mask possibly
being larger) and cache the result until the lower 32bit address space is
exhausted.

Freeing an iova range that is >= the last iova in the lower 32bit range,
while an iova still exists above the 32bit range, corrupts the cached iova
by pointing it at a region above 32bit. If that region is also larger than
the device's dma mask, a subsequent allocation will return an unusable iova
and cause dma failure.

Simply don't cache an iova that is above the 32bit caching boundary.

Reported-by: Mike Travis <travis@sgi.com>
Reported-by: Mike Habeck <habeck@sgi.com>
Cc: stable@kernel.org
Acked-by: Mike Travis <travis@sgi.com>
Tested-by: Mike Habeck <habeck@sgi.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
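To make the failure mode concrete, below is a self-contained userspace
sketch of the broken cache update and of the guard this patch adds. It is
not the kernel code: the sorted array stands in for the iova rbtree, and
every name and value here (DMA_32BIT_PFN, delete_update, the pfn values)
is invented for illustration.

#include <stdio.h>

#define DMA_32BIT_PFN 0x100000UL /* illustrative boundary, not the real value */

struct iova { unsigned long pfn_lo; };

/* Sorted "tree": two ranges below the boundary, plus one reserved PCI
 * MMIO region that landed above it. */
static struct iova tree[] = {
	{ .pfn_lo = 0x1000 },
	{ .pfn_lo = 0xf000 },	/* highest iova below the boundary */
	{ .pfn_lo = 0x200000 },	/* reservation above the boundary */
};
static const int nr = sizeof(tree) / sizeof(tree[0]);
static int cached32 = 1;	/* index of the cached 32bit node */

/* Models the cache update on free: tree[idx] is being freed. */
static void delete_update(int idx, int apply_fix)
{
	if (cached32 < 0 || tree[idx].pfn_lo < tree[cached32].pfn_lo)
		return;		/* freed below the cached node: nothing to do */

	int next = (idx + 1 < nr) ? idx + 1 : -1;	/* stand-in for rb_next() */

	if (!apply_fix)
		cached32 = next;	/* old behaviour: cache unconditionally */
	else if (next >= 0 && tree[next].pfn_lo < DMA_32BIT_PFN)
		cached32 = next;	/* patched: only cache below the boundary */
	else
		cached32 = -1;		/* patched: force a full tree search */
}

int main(void)
{
	delete_update(1, 0);	/* free the highest sub-32bit range, old code */
	printf("without fix: cache -> pfn_lo 0x%lx (above the 32bit boundary)\n",
	       tree[cached32].pfn_lo);

	cached32 = 1;
	delete_update(1, 1);	/* same free, with the patch applied */
	printf("with fix:    cache -> %s\n",
	       cached32 < 0 ? "NULL (full search next time)" : "sub-32bit node");
	return 0;
}

Run, the first call leaves the cache pointing at the 0x200000 reservation,
so a later 32bit-limited allocation would start from an unusable position;
the patched path drops the cache instead, falling back to a full tree
search.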
Diffstat
 drivers/pci/iova.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/iova.c b/drivers/pci/iova.c
index 9606e599a475..c5c274ab5c5a 100644
--- a/drivers/pci/iova.c
+++ b/drivers/pci/iova.c
@@ -63,8 +63,16 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 	curr = iovad->cached32_node;
 	cached_iova = container_of(curr, struct iova, node);
 
-	if (free->pfn_lo >= cached_iova->pfn_lo)
-		iovad->cached32_node = rb_next(&free->node);
+	if (free->pfn_lo >= cached_iova->pfn_lo) {
+		struct rb_node *node = rb_next(&free->node);
+		struct iova *iova = container_of(node, struct iova, node);
+
+		/* only cache if it's below 32bit pfn */
+		if (node && iova->pfn_lo < iovad->dma_32bit_pfn)
+			iovad->cached32_node = node;
+		else
+			iovad->cached32_node = NULL;
+	}
 }
 
 /* Computes the padding size required, to make the