author     Ralph Campbell <rcampbell@nvidia.com>	2020-12-15 04:12:55 +0100
committer  Linus Torvalds <torvalds@linux-foundation.org>	2020-12-15 21:13:45 +0100
commit     5e5dda81a0dfb82de1757ab878d9ffd2339c9b2a (patch)
tree       4f7773b185953aaf1ba0d2f73f8caaf3b0f8d59d	/mm/migrate.c
parent     mm/migrate.c: fix comment spelling (diff)
mm/migrate.c: optimize migrate_vma_pages() mmu notifier
When migrating a zero page or pte_none() anonymous page to device private
memory, migrate_vma_setup() will initialize the src[] array with a NULL PFN.
This lets the device driver allocate device private memory and clear it
instead of DMAing a page of zeros over the device bus.

Since the source page didn't exist at the time, no struct page was locked nor
a migration PTE inserted into the CPU page tables.  The actual PTE insertion
happens in migrate_vma_pages() when it tries to insert the device private
struct page PTE into the CPU page tables.  migrate_vma_pages() has to call the
mmu notifiers again since another device could fault on the same page before
the page table locks are acquired.

Allow device drivers to optimize the invalidation similar to
migrate_vma_setup() by calling mmu_notifier_range_init() which sets struct
mmu_notifier_range event type to MMU_NOTIFY_MIGRATE and the
migrate_pgmap_owner field.

Link: https://lkml.kernel.org/r/20201021191335.10916-1-rcampbell@nvidia.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
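For context, the driver-side copy step the first paragraph describes can look
roughly like the sketch below, loosely modeled on the lib/test_hmm.c self-test
driver.  alloc_device_page() is a hypothetical stand-in for a driver's
device-private page allocator and is assumed to return the page already
locked; the migrate_vma helpers and flags are the real API at this kernel
version.

#include <linux/migrate.h>
#include <linux/highmem.h>

static void driver_migrate_copy(struct migrate_vma *args)
{
	unsigned long i;

	for (i = 0; i < args->npages; i++) {
		struct page *spage = migrate_pfn_to_page(args->src[i]);
		struct page *dpage;

		/* Skip entries migrate_vma_setup() could not isolate. */
		if (!(args->src[i] & MIGRATE_PFN_MIGRATE))
			continue;

		/* Hypothetical helper: returns a locked device private page. */
		dpage = alloc_device_page();
		if (!dpage)
			continue;	/* dst[i] stays 0: not migrated */

		if (spage) {
			/* Real source page: copy (DMA in a real device). */
			copy_highpage(dpage, spage);
		} else {
			/*
			 * NULL PFN: the source was the zero page or
			 * pte_none(), so clear the device page locally
			 * instead of transferring zeros over the bus.
			 */
			clear_highpage(dpage);
		}

		args->dst[i] = migrate_pfn(page_to_pfn(dpage)) |
			       MIGRATE_PFN_LOCKED;
	}
}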
Diffstat (limited to 'mm/migrate.c')
-rw-r--r--	mm/migrate.c	9
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 1a39d552afeb..c98b04e0db4e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -3001,11 +3001,10 @@ void migrate_vma_pages(struct migrate_vma *migrate)
 			if (!notified) {
 				notified = true;
 
-				mmu_notifier_range_init(&range,
-							MMU_NOTIFY_CLEAR, 0,
-							NULL,
-							migrate->vma->vm_mm,
-							addr, migrate->end);
+				mmu_notifier_range_init_migrate(&range, 0,
+					migrate->vma, migrate->vma->vm_mm,
+					addr, migrate->end,
+					migrate->pgmap_owner);
 				mmu_notifier_invalidate_range_start(&range);
 			}
 			migrate_vma_insert_page(migrate, addr, newpage,
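On the consumer side, a driver's invalidation callback can now recognize and
skip the invalidations its own migration triggered.  A minimal sketch along
the lines of the lib/test_hmm.c interval-notifier callback; struct my_svm,
its fields, and my_notifier_ops are hypothetical driver names, while the
mmu_notifier_range fields and helpers are the real API at this kernel version.

#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

struct my_svm {				/* hypothetical per-process driver state */
	struct mmu_interval_notifier notifier;
	struct mutex lock;
	void *device;			/* same pointer passed as migrate_vma.pgmap_owner */
};

static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_svm *svm = container_of(mni, struct my_svm, notifier);

	/*
	 * Ignore invalidations triggered by this device's own migration:
	 * migrate_vma_pages() now tags them with MMU_NOTIFY_MIGRATE and the
	 * pgmap owner the driver passed in struct migrate_vma.
	 */
	if (range->event == MMU_NOTIFY_MIGRATE &&
	    range->migrate_pgmap_owner == svm->device)
		return true;

	if (mmu_notifier_range_blockable(range))
		mutex_lock(&svm->lock);
	else if (!mutex_trylock(&svm->lock))
		return false;

	mmu_interval_set_seq(mni, cur_seq);
	/* ... invalidate device page tables for range->start..range->end ... */
	mutex_unlock(&svm->lock);
	return true;
}

static const struct mmu_interval_notifier_ops my_notifier_ops = {
	.invalidate = my_invalidate,
};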