| author | Philip Yang <Philip.Yang@amd.com> | 2019-05-23 22:32:31 +0200 |
|---|---|---|
| committer | Jason Gunthorpe <jgg@mellanox.com> | 2019-06-06 21:31:41 +0200 |
| commit | 789c2af88f24d1db983aae49b5c4561e6e02ff5b | |
| tree | f48479dc50289b0380b239bcc60e0fcd216bc559 /mm/hmm.c | |
| parent | mm/hmm: clean up some coding style and comments | |
mm/hmm: support automatic NUMA balancing
While a page is being migrated by automatic NUMA balancing, HMM fails to
detect this condition and still returns the old page. The application then
uses the newly migrated page, but the driver passes the old page's physical
address to the GPU, which crashes the application later.

Use pte_protnone(pte) to detect this condition and report no valid mapping,
so that hmm_vma_do_fault will allocate a new page.
Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Diffstat (limited to 'mm/hmm.c')

| -rw-r--r-- | mm/hmm.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
@@ -548,7 +548,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
 static inline uint64_t pte_to_hmm_pfn_flags(struct hmm_range *range, pte_t pte)
 {
-	if (pte_none(pte) || !pte_present(pte))
+	if (pte_none(pte) || !pte_present(pte) || pte_protnone(pte))
 		return 0;
 	return pte_write(pte) ? range->flags[HMM_PFN_VALID] |
				range->flags[HMM_PFN_WRITE] :
```