author		Qi Zheng <zhengqi.arch@bytedance.com>		2024-02-22 09:08:15 +0100
committer	Andrew Morton <akpm@linux-foundation.org>	2024-03-05 01:40:33 +0100
commit		d7a08838ab74652f2b53fee9763f0178278c3a4b
tree		737fe757950b68b4f38aa14d4ac33aa368bc3d30 /mm
parent		mm, vmscan: prevent infinite loop for costly GFP_NOIO | __GFP_RETRY_MAYFAIL a...
mm: userfaultfd: fix unexpected change to src_folio when UFFDIO_MOVE fails
After ptep_clear_flush(), if we find that src_folio is pinned we fail
UFFDIO_MOVE and put src_folio back into the src_pte entry, but the earlier
change to src_folio->{mapping,index} is not undone in the process. Fix this
by only updating src_folio->{mapping,index} after the pin check, once the
move can no longer fail.
This can cause the rmap for that page to be invalid, possibly resulting
in memory corruption. At least swapout+migration would no longer work,
because we might fail to locate the mappings of that folio.
Link: https://lkml.kernel.org/r/20240222080815.46291-1-zhengqi.arch@bytedance.com
Fixes: adef440691ba ("userfaultfd: UFFDIO_MOVE uABI")
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
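For context, userspace drives this path through the UFFDIO_MOVE ioctl. The
sketch below is illustrative only and is not part of this patch: it assumes a
userfaultfd has already been set up (userfaultfd(2), UFFDIO_API negotiated
with UFFD_FEATURE_MOVE, UFFDIO_REGISTER on the destination range) and that
src/dst/len are page-aligned anonymous memory; uffd_move() is a hypothetical
helper, not a kernel or libc name.

#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Illustrative wrapper around the UFFDIO_MOVE ioctl. */
static long uffd_move(int uffd, unsigned long dst, unsigned long src,
		      unsigned long len)
{
	struct uffdio_move move = {
		.dst  = dst,
		.src  = src,
		.len  = len,
		.mode = 0,	/* no UFFDIO_MOVE_MODE_* flags */
		.move = 0,
	};

	if (ioctl(uffd, UFFDIO_MOVE, &move) < 0) {
		/*
		 * On failure (e.g. the source folio is pinned), the source
		 * mapping is put back, so the caller can retry or fall back
		 * to UFFDIO_COPY.
		 */
		fprintf(stderr, "UFFDIO_MOVE: %s\n", strerror(errno));
		return move.move < 0 ? move.move : -errno;
	}
	return move.move;	/* bytes actually moved */
}

The point of the fix is that such a failed request must leave
src_folio->{mapping,index} exactly as they were; otherwise the still-mapped
source folio is left with a stale anon rmap.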
Diffstat (limited to 'mm')
-rw-r--r--  mm/userfaultfd.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7cf7d4384259..313f1c42768a 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -914,9 +914,6 @@ static int move_present_pte(struct mm_struct *mm,
 		goto out;
 	}
 
-	folio_move_anon_rmap(src_folio, dst_vma);
-	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
-
 	orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
 	/* Folio got pinned from under us. Put it back and fail the move. */
 	if (folio_maybe_dma_pinned(src_folio)) {
@@ -925,6 +922,9 @@ static int move_present_pte(struct mm_struct *mm,
 		goto out;
 	}
 
+	folio_move_anon_rmap(src_folio, dst_vma);
+	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
+
 	orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
 	/* Follow mremap() behavior and treat the entry dirty after the move */
 	orig_dst_pte = pte_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);
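For reference, after this change the affected part of move_present_pte()
reads roughly as follows (reconstructed from the hunks above; the body of the
pinned-folio branch, which lies between the two hunks, is summarized by a
comment, and the rest of the function is elided):

	orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
	/* Folio got pinned from under us. Put it back and fail the move. */
	if (folio_maybe_dma_pinned(src_folio)) {
		/* ...restore orig_src_pte into the src_pte entry, set err... */
		goto out;
	}

	/*
	 * Retarget the anon rmap and index only once the move can no longer
	 * fail, so a failed UFFDIO_MOVE leaves src_folio->{mapping,index}
	 * untouched.
	 */
	folio_move_anon_rmap(src_folio, dst_vma);
	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));

	orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
	/* Follow mremap() behavior and treat the entry dirty after the move */
	orig_dst_pte = pte_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);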