author      Hugh Dickins <hughd@google.com>                      2020-08-07 08:26:15 +0200
committer   Linus Torvalds <torvalds@linux-foundation.org>       2020-08-07 20:33:29 +0200
commit      723a80dafed5c95889d48baab9aa433a6ffa0b4e (patch)
tree        c23eada9309cfbd925c8dba079429b837f851176 /mm/khugepaged.c
parent      mm/hugetlb: fix calculation of adjust_range_if_pmd_sharing_possible (diff)
download    linux-723a80dafed5c95889d48baab9aa433a6ffa0b4e.tar.xz
            linux-723a80dafed5c95889d48baab9aa433a6ffa0b4e.zip
khugepaged: collapse_pte_mapped_thp() flush the right range
pmdp_collapse_flush() should be given the start address at which the huge page is mapped, haddr: it was given addr, which at that point has been used as a local variable, incremented to the end address of the extent.

Found by source inspection while chasing a hugepage locking bug, which I then could not explain by this. At first I thought this was very bad; then saw that all of the page translations that were not flushed would actually still point to the right pages afterwards, so harmless; then realized that I know nothing of how different architectures and models cache intermediate paging structures, so maybe it matters after all - particularly since the page table concerned is immediately freed.

Much easier to fix than to think about.

Fixes: 27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP")
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: <stable@vger.kernel.org> [5.4+]
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2008021204390.27773@eggly.anvils
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
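For readers less familiar with collapse_pte_mapped_thp(), a minimal user-space sketch of the address arithmetic may help show how addr ends up past the extent by the time the flush runs. The standalone program below, its constants, and the sample address are illustrative assumptions, not the kernel source; it only mirrors the pattern of the earlier per-PTE scan loop, which advances addr one page at a time across the whole huge-page extent.

/* sketch.c: illustrative only, assumes 4K pages and 512 PTEs per PMD */
#include <stdio.h>

#define PAGE_SIZE      (1UL << 12)
#define HPAGE_PMD_NR   512
#define HPAGE_PMD_SIZE (PAGE_SIZE * HPAGE_PMD_NR)
#define HPAGE_PMD_MASK (~(HPAGE_PMD_SIZE - 1))

int main(void)
{
	unsigned long addr = 0x7f1234567000UL;        /* hypothetical address passed in */
	unsigned long haddr = addr & HPAGE_PMD_MASK;  /* huge-page-aligned start of extent */
	int i;

	/* mirrors the earlier scan: addr is reused and walks every PTE in the extent */
	for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE)
		;  /* per-PTE checks elided */

	printf("haddr = %#lx  (start of extent: the range the flush must cover)\n", haddr);
	printf("addr  = %#lx  (end of extent: what was actually being passed)\n", addr);
	return 0;
}

By step 4 only haddr still names the range backed by the page table about to be freed, which is why the one-line change below switches the argument.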
Diffstat
-rw-r--r--  mm/khugepaged.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 700f5160f3e4..28aae7f45c63 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1502,7 +1502,7 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 	/* step 4: collapse pmd */
 	ptl = pmd_lock(vma->vm_mm, pmd);
-	_pmd = pmdp_collapse_flush(vma, addr, pmd);
+	_pmd = pmdp_collapse_flush(vma, haddr, pmd);
 	spin_unlock(ptl);
 	mm_dec_nr_ptes(mm);
 	pte_free(mm, pmd_pgtable(_pmd));