author     Hugh Dickins <hughd@google.com>                   2018-11-30 23:10:35 +0100
committer  Linus Torvalds <torvalds@linux-foundation.org>    2018-11-30 23:56:15 +0100
commit     2af8ff291848cc4b1cce24b6c943394eb2c761e8 (patch)
tree       b5ccc610b8c64029605ac253aa602a6a58e8446f /mm/khugepaged.c
parent     mm/khugepaged: fix crashes due to misaccounted holes (diff)
download   linux-2af8ff291848cc4b1cce24b6c943394eb2c761e8.tar.xz
           linux-2af8ff291848cc4b1cce24b6c943394eb2c761e8.zip
mm/khugepaged: collapse_shmem() remember to clear holes
Huge tmpfs testing reminds us that there is no __GFP_ZERO in the gfp
flags khugepaged uses to allocate a huge page - in all common cases it
would just be a waste of effort - so collapse_shmem() must remember to
clear out any holes that it instantiates.
The obvious place to do so, where they are put into the page cache tree,
is not a good choice, because interrupts are disabled there. Leave it
until further down, once success is assured, where the other pages are
copied (before setting PageUptodate). A standalone userspace sketch of
this clear-and-copy walk follows after the tags below.
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1811261525080.2275@eggly.anvils
Fixes: f3f0e1d2150b2 ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: <stable@vger.kernel.org> [4.8+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
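
The clear-and-copy walk this commit adds is easy to model outside the kernel. The sketch below is an illustration only, not kernel code: clear_subpage(), copy_subpage() and the present[] hole pattern are invented stand-ins for clear_highpage(), copy_highpage() and the list of shmem pages being collapsed, and HPAGE_PMD_NR = 512 assumes 2MB huge pages with 4kB subpages.

/*
 * Userspace model of the fixed copy loop in collapse_shmem().
 * Any index in [start, end) with no backing page is a hole and must
 * be cleared by hand, because the huge page was not allocated with
 * __GFP_ZERO.
 */
#include <stdio.h>
#include <string.h>

#define HPAGE_PMD_NR	512	/* subpages per huge page (assumes 2MB/4kB) */
#define SUBPAGE_SIZE	4096

static char new_page[HPAGE_PMD_NR][SUBPAGE_SIZE];	/* the huge page */

static void clear_subpage(unsigned long index)		/* ~ clear_highpage() */
{
	memset(new_page[index % HPAGE_PMD_NR], 0, SUBPAGE_SIZE);
}

static void copy_subpage(unsigned long index, const char *src)	/* ~ copy_highpage() */
{
	memcpy(new_page[index % HPAGE_PMD_NR], src, SUBPAGE_SIZE);
}

int main(void)
{
	/* hypothetical extent [0, 8) with pages present only at 0, 3 and 4 */
	unsigned long present[] = { 0, 3, 4 };
	unsigned long start = 0, end = 8, index = start;
	char src[SUBPAGE_SIZE];

	memset(src, 'x', sizeof(src));

	for (size_t i = 0; i < sizeof(present) / sizeof(present[0]); i++) {
		while (index < present[i])	/* clear the hole before this page */
			clear_subpage(index++);
		copy_subpage(index, src);	/* copy the present page */
		index++;
	}
	while (index < end)			/* clear any trailing holes */
		clear_subpage(index++);

	for (index = start; index < end; index++)
		printf("subpage %lu: %s\n", index,
		       new_page[index % HPAGE_PMD_NR][0] ? "copied" : "cleared");
	return 0;
}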
Diffstat
 mm/khugepaged.c | 10 ++++++++++
 1 file changed, 10 insertions(+), 0 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 65e82f665c7c..1c402d33547e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1467,7 +1467,12 @@ xa_unlocked:
 	 * Replacing old pages with new one has succeeded, now we
 	 * need to copy the content and free the old pages.
 	 */
+	index = start;
 	list_for_each_entry_safe(page, tmp, &pagelist, lru) {
+		while (index < page->index) {
+			clear_highpage(new_page + (index % HPAGE_PMD_NR));
+			index++;
+		}
 		copy_highpage(new_page + (page->index % HPAGE_PMD_NR),
 				page);
 		list_del(&page->lru);
@@ -1477,6 +1482,11 @@ xa_unlocked:
 		ClearPageActive(page);
 		ClearPageUnevictable(page);
 		put_page(page);
+		index++;
+	}
+	while (index < end) {
+		clear_highpage(new_page + (index % HPAGE_PMD_NR));
+		index++;
 	}
 
 	local_irq_disable();
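
Worked through on a hypothetical extent: with start = 0, end = 512 and pages present only at indexes 5 and 9, the added code clears subpages 0-4, copies 5, clears 6-8, copies 9, and the trailing while loop clears 10-511, so every subpage of the huge page holds defined contents before PageUptodate is set.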