author     Matthew Wilcox (Oracle) <willy@infradead.org>  2022-01-28 20:29:43 +0100
committer  Matthew Wilcox (Oracle) <willy@infradead.org>  2022-03-21 18:01:32 +0100
commit     4b8554c527f3cfa183f6c06d231a9387873205a0 (patch)
tree       cba1023980f8eaca5ae0f9c917056179113a1516 /mm/huge_memory.c
parent     mm/rmap: Convert try_to_unmap() to take a folio (diff)
mm/rmap: Convert try_to_migrate() to folios
Convert the callers to pass a folio and the try_to_migrate_one()
worker to use a folio throughout. This fixes an assumption that a
folio must be <= PMD size.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Diffstat (limited to 'mm/huge_memory.c')
-rw-r--r--  mm/huge_memory.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index de684427f79c..7df1934d6528 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2262,8 +2262,8 @@ static void unmap_page(struct page *page)
	 * pages can simply be left unmapped, then faulted back on demand.
	 * If that is ever changed (perhaps for mlock), update remap_page().
	 */
-	if (PageAnon(page))
-		try_to_migrate(page, ttu_flags);
+	if (folio_test_anon(folio))
+		try_to_migrate(folio, ttu_flags);
 	else
 		try_to_unmap(folio, ttu_flags | TTU_IGNORE_MLOCK);