author     Yosry Ahmed <yosryahmed@google.com>          2024-06-11 04:45:16 +0200
committer  Andrew Morton <akpm@linux-foundation.org>    2024-07-04 04:30:09 +0200
commit     c63f210d4891f5b1b1057a0d7c91d2b0d15431d1 (patch)
tree       d4c7fae8c523c2d3aecc725c671cc5bbbdfbbbe0 /mm
parent     mm: zswap: add zswap_never_enabled() (diff)
download   linux-c63f210d4891f5b1b1057a0d7c91d2b0d15431d1.tar.xz
           linux-c63f210d4891f5b1b1057a0d7c91d2b0d15431d1.zip
mm: zswap: handle incorrect attempts to load large folios
Zswap does not support storing or loading large folios. Until proper support is added, attempts to load large folios from zswap are a bug.

For example, if a swapin fault observes that contiguous PTEs are pointing to contiguous swap entries and tries to swap them in as a large folio, swap_read_folio() will pass in a large folio to zswap_load(), but zswap_load() will only effectively load the first page in the folio. If the first page is not in zswap, the folio will be read from disk, even though other pages may be in zswap. In both cases, this will lead to silent data corruption. Proper support needs to be added before large folio swapins and zswap can work together.

Looking at the callers of swap_read_folio(), the folios they pass in are allocated either by __read_swap_cache_async() or by do_swap_page() in the SWP_SYNCHRONOUS_IO path, both of which allocate order-0 folios, so everything is fine for now. However, there is ongoing work to add support for large folio swapins [1]. To make sure new development does not break zswap (or get broken by zswap), add minimal handling of incorrect loads of large folios to zswap.

First, move the call to folio_mark_uptodate() inside zswap_load(). If a large folio load is attempted, and zswap was ever enabled on the system, return 'true' without calling folio_mark_uptodate(). This will prevent the folio from being read from disk, and will emit an IO error because the folio is not uptodate (e.g. do_swap_page() will return VM_FAULT_SIGBUS). It may not be reliable recovery in all cases, but it is better than nothing.

This was tested by hacking the allocation in __read_swap_cache_async() to use order 2 and __GFP_COMP.

In the future, to handle this correctly, the swapin code should:
(a) Fall back to order-0 swapins if zswap was ever used on the machine, because compressed pages remain in zswap after it is disabled.
(b) Add proper support to swapin large folios from zswap (fully or partially).
Probably start with (a), then follow up with (b).

[1] https://lore.kernel.org/linux-mm/20240304081348.197341-6-21cnbao@gmail.com/

Link: https://lkml.kernel.org/r/20240611024516.1375191-3-yosryahmed@google.com
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Acked-by: Barry Song <baohua@kernel.org>
Cc: Barry Song <baohua@kernel.org>
Cc: Chengming Zhou <chengming.zhou@linux.dev>
Cc: Chris Li <chrisl@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
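For reference, the test hack mentioned above boils down to forcing the swap cache allocation in __read_swap_cache_async() up to order 2, so that swap_read_folio() hands zswap_load() a large folio. The exact allocation call in that function varies across kernel versions, so the lines below are only an illustrative sketch of such a hack (gfp_mask is assumed to be the gfp mask already in scope there), not part of this patch:

	/*
	 * Test hack only: allocate an order-2 compound folio instead of the
	 * usual order-0 folio, so that zswap_load() sees a large folio and
	 * the new WARN_ON_ONCE() path can be exercised.
	 */
	struct page *page = alloc_pages(gfp_mask | __GFP_COMP, 2);
	struct folio *folio = page ? page_folio(page) : NULL;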
Diffstat (limited to 'mm')
-rw-r--r--    mm/page_io.c     1
-rw-r--r--    mm/zswap.c      12
2 files changed, 12 insertions, 1 deletion
diff --git a/mm/page_io.c b/mm/page_io.c
index 488ecacef84f..6c1c1828bb88 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -515,7 +515,6 @@ void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
delayacct_swapin_start();
if (zswap_load(folio)) {
- folio_mark_uptodate(folio);
folio_unlock(folio);
} else if (data_race(sis->flags & SWP_FS_OPS)) {
swap_read_folio_fs(folio, plug);
diff --git a/mm/zswap.c b/mm/zswap.c
index 9d4e54282b5f..a546c01602aa 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1567,6 +1567,17 @@ bool zswap_load(struct folio *folio)
return false;
+ /*
+ * Large folios should not be swapped in while zswap is being used, as
+ * they are not properly handled. Zswap does not properly load large
+ * folios, and a large folio may only be partially in zswap.
+ *
+ * Return true without marking the folio uptodate so that an IO error is
+ * emitted (e.g. do_swap_page() will sigbus).
+ */
+ if (WARN_ON_ONCE(folio_test_large(folio)))
+ return true;
+
+ /*
* When reading into the swapcache, invalidate our entry. The
* swapcache can be the authoritative owner of the page and
* its mappings, and the pressure that results from having two
@@ -1600,6 +1611,7 @@ bool zswap_load(struct folio *folio)
folio_mark_dirty(folio);
}
+ folio_mark_uptodate(folio);
return true;
}
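As a rough illustration of future direction (a), a large folio swapin path could gate its allocation order on zswap_never_enabled(), which the parent commit adds. This is a hypothetical sketch under that assumption; swapin_folio_order() is an invented helper name and is not part of this series:

#include <linux/zswap.h>	/* zswap_never_enabled(), added by the parent commit */

/*
 * Hypothetical helper: only attempt a large folio swapin when zswap has
 * never been enabled, because compressed pages can still be sitting in
 * zswap even after it is disabled.
 */
static int swapin_folio_order(int desired_order)
{
	if (!zswap_never_enabled())
		return 0;	/* fall back to an order-0 swapin */

	return desired_order;
}

Direction (b), loading large folios from zswap fully or partially, would then build on top of such a fallback.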