author     Matthew Wilcox (Oracle) <willy@infradead.org>    2023-07-15 06:23:40 +0200
committer  Andrew Morton <akpm@linux-foundation.org>        2023-08-21 22:37:26 +0200
commit     34f4c198bfbe86612c368eb122002787acecaa93 (patch)
tree       8960b9d5118a738e5dac62b632e68f5266b1d807 /mm
parent     mm: kill frontswap (diff)
zswap: make zswap_store() take a folio
Patch series "Followup folio conversions for zswap".
With frontswap killed, it's worth converting the zswap_load() and
zswap_store() functions to take a folio instead of a page pointer. They
aren't converted to support large folios, but there are a lot of
unnecessary calls to compound_head() that are removed by these patches.
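For background, a minimal sketch of why the page-based helpers carry a hidden compound_head() call while the folio helpers do not; the example_* functions below are hypothetical stand-ins, not the real kernel macro expansions:

	#include <linux/mm.h>
	#include <linux/page-flags.h>

	/*
	 * Roughly what a PageLocked(page) test costs: "page" may be a tail
	 * page of a compound page, so the head page has to be resolved first.
	 */
	static inline bool example_page_locked(struct page *page)
	{
		return test_bit(PG_locked, &compound_head(page)->flags);
	}

	/* A folio is never a tail page, so no compound_head() lookup is needed. */
	static inline bool example_folio_locked(struct folio *folio)
	{
		return test_bit(PG_locked, &folio->flags);
	}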
This patch (of 4):
Only convert a few easy parts of this function to use the folio passed in;
convert back to struct page for the majority of it. This does remove a
few hidden calls to compound_head().
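The shape of that partial conversion, as a rough sketch (example_store() and store_one_page() are hypothetical names; the real instance is zswap_store() in the diff below):

	#include <linux/mm.h>

	static bool store_one_page(struct page *page);	/* hypothetical, not-yet-converted helper */

	static bool example_store(struct folio *folio)
	{
		/* the easy checks use folio helpers directly ... */
		if (folio_test_large(folio))
			return false;

		/* ... while the unconverted bulk still operates on a struct page */
		return store_one_page(&folio->page);
	}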
Link: https://lkml.kernel.org/r/20230715042343.434588-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20230715042343.434588-3-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Cc: Vitaly Wool <vitaly.wool@konsulko.com>
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--   mm/page_io.c |  2
-rw-r--r--   mm/zswap.c   | 13
2 files changed, 8 insertions, 7 deletions
diff --git a/mm/page_io.c b/mm/page_io.c
index 5d0baba3578b..ac89685b562b 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -195,7 +195,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 		folio_unlock(folio);
 		return ret;
 	}
-	if (zswap_store(&folio->page)) {
+	if (zswap_store(folio)) {
 		folio_start_writeback(folio);
 		folio_unlock(folio);
 		folio_end_writeback(folio);
diff --git a/mm/zswap.c b/mm/zswap.c
index be1b6417ef5c..df3054e6a3a9 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1223,11 +1223,12 @@ static void zswap_fill_page(void *ptr, unsigned long value)
 	memset_l(page, value, PAGE_SIZE / sizeof(unsigned long));
 }
 
-bool zswap_store(struct page *page)
+bool zswap_store(struct folio *folio)
 {
-	swp_entry_t swp = { .val = page_private(page), };
+	swp_entry_t swp = folio_swap_entry(folio);
 	int type = swp_type(swp);
 	pgoff_t offset = swp_offset(swp);
+	struct page *page = &folio->page;
 	struct zswap_tree *tree = zswap_trees[type];
 	struct zswap_entry *entry, *dupentry;
 	struct scatterlist input, output;
@@ -1242,11 +1243,11 @@ bool zswap_store(struct page *page)
 	gfp_t gfp;
 	int ret;
 
-	VM_WARN_ON_ONCE(!PageLocked(page));
-	VM_WARN_ON_ONCE(!PageSwapCache(page));
+	VM_WARN_ON_ONCE(!folio_test_locked(folio));
+	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
 
-	/* THP isn't supported */
-	if (PageTransHuge(page))
+	/* Large folios aren't supported */
+	if (folio_test_large(folio))
 		return false;
 
 	if (!zswap_enabled || !tree)