author		Yosry Ahmed <yosryahmed@google.com>	2024-04-13 04:24:04 +0200
committer	Andrew Morton <akpm@linux-foundation.org>	2024-05-06 02:53:37 +0200
commit		4ea3fa9dd2e9c58920d73b1b2cd6bcc6f14ae0e0 (patch)
tree		03b7405f011474dfac11123929de59e92620309b /mm/zswap.c
parent		userfaultfd: remove WRITE_ONCE when setting folio->index during UFFDIO_MOVE (diff)
mm: zswap: always shrink in zswap_store() if zswap_pool_reached_full
Patch series "zswap same-filled and limit checking cleanups", v3.
Miscellaneous cleanups for limit checking and same-filled handling in the
store path. This series was broken out of the "zswap: store zero-filled
pages more efficiently" series [1]. It contains the cleanups and drops
the main functional changes.
[1] https://lore.kernel.org/lkml/20240325235018.2028408-1-yosryahmed@google.com/
This patch (of 4):
The cleanup code in zswap_store() is not pretty, particularly the 'shrink'
label at the bottom that ends up jumping between cleanup labels.
Instead of having a dedicated label to shrink the pool, just use
zswap_pool_reached_full directly to figure out if the pool needs
shrinking. zswap_pool_reached_full should be true if and only if the pool
needs shrinking.
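
To make the control-flow change concrete, here is a heavily simplified,
self-contained C sketch of the new shape of the function. This is not the
kernel code itself: pool_reached_full, queue_shrink_work() and store() are
stand-ins that only abbreviate their zswap counterparts.

#include <stdbool.h>
#include <stdio.h>

static bool pool_reached_full;	/* stand-in for zswap_pool_reached_full */

/* Stand-in for queue_work(shrink_wq, &zswap_shrink_work). */
static void queue_shrink_work(void)
{
	puts("shrink worker queued");
}

/* New shape of the store path: the failure path falls through one
 * shared 'reject' cleanup, which consults the flag itself; there is
 * no dedicated 'shrink' label jumping between cleanup labels. */
static bool store(unsigned long cur_pages, unsigned long max_pages)
{
	if (cur_pages >= max_pages) {
		pool_reached_full = true;
		goto reject;
	}
	/* ... the actual store work would happen here ... */
	return true;

reject:
	if (pool_reached_full)
		queue_shrink_work();
	return false;
}

int main(void)
{
	store(100, 50);	/* over the limit: store rejected, shrink queued */
	return 0;
}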
The only caveat is that the value of zswap_pool_reached_full may be
changed by concurrent zswap_store() calls between checking the limit and
testing zswap_pool_reached_full in the cleanup code. This is fine
because:
- If zswap_pool_reached_full was true during limit checking then became
false during the cleanup code, then someone else already took care of
shrinking the pool and there is no need to queue the worker. That
would be a good thing.
- If zswap_pool_reached_full was false during limit checking then became
true during the cleanup code, then someone else hit the limit in the
meantime. In this case, both threads will try to queue the worker, but
it never gets queued more than once anyway (see the sketch below).
Also, calling queue_work() multiple times when the limit is hit could
already happen today, so this isn't a significant change in any way.
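
To illustrate the "never gets queued more than once" point: queue_work()
queues a work item only if it is not already pending, and returns false
otherwise. Below is a minimal stand-alone C sketch of that at-most-once
behavior; the atomic flag is only a stand-in for the real workqueue
pending bit, not the actual kernel implementation.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the pending state of a struct work_struct. */
static atomic_bool shrink_work_pending;

/* Mimics queue_work()'s at-most-once behavior: the work is queued only
 * if it was not already pending; the return value says whether this
 * call actually queued it. */
static bool queue_shrink_work(void)
{
	bool expected = false;
	return atomic_compare_exchange_strong(&shrink_work_pending,
					      &expected, true);
}

int main(void)
{
	/* Two racing zswap_store() calls both see the flag set and both
	 * try to queue the worker; only the first call queues it. */
	printf("first caller queued:  %d\n", queue_shrink_work());	/* 1 */
	printf("second caller queued: %d\n", queue_shrink_work());	/* 0 */
	return 0;
}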
Link: https://lkml.kernel.org/r/20240413022407.785696-1-yosryahmed@google.com
Link: https://lkml.kernel.org/r/20240413022407.785696-2-yosryahmed@google.com
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: "Maciej S. Szmigiero" <mail@maciej.szmigiero.name>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/zswap.c')
-rw-r--r--	mm/zswap.c	10
1 file changed, 4 insertions, 6 deletions
diff --git a/mm/zswap.c b/mm/zswap.c
index e6cf6282f5fd..d753ad9d4591 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1436,12 +1436,12 @@ bool zswap_store(struct folio *folio)
 	if (cur_pages >= max_pages) {
 		zswap_pool_limit_hit++;
 		zswap_pool_reached_full = true;
-		goto shrink;
+		goto reject;
 	}
 
 	if (zswap_pool_reached_full) {
 		if (cur_pages > zswap_accept_thr_pages())
-			goto shrink;
+			goto reject;
 		else
 			zswap_pool_reached_full = false;
 	}
@@ -1547,6 +1547,8 @@ freepage:
 	zswap_entry_cache_free(entry);
 reject:
 	obj_cgroup_put(objcg);
+	if (zswap_pool_reached_full)
+		queue_work(shrink_wq, &zswap_shrink_work);
 check_old:
 	/*
 	 * If the zswap store fails or zswap is disabled, we must invalidate the
@@ -1557,10 +1559,6 @@ check_old:
 	if (entry)
 		zswap_entry_free(entry);
 	return false;
-
-shrink:
-	queue_work(shrink_wq, &zswap_shrink_work);
-	goto reject;
 }
 
 bool zswap_load(struct folio *folio)