author    Wei Yang <richard.weiyang@gmail.com>    2020-04-02 06:06:26 +0200
committer Linus Torvalds <torvalds@linux-foundation.org>    2020-04-02 18:35:28 +0200
commit    cb77445132aed64ecea286edfbcfbc000b8f1f73 (patch)
tree      f313bfcec28300512ec35755a9b158e6309853b9    /mm/swap_state.c
parent    mm: swap: use smp_mb__after_atomic() to order LRU bit set (diff)
mm/swap_state.c: use the same way to count page in [add_to|delete_from]_swap_cache
add_to_swap_cache() and delete_from_swap_cache() are counterparts, while
currently they use different ways to count pages. It doesn't break anything
because we only have two sizes for PageAnon, but this is confusing and not
good practice.

This patch corrects it by making both functions use hpage_nr_pages().

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Link: http://lkml.kernel.org/r/20200315012920.2687-1-richard.weiyang@gmail.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
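For context: compound_nr() derives the count as 1 << compound_order(page), while hpage_nr_pages() returns HPAGE_PMD_NR for a transparent huge page and 1 otherwise. Anonymous pages reaching the swap cache come only in those two sizes (order-0 and PMD order), so the two helpers agree there. The fragment below is a minimal userspace model of that equivalence, not kernel code; the model_* names and the assumption of order-9 PMD huge pages (2 MiB with 4 KiB base pages, as on x86-64) are illustrative.

/* Minimal userspace model of the two counting helpers (not kernel code). */
#include <assert.h>
#include <stdio.h>

#define HPAGE_PMD_ORDER 9                       /* assumed, x86-64-like */
#define HPAGE_PMD_NR    (1UL << HPAGE_PMD_ORDER)

/* Model of compound_nr(): page count derived from the compound order. */
static unsigned long model_compound_nr(unsigned int order)
{
	return 1UL << order;
}

/* Model of hpage_nr_pages(): 1 for a base page, HPAGE_PMD_NR for a THP. */
static unsigned long model_hpage_nr_pages(int is_thp)
{
	return is_thp ? HPAGE_PMD_NR : 1;
}

int main(void)
{
	/* Anonymous pages come only in these two sizes, so the helpers agree. */
	assert(model_compound_nr(0) == model_hpage_nr_pages(0));
	assert(model_compound_nr(HPAGE_PMD_ORDER) == model_hpage_nr_pages(1));
	printf("order-0: %lu pages, PMD-order: %lu pages\n",
	       model_compound_nr(0), model_compound_nr(HPAGE_PMD_ORDER));
	return 0;
}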
Diffstat (limited to 'mm/swap_state.c')
-rw-r--r--    mm/swap_state.c    2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 8e7ce9a9bc5e..ebed37bbf7a3 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -116,7 +116,7 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 	struct address_space *address_space = swap_address_space(entry);
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
-	unsigned long i, nr = compound_nr(page);
+	unsigned long i, nr = hpage_nr_pages(page);
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapCache(page), page);