author | Miaohe Lin <linmiaohe@huawei.com> | 2022-06-25 11:28:13 +0200
committer | akpm <akpm@linux-foundation.org> | 2022-07-04 03:08:51 +0200
commit | 2f55f070e5b80f130f5b161931ca91ce9cb2e625 (patch)
tree | 5d8fe137d61c4a2933e98cb383506977bce92703 /mm/khugepaged.c
parent | mm/khugepaged: trivial typo and codestyle cleanup (diff)
download | linux-2f55f070e5b80f130f5b161931ca91ce9cb2e625.tar.xz, linux-2f55f070e5b80f130f5b161931ca91ce9cb2e625.zip
mm/khugepaged: minor cleanup for collapse_file
nr_none is always 0 in the non-shmem case because the page can be read
back from the backing store. So when nr_none != 0, it must be the
is_shmem case. Also, only adjust nrpages and uncharge shmem when
nr_none != 0, to save CPU cycles.
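For context, this invariant comes from the hole-handling paths in
collapse_file(). The sketch below is a heavily condensed illustration,
not the verbatim upstream code (locking, charging, and error handling
are elided), of why nr_none can only be incremented on the shmem path:

	/*
	 * Condensed sketch of the hole handling in collapse_file()
	 * (abbreviated; not the verbatim upstream code).
	 */
	for (index = start; index < end; index++) {
		struct page *page = xas_next(&xas);

		if (is_shmem) {
			if (!page) {
				/* shmem hole: reserve the slot and count it */
				xas_store(&xas, new_page);
				nr_none++;
				continue;
			}
		} else if (!page) {
			/*
			 * Regular file: a missing page is simply read back
			 * from the backing store, so this path never
			 * touches nr_none.
			 */
			page_cache_sync_readahead(mapping, &file->f_ra,
						  file, index, end - index);
			page = find_lock_page(mapping, index);
		}
		/* ... lock, isolate and copy the page into new_page ... */
	}

Given that invariant, the NR_SHMEM accounting under if (nr_none) can drop
its is_shmem test, and the rollback path can skip the nrpages adjustment
and shmem_uncharge() entirely when nr_none == 0.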
Link: https://lkml.kernel.org/r/20220625092816.4856-5-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: NeilBrown <neilb@suse.de>
Cc: Peter Xu <peterx@redhat.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/khugepaged.c')
-rw-r--r-- | mm/khugepaged.c | 10
1 file changed, 5 insertions, 5 deletions
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e237c5ec59bb..35f87bd2af28 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1881,8 +1881,8 @@ out_unlock:
 
 	if (nr_none) {
 		__mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
-		if (is_shmem)
-			__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
+		/* nr_none is always 0 for non-shmem. */
+		__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
 	}
 
 	/* Join all the small entries into a single multi-index entry */
@@ -1946,10 +1946,10 @@ xa_unlocked:
 
 		/* Something went wrong: roll back page cache changes */
 		xas_lock_irq(&xas);
-		mapping->nrpages -= nr_none;
-
-		if (is_shmem)
+		if (nr_none) {
+			mapping->nrpages -= nr_none;
 			shmem_uncharge(mapping->host, nr_none);
+		}
 
 		xas_set(&xas, start);
 		xas_for_each(&xas, page, end - 1) {