author | Bui Quang Minh <minhquangbui99@gmail.com> | 2022-08-21 17:40:55 +0200
---|---|---
committer | Andrew Morton <akpm@linux-foundation.org> | 2022-09-12 05:26:00 +0200
commit | 32d772708009eb90f8eeed6ec8f76e06f07e41e9 (patch) |
tree | d6e4c30d01b906e54264ead0ef5a2473a525edeb /mm/page_counter.c |
parent | mm: pagewalk: add api documentation for walk_page_range_novma() (diff) |
mm: skip retry when new limit is not below old one in page_counter_set_max
In page_counter_set_max(), we want to make sure the new limit is not below
the concurrently-changing counter value. We read the counter and check
that the new limit is not below it before the swap. After the swap, we
read the counter again and retry in case it has been incremented in the
meantime, as that may violate the requirement. Even though
page_counter_try_charge() can still see the old limit, it guarantees that
the counter does not end up above that limit after the increment. So if
the new limit is not below the old one, the counter is also guaranteed not
to be above the new limit, and the retry can be skipped as a small
optimization.
Link: https://lkml.kernel.org/r/20220821154055.109635-1-minhquangbui99@gmail.com
Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
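
To make the ordering argument concrete, here is a minimal userspace model of the charge side, written with C11 atomics rather than the kernel's primitives. The names struct counter_model and model_try_charge are invented for this sketch; hierarchy walking, failcnt/watermark tracking and the exact barrier semantics of the real page_counter_try_charge() are left out. The only point is the increment-then-check order, which keeps a successful charge from leaving the counter above the limit it observed.

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Hypothetical stand-in for struct page_counter; only the two fields
 * this patch cares about are modelled.
 */
struct counter_model {
	_Atomic long usage;		/* what page_counter_read() returns */
	_Atomic unsigned long max;	/* the limit */
};

/*
 * Charge side, mirroring the increment-then-check order of
 * page_counter_try_charge(): usage is raised first, compared against
 * whatever limit is currently visible, and backed out on failure, so a
 * successful charge never leaves usage above the limit it observed.
 */
static bool model_try_charge(struct counter_model *c, unsigned long nr_pages)
{
	long new = atomic_fetch_add(&c->usage, (long)nr_pages) + (long)nr_pages;

	if (new > (long)atomic_load(&c->max)) {
		atomic_fetch_sub(&c->usage, (long)nr_pages);	/* undo the charge */
		return false;
	}
	return true;
}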
Diffstat (limited to 'mm/page_counter.c')
-rw-r--r-- | mm/page_counter.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_counter.c b/mm/page_counter.c
index eb156ff5d603..8a0cc24b60dd 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -193,7 +193,7 @@ int page_counter_set_max(struct page_counter *counter, unsigned long nr_pages)
 
 		old = xchg(&counter->max, nr_pages);
 
-		if (page_counter_read(counter) <= usage)
+		if (page_counter_read(counter) <= usage || nr_pages >= old)
 			return 0;
 
 		counter->max = old;
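
For context, the limit-setting side can be expressed in the same model as it looks after this patch. The sketch below follows the read, xchg, re-read sequence of page_counter_set_max() in simplified form (no cond_resched(), the kernel's -EBUSY replaced by -1); model_set_max is again an invented name, not kernel code.

/*
 * Limit-setting side of the same model, as it looks after this patch.
 * The new "nr_pages >= old" test is the short-circuit: a limit that is
 * not being lowered cannot be invalidated by a charge that raced
 * against the old, lower-or-equal limit.
 */
static int model_set_max(struct counter_model *c, unsigned long nr_pages)
{
	for (;;) {
		long usage = atomic_load(&c->usage);

		if (usage > (long)nr_pages)
			return -1;	/* new limit already below usage */

		unsigned long old = atomic_exchange(&c->max, nr_pages);

		if (atomic_load(&c->usage) <= usage || nr_pages >= old)
			return 0;

		/*
		 * A racing charge was admitted against the old limit and
		 * may now sit above the lowered one: put the old limit
		 * back and retry.
		 */
		atomic_store(&c->max, old);
	}
}

In this model, only a lowered limit can be invalidated by a racing charge, which is why raising or keeping the limit needs no rollback-and-retry pass.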