author: Vlastimil Babka <vbabka@suse.cz>, 2013-09-11 23:22:33 +0200
committer: Linus Torvalds <torvalds@linux-foundation.org>, 2013-09-12 00:58:00 +0200
commit: 5b40998ae35cf64561868370e6c9f3d3e94b6bf7 (patch)
tree: 24f7142d850df3512392004501fc9db7573fc031
parent: mm: munlock: bypass per-cpu pvec for putback_lru_page (diff)
download: linux-5b40998ae35cf64561868370e6c9f3d3e94b6bf7.tar.xz, linux-5b40998ae35cf64561868370e6c9f3d3e94b6bf7.zip
mm: munlock: remove redundant get_page/put_page pair on the fast path
The performance of the fast path in munlock_vma_range() can be further
improved by avoiding the atomic ops of a redundant get_page()/put_page()
pair. By the time get_page() is called during page isolation, we already
hold the pin from follow_page_mask(). That pin is then returned by
__pagevec_lru_add(), after which we no longer reference the pages.
After this patch, an 8% speedup was measured when munlocking a 56 GB
memory area with THP disabled.
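For intuition, here is a minimal userspace sketch of the refcount bookkeeping (an illustrative model, not kernel code: this struct page, get_page() and put_page() are simplified stand-ins that only manipulate a reference count, and fast_path_before()/fast_path_after() are hypothetical helpers that compress the munlock phases into single calls):

```c
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for struct page: nothing but a reference count. */
struct page {
	atomic_int refcount;
};

static void get_page(struct page *p)
{
	atomic_fetch_add(&p->refcount, 1);
}

static void put_page(struct page *p)
{
	/* Returning the last pin frees the page. */
	if (atomic_fetch_sub(&p->refcount, 1) == 1) {
		printf("page freed\n");
		free(p);
	}
}

/*
 * Before the patch: isolation pinned the page again, putback
 * (__pagevec_lru_add) returned that pin, and a separate phase 4
 * returned the follow_page_mask() pin -- a redundant atomic
 * get/put pair per page.
 */
static void fast_path_before(struct page *p)
{
	get_page(p);	/* page isolation takes an extra pin */
	put_page(p);	/* putback returns the isolation pin */
	put_page(p);	/* phase 4 returns the follow_page_mask() pin */
}

/*
 * After the patch: the follow_page_mask() pin itself is handed to
 * the pagevec, and putback returns it -- no extra pair.
 */
static void fast_path_after(struct page *p)
{
	put_page(p);	/* putback returns the follow_page_mask() pin */
}

int main(void)
{
	struct page *p = malloc(sizeof(*p));

	atomic_init(&p->refcount, 1);	/* the pin from follow_page_mask() */
	fast_path_after(p);		/* same end state, two fewer atomic ops */

	p = malloc(sizeof(*p));
	atomic_init(&p->refcount, 1);
	fast_path_before(p);		/* old behaviour, for comparison */
	return 0;
}
```

The slow path cannot shed its get_page() the same way: as the comments in the diff below note, it takes a pin for putback_lru_page() before unlock_page() and returns the follow_page_mask() pin only afterwards, so the last reference is never dropped while the page is still locked.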
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Jörn Engel <joern@logfs.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Michel Lespinasse <walken@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r-- | mm/mlock.c | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
```diff
diff --git a/mm/mlock.c b/mm/mlock.c
index abdc612b042d..19a934dce5d6 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -303,8 +303,10 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 			if (PageLRU(page)) {
 				lruvec = mem_cgroup_page_lruvec(page, zone);
 				lru = page_lru(page);
-
-				get_page(page);
+				/*
+				 * We already have pin from follow_page_mask()
+				 * so we can spare the get_page() here.
+				 */
 				ClearPageLRU(page);
 				del_page_from_lru_list(page, lruvec, lru);
 			} else {
@@ -336,25 +338,25 @@ skip_munlock:
 			lock_page(page);
 			if (!__putback_lru_fast_prepare(page, &pvec_putback,
 					&pgrescued)) {
-				/* Slow path */
+				/*
+				 * Slow path. We don't want to lose the last
+				 * pin before unlock_page()
+				 */
+				get_page(page); /* for putback_lru_page() */
 				__munlock_isolated_page(page);
 				unlock_page(page);
+				put_page(page); /* from follow_page_mask() */
 			}
 		}
 	}
 
-	/* Phase 3: page putback for pages that qualified for the fast path */
+	/*
+	 * Phase 3: page putback for pages that qualified for the fast path
+	 * This will also call put_page() to return pin from follow_page_mask()
+	 */
 	if (pagevec_count(&pvec_putback))
 		__putback_lru_fast(&pvec_putback, pgrescued);
 
-	/* Phase 4: put_page to return pin from follow_page_mask() */
-	for (i = 0; i < nr; i++) {
-		struct page *page = pvec->pages[i];
-
-		if (page)
-			put_page(page);
-	}
-
 	pagevec_reinit(pvec);
 }
```