author | Mel Gorman <mgorman@techsingularity.net> | 2022-03-22 22:43:30 +0100 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2022-03-22 23:57:06 +0100 |
commit | ca7b59b1de72450b3e696bada3506a519ac5455c (patch) | |
tree | fd3815eb79f1f6f5023d936227040f51fdf4822e /mm/page_alloc.c | |
parent | mm/pages_alloc.c: don't create ZONE_MOVABLE beyond the end of a node (diff) | |
download | linux-ca7b59b1de72450b3e696bada3506a519ac5455c.tar.xz linux-ca7b59b1de72450b3e696bada3506a519ac5455c.zip |
mm/page_alloc: fetch the correct pcp buddy during bulk free
Patch series "Follow-up on high-order PCP caching", v2.
Commit 44042b449872 ("mm/page_alloc: allow high-order pages to be stored
on the per-cpu lists") was primarily aimed at reducing the cost of SLUB
cache refills of high-order pages in two ways. Firstly, zone lock
acquisitions were reduced and, secondly, there were fewer buddy list
modifications. This is a follow-up series fixing some issues that
became apparent after merging.
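For context, here is a minimal userspace sketch, with illustrative names only, of the idea behind the per-cpu (PCP) caching that the series builds on: one free list per order on each CPU, so an allocation or free of a cached order avoids the zone lock and buddy list manipulation entirely. This is not the kernel's actual implementation.

```c
/*
 * Illustrative sketch only -- not the kernel's real PCP code.
 * Each CPU keeps one free list per order; a request for a cached
 * order is served without taking the zone lock.
 */
#include <stddef.h>

#define SKETCH_MAX_ORDER 3	/* hypothetical cap on cached orders */

struct cached_page {
	struct cached_page *next;
};

struct pcp_sketch {
	/* one list per order, indexed by order */
	struct cached_page *lists[SKETCH_MAX_ORDER + 1];
};

/* Serve an order-sized request from the per-CPU cache if possible. */
static struct cached_page *pcp_try_alloc(struct pcp_sketch *pcp,
					 unsigned int order)
{
	struct cached_page *page;

	if (order > SKETCH_MAX_ORDER)
		return NULL;	/* too big to cache; fall back to buddy */

	page = pcp->lists[order];
	if (page)
		pcp->lists[order] = page->next;	/* per-CPU data: no lock */
	return page;
}
```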
Patch 1 is a functional fix. The bug it corrects is harmless but inefficient.
Patches 2-5 reduce the overhead of bulk freeing of PCP pages. While the
overhead is small, it's cumulative and noticeable when truncating large
files. The changelog for patch 4 includes results of a microbenchmark that
deletes large sparse files with data in the page cache. Sparse files were
used to eliminate filesystem overhead.
Patch 6 addresses issues with high-order PCP pages being stored on PCP
lists for too long. Pages freed on one CPU may not be reused quickly,
and in some cases this can increase cache miss rates. Details are
included in the changelog.
This patch (of 6):
free_pcppages_bulk() prefetches the buddies of pages about to be freed,
but the order must also be passed in, because PCP lists store pages of
multiple orders.
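To see why the order matters: the buddy of a 2^order block at a given pfn is found by flipping bit `order` of the pfn, which is what __find_buddy_pfn() computes. A small standalone sketch (the pfn values are hypothetical, chosen for illustration) reproduces the calculation alongside the hard-coded order-0 call this patch removes:

```c
/*
 * Standalone sketch of the buddy calculation performed by the
 * kernel's __find_buddy_pfn(): the buddy of a 2^order block at
 * @pfn is found by flipping bit @order of the pfn.
 */
#include <stdio.h>

static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
{
	return pfn ^ (1UL << order);
}

int main(void)
{
	/* Order-1 page at pfn 8: the real buddy is the block at pfn 10. */
	printf("order 1: %lu\n", find_buddy_pfn(8, 1));	/* prints 10 */

	/*
	 * Hard-coding order 0, as free_pcppages_bulk() did before this
	 * patch, yields pfn 9 -- a tail page of the order-1 block
	 * itself, so the prefetch warms the wrong struct page. Harmless,
	 * since the value is only prefetched, but a wasted cache fetch.
	 */
	printf("order 0: %lu\n", find_buddy_pfn(8, 0));	/* prints 9 */
	return 0;
}
```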
Link: https://lkml.kernel.org/r/20220217002227.5739-1-mgorman@techsingularity.net
Link: https://lkml.kernel.org/r/20220217002227.5739-2-mgorman@techsingularity.net
Fixes: 44042b449872 ("mm/page_alloc: allow high-order pages to be stored on the per-cpu lists")
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Aaron Lu <aaron.lu@intel.com>
Tested-by: Aaron Lu <aaron.lu@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r-- | mm/page_alloc.c | 6 |
1 file changed, 3 insertions, 3 deletions
```diff
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 38388020793f..35c399d51530 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1429,10 +1429,10 @@ static bool bulkfree_pcp_prepare(struct page *page)
 }
 #endif /* CONFIG_DEBUG_VM */
 
-static inline void prefetch_buddy(struct page *page)
+static inline void prefetch_buddy(struct page *page, unsigned int order)
 {
 	unsigned long pfn = page_to_pfn(page);
-	unsigned long buddy_pfn = __find_buddy_pfn(pfn, 0);
+	unsigned long buddy_pfn = __find_buddy_pfn(pfn, order);
 	struct page *buddy = page + (buddy_pfn - pfn);
 
 	prefetch(buddy);
@@ -1509,7 +1509,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			 * prefetch buddy for the first pcp->batch nr of pages.
 			 */
 			if (prefetch_nr) {
-				prefetch_buddy(page);
+				prefetch_buddy(page, order);
 				prefetch_nr--;
 			}
 		} while (count > 0 && --batch_free && !list_empty(list));
```