author      Wei Yang <richard.weiyang@gmail.com>              2019-03-06 00:44:01 +0100
committer   Linus Torvalds <torvalds@linux-foundation.org>    2019-03-06 06:07:15 +0100
commit      c52e75935f8ded2bd4a75eb08e914bd96802725b (patch)
tree        9db8980c704de96c2c31f55cf5acf88a176e6d3c
parent      arm64/mm: enable HugeTLB migration for contiguous bit HugeTLB pages (diff)
mm: remove extra drain pages on pcp list
In the current implementation, there are two places that isolate a range
of pages: __offline_pages() and alloc_contig_range(). During this
procedure, pages on the pcp lists are drained.
Below is a brief call flow:

    __offline_pages()/alloc_contig_range()
        start_isolate_page_range()
            set_migratetype_isolate()
                drain_all_pages()
        drain_all_pages()                 <--- A
This flow shows that the current logic isolates and drains the pcp lists
once per pageblock, and then drains the pcp lists again for the whole
range (point A).
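As a minimal sketch of the pre-patch caller pattern (signatures
simplified to follow the call flow above; illustrative, not the literal
kernel source):

    /* Pre-patch caller pattern, simplified for illustration. */
    static void isolate_range_prepatch(struct zone *zone,
                                       unsigned long start_pfn,
                                       unsigned long end_pfn)
    {
            /*
             * start_isolate_page_range() walks the range pageblock by
             * pageblock; set_migratetype_isolate() already calls
             * drain_all_pages() for every block it isolates.
             */
            start_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE, 0);

            /*
             * Point A above: a second, whole-range drain. The pcp
             * lists were already drained per pageblock, so this finds
             * nothing left to do.
             */
            drain_all_pages(zone);
    }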
start_isolate_page_range() is responsible for isolating the given pfn
range. One part of that job is to make sure that pages on the allocator
pcp lists are also properly isolated. Otherwise they could be reused and
the range wouldn't be completely isolated until the memory is freed
back. There is no strict guarantee here, because pages might get
allocated at any time before drain_all_pages() is called, but there
doesn't seem to be any strong demand for such a guarantee.
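One hypothetical interleaving illustrates the window the changelog
alludes to (a sketch, not actual kernel source):

    /*
     * CPU 0                               CPU 1
     * -----                               -----
     * set_migratetype_isolate(block)
     *   pageblock is now MIGRATE_ISOLATE
     *                                     alloc_page()
     *                                       served from CPU 1's pcp
     *                                       list, which still holds a
     *                                       page of that block under
     *                                       its old migratetype
     * drain_all_pages()
     *   flushes the remaining pcp pages
     *   back to the buddy allocator,
     *   where MIGRATE_ISOLATE now keeps
     *   them from being handed out
     */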
In any case, draining is already done at the isolation level and there
is no need to do it again later by start_isolate_page_range callers
(memory hotplug and CMA allocator currently). Therefore remove
pointless draining in existing callers to make the code more clear and
functionally correct.
[mhocko@suse.com: provide a clearer changelog for the last two paragraphs]
Link: http://lkml.kernel.org/r/20190105233141.2329-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--   mm/memory_hotplug.c   1 -
-rw-r--r--   mm/page_alloc.c       1 -
2 files changed, 0 insertions, 2 deletions
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index b3d3c64d15df..eb1a28469b66 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1625,7 +1625,6 @@ static int __ref __offline_pages(unsigned long start_pfn,
 
 		cond_resched();
 		lru_add_drain_all();
-		drain_all_pages(zone);
 
 		pfn = scan_movable_pages(pfn, end_pfn);
 		if (pfn) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 034b8b6043a3..8afb6f007f68 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8196,7 +8196,6 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	 */
 
 	lru_add_drain_all();
-	drain_all_pages(cc.zone);
 
 	order = 0;
 	outer_start = start;