author	Johannes Weiner <hannes@cmpxchg.org>	2024-03-20 19:02:10 +0100
committer	Andrew Morton <akpm@linux-foundation.org>	2024-04-26 05:56:03 +0200
commit	2dd482ba627de15d67f0c0ed445133c8ae9b201b (patch)
tree	60b488d7a5e7909a9f7be9254c3375875cdcd93b /mm/page_alloc.c
parent	mm: page_alloc: move free pages when converting block during isolation (diff)
mm: page_alloc: fix move_freepages_block() range error
When a block is partially outside the zone of the cursor page, the function cuts the range to the pivot page instead of the zone start. This can leave large parts of the block behind, which encourages incompatible page mixing down the line (ask for one type, get another), and thus long-term fragmentation.

This triggers reliably on the first block in the DMA zone, whose start_pfn is 1. The block is stolen, but everything before the pivot page (which was often hundreds of pages) is left on the old list.

Link: https://lkml.kernel.org/r/20240320180429.678181-6-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
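[Editor's illustration, not part of the commit] A minimal userspace sketch of the boundary arithmetic described above. It is a model, not kernel code: it assumes 512-page blocks (pageblock_order == 9), and the zone bounds and pivot pfn are made-up values chosen to mirror the DMA-zone case from the commit message.

/* Standalone model of move_freepages_block()'s range clamping.
 * Build: cc -o sketch sketch.c && ./sketch
 */
#include <stdio.h>
#include <stdbool.h>

#define PAGEBLOCK_NR_PAGES (1UL << 9)	/* assumed pageblock_order == 9 */

static unsigned long pageblock_start_pfn(unsigned long pfn)
{
	return pfn & ~(PAGEBLOCK_NR_PAGES - 1);
}

static bool zone_spans_pfn(unsigned long zone_start, unsigned long zone_end,
			   unsigned long pfn)
{
	return pfn >= zone_start && pfn < zone_end;
}

int main(void)
{
	unsigned long zone_start = 1, zone_end = 4096;	/* DMA zone starting at pfn 1 */
	unsigned long pfn = 300;			/* pivot page inside block 0 */
	unsigned long start_pfn = pageblock_start_pfn(pfn);		/* 0 */
	unsigned long end_pfn = start_pfn + PAGEBLOCK_NR_PAGES - 1;	/* 511 */

	/* Old behavior: clamp the range to the pivot page. Pages 1..299
	 * stay on the old freelist even though the block is stolen. */
	if (!zone_spans_pfn(zone_start, zone_end, start_pfn))
		printf("old: moves pfns %lu..%lu, leaves %lu..%lu behind\n",
		       pfn, end_pfn, zone_start, pfn - 1);

	/* New behavior: refuse to move a block that straddles the zone
	 * boundary, matching the existing end_pfn check. */
	if (!zone_spans_pfn(zone_start, zone_end, start_pfn))
		printf("new: block straddles zone start, move returns 0\n");
	return 0;
}

Bailing out entirely, rather than moving only the in-zone part of the block, keeps the freelist move atomic with the migratetype update done under the same zone lock, as the new comment in the hunk below explains.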
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r--  mm/page_alloc.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dcacb86efd29..0e223d5b94fa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1650,9 +1650,15 @@ int move_freepages_block(struct zone *zone, struct page *page,
 	start_pfn = pageblock_start_pfn(pfn);
 	end_pfn = pageblock_end_pfn(pfn) - 1;
 
-	/* Do not cross zone boundaries */
+	/*
+	 * The caller only has the lock for @zone, don't touch ranges
+	 * that straddle into other zones. While we could move part of
+	 * the range that's inside the zone, this call is usually
+	 * accompanied by other operations such as migratetype updates
+	 * which also should be locked.
+	 */
 	if (!zone_spans_pfn(zone, start_pfn))
-		start_pfn = pfn;
+		return 0;
 	if (!zone_spans_pfn(zone, end_pfn))
 		return 0;
 