author     Mel Gorman <mgorman@techsingularity.net>        2016-07-29 00:46:29 +0200
committer  Linus Torvalds <torvalds@linux-foundation.org>  2016-07-29 01:07:41 +0200
commit     52e9f87ae8be96a863e44c7d8d7f482fb279dddd (patch)
tree       afef7abcb34b4a72e3a16c885fe2fc375440a303 /mm/page_alloc.c
parent     mm, vmscan: only wakeup kswapd once per node for the requested classzone (diff)
mm, page_alloc: wake kswapd based on the highest eligible zone
The ac_classzone_idx is used as the basis for waking kswapd and that is based on the preferred zoneref. If the preferred zoneref's first zone is lower than what is available on other nodes, it's possible that kswapd is woken on a zone with only higher, but still eligible, zones. As classzone_idx is strictly adhered to now, it causes a problem because eligible pages are skipped.

For example, node 0 has only DMA32 and node 1 has only NORMAL. An allocating context running on node 0 may wake kswapd on node 1 telling it to skip all NORMAL pages.

Link: http://lkml.kernel.org/r/1467970510-21195-23-git-send-email-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
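As a rough illustration of the example in the commit message (node 0 has only DMA32, node 1 has only NORMAL), the toy userspace C sketch below models the skip behaviour. It is not kernel code: toy_zone, kswapd_would_reclaim and the zone constants are invented for the illustration; the only behaviour assumed from the patch description is that kswapd ignores zones above the classzone index it was woken with.

/*
 * Toy userspace model, not kernel code. It only demonstrates why waking
 * node 1's kswapd with the preferred zoneref's classzone index (DMA32)
 * makes it skip NORMAL, while waking it with ac->high_zoneidx (NORMAL)
 * does not.
 */
#include <stdio.h>
#include <stdbool.h>

enum toy_zone_type { TOY_DMA32 = 0, TOY_NORMAL = 1 };

struct toy_zone {
	int nid;                 /* node this zone belongs to */
	enum toy_zone_type type; /* zone index within the node */
};

/* Assumed rule: kswapd only reclaims zones at or below the classzone index
 * it was woken with. */
static bool kswapd_would_reclaim(const struct toy_zone *zone,
				 enum toy_zone_type classzone_idx)
{
	return zone->type <= classzone_idx;
}

int main(void)
{
	/* Node 1 has only a NORMAL zone; node 0 (not modelled) has only DMA32. */
	struct toy_zone node1_normal = { .nid = 1, .type = TOY_NORMAL };

	/* Allocation running on node 0: its preferred zoneref points at DMA32,
	 * but the request itself is allowed to use zones up to NORMAL. */
	enum toy_zone_type preferred_classzone = TOY_DMA32;  /* old: ac_classzone_idx(ac) */
	enum toy_zone_type high_zoneidx = TOY_NORMAL;         /* new: ac->high_zoneidx */

	printf("wake with preferred classzone (old): node 1 NORMAL reclaimed? %s\n",
	       kswapd_would_reclaim(&node1_normal, preferred_classzone) ? "yes" : "no");
	printf("wake with high_zoneidx (new):        node 1 NORMAL reclaimed? %s\n",
	       kswapd_would_reclaim(&node1_normal, high_zoneidx) ? "yes" : "no");
	return 0;
}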
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r--  mm/page_alloc.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a34d9fcf1339..f2c56a13b065 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3372,7 +3372,7 @@ static void wake_all_kswapds(unsigned int order, const struct alloc_context *ac)
 	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
 					ac->high_zoneidx, ac->nodemask) {
 		if (last_pgdat != zone->zone_pgdat)
-			wakeup_kswapd(zone, order, ac_classzone_idx(ac));
+			wakeup_kswapd(zone, order, ac->high_zoneidx);
 		last_pgdat = zone->zone_pgdat;
 	}
 }