author     Jiang Liu <jiang.liu@huawei.com>               2012-08-01 01:43:32 +0200
committer  Linus Torvalds <torvalds@linux-foundation.org> 2012-08-01 03:42:44 +0200
commit     340175b7d14d5617559d0c1a54fa0ea204d9edcd (patch)
tree       7b2fb51d5bf1e54bf258a058fea347ee2d75ac7b /mm
parent     mm/hotplug: correctly add new zone to all other nodes' zone lists (diff)
mm/hotplug: free zone->pageset when a zone becomes empty
When a zone becomes empty after memory offlining, free zone->pageset.
Otherwise it will cause a memory leak when adding memory to the empty
zone again, because build_all_zonelists() will allocate zone->pageset
for an empty zone.

Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Wei Wang <Bessel.Wang@huawei.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Keping Chen <chenkeping@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
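For context, a minimal userspace sketch of the pattern the patch applies, assuming a plain-C stand-in for the per-cpu pageset (the struct and function names mirror the kernel but are hypothetical here; the real zone_pcp_reset() is in the diff below):

#include <stdlib.h>

/* Userspace sketch of the leak this patch fixes (illustration only). */

static int boot_pageset;        /* static fallback, never freed */

struct zone {
	int *pageset;           /* stands in for the per-cpu pageset */
};

/*
 * What re-onlining an empty zone effectively does: build_all_zonelists()
 * allocates a fresh pageset and overwrites the pointer. If the old one
 * was never freed, it leaks.
 */
static void online_zone(struct zone *zone)
{
	zone->pageset = malloc(sizeof(*zone->pageset));
}

/* The fix: release the dynamic pageset once the zone becomes empty. */
static void zone_pcp_reset(struct zone *zone)
{
	if (zone->pageset != &boot_pageset) {
		free(zone->pageset);
		zone->pageset = &boot_pageset;
	}
}

int main(void)
{
	struct zone zone = { .pageset = &boot_pageset };

	online_zone(&zone);     /* first online: dynamic pageset allocated */
	zone_pcp_reset(&zone);  /* offline empties the zone: free it       */
	online_zone(&zone);     /* re-online: fresh allocation, no leak    */
	zone_pcp_reset(&zone);
	return 0;
}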
Diffstat (limited to 'mm')
-rw-r--r--  mm/memory_hotplug.c   3
-rw-r--r--  mm/page_alloc.c      13
2 files changed, 16 insertions(+), 0 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 597d371329d3..3ad25f9d1fc1 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -966,6 +966,9 @@ repeat:
init_per_zone_wmark_min();
+ if (!populated_zone(zone))
+ zone_pcp_reset(zone);
+
if (!node_present_pages(node)) {
node_clear_state(node, N_HIGH_MEMORY);
kswapd_stop(node);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9ad6866ac49c..9c9a31665a78 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5894,6 +5894,19 @@ void free_contig_range(unsigned long pfn, unsigned nr_pages)
#endif
#ifdef CONFIG_MEMORY_HOTREMOVE
+void zone_pcp_reset(struct zone *zone)
+{
+ unsigned long flags;
+
+ /* avoid races with drain_pages() */
+ local_irq_save(flags);
+ if (zone->pageset != &boot_pageset) {
+ free_percpu(zone->pageset);
+ zone->pageset = &boot_pageset;
+ }
+ local_irq_restore(flags);
+}
+
/*
* All pages in the range must be isolated before calling this.
*/