author:    Rik van Riel <riel@redhat.com>  2013-10-07 12:29:08 +0200
committer: Ingo Molnar <mingo@kernel.org>  2013-10-09 12:40:36 +0200
commit:    6fe6b2d6dabf392aceb3ad3a5e859b46a04465c6 (patch)
tree:      db4493950d94c418edcce093bd698e79ec1dca1a /mm/mempolicy.c
parent:    sched/numa: Set preferred NUMA node based on number of private faults (diff)
sched/numa: Do not migrate memory immediately after switching node
The load balancer can move tasks between nodes and does not take NUMA locality into account. With automatic NUMA balancing this may result in the task's working set being migrated to the new node. However, as the fault buffer will still store faults from the old node, the scheduler may decide to reset the preferred node and migrate the task back, resulting in more migrations.

The ideal would be that the scheduler did not migrate tasks with a heavy memory footprint at all, but this may result in nodes being overloaded. We could also discard the fault information on task migration, but this would still cause all of the task's working set to be migrated. This patch simply avoids migrating the memory for a short time after a task is migrated.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-31-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
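The decision the patch adds can be modeled outside the kernel. The sketch below is a simplified userspace model (the struct and function names are hypothetical, not kernel API): migration toward a node is suppressed while `numa_migrate_seq` is zero, i.e. immediately after the scheduler moved the task off its preferred node, mirroring the `goto out` in the hunk below.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace model of the fields this patch consults on
 * task_struct: the task's preferred NUMA node, and a sequence counter
 * that is zero right after a cross-node move by the scheduler. */
struct task_model {
    int numa_preferred_nid;  /* node the task currently prefers */
    int numa_migrate_seq;    /* 0 immediately after switching node */
};

/* Returns true if a page whose policy node is polnid may be migrated
 * toward the task; false means "hold off", as the added check does. */
static bool may_migrate_memory(const struct task_model *t, int polnid)
{
    /* Mirrors: if (polnid != current->numa_preferred_nid &&
     *              !current->numa_migrate_seq) goto out;           */
    if (polnid != t->numa_preferred_nid && !t->numa_migrate_seq)
        return false;  /* task just switched nodes: skip migration */
    return true;
}
```

Once the task has stayed on a node long enough for `numa_migrate_seq` to become non-zero, or the page is already on the preferred node, migration proceeds as before.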
Diffstat (limited to 'mm/mempolicy.c')
-rw-r--r--  mm/mempolicy.c | 12 ++++++++++++
1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index aff1f1ed3dc5..196d8da2b657 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2378,6 +2378,18 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
last_nidpid = page_nidpid_xchg_last(page, this_nidpid);
if (!nidpid_pid_unset(last_nidpid) && nidpid_to_nid(last_nidpid) != polnid)
goto out;
+
+#ifdef CONFIG_NUMA_BALANCING
+ /*
+ * If the scheduler has just moved us away from our
+ * preferred node, do not bother migrating pages yet.
+ * This way a short and temporary process migration will
+ * not cause excessive memory migration.
+ */
+ if (polnid != current->numa_preferred_nid &&
+ !current->numa_migrate_seq)
+ goto out;
+#endif
}
if (curnid != polnid)