| author | Mel Gorman <mgorman@suse.de> | 2013-10-07 12:29:05 +0200 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2013-10-09 12:40:32 +0200 |
| commit | 1bc115d87dffd1c43bdc3c9c9d1e3a51c195d18e (patch) | |
| tree | 56a26b4f4fe089e3dd1df5a26877d1e4c0114d35 /mm/huge_memory.c | |
| parent | sched/numa: Check current->mm before allocating NUMA faults (diff) | |
| download | linux-1bc115d87dffd1c43bdc3c9c9d1e3a51c195d18e.tar.xz linux-1bc115d87dffd1c43bdc3c9c9d1e3a51c195d18e.zip | |
mm: numa: Scan pages with elevated page_mapcount
Currently, automatic NUMA balancing cannot distinguish between falsely
shared and private pages except by ignoring pages with an elevated
page_mapcount entirely. That avoids shared pages bouncing between the
nodes whose tasks are using them, but it also ignores quite a lot of
data.

This patch kicks away those training wheels in preparation for adding
support for identifying shared versus private pages. The ordering is
chosen so that the impact of the shared/private detection can be
measured easily. Note that the patch does not migrate shared,
file-backed pages within VMAs marked VM_EXEC, as these are generally
shared library pages. Migrating such pages is not beneficial: the
expectation is that they are read-shared between caches, and iTLB and
iCache pressure is generally low.
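For reference, the VM_EXEC exclusion described above amounts to a guard in the migration path rather than in the mm/huge_memory.c hunk shown below. A minimal sketch of such a guard, assuming ~v3.12-era helpers (page_mapcount(), page_is_file_cache(), the VM_EXEC vma flag) and a hypothetical wrapper name:

/*
 * Sketch only (not a hunk from this commit): skip NUMA migration for
 * file-backed pages mapped by more than one process in an executable
 * mapping -- these are almost always shared library text, which is
 * read-shared and cheap to serve from any node.
 */
static bool numa_skip_shared_exec_page(struct vm_area_struct *vma,
				       struct page *page)
{
	if (page_mapcount(page) != 1 &&		/* mapped by several processes */
	    page_is_file_cache(page) &&		/* file-backed, not anonymous  */
	    (vma->vm_flags & VM_EXEC))		/* executable mapping          */
		return true;

	return false;
}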
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-28-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'mm/huge_memory.c')
-rw-r--r-- | mm/huge_memory.c | 12 |
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 914216733e0a..2a28c2c6c165 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1484,14 +1484,12 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			struct page *page = pmd_page(*pmd);
 
 			/*
-			 * Only check non-shared pages. Do not trap faults
-			 * against the zero page. The read-only data is likely
-			 * to be read-cached on the local CPU cache and it is
-			 * less useful to know about local vs remote hits on
-			 * the zero page.
+			 * Do not trap faults against the zero page. The
+			 * read-only data is likely to be read-cached on the
+			 * local CPU cache and it is less useful to know about
+			 * local vs remote hits on the zero page.
 			 */
-			if (page_mapcount(page) == 1 &&
-			    !is_huge_zero_page(page) &&
+			if (!is_huge_zero_page(page) &&
 			    !pmd_numa(*pmd)) {
 				entry = pmdp_get_and_clear(mm, addr, pmd);
 				entry = pmd_mknuma(entry);
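To connect the hunk to the mechanism: the surviving branch re-arms NUMA hinting faults on the huge page by rewriting its pmd with pmd_mknuma(), so the next access traps and can be accounted to the faulting node. A schematic of that step, with the set_pmd_at() write-back assumed from context not shown in this hunk:

/*
 * Schematic of the prot_numa branch above, not a literal copy of the
 * surrounding function: atomically take the pmd entry, tag it with the
 * NUMA hinting bit, and write it back.  The next access to the huge
 * page then faults into the NUMA hinting handler, which records the
 * access and may migrate the page toward the faulting node.
 */
static void mark_huge_pmd_numa(struct mm_struct *mm, unsigned long addr,
			       pmd_t *pmd)
{
	pmd_t entry;

	entry = pmdp_get_and_clear(mm, addr, pmd);	/* take the entry      */
	entry = pmd_mknuma(entry);			/* set the hinting bit */
	set_pmd_at(mm, addr, pmd, entry);		/* assumed write-back  */
}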