author		Mel Gorman <mgorman@techsingularity.net>	2016-07-29 00:46:23 +0200
committer	Linus Torvalds <torvalds@linux-foundation.org>	2016-07-29 01:07:41 +0200
commit		c4a25635b60d08853a3e4eaae3ab34419a36cfa2 (patch)
tree		22fc50885a47c64be6e6cd2a8908025512eb1984 /mm/vmscan.c
parent		mm: move most file-based accounting to the node (diff)
mm: move vmscan writes and file write accounting to the node
As reclaim is now node-based, it follows that page write activity due to
page reclaim should also be accounted for on the node. For consistency,
also account page writes and page dirtying on a per-node basis.
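
For illustration, the node variants of the stat helpers resolve a page to
its pglist_data rather than its zone before bumping the counter. A
simplified sketch, close in spirit to the kernel's !CONFIG_SMP inline but
not the exact implementation:

	/* Sketch: account against the page's node instead of its zone */
	static inline void inc_node_page_state(struct page *page,
					       enum node_stat_item item)
	{
		/* page_pgdat() maps the page to its node's pglist_data */
		mod_node_page_state(page_pgdat(page), item, 1);
	}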
After this patch, there are a few remaining zone counters that may appear
strange but are fine. NUMA stats are still per-zone as this is a
user-space interface that tools consume. NR_MLOCK, NR_SLAB_*,
NR_PAGETABLE, NR_KERNEL_STACK and NR_BOUNCE are all allocations that
potentially pin low memory and cannot trivially be reclaimed on demand.
This information is still useful for debugging a page allocation failure
warning.
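
After the full patch, nr_vmscan_write and nr_vmscan_immediate show up as
node counters. A minimal user-space check, offered as a sketch (it assumes
a kernel with this series applied and the usual /proc/vmstat layout):

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[256];
		FILE *f = fopen("/proc/vmstat", "r");

		if (!f) {
			perror("/proc/vmstat");
			return 1;
		}
		/* nr_vmscan_write and nr_vmscan_immediate are the
		 * counters bumped by the hunks below */
		while (fgets(line, sizeof(line), f))
			if (!strncmp(line, "nr_vmscan_", 10))
				fputs(line, stdout);
		fclose(f);
		return 0;
	}

On NUMA kernels the per-node values are also visible in
/sys/devices/system/node/node<N>/vmstat.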
Link: http://lkml.kernel.org/r/1467970510-21195-21-git-send-email-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/vmscan.c')
 -rw-r--r--  mm/vmscan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b797afec3057..9b61a55b6e38 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -612,7 +612,7 @@ static pageout_t pageout(struct page *page, struct address_space *mapping,
 			ClearPageReclaim(page);
 		}
 		trace_mm_vmscan_writepage(page);
-		inc_zone_page_state(page, NR_VMSCAN_WRITE);
+		inc_node_page_state(page, NR_VMSCAN_WRITE);
 		return PAGE_SUCCESS;
 	}
 
@@ -1117,7 +1117,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 				 * except we already have the page isolated
 				 * and know it's dirty
 				 */
-				inc_zone_page_state(page, NR_VMSCAN_IMMEDIATE);
+				inc_node_page_state(page, NR_VMSCAN_IMMEDIATE);
 				SetPageReclaim(page);
 
 				goto keep_locked;
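
For context, the full patch (this diffstat is limited to mm/vmscan.c) also
moves NR_VMSCAN_WRITE and NR_VMSCAN_IMMEDIATE, together with the page
dirty/written counters, from enum zone_stat_item to enum node_stat_item in
include/linux/mmzone.h. A trimmed sketch of the relevant members; the real
enum carries many more entries:

	/* Trimmed sketch, not the complete enum */
	enum node_stat_item {
		NR_VMSCAN_WRITE,
		NR_VMSCAN_IMMEDIATE,	/* Prioritise for reclaim when writeback ends */
		NR_VM_NODE_STAT_ITEMS,
	};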