author      Shakeel Butt <shakeelb@google.com>              2021-01-24 06:01:11 +0100
committer   Linus Torvalds <torvalds@linux-foundation.org>  2021-01-24 18:20:52 +0100
commit      8a8792f600abacd7e1b9bb667759dca1c153f64c (patch)
tree        8a4eaa4b0c3a884503b7ade0b999ccbfe70de250 /mm/migrate.c
parent      mm: memcg/slab: optimize objcg stock draining (diff)
mm: memcg: fix memcg file_dirty numa stat
The kernel updates the per-node NR_FILE_DIRTY stats on page migration
but not the memcg numa stats.

That was not an issue until the recent commit 5f9a4f4a7096 ("mm:
memcontrol: add the missing numa_stat interface for cgroup v2") exposed
the numa stats for the memcg.

So fix the file_dirty per-memcg numa stat.
Link: https://lkml.kernel.org/r/20210108155813.2914586-1-shakeelb@google.com
Fixes: 5f9a4f4a7096 ("mm: memcontrol: add the missing numa_stat interface for cgroup v2")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Yang Shi <shy828301@gmail.com>
Reviewed-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/migrate.c')
-rw-r--r--  mm/migrate.c  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index ee5e612b4cd8..613794f6a433 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -500,9 +500,9 @@ int migrate_page_move_mapping(struct address_space *mapping,
 			__inc_lruvec_state(new_lruvec, NR_SHMEM);
 		}
 		if (dirty && mapping_can_writeback(mapping)) {
-			__dec_node_state(oldzone->zone_pgdat, NR_FILE_DIRTY);
+			__dec_lruvec_state(old_lruvec, NR_FILE_DIRTY);
 			__dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
-			__inc_node_state(newzone->zone_pgdat, NR_FILE_DIRTY);
+			__inc_lruvec_state(new_lruvec, NR_FILE_DIRTY);
 			__inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
 		}
 	}
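The patch replaces the node-only helpers with the lruvec helpers so the dirty-page count follows the migrated page at both the per-node and the per-memcg level. As a rough illustration of the skew described in the commit message, the following is a minimal userspace model in plain C, not kernel code: all struct and function names in it are hypothetical, and it only mimics the counter bookkeeping, not the real migrate_page_move_mapping() locking or charge handling.

/*
 * Hypothetical userspace model of the accounting skew: migration moves a
 * dirty page from one node to another.  The "old" path adjusts only the
 * per-node counter, so the per-memcg per-node counter goes stale; the
 * "fixed" path updates both levels, mirroring __dec/__inc_lruvec_state().
 */
#include <stdio.h>

#define NR_NODES 2

struct counters {
	long node_file_dirty[NR_NODES];   /* per-node NR_FILE_DIRTY */
	long memcg_file_dirty[NR_NODES];  /* per-memcg, per-node NR_FILE_DIRTY */
};

/* Pre-fix behaviour: only the node-level stat follows the page. */
static void migrate_dirty_old(struct counters *c, int from, int to)
{
	c->node_file_dirty[from]--;
	c->node_file_dirty[to]++;
}

/* Post-fix behaviour: the lruvec-style update touches both levels. */
static void migrate_dirty_fixed(struct counters *c, int from, int to)
{
	c->node_file_dirty[from]--;
	c->memcg_file_dirty[from]--;
	c->node_file_dirty[to]++;
	c->memcg_file_dirty[to]++;
}

int main(void)
{
	/* One dirty file page charged to the memcg, resident on node 0. */
	struct counters c = { .node_file_dirty = {1, 0},
			      .memcg_file_dirty = {1, 0} };
	struct counters d = c;

	migrate_dirty_old(&c, 0, 1);
	printf("old:   node=[%ld,%ld] memcg=[%ld,%ld]\n",
	       c.node_file_dirty[0], c.node_file_dirty[1],
	       c.memcg_file_dirty[0], c.memcg_file_dirty[1]);

	migrate_dirty_fixed(&d, 0, 1);
	printf("fixed: node=[%ld,%ld] memcg=[%ld,%ld]\n",
	       d.node_file_dirty[0], d.node_file_dirty[1],
	       d.memcg_file_dirty[0], d.memcg_file_dirty[1]);

	return 0;
}

In the "old" run the node counter ends up as [0,1] while the memcg counter stays at [1,0]; that is the mismatch the cgroup v2 numa_stat interface added by commit 5f9a4f4a7096 made visible.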