author | Yin Fengwei <fengwei.yin@intel.com> | 2023-08-02 17:14:05 +0200
committer | Andrew Morton <akpm@linux-foundation.org> | 2023-08-25 01:20:26 +0200
commit | 617c28ecab22d98a3809370eb6cb50fa24b7bfe1 (patch)
tree | 931b478360241bfea733004884c9bd04a619ea48 /mm/memcontrol.c
parent | mm: convert do_set_pte() to set_pte_range() (diff)
download | linux-617c28ecab22d98a3809370eb6cb50fa24b7bfe1.tar.xz, linux-617c28ecab22d98a3809370eb6cb50fa24b7bfe1.zip
filemap: batch PTE mappings
Call set_pte_range() once per contiguous range of the folio instead of
once per page. This batches the updates to mm counters and the rmap.
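The idea, in short: scan the folio's pages, skip any page that cannot be mapped, and issue one set_pte_range() call per contiguous run instead of one call per page, so the mm counter and rmap updates are amortized over the whole run. Below is a minimal, self-contained sketch of that run-accumulation pattern; map_range(), mappable and map_folio_batched() are illustrative stand-ins, not the actual helpers in mm/filemap.c.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/*
 * Toy model of the batching idea: instead of "mapping" each page of a
 * folio with its own call, accumulate a run of consecutive mappable
 * pages and flush it with a single call.  map_range() stands in for
 * set_pte_range(); the mappable[] flags stand in for the per-page
 * checks (HWPoison, already-populated PTE, ...).  All names here are
 * illustrative, not the kernel's.
 */
static void map_range(size_t first, unsigned int nr)
{
	/* One call covers 'nr' pages: counters/rmap updated once. */
	printf("map pages [%zu, %zu)\n", first, first + nr);
}

static void map_folio_batched(const bool *mappable, size_t nr_pages)
{
	size_t start = 0;
	unsigned int count = 0;

	for (size_t i = 0; i < nr_pages; i++) {
		if (mappable[i]) {
			if (count == 0)
				start = i;	/* a new run begins here */
			count++;
			continue;
		}
		if (count)			/* flush the run built so far */
			map_range(start, count);
		count = 0;
	}
	if (count)				/* trailing run, if any */
		map_range(start, count);
}

int main(void)
{
	/* Pages 0-2 and 4-7 can be mapped; page 3 must be skipped. */
	const bool mappable[] = { true, true, true, false,
				  true, true, true, true };

	map_folio_batched(mappable, sizeof(mappable) / sizeof(mappable[0]));
	return 0;
}
```

For the example folio above this produces two map_range() calls rather than seven per-page calls, which is where the counter and rmap savings come from.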
With a will-it-scale.page_fault3-like app (file write fault testing changed to read fault testing; upstreaming it to will-it-scale is in progress at [1]), this gives a 15% performance gain on a 48C/96T Cascade Lake test box with 96 processes running against xfs.
Perf data collected before/after the change:
Before:
  18.73%--page_add_file_rmap
          |
           --11.60%--__mod_lruvec_page_state
                     |
                     |--7.40%--__mod_memcg_lruvec_state
                     |          |
                     |           --5.58%--cgroup_rstat_updated
                     |
                      --2.53%--__mod_lruvec_state
                                |
                                 --1.48%--__mod_node_page_state
After:
  9.93%--page_add_file_rmap_range
         |
          --2.67%--__mod_lruvec_page_state
                    |
                    |--1.95%--__mod_memcg_lruvec_state
                    |          |
                    |           --1.57%--cgroup_rstat_updated
                    |
                     --0.61%--__mod_lruvec_state
                               |
                                --0.54%--__mod_node_page_state
The running time of __mod_lruvec_page_state() is reduced by about 9%.
[1]: https://github.com/antonblanchard/will-it-scale/pull/37
Link: https://lkml.kernel.org/r/20230802151406.3735276-38-willy@infradead.org
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>