author     Vlastimil Babka <vbabka@suse.cz>                 2016-01-15 00:19:23 +0100
committer  Linus Torvalds <torvalds@linux-foundation.org>   2016-01-15 01:00:49 +0100
commit     48131e03ca4ed71d73fbe55c311a258c6fa2a090 (patch)
tree       2957b6c97152f31bd97e0943f36ab03ea49b6a19 /fs/proc
parent     mm, proc: reduce cost of /proc/pid/smaps for shmem mappings (diff)
download   linux-48131e03ca4ed71d73fbe55c311a258c6fa2a090.tar.xz
           linux-48131e03ca4ed71d73fbe55c311a258c6fa2a090.zip
mm, proc: reduce cost of /proc/pid/smaps for unpopulated shmem mappings
Following the previous patch, further reduction of /proc/pid/smaps cost
is possible for private writable shmem mappings with unpopulated areas
where the page walk invokes the .pte_hole function. We can use a radix
tree iterator for each such area instead of calling find_get_entry() in
a loop. This is possible at the extra maintenance cost of introducing
another shmem function, shmem_partial_swap_usage().
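[Editor's note: this cgit view is limited to fs/proc, so the body of the new
helper (which the commit adds to mm/shmem.c as shmem_partial_swap_usage())
is not shown. The sketch below illustrates how such a range walk can be
written against the 4.5-era radix tree API; the function name and exact
restart handling here are illustrative, not copied from the commit.]

/*
 * Illustrative sketch: count swapped-out pages of a shmem mapping in
 * [start, end) by iterating only the radix tree slots that actually
 * exist, instead of doing one find_get_entry() lookup per page.
 * Shmem stores swap entries as exceptional entries in mapping->page_tree.
 * Assumes 4.5-era kernel headers (linux/radix-tree.h, linux/pagemap.h).
 */
static unsigned long swap_usage_sketch(struct address_space *mapping,
				       pgoff_t start, pgoff_t end)
{
	struct radix_tree_iter iter;
	void **slot;
	unsigned long swapped = 0;

	rcu_read_lock();
restart:
	radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
		struct page *page;

		if (iter.index >= end)
			break;

		page = radix_tree_deref_slot(slot);
		/* Raced with a concurrent tree modification: resume here. */
		if (radix_tree_deref_retry(page)) {
			start = iter.index;
			goto restart;
		}

		/* Exceptional entries in a shmem mapping are swap entries. */
		if (radix_tree_exceptional_entry(page))
			swapped++;

		/* Don't hog the CPU across a huge sparse range. */
		if (need_resched()) {
			cond_resched_rcu();
			start = iter.index + 1;
			goto restart;
		}
	}
	rcu_read_unlock();

	return swapped << PAGE_SHIFT;
}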
To demonstrate the difference, I have measured this on a process that
creates a private writable 2GB mapping of a partially swapped-out
/dev/shm/file (which cannot employ the optimizations from the previous
patch) and doesn't populate it at all. I time how long it takes to
cat /proc/pid/smaps of this process 100 times (a minimal reproducer is
sketched after the timings below).
Before this patch:
real 0m3.831s
user 0m0.180s
sys 0m3.212s
After this patch:
real 0m1.176s
user 0m0.180s
sys 0m0.684s
The time is similar to the case where a radix tree iterator is employed
on the whole mapping.
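[Editor's note: a minimal reproducer for the setup described above might
look like the sketch below. The path /dev/shm/file and the 2GB size come
from the commit message; everything else (that the file already exists, is
at least 2GB, and has been partially swapped out beforehand) is assumed.]

/*
 * Sketch of the benchmarked setup: map /dev/shm/file as private and
 * writable, touch no pages, and sleep so that another shell can run
 * something like:
 *   time sh -c 'for i in $(seq 100); do cat /proc/$pid/smaps >/dev/null; done'
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2UL << 30;	/* 2GB, as in the commit message */
	int fd = open("/dev/shm/file", O_RDWR);
	void *p;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Private writable mapping; left entirely unpopulated. */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	printf("pid %d mapped %zu bytes at %p\n", getpid(), len, p);
	pause();	/* keep the mapping alive for the smaps reads */
	return 0;
}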
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'fs/proc')
-rw-r--r--   fs/proc/task_mmu.c   42
1 file changed, 13 insertions, 29 deletions
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 5830b2e129ed..8a03759bda38 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -488,42 +488,16 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
 }
 
 #ifdef CONFIG_SHMEM
-static unsigned long smaps_shmem_swap(struct vm_area_struct *vma,
-		unsigned long addr)
-{
-	struct page *page;
-
-	page = find_get_entry(vma->vm_file->f_mapping,
-					linear_page_index(vma, addr));
-	if (!page)
-		return 0;
-
-	if (radix_tree_exceptional_entry(page))
-		return PAGE_SIZE;
-
-	page_cache_release(page);
-	return 0;
-
-}
-
 static int smaps_pte_hole(unsigned long addr, unsigned long end,
 		struct mm_walk *walk)
 {
 	struct mem_size_stats *mss = walk->private;
 
-	while (addr < end) {
-		mss->swap += smaps_shmem_swap(walk->vma, addr);
-		addr += PAGE_SIZE;
-	}
+	mss->swap += shmem_partial_swap_usage(
+			walk->vma->vm_file->f_mapping, addr, end);
 
 	return 0;
 }
-#else
-static unsigned long smaps_shmem_swap(struct vm_area_struct *vma,
-		unsigned long addr)
-{
-	return 0;
-}
 #endif
 
 static void smaps_pte_entry(pte_t *pte, unsigned long addr,
@@ -555,7 +529,17 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 		page = migration_entry_to_page(swpent);
 	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
 							&& pte_none(*pte))) {
-		mss->swap += smaps_shmem_swap(vma, addr);
+		page = find_get_entry(vma->vm_file->f_mapping,
+						linear_page_index(vma, addr));
+		if (!page)
+			return;
+
+		if (radix_tree_exceptional_entry(page))
+			mss->swap += PAGE_SIZE;
+		else
+			page_cache_release(page);
+
+		return;
 	}
 
 	if (!page)