author		Nicholas Piggin <npiggin@gmail.com>	2018-03-07 02:37:12 +0100
committer	Michael Ellerman <mpe@ellerman.id.au>	2018-03-13 13:43:06 +0100
commit		5709f7cfd8305252dc327206bd674ad65ca4d77f (patch)
tree		3bff567b21774e60411dbc9722211e121983f9c6 /arch/powerpc/include/asm/mmu-8xx.h
parent		powerpc/mm/slice: pass pointers to struct slice_mask where possible (diff)
powerpc/mm/slice: implement a slice mask cache
Calculating the slice mask can become a significant overhead for
get_unmapped_area. This patch adds a struct slice_mask for each page
size in the mm_context, and keeps them in sync with the slices psize
arrays and slb_addr_limit.
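
For illustration, a minimal sketch of the lookup this enables. The
field names follow the mmu-8xx.h diff below, but the helper's name and
shape are assumptions rather than the patch's exact slice.c code:

static struct slice_mask *slice_mask_for_size(struct mm_struct *mm, int psize)
{
	/* The base page size (4k or 16k on 8xx) is the common case. */
	if (psize == mmu_virtual_psize)
		return &mm->context.mask_base_psize;
#ifdef CONFIG_HUGETLB_PAGE
	if (psize == MMU_PAGE_512K)
		return &mm->context.mask_512k;
	if (psize == MMU_PAGE_8M)
		return &mm->context.mask_8m;
#endif
	BUG();	/* no other page sizes on 8xx */
}

The fast path becomes a plain field return instead of a recomputation
over the low_slices_psize[]/high_slices_psize[] arrays on every
get_unmapped_area call.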
On Book3S/64 this adds 288 bytes to the mm_context_t for the
slice mask caches.
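
The 288-byte figure can be sanity-checked with a toy program, assuming
Book3S/64 values of a 512-bit high_slices bitmap and four cached masks
(4k, 64k, 16M, 16G); these parameters are an illustration, not taken
from this diff:

#include <stdio.h>

int main(void)
{
	/* 8 bytes of low_slices plus 512/8 = 64 bytes of high_slices */
	unsigned long per_mask = 8 + 512 / 8;

	/* four per-psize masks: 4 * 72 = 288 bytes */
	printf("%lu bytes\n", 4 * per_mask);
	return 0;
}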
On POWER8, this increases vfork+exec+exit performance by 9.9%
and reduces time to mmap+munmap a 64kB page by 28%.
On 8xx, it reduces mmap+munmap time by about 10%.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Diffstat (limited to 'arch/powerpc/include/asm/mmu-8xx.h')
 arch/powerpc/include/asm/mmu-8xx.h | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/arch/powerpc/include/asm/mmu-8xx.h b/arch/powerpc/include/asm/mmu-8xx.h
index d3d7e79140c6..4f547752ae79 100644
--- a/arch/powerpc/include/asm/mmu-8xx.h
+++ b/arch/powerpc/include/asm/mmu-8xx.h
@@ -192,6 +192,11 @@
 #endif
 
 #ifndef __ASSEMBLY__
+struct slice_mask {
+	u64 low_slices;
+	DECLARE_BITMAP(high_slices, 0);
+};
+
 typedef struct {
 	unsigned int id;
 	unsigned int active;
@@ -201,6 +206,11 @@ typedef struct {
 	unsigned char low_slices_psize[SLICE_ARRAY_SIZE];
 	unsigned char high_slices_psize[0];
 	unsigned long slb_addr_limit;
+	struct slice_mask mask_base_psize; /* 4k or 16k */
+# ifdef CONFIG_HUGETLB_PAGE
+	struct slice_mask mask_512k;
+	struct slice_mask mask_8m;
+# endif
 #endif
 } mm_context_t;
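
Keeping the caches coherent is the other half of the change: when a
slice is converted to a new page size, the cached masks for both the
old and new psize must be updated alongside the psize arrays. A
hypothetical bookkeeping helper sketching that idea (slice_cache_update
is an invented name; the real patch does this work inside slice.c's
conversion paths):

static void slice_cache_update(struct mm_struct *mm, unsigned long i,
			       int old_psize, int new_psize)
{
	struct slice_mask *old = slice_mask_for_size(mm, old_psize);
	struct slice_mask *new = slice_mask_for_size(mm, new_psize);

	if (i < SLICE_NUM_LOW) {
		/* low slices are tracked in a single u64 */
		old->low_slices &= ~(1ull << i);
		new->low_slices |= 1ull << i;
	} else {
		/* high slices are tracked in the bitmap */
		__clear_bit(i - SLICE_NUM_LOW, old->high_slices);
		__set_bit(i - SLICE_NUM_LOW, new->high_slices);
	}
}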