author     Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>   2018-09-20 10:33:58 +0200
committer  Michael Ellerman <mpe@ellerman.id.au>           2018-10-14 09:04:09 +0200
commit     4ffe713b7587b14695c9bec26a000fc88ef54895
tree       a7803f13e97fc59501c8e53e8de946591c67e34a /arch/powerpc/mm
parent     powerpc/mm/hash: Rename get_ea_context to get_user_context
powerpc/mm: Increase the max addressable memory to 2PB
Currently we limit the max addressable memory to 128TB. This patch increases the
limit to 2PB. We can have devices like NVDIMM which add memory above the 512TB
limit.
We still don't support regular system RAM above 512TB. One of the challenges
there is the percpu allocator, which allocates per-node memory and uses the
maximum distance between the nodes as the percpu offset. This means that with a
large gap in the address space (system RAM above 1PB) we would run out of
vmalloc space to map the percpu allocations.
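For a sense of the arithmetic, here is a back-of-envelope sketch, not the real
allocator: the node base addresses and the vmalloc region size below are assumed
values on a 64-bit build. If node 1's memory sits 1PB above node 0's, the
embed-style percpu first chunk has to span that physical distance in virtual
space, which a 512TB vmalloc region cannot provide:

#include <stdio.h>

int main(void)
{
	/* Assumed layout: node 1's memory starts 1PB above node 0's */
	unsigned long node0_base = 0;
	unsigned long node1_base = 1UL << 50;	/* 1PB */
	/* Assumed size of the hash-MMU vmalloc region: 512TB */
	unsigned long vmalloc_size = 1UL << 49;

	/*
	 * An embed-style percpu first chunk must span the largest
	 * physical distance between any two nodes' percpu areas.
	 */
	unsigned long max_distance = node1_base - node0_base;

	if (max_distance > vmalloc_size)
		printf("chunk needs %luTB of virtual span, vmalloc has %luTB\n",
		       max_distance >> 40, vmalloc_size >> 40);
	return 0;
}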
In order to support addressable memory above 512TB, the kernel must be able to
linearly map that range. To do that with hash translation we now add 4 contexts
to the kernel linear map region. Each context covers a 512TB addressable range.
The VMALLOC and VMEMMAP regions keep their old size. The SLB miss handler is
updated to validate these limits.
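In the hunk below, the fixed context computation (id -
KERNEL_REGION_CONTEXT_OFFSET) is replaced by get_kernel_context(ea). That helper
is added in a header outside arch/powerpc/mm, so it does not appear in this
diffstat; the following is only a sketch of the idea, with all constants assumed
here rather than taken from the patch:

/*
 * Illustrative sketch of the EA -> kernel-context mapping. Constants
 * mirror the book3s-64 hash layout but are assumptions in this sketch.
 */
#define REGION_SHIFT			60
#define REGION_MASK			(0xfUL << REGION_SHIFT)
#define REGION_ID(ea)			((ea) >> REGION_SHIFT)
#define KERNEL_REGION_ID		0xc
#define MAX_EA_BITS_PER_CONTEXT		49	/* 512TB per context */
#define MAX_KERNEL_CTX_CNT		4	/* 4 x 512TB = 2PB linear map */

static inline unsigned long get_kernel_context(unsigned long ea)
{
	unsigned long region_id = REGION_ID(ea);

	/*
	 * The linear map (0xc region) spans several contexts, one per
	 * 512TB chunk; ea has already been bounds-checked by the caller.
	 */
	if (region_id == KERNEL_REGION_ID)
		return 1 + ((ea & ~REGION_MASK) >> MAX_EA_BITS_PER_CONTEXT);

	/*
	 * Other kernel regions keep one context each, numbered after
	 * the linear-map contexts.
	 */
	return (region_id - 0xc) + MAX_KERNEL_CTX_CNT;
}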
We also limit this update to configurations with SPARSEMEM_VMEMMAP and
SPARSEMEM_EXTREME.
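The reason for the restriction is that classic sparsemem stores the section
number in page->flags, and widening it to cover more physical address bits
would push the node id out of page->flags; vmemmap needs no section bits there.
A plausible gating of the limit, where the header location and the exact values
are assumptions, not quoted from the patch:

/*
 * Sketch (assumed values): raise the physical address limit only when
 * vmemmap is in use, since classic sparsemem keeps section bits in
 * page->flags and cannot widen them.
 */
#ifdef CONFIG_SPARSEMEM_VMEMMAP
#define MAX_PHYSMEM_BITS	51	/* 1UL << 51 == 2PB */
#else
#define MAX_PHYSMEM_BITS	46	/* 1UL << 46 == 64TB */
#endif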
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Diffstat (limited to 'arch/powerpc/mm')
 -rw-r--r--  arch/powerpc/mm/slb.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 4fe5cb5052b6..c3fdf2969d9f 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -693,16 +693,27 @@ static long slb_allocate_kernel(unsigned long ea, unsigned long id)
 	unsigned long flags;
 	int ssize;
 
-	if ((ea & ~REGION_MASK) >= (1ULL << MAX_EA_BITS_PER_CONTEXT))
-		return -EFAULT;
-
 	if (id == KERNEL_REGION_ID) {
+
+		/* We only support upto MAX_PHYSMEM_BITS */
+		if ((ea & ~REGION_MASK) > (1UL << MAX_PHYSMEM_BITS))
+			return -EFAULT;
+
 		flags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_linear_psize].sllp;
+
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 	} else if (id == VMEMMAP_REGION_ID) {
+
+		if ((ea & ~REGION_MASK) >= (1ULL << MAX_EA_BITS_PER_CONTEXT))
+			return -EFAULT;
+
 		flags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_vmemmap_psize].sllp;
 #endif
 	} else if (id == VMALLOC_REGION_ID) {
+
+		if ((ea & ~REGION_MASK) >= (1ULL << MAX_EA_BITS_PER_CONTEXT))
+			return -EFAULT;
+
 		if (ea < H_VMALLOC_END)
 			flags = get_paca()->vmalloc_sllp;
 		else
@@ -715,8 +726,7 @@ static long slb_allocate_kernel(unsigned long ea, unsigned long id)
 	if (!mmu_has_feature(MMU_FTR_1T_SEGMENT))
 		ssize = MMU_SEGSIZE_256M;
 
-	context = id - KERNEL_REGION_CONTEXT_OFFSET;
-
+	context = get_kernel_context(ea);
 	return slb_insert_entry(ea, context, flags, ssize, true);
 }
 
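To see what the new checks accept and reject, here is a small standalone sketch,
not kernel code; the region mask and bit widths are assumptions mirroring the
hash-MMU layout. It feeds two effective addresses through the same comparisons
as the hunk above:

#include <stdio.h>
#include <stdint.h>

#define REGION_MASK		0xf000000000000000UL
#define MAX_PHYSMEM_BITS	51	/* assumed: 2PB linear-map limit */
#define MAX_EA_BITS_PER_CONTEXT	49	/* assumed: 512TB per context */

int main(void)
{
	/* 1PB into the 0xc linear-map region: accepted after this patch */
	uint64_t ea1 = 0xc004000000000000ULL;
	/* 1PB into the 0xd vmemmap region: still rejected */
	uint64_t ea2 = 0xd004000000000000ULL;

	printf("linear map: %s\n",
	       (ea1 & ~REGION_MASK) > (1UL << MAX_PHYSMEM_BITS) ?
	       "-EFAULT" : "ok");
	printf("vmemmap:    %s\n",
	       (ea2 & ~REGION_MASK) >= (1UL << MAX_EA_BITS_PER_CONTEXT) ?
	       "-EFAULT" : "ok");
	return 0;
}

Note that, as in the hunk above, the kernel-region bound uses > against
MAX_PHYSMEM_BITS while the vmemmap and vmalloc regions keep the original >=
comparison against the per-context limit.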