author		Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>	2022-09-08 09:24:40 +0200
committer	Michael Ellerman <mpe@ellerman.id.au>	2022-09-30 10:35:52 +0200
commit		7dd3a7b90bca2c12e2146a47d63cf69a2f5d7e89 (patch)
tree		aec694404790e0a8c93f94f25bd0d9a884d37359 /arch/powerpc
parent		powerpc/mm: Always update max/min_low_pfn in mem_topology_setup() (diff)
powerpc/mm: Fix UBSAN warning reported on hugetlb
The powerpc architecture supports 16GB hugetlb pages with hash translation.
For a 4K page size this is implemented as a hugepage directory entry at
the PGD level, and for 64K it is implemented as a hugepage PTE at the
PUD level.
With a 16GB hugetlb page size, the offset within a page is wider than
32 bits. Hence switch to an unsigned long type when using hugepd_shift().
In order to keep things simple, we always use an unsigned long type
with hugepd_shift(), even though not every hugetlb page size requires it.
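
As a minimal userspace sketch of the C rule behind the fix (not the kernel
code itself): hugepd_shift() evaluates to 34 for a 16GB page (2^34 bytes),
and the integer constant 1 has 32-bit int type, so 1 << 34 is undefined
behaviour, while 1UL << 34 is performed in 64 bits:

	/* Sketch: why the shift must be done on an unsigned long. */
	#include <stdio.h>

	int main(void)
	{
		unsigned int shift = 34;	/* hugepd_shift() for a 16GB page */

		/* Undefined: 1 is a 32-bit int, so a shift by 34 is out of range. */
		/* unsigned long bad = 1 << shift; */

		/* Well-defined: 1UL is 64-bit on powerpc64. */
		unsigned long good = 1UL << shift;

		printf("%#lx\n", good);		/* 0x400000000 == 16GB */
		return 0;
	}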
The hugetlb_free_p*d_range changes all relate to nohash usage, where
multiple pgd entries can point to the same hugepd entry. On book3s64,
where hugetlb page sizes can exceed 4GB, we will always find more < next
even when the value of more is computed correctly.
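
To illustrate the skip logic, here is a self-contained userspace sketch;
the table-entry and hugepage shift values are hypothetical stand-ins, and
only the more/next arithmetic mirrors the kernel code:

	#include <stdio.h>

	/* Like pgd_addr_end(): next table boundary, capped at end. */
	static unsigned long addr_end(unsigned long addr, unsigned long end,
				      unsigned int shift)
	{
		unsigned long boundary =
			(addr + (1UL << shift)) & ~((1UL << shift) - 1);
		return boundary < end ? boundary : end;
	}

	static void show(unsigned int table_shift, unsigned int hugepd_shift)
	{
		unsigned long addr = 0;
		unsigned long end = 1UL << hugepd_shift;	/* one huge mapping */
		unsigned long next = addr_end(addr, end, table_shift);
		/* The fixed expression: 1UL keeps the shift in 64 bits. */
		unsigned long more = addr + (1UL << hugepd_shift);

		if (more > next)
			next = more;	/* skip sibling entries sharing the hugepd */
		printf("table span 2^%u, hugepage 2^%u -> next=%#lx\n",
		       table_shift, hugepd_shift, next);
	}

	int main(void)
	{
		show(39, 34);	/* book3s64-like: one entry spans the hugepage */
		show(22, 24);	/* nohash-like: hugepd covers several entries */
		return 0;
	}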
There is thus no functional change in this patch other than fixing the
warning below.
UBSAN: shift-out-of-bounds in arch/powerpc/mm/hugetlbpage.c:499:21
shift exponent 34 is too large for 32-bit type 'int'
CPU: 39 PID: 1673 Comm: a.out Not tainted 6.0.0-rc2-00327-gee88a56e8517-dirty #1
Call Trace:
dump_stack_lvl+0x98/0xe0 (unreliable)
ubsan_epilogue+0x18/0x70
__ubsan_handle_shift_out_of_bounds+0x1bc/0x390
hugetlb_free_pgd_range+0x5d8/0x600
free_pgtables+0x114/0x290
exit_mmap+0x150/0x550
mmput+0xcc/0x210
do_exit+0x420/0xdd0
do_group_exit+0x4c/0xd0
sys_exit_group+0x24/0x30
system_call_exception+0x250/0x600
system_call_common+0xec/0x250
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Drop generic change to be sent separately, change 1ULL to 1UL]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220908072440.258301-1-aneesh.kumar@linux.ibm.com
Diffstat (limited to 'arch/powerpc')
-rw-r--r--	arch/powerpc/mm/hugetlbpage.c	6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 8c3ea5300ac3..5852a86d990d 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -392,7 +392,7 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
 		 * single hugepage, but all of them point to
 		 * the same kmem cache that holds the hugepte.
 		 */
-		more = addr + (1 << hugepd_shift(*(hugepd_t *)pmd));
+		more = addr + (1UL << hugepd_shift(*(hugepd_t *)pmd));
 		if (more > next)
 			next = more;
 
@@ -434,7 +434,7 @@ static void hugetlb_free_pud_range(struct mmu_gather *tlb, p4d_t *p4d,
 		 * single hugepage, but all of them point to
 		 * the same kmem cache that holds the hugepte.
 		 */
-		more = addr + (1 << hugepd_shift(*(hugepd_t *)pud));
+		more = addr + (1UL << hugepd_shift(*(hugepd_t *)pud));
 		if (more > next)
 			next = more;
 
@@ -496,7 +496,7 @@ void hugetlb_free_pgd_range(struct mmu_gather *tlb,
 		 * for a single hugepage, but all of them point to the
 		 * same kmem cache that holds the hugepte.
 		 */
-		more = addr + (1 << hugepd_shift(*(hugepd_t *)pgd));
+		more = addr + (1UL << hugepd_shift(*(hugepd_t *)pgd));
 		if (more > next)
 			next = more;
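
For context, a hypothetical reproducer along the lines of the "a.out" in
the backtrace above (an assumption; the commit only shows the process
name) would map and touch one 16GB hugetlb page and then exit, on a
hash-MMU book3s64 system with 16GB pages reserved at boot (e.g. via
hugepagesz=16G hugepages=1):

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/mman.h>

	#ifndef MAP_HUGETLB
	#define MAP_HUGETLB	0x40000		/* value on powerpc */
	#endif
	#ifndef MAP_HUGE_SHIFT
	#define MAP_HUGE_SHIFT	26
	#endif
	#ifndef MAP_HUGE_16GB
	#define MAP_HUGE_16GB	(34U << MAP_HUGE_SHIFT)	/* log2(16GB) = 34 */
	#endif

	int main(void)
	{
		size_t len = 1UL << 34;		/* 16GB */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
			       MAP_HUGE_16GB, -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		p[0] = 1;	/* fault the hugepage in */
		return 0;	/* exit_mmap() -> free_pgtables() ->
				   hugetlb_free_pgd_range(), where the
				   pre-fix 1 << 34 tripped UBSAN */
	}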