author | Ard Biesheuvel <ardb@kernel.org> | 2023-12-13 09:40:32 +0100
committer | Catalin Marinas <catalin.marinas@arm.com> | 2024-02-09 11:56:12 +0100
commit | 3567fa63cb5680d3e1e8375c547a0e305c8a0ff5
tree | 3f212eb24c06bac296572694e80943bbd462f0dc /arch/arm64/kernel/pi/kaslr_early.c
parent | arm64: mm: Reclaim unused vmemmap region for vmalloc use
arm64: kaslr: Adjust randomization range dynamically
Currently, we base the KASLR randomization range on a rough estimate of
the available space in the upper VA region: the lower 1/4th has the
module region and the upper 1/4th has the fixmap, vmemmap and PCI I/O
ranges, and so we pick a random location in the remaining space in the
middle.
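
For comparison, the heuristic being removed (the deleted return statement in the diff further down) can be written out as a minimal stand-alone sketch. BIT() and GENMASK() are expanded by hand and VA_BITS_MIN is pinned to 48 purely for illustration; none of this is kernel code.

```c
/*
 * Illustration only, not kernel code: the quarter-based heuristic this
 * patch removes, with BIT()/GENMASK() expanded and VA_BITS_MIN fixed at
 * 48 for the sake of the example.
 */
#include <stdint.h>
#include <stdio.h>

#define VA_BITS_MIN 48

static uint64_t old_kaslr_offset(uint64_t seed)
{
	/*
	 * Skip the lower quarter of a 2^(VA_BITS_MIN - 1) byte window,
	 * then add a random value spanning its middle half, so the upper
	 * quarter is avoided as well.
	 */
	return (1ULL << (VA_BITS_MIN - 3)) +
	       (seed & ((1ULL << (VA_BITS_MIN - 2)) - 1));
}

int main(void)
{
	printf("offset range: [0x%llx, 0x%llx)\n",
	       (unsigned long long)old_kaslr_offset(0),
	       (unsigned long long)old_kaslr_offset(~0ULL) + 1);
	return 0;
}
```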
Once we enable support for 5-level paging with 4k pages, this no longer
works: the vmemmap region, being dimensioned to cover a 52-bit linear
region, takes up so much space in the upper VA region (the size of which
is based on a 48-bit VA space for compatibility with non-LVA hardware)
that the region above the vmalloc region takes up more than a quarter of
the available space.
So instead of a heuristic, let's derive the randomization range from the
actual boundaries of the vmalloc region.
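
The diff below implements this by halving the [KIMAGE_VADDR, VMALLOC_END) window and keeping to its middle half. Here is a minimal user-space sketch of the same arithmetic, with the window boundaries passed in as plain parameters (in the kernel they come from <asm/memory.h> and <asm/pgtable.h>) and an arbitrary example window in main():

```c
/*
 * Illustration only: the new range-based placement.  The window
 * boundaries are plain parameters here; in the kernel, KIMAGE_VADDR and
 * VMALLOC_END come from <asm/memory.h> and <asm/pgtable.h>.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t new_kaslr_offset(uint64_t kimage_vaddr, uint64_t vmalloc_end,
				 uint64_t seed)
{
	uint64_t range = (vmalloc_end - kimage_vaddr) / 2;

	/*
	 * ((__uint128_t)range * seed) >> 64 scales the 64-bit seed into
	 * [0, range); adding range / 2 places the result in the middle
	 * half of the window, clear of the lower and upper quarters.
	 */
	return range / 2 + (uint64_t)(((__uint128_t)range * seed) >> 64);
}

int main(void)
{
	/* arbitrary example window, not real kernel layout values */
	uint64_t base = 0xffff800000000000ULL;
	uint64_t end  = 0xfffffd0000000000ULL;

	printf("offset: 0x%llx\n",
	       (unsigned long long)new_kaslr_offset(base, end,
						    0x123456789abcdef0ULL));
	return 0;
}
```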
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20231213084024.2367360-16-ardb@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Diffstat
-rw-r--r-- | arch/arm64/kernel/pi/kaslr_early.c | 11
1 file changed, 6 insertions, 5 deletions
diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
index 17bff6e399e4..b9e0bb4bc6a9 100644
--- a/arch/arm64/kernel/pi/kaslr_early.c
+++ b/arch/arm64/kernel/pi/kaslr_early.c
@@ -14,6 +14,7 @@
 
 #include <asm/archrandom.h>
 #include <asm/memory.h>
+#include <asm/pgtable.h>
 
 /* taken from lib/string.c */
 static char *__strstr(const char *s1, const char *s2)
@@ -87,7 +88,7 @@ static u64 get_kaslr_seed(void *fdt)
 
 asmlinkage u64 kaslr_early_init(void *fdt)
 {
-	u64 seed;
+	u64 seed, range;
 
 	if (is_kaslr_disabled_cmdline(fdt))
 		return 0;
@@ -102,9 +103,9 @@ asmlinkage u64 kaslr_early_init(void *fdt)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
-	 * the lower and upper quarters to avoid colliding with other
-	 * allocations.
+	 * 'middle' half of the VMALLOC area, and stay clear of the lower and
+	 * upper quarters to avoid colliding with other allocations.
 	 */
-	return BIT(VA_BITS_MIN - 3) + (seed & GENMASK(VA_BITS_MIN - 3, 0));
+	range = (VMALLOC_END - KIMAGE_VADDR) / 2;
+	return range / 2 + (((__uint128_t)range * seed) >> 64);
 }
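
A note on the arithmetic in the new return statement (my reading, not spelled out in the commit message): ((__uint128_t)range * seed) >> 64 scales the 64-bit seed proportionally onto [0, range) using only a multiply and a shift, avoiding a 64-bit division in this very early boot path, and adding range / 2 then confines the result to the middle half of the 2 * range window. A quick boundary check of that property:

```c
/*
 * Boundary check for the multiply-high scaling used in the new return
 * statement (illustration only).  For any 64-bit seed,
 * ((__uint128_t)range * seed) >> 64 lies in [0, range), so the final
 * offset stays within the middle half of the 2 * range window.
 */
#include <assert.h>
#include <stdint.h>

static uint64_t scale(uint64_t range, uint64_t seed)
{
	return (uint64_t)(((__uint128_t)range * seed) >> 64);
}

int main(void)
{
	uint64_t range = 1ULL << 46;	/* arbitrary example range */

	assert(scale(range, 0) == 0);			/* lowest seed  */
	assert(scale(range, ~0ULL) == range - 1);	/* highest seed */
	return 0;
}
```

Unlike the old mask-based formula, this works even when the range is not a power of two, which is exactly what deriving it from the real vmalloc boundaries requires.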