path: root/arch/arm64/kernel/image.h

Commit history (subject, author, age, files/lines changed), most recent first:
* arm64/efi: Move variable assignments after SECTIONS
  Kees Cook, 2019-08-14 (1 file changed, -42/+0)

    It seems that LLVM's linker does not correctly handle variable assignments involving section positions that are updated during the SECTIONS parsing. Commit aa69fb62bea1 ("arm64/efi: Mark __efistub_stext_offset as an absolute symbol explicitly") ran into this too, but found a different workaround.

    However, this was not enough, as other variables were also miscalculated which manifested as boot failures under UEFI where __efistub__end was not taking the correct _end value (they should be the same):

      $ ld.lld -EL -maarch64elf --no-undefined -X -shared \
          -Bsymbolic -z notext -z norelro --no-apply-dynamic-relocs \
          -o vmlinux.lld -T poc.lds --whole-archive vmlinux.o && \
        readelf -Ws vmlinux.lld | egrep '\b(__efistub_|)_end\b'
      368272: ffff000002218000  0 NOTYPE  LOCAL  HIDDEN     38 __efistub__end
      368322: ffff000012318000  0 NOTYPE  GLOBAL DEFAULT    38 _end

      $ aarch64-linux-gnu-ld.bfd -EL -maarch64elf --no-undefined -X -shared \
          -Bsymbolic -z notext -z norelro --no-apply-dynamic-relocs \
          -o vmlinux.bfd -T poc.lds --whole-archive vmlinux.o && \
        readelf -Ws vmlinux.bfd | egrep '\b(__efistub_|)_end\b'
      338124: ffff000012318000  0 NOTYPE  LOCAL  DEFAULT   ABS __efistub__end
      383812: ffff000012318000  0 NOTYPE  GLOBAL DEFAULT 15325 _end

    To work around this, all of the __efistub_-prefixed variable assignments need to be moved after the linker script's SECTIONS entry. As it turns out, this also solves the problem fixed in commit aa69fb62bea1, so those changes are reverted here.

    Link: https://github.com/ClangBuiltLinux/linux/issues/634
    Link: https://bugs.llvm.org/show_bug.cgi?id=42990
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Will Deacon <will@kernel.org>
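Note: the fix described above is purely an ordering change inside the linker script: the __efistub_ assignments are emitted only after SECTIONS has assigned final values to the symbols they reference. A minimal sketch of the idea (illustrative symbols and sections, not the kernel's actual vmlinux.lds.S):

        SECTIONS
        {
                .text : { *(.text) }
                . = ALIGN(4096);
                _end = .;
        }

        /* Placed after SECTIONS: _end already has its final value here,
         * so ld.lld and ld.bfd agree on the result. */
        __efistub__end = _end;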
* Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
  Linus Torvalds, 2019-07-03 (1 file changed, -1/+5)

    Pull arm64 fixes from Will Deacon:
     "Fix a build failure with the LLVM linker and a module allocation failure when KASLR is active:

      - Fix module allocation when running with KASLR enabled
      - Fix broken build due to bug in LLVM linker (ld.lld)"

    * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
      arm64/efi: Mark __efistub_stext_offset as an absolute symbol explicitly
      arm64: kaslr: keep modules inside module region when KASAN is enabled
| * arm64/efi: Mark __efistub_stext_offset as an absolute symbol explicitly
    Nathan Chancellor, 2019-06-26 (1 file changed, -1/+5)

      After r363059 and r363928 in LLVM, a build using ld.lld as the linker with CONFIG_RANDOMIZE_BASE enabled fails like so:

        ld.lld: error: relocation R_AARCH64_ABS32 cannot be used against symbol
        __efistub_stext_offset; recompile with -fPIC

      Fangrui and Peter figured out that ld.lld is incorrectly considering __efistub_stext_offset as a relative symbol because of the order in which symbols are evaluated. _text is treated as an absolute symbol and stext is a relative symbol, making __efistub_stext_offset a relative symbol. Adding ABSOLUTE will force ld.lld to evaluate this expression in the right context and does not change ld.bfd's behavior.

      ld.lld will need to be fixed, but the developers do not see a quick or simple fix without some research (see the linked issue for further explanation). Add this simple workaround so that ld.lld can continue to link kernels.

      Link: https://github.com/ClangBuiltLinux/linux/issues/561
      Link: https://github.com/llvm/llvm-project/commit/025a815d75d2356f2944136269aa5874721ec236
      Link: https://github.com/llvm/llvm-project/commit/249fde85832c33f8b06c6b4ac65d1c4b96d23b83
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Debugged-by: Fangrui Song <maskray@google.com>
      Debugged-by: Peter Smith <peter.smith@linaro.org>
      Suggested-by: Fangrui Song <maskray@google.com>
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      [will: add comment]
      Signed-off-by: Will Deacon <will@kernel.org>
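Note: the workaround above is a one-expression change: wrapping the right-hand side in ABSOLUTE() forces ld.lld to evaluate it as an absolute value. A sketch of the kind of definition involved (an illustration, not a verbatim copy of the arm64 headers):

        /* Force absolute evaluation; ld.bfd produces the same value either way. */
        __efistub_stext_offset = ABSOLUTE(stext - _text);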
* | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234
    Thomas Gleixner, 2019-06-19 (1 file changed, -12/+1)

      Based on 1 normalized pattern(s):

        this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses

      extracted by the scancode license scanner the SPDX license identifier

        GPL-2.0-only

      has been chosen to replace the boilerplate/reference in 503 file(s).

      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
      Reviewed-by: Allison Randal <allison@lohutok.net>
      Reviewed-by: Enrico Weigelt <info@metux.net>
      Cc: linux-spdx@vger.kernel.org
      Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Merge branch 'for-next/kexec' into aarch64/for-next/core
  Will Deacon, 2018-12-10 (1 file changed, -8/+13)

    Merge in kexec_file_load() support from Akashi Takahiro.
| * arm64: add image head flag definitions
    AKASHI Takahiro, 2018-12-06 (1 file changed, -8/+13)

      These image-header flags will be used later by the kexec_file loader.

      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
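Note: the flags in question describe properties of the built Image (endianness, page size, preferred physical placement) so that a kexec_file loader can sanity-check an Image before loading it. An illustrative sketch of such definitions, assuming the flags-field layout from the arm64 boot documentation (names here are descriptive, not a verbatim copy of image.h):

        #define ARM64_IMAGE_FLAG_BE_SHIFT          0   /* bit 0: 0 = LE, 1 = BE          */
        #define ARM64_IMAGE_FLAG_PAGE_SIZE_SHIFT   1   /* bits 1-2: kernel page size     */
        #define ARM64_IMAGE_FLAG_PHYS_BASE_SHIFT   3   /* bit 3: physical placement hint */

        #define ARM64_IMAGE_FLAG_PAGE_SIZE_4K      1
        #define ARM64_IMAGE_FLAG_PAGE_SIZE_16K     2
        #define ARM64_IMAGE_FLAG_PAGE_SIZE_64K     3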
* | arm64: drop linker script hack to hide __efistub_ symbols
    Ard Biesheuvel, 2018-11-30 (1 file changed, -28/+18)

      Commit 1212f7a16af4 ("scripts/kallsyms: filter arm64's __efistub_ symbols") updated the kallsyms code to filter out symbols with the __efistub_ prefix explicitly, so we no longer require the hack in our linker script to emit them as absolute symbols.

      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64/efi: Make strrchr() available to the EFI namespace
  Rob Herring, 2018-03-05 (1 file changed, -0/+1)

    libfdt gained a new dependency on strrchr, so make it available to the EFI namespace before we update libfdt. Thanks to Ard for providing this fix.

    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Rob Herring <robh@kernel.org>
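Note: "making a function available to the EFI namespace" is, mechanically, a one-line symbol alias in this header, so that the stub's __efistub_-prefixed reference resolves to the kernel's implementation. An illustrative sketch of such an alias (the exact spelling in image.h may differ, e.g. the right-hand side may be wrapped in a kallsyms-hiding macro):

        __efistub_strrchr = strrchr;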
* Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
  Linus Torvalds, 2016-05-17 (1 file changed, -0/+2)

    Pull arm64 updates from Will Deacon:

     - virt_to_page/page_address optimisations
     - support for NUMA systems described using device-tree
     - support for hibernate/suspend-to-disk
     - proper support for maxcpus= command line parameter
     - detection and graceful handling of AArch64-only CPUs
     - miscellaneous cleanups and non-critical fixes

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (92 commits)
      arm64: do not enforce strict 16 byte alignment to stack pointer
      arm64: kernel: Fix incorrect brk randomization
      arm64: cpuinfo: Missing NULL terminator in compat_hwcap_str
      arm64: secondary_start_kernel: Remove unnecessary barrier
      arm64: Ensure pmd_present() returns false after pmd_mknotpresent()
      arm64: Replace hard-coded values in the pmd/pud_bad() macros
      arm64: Implement pmdp_set_access_flags() for hardware AF/DBM
      arm64: Fix typo in the pmdp_huge_get_and_clear() definition
      arm64: mm: remove unnecessary EXPORT_SYMBOL_GPL
      arm64: always use STRICT_MM_TYPECHECKS
      arm64: kvm: Fix kvm teardown for systems using the extended idmap
      arm64: kaslr: increase randomization granularity
      arm64: kconfig: drop CONFIG_RTC_LIB dependency
      arm64: make ARCH_SUPPORTS_DEBUG_PAGEALLOC depend on !HIBERNATION
      arm64: hibernate: Refuse to hibernate if the boot cpu is offline
      arm64: kernel: Add support for hibernate/suspend-to-disk
      PM / Hibernate: Call flush_icache_range() on pages restored in-place
      arm64: Add new asm macro copy_page
      arm64: Promote KERNEL_START/KERNEL_END definitions to a header file
      arm64: kernel: Include _AC definition in page.h
      ...
| * arm64: don't map TEXT_OFFSET bytes below the kernel if we can avoid it
    Ard Biesheuvel, 2016-04-26 (1 file changed, -2/+0)

      For historical reasons, the kernel Image must be loaded into physical memory at a 512 KB offset above a 2 MB aligned base address. The region between the base address and the start of the kernel Image has no significance to the kernel itself, but it is currently mapped explicitly into the early kernel VMA range for all translation granules.

      In some cases (i.e., 4 KB granule), this is unavoidable, due to the 2 MB granularity of the early kernel mappings. However, in other cases, e.g., when running with larger page sizes, or in the future, with more granular KASLR, there is no reason to map it explicitly like we do currently.

      So update the logic so that the region is mapped only if that happens as a side effect of rounding the start address of the kernel to swapper block size, and leave it unmapped otherwise.

      Since the symbol kernel_img_size now simply resolves to the memory footprint of the kernel Image, we can drop its definition from image.h and opencode its calculation.

      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: move early boot code to the .init segment
    Ard Biesheuvel, 2016-04-14 (1 file changed, -0/+4)

      Apart from the arm64/linux and EFI header data structures, there is nothing in the .head.text section that must reside at the beginning of the Image. So let's move it to the .init section where it belongs.

      Note that this involves some minor tweaking of the EFI header, primarily because the address of 'stext' no longer coincides with the start of the .text section. It also requires a couple of relocated symbol references to be slightly rewritten or their definition moved to the linker script.

      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
* | arm64/efi/libstub: Make screen_info accessible to the UEFI stub
    Ard Biesheuvel, 2016-04-28 (1 file changed, -0/+1)

      Unlike on 32-bit ARM, where we need to pass the stub's version of struct screen_info to the kernel proper via a configuration table, on 64-bit ARM it simply involves making the core kernel's copy of struct screen_info visible to the stub by exposing an __efistub_ alias for it.

      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: David Herrmann <dh.herrmann@gmail.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Jones <pjones@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/1461614832-17633-21-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
* Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
  Linus Torvalds, 2016-03-18 (1 file changed, -18/+27)

    Pull arm64 updates from Catalin Marinas:
     "Here are the main arm64 updates for 4.6. There are some relatively intrusive changes to support KASLR, the reworking of the kernel virtual memory layout and initial page table creation.

      Summary:

      - Initial page table creation reworked to avoid breaking large block mappings (huge pages) into smaller ones. The ARM architecture requires break-before-make in such cases to avoid TLB conflicts, but that's not always possible on live page tables

      - Kernel virtual memory layout: the kernel image is no longer linked to the bottom of the linear mapping (PAGE_OFFSET) but at the bottom of the vmalloc space, allowing the kernel to be loaded (nearly) anywhere in physical RAM

      - Kernel ASLR: position independent kernel Image and modules being randomly mapped in the vmalloc space, with the randomness provided by UEFI (efi_get_random_bytes() patches merged via the arm64 tree, acked by Matt Fleming)

      - Implement relative exception tables for arm64, required by KASLR (initial code for ARCH_HAS_RELATIVE_EXTABLE added to lib/extable.c, but the actual x86 conversion deferred to 4.7 because of the merge dependencies)

      - Support for the User Access Override feature of ARMv8.2: this allows uaccess functions (get_user etc.) to be implemented using LDTR/STTR instructions. Such instructions, when run by the kernel, perform unprivileged accesses, adding an extra level of protection. The set_fs() macro is used to "upgrade" such instructions to privileged accesses via the UAO bit

      - Half-precision floating point support (part of ARMv8.2)

      - Optimisations for CPUs with or without a hardware prefetcher (using run-time code patching)

      - copy_page performance improvement to deal with 128 bytes at a time

      - Sanity checks on the CPU capabilities (via CPUID) to prevent incompatible secondary CPUs from being brought up (e.g. weird big.LITTLE configurations)

      - valid_user_regs() reworked for better sanity check of the sigcontext information (restored pstate information)

      - ACPI parking protocol implementation

      - CONFIG_DEBUG_RODATA enabled by default

      - VDSO code marked as read-only

      - DEBUG_PAGEALLOC support

      - ARCH_HAS_UBSAN_SANITIZE_ALL enabled

      - Erratum workaround for the Cavium ThunderX SoC

      - set_pte_at() fix for PROT_NONE mappings

      - Code clean-ups"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (99 commits)
      arm64: kasan: Fix zero shadow mapping overriding kernel image shadow
      arm64: kasan: Use actual memory node when populating the kernel image shadow
      arm64: Update PTE_RDONLY in set_pte_at() for PROT_NONE permission
      arm64: Fix misspellings in comments.
      arm64: efi: add missing frame pointer assignment
      arm64: make mrs_s prefixing implicit in read_cpuid
      arm64: enable CONFIG_DEBUG_RODATA by default
      arm64: Rework valid_user_regs
      arm64: mm: check at build time that PAGE_OFFSET divides the VA space evenly
      arm64: KVM: Move kvm_call_hyp back to its original localtion
      arm64: mm: treat memstart_addr as a signed quantity
      arm64: mm: list kernel sections in order
      arm64: lse: deal with clobbered IP registers after branch via PLT
      arm64: mm: dump: Use VA_START directly instead of private LOWEST_ADDR
      arm64: kconfig: add submenu for 8.2 architectural features
      arm64: kernel: acpi: fix ioremap in ACPI parking protocol cpu_postboot
      arm64: Add support for Half precision floating point
      arm64: Remove fixmap include fragility
      arm64: Add workaround for Cavium erratum 27456
      arm64: mm: Mark .rodata as RO
      ...
| * arm64: avoid R_AARCH64_ABS64 relocations for Image header fields
    Ard Biesheuvel, 2016-02-24 (1 file changed, -13/+19)

      Unfortunately, the current way of using the linker to emit build time constants into the Image header will no longer work once we switch to the use of PIE executables. The reason is that such constants are emitted into the binary using R_AARCH64_ABS64 relocations, which are resolved at runtime, not at build time, and the places targeted by those relocations will contain zeroes before that.

      So refactor the endian swapping linker script constant generation code so that it emits the upper and lower 32-bit words separately.

      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
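Note: the refactoring described above replaces one 64-bit little-endian constant (which would need an R_AARCH64_ABS64 relocation in a PIE build) with two 32-bit halves. A sketch of what such helper macros can look like (illustrative, not necessarily the exact image.h definitions):

        /* Store a 32-bit value as little-endian, even on a big-endian kernel. */
        #ifdef CONFIG_CPU_BIG_ENDIAN
        #define DATA_LE32(data)                         \
                ((((data) & 0x000000ff) << 24) |        \
                 (((data) & 0x0000ff00) << 8)  |        \
                 (((data) & 0x00ff0000) >> 8)  |        \
                 (((data) & 0xff000000) >> 24))
        #else
        #define DATA_LE32(data) ((data) & 0xffffffff)
        #endif

        /* Emit a 64-bit header field as two 32-bit symbols, so no
         * R_AARCH64_ABS64 relocation is required. */
        #define DEFINE_IMAGE_LE64(sym, data)                    \
                sym##_lo32 = DATA_LE32((data) & 0xffffffff);    \
                sym##_hi32 = DATA_LE32((data) >> 32);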
| * arm64: allow kernel Image to be loaded anywhere in physical memory
    Ard Biesheuvel, 2016-02-18 (1 file changed, -5/+8)

      This relaxes the kernel Image placement requirements, so that it may be placed at any 2 MB aligned offset in physical memory.

      This is accomplished by ignoring PHYS_OFFSET when installing memblocks, and accounting for the apparent virtual offset of the kernel Image. As a result, virtual address references below PAGE_OFFSET are correctly mapped onto physical references into the kernel Image regardless of where it sits in memory.

      Special care needs to be taken for dealing with memory limits passed via mem=, since the generic implementation clips memory top down, which may clip the kernel image itself if it is loaded high up in memory. To deal with this case, we simply add back the memory covering the kernel image, which may result in more memory to be retained than was passed as a mem= parameter.

      Since mem= should not be considered a production feature, a panic notifier handler is installed that dumps the memory limit at panic time if one was set.

      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | arm64/efi: Make strnlen() available to the EFI namespace
    Thierry Reding, 2016-02-16 (1 file changed, -0/+1)

      Changes introduced in the upstream version of libfdt pulled in by commit 91feabc2e224 ("scripts/dtc: Update to upstream commit b06e55c88b9b") use the strnlen() function, which isn't currently available to the EFI namespace. Add it to the EFI namespace to avoid a linker error.

      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Rob Herring <robh@kernel.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: hide __efistub_ aliases from kallsyms
  Ard Biesheuvel, 2016-01-25 (1 file changed, -15/+25)

    Commit e8f3010f7326 ("arm64/efi: isolate EFI stub from the kernel proper") isolated the EFI stub code from the kernel proper by prefixing all of its symbols with __efistub_, and selectively allowing access to core kernel symbols from the stub by emitting __efistub_ aliases for functions and variables that the stub can access legally.

    As an unintended side effect, these aliases are emitted into the kallsyms symbol table, which means they may turn up in backtraces, e.g.,

      ...
      PC is at __efistub_memset+0x108/0x200
      LR is at fixup_init+0x3c/0x48
      ...
      [<ffffff8008328608>] __efistub_memset+0x108/0x200
      [<ffffff8008094dcc>] free_initmem+0x2c/0x40
      [<ffffff8008645198>] kernel_init+0x20/0xe0
      [<ffffff8008085cd0>] ret_from_fork+0x10/0x40

    The backtrace in question has nothing to do with the EFI stub, but simply returns one of the several aliases of memset() that have been recorded in the kallsyms table. This is undesirable, since it may suggest to people who are not aware of this that the issue they are seeing is somehow EFI related.

    So hide the __efistub_ aliases from kallsyms, by emitting them as absolute linker symbols explicitly. The distinction between those and section relative symbols is completely irrelevant to these definitions, and to the final link we are performing when these definitions are being taken into account (the distinction is only relevant to symbols defined inside a section definition when performing a partial link), and so the resulting values are identical to the original ones. Since absolute symbols are ignored by kallsyms, this results in these values being omitted from its symbol table.

    After this patch, the backtrace generated from the same address looks like this:

      ...
      PC is at __memset+0x108/0x200
      LR is at fixup_init+0x3c/0x48
      ...
      [<ffffff8008328608>] __memset+0x108/0x200
      [<ffffff8008094dcc>] free_initmem+0x2c/0x40
      [<ffffff8008645198>] kernel_init+0x20/0xe0
      [<ffffff8008085cd0>] ret_from_fork+0x10/0x40

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
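Note: the hiding is entirely a property of how each alias is defined: wrapping the right-hand side in ABSOLUTE() yields the same address, but as an absolute rather than section-relative symbol, and kallsyms skips absolute symbols. A sketch of the pattern (illustrative macro name and symbol; the real header may differ):

        #define KALLSYMS_HIDE(sym)      ABSOLUTE(sym)

        /* Same value as before, but absolute, so it never reaches kallsyms. */
        __efistub_memset = KALLSYMS_HIDE(memset);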
* arm64: Add page size to the kernel image header
  Ard Biesheuvel, 2015-10-19 (1 file changed, -1/+4)

    This patch adds the page size to the arm64 kernel image header so that one can infer the PAGE_SIZE used by the kernel. This will be helpful to diagnose failures to boot the kernel with a page size not supported by the CPU.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
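Note: the page size is not stored as a raw byte count; it is encoded in the header's flags field, and a compact encoding derived from PAGE_SHIFT gives 1 for 4K, 2 for 16K and 3 for 64K pages. A sketch of that encoding (assuming the flags layout from the arm64 boot documentation):

        /* bits 1-2 of the flags field: 0 = unspecified, 1 = 4K, 2 = 16K, 3 = 64K */
        #define __HEAD_FLAG_PAGE_SIZE   ((PAGE_SHIFT - 10) / 2)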
* arm64: add KASAN support
  Andrey Ryabinin, 2015-10-12 (1 file changed, -0/+6)

    This patch adds arch specific code for the kernel address sanitizer (see Documentation/kasan.txt).

    1/8 of kernel addresses is reserved for shadow memory. There was no big enough hole for this, so virtual addresses for the shadow were stolen from the vmalloc area.

    At early boot stage the whole shadow region is populated with just one physical page (kasan_zero_page). Later, this page is reused as a readonly zero shadow for some memory that KASan currently doesn't track (vmalloc). After mapping the physical memory, pages for shadow memory are allocated and mapped.

    Functions like memset/memmove/memcpy do a lot of memory accesses. If a bad pointer is passed to one of these functions, it is important to catch this. Compiler instrumentation cannot do this since these functions are written in assembly. KASan replaces memory functions with manually instrumented variants. The original functions are declared as weak symbols so that the strong definitions in mm/kasan/kasan.c can replace them. The original functions have aliases with a '__' prefix in their name, so we can call the non-instrumented variant if needed.

    Some files are built without kasan instrumentation (e.g. mm/slub.c). The original mem* functions are replaced (via #define) with the prefixed variants to disable memory access checks for such files.

    Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
    Tested-by: Linus Walleij <linus.walleij@linaro.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
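Note: the last paragraph describes a preprocessor redirect: files compiled without KASAN instrumentation see the '__'-prefixed, uninstrumented routines instead of the checked wrappers. A minimal sketch of that arrangement, assuming __memcpy/__memmove/__memset are the uninstrumented implementations:

        void *__memcpy(void *dst, const void *src, size_t len);
        void *__memmove(void *dst, const void *src, size_t len);
        void *__memset(void *s, int c, size_t n);

        #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
        /* This translation unit is built without instrumentation:
         * bypass the checked wrappers entirely. */
        #define memcpy(dst, src, len)   __memcpy(dst, src, len)
        #define memmove(dst, src, len)  __memmove(dst, src, len)
        #define memset(s, c, n)         __memset(s, c, n)
        #endif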
* arm64/efi: isolate EFI stub from the kernel proper
  Ard Biesheuvel, 2015-10-12 (1 file changed, -0/+27)

    Since arm64 does not use a builtin decompressor, the EFI stub is built into the kernel proper. So far, this has been working fine, but actually, since the stub is in fact a PE/COFF relocatable binary that is executed at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we should not be seamlessly sharing code with the kernel proper, which is a position dependent executable linked at a high virtual offset.

    So instead, separate the contents of libstub and its dependencies, by putting them into their own namespace by prefixing all of its symbols with __efistub. This way, we have tight control over what parts of the kernel proper are referenced by the stub.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Reviewed-by: Matt Fleming <matt.fleming@intel.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Update the Image header
  Mark Rutland, 2014-07-10 (1 file changed, -0/+62)

    Currently the kernel Image is stripped of everything past the initial stack, and at runtime the memory is initialised and used by the kernel. This makes the effective minimum memory footprint of the kernel larger than the size of the loaded binary, though bootloaders have no mechanism to identify how large this minimum memory footprint is. This makes it difficult to choose safe locations to place both the kernel and other binaries required at boot (DTB, initrd, etc), such that the kernel won't clobber said binaries or other reserved memory during initialisation.

    Additionally, when big endian support was added the image load offset was overlooked, and is currently of an arbitrary endianness, which makes it difficult for bootloaders to make use of it. It seems that bootloaders aren't respecting the image load offset at present anyway, and are assuming that offset 0x80000 will always be correct.

    This patch adds an effective image size to the kernel header which describes the amount of memory from the start of the kernel Image binary which the kernel expects to use before detecting memory and handling any memory reservations. This can be used by bootloaders to choose suitable locations to load the kernel and/or other binaries such that the kernel will not clobber any memory unexpectedly. As before, memory reservations are required to prevent the kernel from clobbering these locations later.

    Both the image load offset and the effective image size are forced to be little-endian regardless of the native endianness of the kernel to enable bootloaders to load a kernel of arbitrary endianness. Bootloaders which wish to make use of the load offset can inspect the effective image size field for a non-zero value to determine if the offset is of a known endianness. To enable software to determine the endianness of the kernel as may be required for certain use-cases, a new flags field (also little-endian) is added to the kernel header to export this information.

    The documentation is updated to clarify these details. To discourage future assumptions regarding the value of text_offset, the value at this point in time is removed from the main flow of the documentation (though kept as a compatibility note). Some minor formatting issues in the documentation are also corrected.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Tom Rini <trini@ti.com>
    Cc: Geoff Levand <geoff@infradead.org>
    Cc: Kevin Hilman <kevin.hilman@linaro.org>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
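Note: taken together, the commits above define the 64-byte header at the start of the arm64 Image. Expressed as a C structure it looks roughly like the following (field names are descriptive; the real header is emitted by assembly in head.S and specified in the arm64 booting documentation):

        #include <stdint.h>

        struct arm64_image_header {
                uint32_t code0;         /* executable code (branches to stext)  */
                uint32_t code1;         /* executable code                      */
                uint64_t text_offset;   /* Image load offset, little-endian     */
                uint64_t image_size;    /* effective Image size, little-endian  */
                uint64_t flags;         /* kernel flags, little-endian          */
                uint64_t res2;          /* reserved                             */
                uint64_t res3;          /* reserved                             */
                uint64_t res4;          /* reserved                             */
                uint32_t magic;         /* 0x644d5241, "ARM\x64"                */
                uint32_t res5;          /* reserved (offset to PE header)       */
        };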