path: root/arch/arm64/include/asm/linkage.h
Commit message | Author | Age | Files | Lines
* arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT | Mark Rutland | 2023-01-24 | 1 | -2/+2
  On arm64 we don't align assembly functions in the same way as C
  functions. This somewhat limits the utility of
  CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B for testing, and adds noise when
  testing that we're correctly aligning functions as will be necessary
  for ftrace in subsequent patches.

  Follow the example of x86, and align assembly functions in the same
  way as C functions. Selecting FUNCTION_ALIGNMENT_4B ensures
  CONFIG_FUNCTION_ALIGNMENT will be a minimum of 4 bytes, matching the
  minimum alignment that __ALIGN and __ALIGN_STR provide prior to this
  patch.

  I've tested this by selecting CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B=y,
  building and booting a kernel, and looking for misaligned text symbols:

  Before, v6.2-rc3:
    # uname -rm
    6.2.0-rc3 aarch64
    # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
    5009

  Before, v6.2-rc3 + fixed __cold:
    # uname -rm
    6.2.0-rc3-00001-g2a2bedf8bfa9 aarch64
    # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
    919

  Before, v6.2-rc3 + fixed __cold + fixed ACPICA:
    # uname -rm
    6.2.0-rc3-00002-g267bddc38572 aarch64
    # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
    323
    # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep acpi | wc -l
    0

  After:
    # uname -rm
    6.2.0-rc3-00003-g71db61ee3ea1 aarch64
    # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | wc -l
    112

  Considering the remaining 112 unaligned text symbols:

  * 20 are non-function KVM NVHE assembly symbols, which are never
    instrumented by ftrace:

    # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep __kvm_nvhe | wc -l
    20
    # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep __kvm_nvhe
    ffffbe6483f73784 t __kvm_nvhe___invalid
    ffffbe6483f73788 t __kvm_nvhe___do_hyp_init
    ffffbe6483f73ab0 t __kvm_nvhe_reset
    ffffbe6483f73b8c T __kvm_nvhe___hyp_idmap_text_end
    ffffbe6483f73b8c T __kvm_nvhe___hyp_text_start
    ffffbe6483f77864 t __kvm_nvhe___host_enter_restore_full
    ffffbe6483f77874 t __kvm_nvhe___host_enter_for_panic
    ffffbe6483f778a4 t __kvm_nvhe___host_enter_without_restoring
    ffffbe6483f81178 T __kvm_nvhe___guest_exit_panic
    ffffbe6483f811c8 T __kvm_nvhe___guest_exit
    ffffbe6483f81354 t __kvm_nvhe_abort_guest_exit_start
    ffffbe6483f81358 t __kvm_nvhe_abort_guest_exit_end
    ffffbe6483f81830 t __kvm_nvhe_wa_epilogue
    ffffbe6483f81844 t __kvm_nvhe_el1_trap
    ffffbe6483f81864 t __kvm_nvhe_el1_fiq
    ffffbe6483f81864 t __kvm_nvhe_el1_irq
    ffffbe6483f81884 t __kvm_nvhe_el1_error
    ffffbe6483f818a4 t __kvm_nvhe_el2_sync
    ffffbe6483f81920 t __kvm_nvhe_el2_error
    ffffbe6483f865c8 T __kvm_nvhe___start___kvm_ex_table

  * 53 are position-independent functions only used during early boot,
    which are built with '-Os', but are never instrumented by ftrace:

    # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep __pi | wc -l
    53

    We *could* drop '-Os' when building these for consistency, but that
    is not necessary to ensure that ftrace works correctly.
  * The remaining 39 are non-function symbols and 3 runtime BPF
    functions, which are never instrumented by ftrace:

    # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep -v __kvm_nvhe | grep -v __pi | wc -l
    39
    # grep ' [Tt] ' /proc/kallsyms | grep -iv '[048c]0 [Tt] ' | grep -v __kvm_nvhe | grep -v __pi
    ffffbe6482e1009c T __irqentry_text_end
    ffffbe6482e10358 T __softirqentry_text_end
    ffffbe6482e1435c T __entry_text_end
    ffffbe6482e825f8 T __guest_exit_panic
    ffffbe6482e82648 T __guest_exit
    ffffbe6482e827d4 t abort_guest_exit_start
    ffffbe6482e827d8 t abort_guest_exit_end
    ffffbe6482e83030 t wa_epilogue
    ffffbe6482e83044 t el1_trap
    ffffbe6482e83064 t el1_fiq
    ffffbe6482e83064 t el1_irq
    ffffbe6482e83084 t el1_error
    ffffbe6482e830a4 t el2_sync
    ffffbe6482e83120 t el2_error
    ffffbe6482e93550 T sha256_block_neon
    ffffbe64830f3ae0 t e843419@01cc_00002a0c_3104
    ffffbe648378bd90 t e843419@09b3_0000d7cb_bc4
    ffffbe6483bdab20 t e843419@0c66_000116e2_34c8
    ffffbe6483f62c94 T __noinstr_text_end
    ffffbe6483f70a18 T __sched_text_end
    ffffbe6483f70b2c T __cpuidle_text_end
    ffffbe6483f722d4 T __lock_text_end
    ffffbe6483f73b8c T __hyp_idmap_text_end
    ffffbe6483f73b8c T __hyp_text_start
    ffffbe6483f865c8 T __start___kvm_ex_table
    ffffbe6483f870d0 t init_el1
    ffffbe6483f870f8 t init_el2
    ffffbe6483f87324 t pen
    ffffbe6483f87b48 T __idmap_text_end
    ffffbe64848eb010 T __hibernate_exit_text_start
    ffffbe64848eb124 T __hibernate_exit_text_end
    ffffbe64848eb124 T __relocate_new_kernel_start
    ffffbe64848eb260 T __relocate_new_kernel_end
    ffffbe648498a8e8 T _einittext
    ffffbe648498a8e8 T __exittext_begin
    ffffbe6484999d84 T __exittext_end
    ffff8000080756b4 t bpf_prog_6deef7357e7b4530 [bpf]
    ffff80000808dd78 t bpf_prog_6deef7357e7b4530 [bpf]
    ffff80000809d684 t bpf_prog_6deef7357e7b4530 [bpf]

  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Cc: Florent Revest <revest@chromium.org>
  Cc: Masami Hiramatsu <mhiramat@kernel.org>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Steven Rostedt <rostedt@goodmis.org>
  Cc: Will Deacon <will@kernel.org>
  Link: https://lore.kernel.org/r/20230123134603.1064407-5-mark.rutland@arm.com
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
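  For illustration of the approach the commit above describes, a minimal
  sketch of expressing __ALIGN in terms of the configured alignment,
  following the x86 example it cites; the exact directives are an
  assumption, not the patch's diff:

	/* sketch: derive assembly function alignment from Kconfig, so
	 * options like CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B apply to
	 * .S code the same way they apply to compiled C functions */
	#include <linux/stringify.h>

	#define __ALIGN		.balign CONFIG_FUNCTION_ALIGNMENT
	#define __ALIGN_STR	__stringify(__ALIGN)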
* arm64: Add types to indirect called assembly functions | Sami Tolvanen | 2022-09-26 | 1 | -0/+4
  With CONFIG_CFI_CLANG, assembly functions indirectly called from C
  code must be annotated with type identifiers to pass CFI checking. Use
  SYM_TYPED_FUNC_START for the indirectly called functions, and ensure
  we emit 'bti c' also with SYM_TYPED_FUNC_START.

  Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
  Reviewed-by: Kees Cook <keescook@chromium.org>
  Tested-by: Kees Cook <keescook@chromium.org>
  Tested-by: Nathan Chancellor <nathan@kernel.org>
  Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20220908215504.3686827-10-samitolvanen@google.com
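  As a hedged usage sketch of the annotation this commit applies (the
  function name below is illustrative, not taken from the patch):

	#include <linux/cfi_types.h>

	/* called indirectly from C, so it needs a CFI type identifier;
	 * the macro emits the type hash and a 'bti c' landing pad */
	SYM_TYPED_FUNC_START(example_asm_callback)
		mov	x0, xzr
		ret
	SYM_FUNC_END(example_asm_callback)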
* arm64: clean up symbol aliasing | Mark Rutland | 2022-02-22 | 1 | -24/+0
  Now that we have SYM_FUNC_ALIAS() and SYM_FUNC_ALIAS_WEAK(), use those
  to simplify and more consistently define function aliases across
  arch/arm64.

  Aliases are now defined in terms of a canonical function name. For
  position-independent functions I've made the __pi_<func> name the
  canonical name, and defined other aliases in terms of this.

  The SYM_FUNC_{START,END}_PI(func) macros obscure the __pi_<func>
  name, and make this hard to search for. The SYM_FUNC_START_WEAK_PI()
  macro also obscures the fact that the __pi_<func> symbol is global
  and the <func> symbol is weak. For clarity, I have removed these
  macros and used SYM_FUNC_{START,END}() directly with the __pi_<func>
  name.

  For example:

	SYM_FUNC_START_WEAK_PI(func)
	... asm insns ...
	SYM_FUNC_END_PI(func)
	EXPORT_SYMBOL(func)

  ... becomes:

	SYM_FUNC_START(__pi_func)
	... asm insns ...
	SYM_FUNC_END(__pi_func)

	SYM_FUNC_ALIAS_WEAK(func, __pi_func)
	EXPORT_SYMBOL(func)

  For clarity, where there are multiple annotations such as
  EXPORT_SYMBOL(), I've tried to keep annotations grouped by symbol.
  For example, where a function has a name and an alias which are both
  exported, this is organised as:

	SYM_FUNC_START(func)
	... asm insns ...
	SYM_FUNC_END(func)
	EXPORT_SYMBOL(func)

	SYM_FUNC_ALIAS(alias, func)
	EXPORT_SYMBOL(alias)

  For consistency with the other string functions, I've defined strrchr
  as a position-independent function, as it can safely be used as such
  even though we have no users today.

  As we no longer use SYM_FUNC_{START,END}_ALIAS(), our local copies
  are removed. The common versions will be removed by a subsequent
  patch.

  There should be no functional change as a result of this patch.

  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Acked-by: Catalin Marinas <catalin.marinas@arm.com>
  Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
  Acked-by: Mark Brown <broonie@kernel.org>
  Cc: Joey Gouly <joey.gouly@arm.com>
  Cc: Will Deacon <will@kernel.org>
  Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220216162229.1076788-3-mark.rutland@arm.com
  Signed-off-by: Will Deacon <will@kernel.org>
* arm64: Ensure that the 'bti' macro is defined where linkage.h is included | Catalin Marinas | 2021-12-17 | 1 | -0/+4
  Not all .S files include asm/assembler.h, but the SYM_FUNC_*
  definitions invoke the 'bti' macro. Include asm/assembler.h in
  asm/linkage.h.

  Fixes: 9be34be87cc8 ("arm64: Add macro version of the BTI instruction")
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
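  A minimal sketch of the fix; the __ASSEMBLY__ guard is an assumption
  here, since linkage.h is also pulled in from C code while
  asm/assembler.h is assembly-only:

	/* arch/arm64/include/asm/linkage.h (sketch) */
	#ifdef __ASSEMBLY__
	#include <asm/assembler.h>	/* provides the 'bti' macro used by SYM_FUNC_* */
	#endif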
* arm64: Use BTI C directly and unconditionally | Mark Brown | 2021-12-14 | 1 | -16/+6
  Now that we have a macro for BTI C that looks like a regular
  instruction, change all the users of the current BTI_C macro to just
  emit a BTI C directly, and remove the macro.

  This does mean that we now unconditionally BTI annotate all assembly
  functions, meaning that they are worse in this respect than code
  generated by the compiler. The overhead should be minimal for
  implementations with a reasonable HINT implementation.

  Signed-off-by: Mark Brown <broonie@kernel.org>
  Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Link: https://lore.kernel.org/r/20211214152714.2380849-4-broonie@kernel.org
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
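  A before/after sketch consistent with the description above (the exact
  override lives in linkage.h; this is not the literal diff):

	/* before: landing pad emitted via a helper macro */
	#define SYM_FUNC_START(name)				\
		SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
		BTI_C

	/* after: BTI C emitted directly, helper macro removed */
	#define SYM_FUNC_START(name)				\
		SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
		bti c ;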
* arm64: Unconditionally override SYM_FUNC macros | Mark Brown | 2021-12-14 | 1 | -5/+11
  Currently we only override the SYM_FUNC macros when we need to insert
  BTI C into them. Do this unconditionally, to make it more likely that
  we'll notice bugs in our override.

  Suggested-by: Mark Rutland <mark.rutland@arm.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Mark Brown <broonie@kernel.org>
  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Link: https://lore.kernel.org/r/20211214152714.2380849-3-broonie@kernel.org
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
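  A sketch of the shape this gives the override, under the assumption
  that only the landing pad itself remains conditional:

	/* the SYM_FUNC overrides now apply in all configurations */
	#ifdef CONFIG_ARM64_BTI_KERNEL
	#define BTI_C bti c ;
	#else
	#define BTI_C
	#endif

	#define SYM_FUNC_START(name)				\
		SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
		BTI_C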
* arm64: Add macro version of the BTI instruction | Mark Brown | 2021-12-14 | 1 | -6/+1
  BTI is only available from v8.5, so we need to encode it using HINT in
  generic code and for older toolchains. Add an assembler macro based on
  one written by Mark Rutland which lets us use the mnemonic, and update
  the existing users.

  Suggested-by: Mark Rutland <mark.rutland@arm.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Acked-by: Will Deacon <will@kernel.org>
  Signed-off-by: Mark Brown <broonie@kernel.org>
  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Link: https://lore.kernel.org/r/20211214152714.2380849-2-broonie@kernel.org
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
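  A simplified sketch of such a macro: BTI C is the hint with immediate
  34, which older assemblers accept and pre-v8.5 CPUs execute as a NOP
  (the real macro also covers the other BTI targets):

	.macro	bti, targets
	.ifc	\targets, c
	hint	#34		// encoding of 'bti c'
	.endif
	.endm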
* arm64: Add assembly annotations for weak-PI-alias madness | Robin Murphy | 2021-06-01 | 1 | -0/+8
  Add yet another set of assembly symbol annotations, this time for the
  borderline-absurd situation of a function aliasing to a weak symbol
  which itself also wants a position-independent alias.

  Signed-off-by: Robin Murphy <robin.murphy@arm.com>
  Link: https://lore.kernel.org/r/75545b3c4129b20b887474bb58a9cf302bf2132b.1622128527.git.robin.murphy@arm.com
  Signed-off-by: Will Deacon <will@kernel.org>
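  A hedged usage sketch (the function name is illustrative): a weak
  symbol <func> that also gains a global, position-independent
  __pi_<func> alias:

	SYM_FUNC_START_WEAK_PI(func)	// weak 'func', global '__pi_func'
		ret
	SYM_FUNC_END_PI(func)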
* arm64: Don't insert a BTI instruction at inner labels | Jean-Philippe Brucker | 2020-06-24 | 1 | -6/+0
  Some ftrace features are broken since commit 714a8d02ca4d ("arm64:
  asm: Override SYM_FUNC_START when building the kernel with BTI"). For
  example the function_graph tracer:

    $ echo function_graph > /sys/kernel/debug/tracing/current_tracer
    [   36.107016] WARNING: CPU: 0 PID: 115 at kernel/trace/ftrace.c:2691 ftrace_modify_all_code+0xc8/0x14c

  When ftrace_modify_graph_caller() attempts to write a branch at
  ftrace_graph_call, it finds the "BTI J" instruction inserted by
  SYM_INNER_LABEL() instead of a NOP, and aborts.

  It turns out we don't currently need the BTI landing pads inserted by
  SYM_INNER_LABEL:

  * ftrace_call and ftrace_graph_call are only used for runtime patching
    of the active tracer. The patched code is not reached from a branch.

  * install_el2_stub is reached from a CBZ instruction, which doesn't
    change PSTATE.BTYPE.

  * __guest_exit is reached from B instructions in the hyp-entry
    vectors, which aren't subject to BTI checks either.

  Remove the BTI annotation from SYM_INNER_LABEL.

  Fixes: 714a8d02ca4d ("arm64: asm: Override SYM_FUNC_START when building the kernel with BTI")
  Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
  Reviewed-by: Mark Brown <broonie@kernel.org>
  Link: https://lore.kernel.org/r/20200624112253.1602786-1-jean-philippe@linaro.org
  Signed-off-by: Will Deacon <will@kernel.org>
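  For reference, a sketch of the kind of runtime-patched site involved;
  after this change the inner label is a bare label, so the NOP below
  can be rewritten to a branch (layout assumed from the description):

	SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)	// no 'bti j' emitted
		nop	// patched to a branch by ftrace_modify_graph_caller()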
* arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction | Will Deacon | 2020-05-21 | 1 | -3/+3
  For better or worse, GDB relies on the exact instruction sequence in
  the VDSO sigreturn trampoline in order to unwind from signals
  correctly. Commit c91db232da48 ("arm64: vdso: Convert to modern
  assembler annotations") unfortunately added a BTI C instruction to the
  start of __kernel_rt_sigreturn, which breaks this check. Thankfully,
  it's also not required, since the trampoline is called from a RET
  instruction when returning from the signal handler.

  Remove the unnecessary BTI C instruction from __kernel_rt_sigreturn,
  and do the same for the 32-bit VDSO as well for good measure.

  Cc: Daniel Kiss <daniel.kiss@arm.com>
  Cc: Tamas Zsoldos <tamas.zsoldos@arm.com>
  Reviewed-by: Dave Martin <dave.martin@arm.com>
  Reviewed-by: Mark Brown <broonie@kernel.org>
  Fixes: c91db232da48 ("arm64: vdso: Convert to modern assembler annotations")
  Signed-off-by: Will Deacon <will@kernel.org>
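  The trampoline GDB pattern-matches looks roughly like this sketch
  (instruction details assumed; the authoritative sequence is in the
  vDSO sources):

	SYM_CODE_START(__kernel_rt_sigreturn)
		// deliberately no 'bti c': GDB matches these exact
		// instructions, and the trampoline is entered via RET,
		// not an indirect branch
		mov	x8, #__NR_rt_sigreturn
		svc	#0
	SYM_CODE_END(__kernel_rt_sigreturn)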
* arm64: asm: Override SYM_FUNC_START when building the kernel with BTI | Mark Brown | 2020-05-07 | 1 | -0/+46
  When the kernel is built for BTI, override SYM_FUNC_START and related
  macros to add a BTI landing pad to the start of all global functions,
  ensuring that they are BTI safe. The ';' at the end of the BTI_x
  macros is for the benefit of the macro-generated functions in
  xen-hypercall.S.

  Signed-off-by: Mark Brown <broonie@kernel.org>
  Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
  Link: https://lore.kernel.org/r/20200506195138.22086-4-broonie@kernel.org
  Signed-off-by: Will Deacon <will@kernel.org>
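  A sketch of the override pattern described, with the trailing ';' the
  message calls out; BTI C is spelled via HINT for older toolchains
  (not the literal diff):

	/* the ';' terminates the statement, keeping macro-generated
	 * code such as xen-hypercall.S well-formed */
	#define BTI_C hint 34 ;

	#define SYM_FUNC_START(name)				\
		SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
		BTI_C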
* arm64: asm: Add new-style position independent function annotations | Mark Brown | 2020-01-08 | 1 | -0/+16
  As part of an effort to make the annotations in assembly code clearer
  and more consistent, new macros have been introduced, including
  replacements for ENTRY() and ENDPROC().

  On arm64 we have ENDPIPROC(), a custom version of ENDPROC() which is
  used for code that will need to run in position-independent
  environments like EFI. It creates an alias for the function with the
  prefix __pi_ and then emits the standard ENDPROC.

  Add new-style macros to replace this which expand to the standard
  SYM_FUNC_*() and SYM_FUNC_ALIAS_*(), resulting in the same object
  code. These are added in linkage.h for consistency with where the
  generic assembler code has its macros.

  Signed-off-by: Mark Brown <broonie@kernel.org>
  [will: Rename 'WEAK' macro, use ';' instead of ASM_NL, deprecate ENDPIPROC]
  Signed-off-by: Will Deacon <will@kernel.org>
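  A sketch of the new-style annotations, consistent with the description
  (the precise expansion is in the patch): the PI variants wrap the
  standard macros and add the __pi_ alias:

	#define SYM_FUNC_START_PI(x)			\
		SYM_FUNC_START_ALIAS(__pi_##x)		\
		SYM_FUNC_START(x)

	#define SYM_FUNC_END_PI(x)			\
		SYM_FUNC_END(x)				\
		SYM_FUNC_END_ALIAS(__pi_##x)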
* arm64: relax assembly code alignment from 16 byte to 4 byte | Masahiro Yamada | 2017-09-18 | 1 | -2/+2
  AArch64 instructions must be word aligned. The current 16-byte
  alignment is more than enough. Relax it to 4-byte alignment.

  Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
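  The resulting definitions would look like this sketch (GAS's .align
  directive takes a power of two on arm64):

	#define __ALIGN		.align 2	/* 2^2 = 4 bytes */
	#define __ALIGN_STR	".align 2"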
* arm64: fix alignment padding in assembly code | Marc Zyngier | 2012-10-20 | 1 | -0/+7
  An interesting effect of using the generic version of linkage.h is
  that the padding is defined in terms of x86 NOPs, which can have even
  more interesting effects when the assembly code looks like this:

	ENTRY(func1)
		mov	x0, xzr
	ENDPROC(func1)
		// fall through
	ENTRY(func2)
		mov	x0, #1
		ret
	ENDPROC(func2)

  Admittedly, the code is not very nice. But having code from another
  architecture doesn't look completely sane either.

  The fix is to add arm64's version of linkage.h, which causes the
  insertion of proper AArch64 NOPs.

  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
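  A sketch of the arm64 linkage.h this commit introduces; with the
  alignment directives owned by arm64, the assembler pads with A64 NOPs
  rather than x86 0x90 bytes:

	#ifndef __ASM_LINKAGE_H
	#define __ASM_LINKAGE_H

	#define __ALIGN		.align 4	/* 2^4 = 16 bytes */
	#define __ALIGN_STR	".align 4"

	#endif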