author | Madhavan T. Venkataraman <madvenka@linux.microsoft.com> | 2021-11-29 15:28:44 +0100
committer | Catalin Marinas <catalin.marinas@arm.com> | 2021-12-10 15:06:03 +0100
commit | ed876d35a1dc7f9efcfc51dd06843b5b8af08ad1
tree | efa66b3e4c73fc186bf9595d45a6fe409811ed6a /arch/arm64
parent | arm64: Mark __switch_to() as __sched
arm64: Make perf_callchain_kernel() use arch_stack_walk()
To enable RELIABLE_STACKTRACE and LIVEPATCH on arm64, we need to
substantially rework arm64's unwinding code. As part of this, we want to
minimize the set of unwind interfaces we expose, and avoid open-coding
of unwind logic outside of stacktrace.c.
Currently perf_callchain_kernel() walks the stack of an interrupted
context by calling start_backtrace() with the context's PC and FP, and
iterating unwind steps using walk_stackframe(). This is functionally
equivalent to calling arch_stack_walk() with the interrupted context's
pt_regs, which will start with the PC and FP from the regs.
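Concretely, the change swaps the open-coded sequence for the generic
interface (a condensed sketch drawn from the patch below; the guest
check and surrounding context are omitted):

	/* Before: open-coded unwind of the interrupted context. */
	struct stackframe frame;

	start_backtrace(&frame, regs->regs[29], regs->pc);	/* FP, PC */
	walk_stackframe(current, &frame, callchain_trace, entry);

	/* After: the generic walker starts from the same FP/PC via regs. */
	arch_stack_walk(callchain_trace, entry, current, regs);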
Make perf_callchain_kernel() use arch_stack_walk(). This simplifies
perf_callchain_kernel(), and in future will allow us to make
walk_stackframe() private to stacktrace.c.
At the same time, we update the callchain_trace() callback to check the
return value of perf_callchain_store(), which indicates whether there is
space for any further entries. When a non-zero value is returned,
further calls will be ignored, and are redundant, so we can stop the
unwind at this point.
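The resulting callback (as it appears in the patch below) encodes this
contract: arch_stack_walk()'s consume_entry callback returns a bool, and
the unwind stops as soon as it returns false:

	static bool callchain_trace(void *data, unsigned long pc)
	{
		struct perf_callchain_entry_ctx *entry = data;
		/*
		 * perf_callchain_store() returns 0 while the entry buffer
		 * has room; a non-zero return means it is full, so
		 * returning false stops the walk early.
		 */
		return perf_callchain_store(entry, pc) == 0;
	}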
We also remove the stale and confusing comment above callchain_trace().
There should be no functional change as a result of this patch.
Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
[Mark: elaborate commit message, remove comment, fix includes]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20211129142849.3056714-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Diffstat (limited to 'arch/arm64')
-rw-r--r-- | arch/arm64/kernel/perf_callchain.c | 15
1 file changed, 3 insertions, 12 deletions
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index 4a72c2727309..e9b7d99f4e3a 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -5,10 +5,10 @@
  * Copyright (C) 2015 ARM Limited
  */
 #include <linux/perf_event.h>
+#include <linux/stacktrace.h>
 #include <linux/uaccess.h>
 
 #include <asm/pointer_auth.h>
-#include <asm/stacktrace.h>
 
 struct frame_tail {
 	struct frame_tail __user *fp;
@@ -132,30 +132,21 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 	}
 }
 
-/*
- * Gets called by walk_stackframe() for every stackframe. This will be called
- * whist unwinding the stackframe and is like a subroutine return so we use
- * the PC.
- */
 static bool callchain_trace(void *data, unsigned long pc)
 {
 	struct perf_callchain_entry_ctx *entry = data;
-	perf_callchain_store(entry, pc);
-	return true;
+	return perf_callchain_store(entry, pc) == 0;
 }
 
 void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 			   struct pt_regs *regs)
 {
-	struct stackframe frame;
-
 	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
 
-	start_backtrace(&frame, regs->regs[29], regs->pc);
-	walk_stackframe(current, &frame, callchain_trace, entry);
+	arch_stack_walk(callchain_trace, entry, current, regs);
 }
 
 unsigned long perf_instruction_pointer(struct pt_regs *regs)