author | Steven Rostedt (VMware) <rostedt@goodmis.org> | 2018-11-19 14:07:12 +0100
committer | Steven Rostedt (VMware) <rostedt@goodmis.org> | 2018-11-28 02:31:54 +0100
commit | 39eb456dacb543de90d3bc6a8e0ac5cf51ac475e (patch)
tree | 790493fbeb31636acf0149ea92b146dc03f7c90d /kernel/bpf/queue_stack_maps.c
parent | function_graph: Make ftrace_push_return_trace() static (diff)
function_graph: Use new curr_ret_depth to manage depth instead of curr_ret_stack
Currently, the depth of the ret_stack is determined by the curr_ret_stack index.
The issue is that there is a race between the setting of curr_ret_stack and
the calling of the callback attached to the return of the function.
Commit 03274a3ffb44 ("tracing/fgraph: Adjust fgraph depth before calling
trace return callback") moved the calling of the callback to after the
setting of curr_ret_stack, even stating that it was safe to do so, when
in fact that ordering was the reason there was a barrier() there (yes, I
should have commented that barrier()).
Not only does the curr_ret_stack keep track of the current call graph depth,
it also keeps the ret_stack content from being overwritten by new data.
The function profiler uses the "subtime" variable of the ret_stack structure,
and moving curr_ret_stack before the callback allows interrupts to reuse the
same structure the callback is still using, corrupting the data and breaking
the profiler.
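The window can be illustrated with a small, self-contained C model (the my_*
names and return_callback() below are hypothetical stand-ins for illustration,
not the kernel's actual symbols): once the index is released before the
callback runs, an interrupt arriving in between may push a new entry into the
very slot the callback is still reading.

	/*
	 * Minimal user-space sketch of the ordering problem described above.
	 * The names are illustrative only.
	 */
	#include <stdio.h>

	struct ret_stack_entry {
		unsigned long		func;
		unsigned long long	subtime;	/* used by the function profiler */
	};

	static struct ret_stack_entry my_ret_stack[32];
	static int my_curr_ret_stack = -1;		/* index of the last in-use entry */

	static void return_callback(struct ret_stack_entry *entry)
	{
		/*
		 * The profiler still reads and writes entry->subtime here.  If
		 * my_curr_ret_stack was already decremented, an interrupt that
		 * pushes a new entry reuses this slot and corrupts subtime.
		 */
		printf("callback: func=%lx subtime=%llu\n",
		       entry->func, entry->subtime);
	}

	static void buggy_function_return(void)
	{
		struct ret_stack_entry *entry = &my_ret_stack[my_curr_ret_stack];

		my_curr_ret_stack--;		/* slot is released too early ...      */
		return_callback(entry);		/* ... while the callback still uses it */
	}

	int main(void)
	{
		my_curr_ret_stack = 0;
		my_ret_stack[0].func = 0x1234;
		my_ret_stack[0].subtime = 42;
		buggy_function_return();
		return 0;
	}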
To fix this, there need to be two variables: one to handle the call graph
depth and one to track where in the ret_stack data is still in use, as they
need to change at two different locations.
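A minimal sketch of that split, continuing the model above (again with
hypothetical names; barrier() is open-coded here, standing in for the existing
compiler barrier mentioned earlier): the depth counter may drop before the
callback, while the stack index keeps protecting the entry until the callback
has returned.

	static int my_curr_ret_depth;		/* call graph depth only */

	#define barrier()	__asm__ __volatile__("" ::: "memory")

	static void fixed_function_return(void)
	{
		struct ret_stack_entry *entry = &my_ret_stack[my_curr_ret_stack];

		my_curr_ret_depth--;		/* depth may change before the callback */
		return_callback(entry);		/* entry is still protected ...         */
		barrier();			/* ... until the callback is done       */
		my_curr_ret_stack--;		/* only now may the slot be reused      */
	}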
Cc: stable@kernel.org
Fixes: 03274a3ffb449 ("tracing/fgraph: Adjust fgraph depth before calling trace return callback")
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Diffstat (limited to 'kernel/bpf/queue_stack_maps.c')
0 files changed, 0 insertions, 0 deletions