author		Steven Rostedt <srostedt@redhat.com>	2009-06-02 20:01:19 +0200
committer	Steven Rostedt <rostedt@goodmis.org>	2009-06-02 20:42:17 +0200
commit		26c01624a2a40f8a4ddf6449b65c9b1c418d0e72 (patch)
tree		eeff81aa0fa56ba1f2c180d4ec6e64cb31af898e /kernel/trace/ftrace.c
parent		function-graph: enable the stack after initialization of other variables (diff)
function-graph: add memory barriers for accessing task's ret_stack
The code that handles the task's ret_stack allocation for every task
assumes that only an interrupt can cause issues (even though interrupts
are disabled).

In reality, the code is allocating the ret_stack for tasks that may be
running on other CPUs, and a compiler barrier alone is not sufficient to
handle this case.
[ Impact: prevent crash due to use of uninitialized ret_stack variables ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Diffstat (limited to 'kernel/trace/ftrace.c')
-rw-r--r--	kernel/trace/ftrace.c	8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 20e066065eb3..1664d3f33d38 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2580,12 +2580,12 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 		}
 
 		if (t->ret_stack == NULL) {
-			t->curr_ret_stack = -1;
-			/* Make sure IRQs see the -1 first: */
-			barrier();
-			t->ret_stack = ret_stack_list[start++];
 			atomic_set(&t->tracing_graph_pause, 0);
 			atomic_set(&t->trace_overrun, 0);
+			t->curr_ret_stack = -1;
+			/* Make sure the tasks see the -1 first: */
+			smp_wmb();
+			t->ret_stack = ret_stack_list[start++];
 		}
 	} while_each_thread(g, t);
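
The patch follows the usual lock-free pointer-publication pattern: initialize every field of the structure, issue a write barrier, and only then store the pointer that other CPUs test for NULL; a reader pairs this with a read barrier (or an acquiring load) once it sees a non-NULL pointer. The following userspace sketch illustrates that pairing. It is not taken from the kernel tree: the demo_task structure, the writer()/reader() threads, and the published pointer are hypothetical stand-ins for t->curr_ret_stack and t->ret_stack, and C11 release/acquire ordering plays the role of smp_wmb() plus the matching read-side barrier.

/* Minimal pointer-publication sketch (illustrative only, not kernel code). */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

struct demo_task {
	int curr_ret_stack;			/* stands in for t->curr_ret_stack */
};

static struct demo_task slot;
static _Atomic(struct demo_task *) published;	/* stands in for t->ret_stack, NULL until set */

static void *writer(void *arg)
{
	slot.curr_ret_stack = -1;		/* initialize all fields first */
	/* Release store: the field writes become visible before the pointer
	 * does, the role smp_wmb() plays in the patch above. */
	atomic_store_explicit(&published, &slot, memory_order_release);
	return NULL;
}

static void *reader(void *arg)
{
	struct demo_task *p;

	/* Acquire load pairs with the release store above. */
	while (!(p = atomic_load_explicit(&published, memory_order_acquire)))
		sched_yield();
	printf("curr_ret_stack = %d\n", p->curr_ret_stack);	/* always -1 */
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&w, NULL, writer, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Build with -pthread. Without the release/acquire pairing (smp_wmb() and its read-side counterpart in the kernel), a task running on another CPU could observe a non-NULL ret_stack pointer before the write of curr_ret_stack = -1, which is exactly the uninitialized-use crash the patch prevents.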