author | Cong Wang <cwang@twopensource.com> | 2014-09-03 00:27:20 +0200
committer | Ingo Molnar <mingo@kernel.org> | 2014-09-09 06:53:42 +0200
commit | 3577af70a2ce4853d58e57d832e687d739281479 (patch)
tree | ed22848116d767d9be78cdb43a668dbec181efc3 /kernel/events
parent | kprobes/x86: Free 'optinsn' cache when range check fails (diff)
download | linux-3577af70a2ce4853d58e57d832e687d739281479.tar.xz, linux-3577af70a2ce4853d58e57d832e687d739281479.zip
perf: Fix a race condition in perf_remove_from_context()
We saw a kernel soft lockup in perf_remove_from_context():
it looks like the `perf` process, when exiting, could not get
out of the retry loop. Meanwhile, the target process was forking
a child. So either the target process should execute the smp
function call to deactivate the event (if it was running), or it should
do a context switch, which deactivates the event.
It seems we optimize out a context switch in perf_event_context_sched_out(),
and, more importantly, we still test an obsolete task pointer when
retrying, so nobody actually deactivates that event in this situation.
Fix it directly by reloading the task pointer in perf_remove_from_context().
This should cure the above soft lockup.
Signed-off-by: Cong Wang <cwang@twopensource.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: <stable@vger.kernel.org>
Link: http://lkml.kernel.org/r/1409696840-843-1-git-send-email-xiyou.wangcong@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/events')
-rw-r--r-- | kernel/events/core.c | 10
1 file changed, 10 insertions, 0 deletions
diff --git a/kernel/events/core.c b/kernel/events/core.c
index f9c1ed002dbc..d640a8b4dcbc 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1524,6 +1524,11 @@ retry:
 	 */
 	if (ctx->is_active) {
 		raw_spin_unlock_irq(&ctx->lock);
+		/*
+		 * Reload the task pointer, it might have been changed by
+		 * a concurrent perf_event_context_sched_out().
+		 */
+		task = ctx->task;
 		goto retry;
 	}
@@ -1967,6 +1972,11 @@ retry:
 	 */
 	if (ctx->is_active) {
 		raw_spin_unlock_irq(&ctx->lock);
+		/*
+		 * Reload the task pointer, it might have been changed by
+		 * a concurrent perf_event_context_sched_out().
+		 */
+		task = ctx->task;
 		goto retry;
 	}
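
For context, below is a simplified sketch of the retry loop in perf_remove_from_context() with this fix applied, paraphrased from kernel/events/core.c of this era. It is not the exact kernel code: the per-CPU event path, the remove_event wrapper and error handling are omitted, and the arguments are abbreviated. The helper names (task_function_call(), raw_spin_lock_irq(), list_del_event()) are the real kernel APIs; the sketch only shows why reloading task before goto retry matters.

/*
 * Simplified sketch of perf_remove_from_context() with the fix applied
 * (details elided; see kernel/events/core.c for the real code).
 */
static void perf_remove_from_context(struct perf_event *event)
{
	struct perf_event_context *ctx = event->ctx;
	struct task_struct *task = ctx->task;	/* snapshot of the owning task */

retry:
	/*
	 * Ask the CPU running 'task' to deactivate and remove the event.
	 * If 'task' is stale, this keeps failing and we loop forever.
	 */
	if (!task_function_call(task, __perf_remove_from_context, event))
		return;

	raw_spin_lock_irq(&ctx->lock);
	if (ctx->is_active) {
		raw_spin_unlock_irq(&ctx->lock);
		/*
		 * The fix: a concurrent perf_event_context_sched_out()
		 * (e.g. while the target forks) may have handed the context
		 * to a different task, so reload the pointer before retrying.
		 */
		task = ctx->task;
		goto retry;
	}

	/*
	 * The context is not active and we hold ctx->lock, so the task
	 * cannot be scheduled in; it is safe to remove the event here.
	 */
	list_del_event(event, ctx);
	raw_spin_unlock_irq(&ctx->lock);
}

Without the reload, a concurrent perf_event_context_sched_out() during fork can hand the context to another task while the snapshot still points at the old one, so the loop keeps targeting a task that will never deactivate the event, which is the soft lockup described above.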