author     Sebastian Andrzej Siewior <bigeasy@linutronix.de>  2024-07-04 19:03:40 +0200
committer  Peter Zijlstra <peterz@infradead.org>              2024-07-09 13:26:36 +0200
commit     16b9569df9d2ab07eeee075cb7895e9d3e08e8f0
tree       f90b1ab059207cb3715b832d260c2f4d3064eaf3 /kernel/events
parent     perf: Move swevent_htable::recursion into task_struct.
perf: Don't disable preemption in perf_pending_task().
perf_pending_task() is invoked in task context and disables preemption
because perf_swevent_get_recursion_context() used to access per-CPU
variables. The other reason is to create an RCU read section while
accessing the perf_event.

The recursion counter is no longer a per-CPU counter, so disabling
preemption is no longer required. The RCU read section is still needed
and must now be created explicitly.

Replace the preemption-disabled section with an explicit RCU read
section.
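The lifetime rule the patch relies on can be sketched in userspace with
liburcu. This is an illustrative analogue with made-up names (struct
event, global_event, reader()), not the kernel code: a reader accesses
the object inside an explicit rcu_read_lock()/rcu_read_unlock() pair,
and the writer unpublishes the object and lets a grace period elapse via
synchronize_rcu() before freeing it, which guarantees the reader's
access has completed.

/*
 * Illustrative userspace analogue (liburcu), not kernel code.
 * Build: gcc rcu_sketch.c -o rcu_sketch -lurcu -lpthread
 */
#include <stdio.h>
#include <stdlib.h>
#include <urcu.h>

struct event {
	int pending_work;
};

static struct event *global_event;

static void reader(void)
{
	struct event *ev;

	rcu_read_lock();	/* explicit RCU read section, as perf_pending_task() now uses */
	ev = rcu_dereference(global_event);
	if (ev)
		printf("pending_work=%d\n", ev->pending_work);
	rcu_read_unlock();
}

int main(void)
{
	struct event *ev = calloc(1, sizeof(*ev));

	rcu_register_thread();			/* liburcu: each RCU-using thread registers */
	ev->pending_work = 1;
	rcu_assign_pointer(global_event, ev);	/* publish */

	reader();

	rcu_assign_pointer(global_event, NULL);	/* unpublish */
	synchronize_rcu();			/* wait for all readers to finish */
	free(ev);				/* safe: no reader can still see ev */

	rcu_unregister_thread();
	return 0;
}

Before this change, the preempt-disabled region played the role of the
rcu_read_lock()/rcu_read_unlock() pair; once the recursion counter moved
into task_struct (see the parent commit), the explicit RCU primitives
are all that is still needed to keep the event alive.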
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20240704170424.1466941-7-bigeasy@linutronix.de
Diffstat (limited to 'kernel/events')
-rw-r--r--  kernel/events/core.c  11
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index b5232257bc83..96e03d6b52d1 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5208,10 +5208,9 @@ static void perf_pending_task_sync(struct perf_event *event)
 	}
 
 	/*
-	 * All accesses related to the event are within the same
-	 * non-preemptible section in perf_pending_task(). The RCU
-	 * grace period before the event is freed will make sure all
-	 * those accesses are complete by then.
+	 * All accesses related to the event are within the same RCU section in
+	 * perf_pending_task(). The RCU grace period before the event is freed
+	 * will make sure all those accesses are complete by then.
 	 */
 	rcuwait_wait_event(&event->pending_work_wait, !event->pending_work, TASK_UNINTERRUPTIBLE);
 }
@@ -6831,7 +6830,7 @@ static void perf_pending_task(struct callback_head *head)
 	 * critical section as the ->pending_work reset. See comment in
 	 * perf_pending_task_sync().
 	 */
-	preempt_disable_notrace();
+	rcu_read_lock();
 	/*
 	 * If we 'fail' here, that's OK, it means recursion is already disabled
 	 * and we won't recurse 'further'.
@@ -6844,10 +6843,10 @@ static void perf_pending_task(struct callback_head *head)
 		local_dec(&event->ctx->nr_pending);
 		rcuwait_wake_up(&event->pending_work_wait);
 	}
+	rcu_read_unlock();
 
 	if (rctx >= 0)
 		perf_swevent_put_recursion_context(rctx);
-	preempt_enable_notrace();
 }
 
 #ifdef CONFIG_GUEST_PERF_EVENTS
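The first hunk's comment describes a handshake that can likewise be
sketched in userspace (liburcu plus pthreads; all names are invented,
and the kernel's rcuwait_wait_event()/rcuwait_wake_up() are approximated
here with a condition variable): the worker clears pending_work and
wakes the waiter from inside its RCU read section, so a grace period
begun after the wake-up cannot end before the worker's accesses to the
event have completed.

/*
 * Illustrative userspace analogue of the perf_pending_task() /
 * perf_pending_task_sync() handshake; not kernel code.
 * Build: gcc rcuwait_sketch.c -o rcuwait_sketch -lurcu -lpthread
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>
#include <urcu.h>

struct event {
	pthread_mutex_t lock;
	pthread_cond_t done;
	bool pending_work;
};

static void *worker(void *arg)
{
	struct event *ev = arg;

	rcu_register_thread();
	rcu_read_lock();			/* like perf_pending_task() */
	/* ... perform the deferred work on ev ... */
	pthread_mutex_lock(&ev->lock);
	ev->pending_work = false;
	pthread_cond_signal(&ev->done);		/* like rcuwait_wake_up() */
	pthread_mutex_unlock(&ev->lock);
	rcu_read_unlock();			/* read section ends after the wake-up */
	rcu_unregister_thread();
	return NULL;
}

int main(void)
{
	struct event *ev = calloc(1, sizeof(*ev));
	pthread_t tid;

	pthread_mutex_init(&ev->lock, NULL);
	pthread_cond_init(&ev->done, NULL);
	ev->pending_work = true;

	rcu_register_thread();
	pthread_create(&tid, NULL, worker, ev);

	/* like rcuwait_wait_event(..., !event->pending_work, ...) */
	pthread_mutex_lock(&ev->lock);
	while (ev->pending_work)
		pthread_cond_wait(&ev->done, &ev->lock);
	pthread_mutex_unlock(&ev->lock);

	synchronize_rcu();	/* the worker's RCU section is over by now */
	pthread_join(tid, NULL);
	free(ev);		/* safe: the grace period has elapsed */

	rcu_unregister_thread();
	return 0;
}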