| author | Frederic Weisbecker <frederic@kernel.org> | 2022-03-14 14:37:38 +0100 |
|---|---|---|
| committer | Paul E. McKenney <paulmck@kernel.org> | 2022-04-21 01:51:11 +0200 |
| commit | 70ae7b0ce03347fab35d6d8df81e1165d7ea8045 (patch) | |
| tree | d8e176e3a2de6d7eed80527af6bac59de704d751 /kernel/rcu/tree.c | |
| parent | rcu: Print number of online CPUs in RCU CPU stall-warning messages (diff) | |
rcu: Fix preemption mode check on synchronize_rcu[_expedited]()
An early check in synchronize_rcu[_expedited]() tries to determine whether
the current CPU is effectively running in UP mode on an SMP non-preemptible
kernel, in which case there is no need to start a grace period: the caller's
currently assumed quiescent state is all we need.
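For context, a condensed, paraphrased sketch of what that early check looked like
before this patch; the real function is rcu_blocking_is_gp() in kernel/rcu/tree.c,
and the single-CPU test via rcu_state.n_online_cpus is paraphrased from the
surrounding code rather than quoted verbatim:

```c
/* Paraphrased pre-patch sketch of rcu_blocking_is_gp(); not verbatim upstream code. */
static int rcu_blocking_is_gp(void)
{
	int ret;

	/* Compile-time test only: true on CONFIG_PREEMPT/CONFIG_PREEMPT_RT builds. */
	if (IS_ENABLED(CONFIG_PREEMPTION))
		return rcu_scheduler_active == RCU_SCHEDULER_INACTIVE;

	might_sleep();  /* Check for RCU read-side critical section. */
	preempt_disable();
	/* With only one CPU online, the caller's blocking context is the grace period. */
	ret = READ_ONCE(rcu_state.n_online_cpus) <= 1;
	preempt_enable();
	return ret;
}
```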
However, this check doesn't take into account the preemption mode selected at
boot under CONFIG_PREEMPT_DYNAMIC=y, so it misses a possible early return when
the running flavour is "none" or "voluntary".
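In other words, on a CONFIG_PREEMPT_DYNAMIC=y kernel the compile-time test no
longer says anything useful: full preemption is always built in, and the
effective flavour is only known at boot (preempt=none/voluntary/full). A hedged
sketch of the distinction, using the mainlined accessors declared in
<linux/sched.h>; the helper name here is made up purely for illustration:

```c
#include <linux/sched.h>	/* preempt_model_none()/voluntary()/full()/rt() */

/*
 * Illustrative helper (hypothetical name): decide whether the kernel is
 * effectively non-preemptible, i.e. whether merely being able to block
 * already implies a quiescent state for every task.
 */
static bool running_nonpreemptible_flavour(void)
{
	/*
	 * IS_ENABLED(CONFIG_PREEMPTION) is a build-time constant and is
	 * always true when CONFIG_PREEMPT_DYNAMIC=y.  The runtime accessors
	 * instead report the flavour selected on the kernel command line.
	 */
	return preempt_model_none() || preempt_model_voluntary();
}
```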
Use the shiny new preempt mode accessors to fix this. However,
avoid invoking them during early boot because doing so triggers a
WARN_ON_ONCE().
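The early-boot restriction comes from the accessors themselves: under
CONFIG_PREEMPT_DYNAMIC=y they warn if queried before the boot-selected model
has been applied. A paraphrased sketch of one such accessor (the upstream
implementation lives in kernel/sched/core.c and is generated by a macro):

```c
/* Paraphrased sketch of a PREEMPT_DYNAMIC preemption-model accessor. */
bool preempt_model_full(void)
{
	/* Splats when called before the boot-time flavour has been set up. */
	WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined);
	return preempt_dynamic_mode == preempt_dynamic_full;
}
```

That is why the hunk below tests rcu_scheduler_active == RCU_SCHEDULER_INACTIVE
first and relies on || short-circuiting to keep the accessors from being
invoked during early boot.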
[ paulmck: Update for mainlined API. ]
Reported-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Uladzislau Rezki <uladzislau.rezki@sony.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Diffstat (limited to 'kernel/rcu/tree.c')
-rw-r--r-- | kernel/rcu/tree.c | 4 |
1 file changed, 3 insertions, 1 deletion
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 29669070348e..d3caa82b9954 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3741,7 +3741,9 @@ static int rcu_blocking_is_gp(void)
 {
 	int ret;
 
-	if (IS_ENABLED(CONFIG_PREEMPTION))
+	// Invoking preempt_model_*() too early gets a splat.
+	if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE ||
+	    preempt_model_full() || preempt_model_rt())
 		return rcu_scheduler_active == RCU_SCHEDULER_INACTIVE;
 	might_sleep();  /* Check for RCU read-side critical section. */
 	preempt_disable();
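For orientation, a paraphrased sketch of how the caller consumes this
predicate; synchronize_rcu() in the same file follows roughly this shape (not
verbatim upstream code):

```c
void synchronize_rcu(void)
{
	/* The caller's context already amounts to a full grace period. */
	if (rcu_blocking_is_gp())
		return;

	/* Otherwise wait for (or expedite) a real grace period. */
	if (rcu_gp_is_expedited())
		synchronize_rcu_expedited();
	else
		wait_rcu_gp(call_rcu);
}
```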