author | Paul E. McKenney <paulmck@kernel.org> | 2019-11-28 01:36:45 +0100
---|---|---
committer | Paul E. McKenney <paulmck@kernel.org> | 2019-12-09 21:32:59 +0100
commit | df1e849ae4559544ff00ff5052eefe2479750539 (patch) |
tree | be1bac3d41c0d6daec68af70e174ff0896599430 | /Documentation/RCU
parent | rcu: Replace synchronize_sched_expedited_wait() "_sched" with "_rcu" (diff) |
rcu: Enable tick for nohz_full CPUs slow to provide expedited QS
An expedited grace period can be stalled by a nohz_full CPU looping
in kernel context. This possibility is currently handled by some
carefully crafted checks in rcu_read_unlock_special() that enlist help
from ksoftirqd when permitted by the scheduler. However, it is exactly
these checks that require that the scheduler avoid holding any of its rq
or pi locks across rcu_read_unlock() without also having held them
across the entire RCU read-side critical section.
It would therefore be very nice if expedited grace periods could
handle nohz_full CPUs looping in kernel context without such checks.
This commit therefore adds code to the expedited grace period's wait
and cleanup code that forces the scheduler-clock interrupt on for CPUs
that fail to quickly supply a quiescent state. "Quickly" is currently
a hard-coded single-jiffy delay.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Diffstat (limited to 'Documentation/RCU')
0 files changed, 0 insertions, 0 deletions