author     Paul E. McKenney <paulmck@linux.vnet.ibm.com>  2014-08-05 14:23:35 +0200
committer  Paul E. McKenney <paulmck@linux.vnet.ibm.com>  2014-09-08 01:27:32 +0200
commit     01a81330344b09028881c953a51d1106a9e63518
tree       73428522f81fab6c4f5797d68a3930eeede67b52
parent     rcu: Make rcu_tasks_kthread()'s GP-wait loop allow preemption
rcu: Remove redundant preempt_disable() from rcu_note_voluntary_context_switch()
In theory, synchronize_sched() requires a read-side critical section
to order against. In practice, preemption can be thought of as
being disabled across every machine instruction, at least for those
machine instructions that are not in the idle loop and not on offline
CPUs. So this commit removes the redundant preempt_disable() from
rcu_note_voluntary_context_switch().
Please note that the single instruction in question is the store of
zero to ->rcu_tasks_holdout. The "if" is simply a performance optimization
that avoids unnecessary stores. To see this, keep in mind that both
the "if" condition and the store are in a quiescent state. Therefore,
even if the task is preempted for a full grace period (presumably due
to its having done a context switch beforehand), the store will be
recording a legitimate quiescent state.
Reported-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Conflicts:
include/linux/rcupdate.h
-rw-r--r--  include/linux/rcupdate.h  2
1 file changed, 0 insertions, 2 deletions
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index a3123f53a4ce..132e1e34cdca 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -326,10 +326,8 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
 extern struct srcu_struct tasks_rcu_exit_srcu;
 #define rcu_note_voluntary_context_switch(t) \
 	do { \
-		preempt_disable(); /* Exclude synchronize_sched(); */ \
 		if (ACCESS_ONCE((t)->rcu_tasks_holdout)) \
 			ACCESS_ONCE((t)->rcu_tasks_holdout) = false; \
-		preempt_enable(); \
 	} while (0)
 #else /* #ifdef CONFIG_TASKS_RCU */
 #define TASKS_RCU(x) do { } while (0)
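
For readers who want to see the ->rcu_tasks_holdout handshake in isolation, here is a minimal userspace sketch. It is not kernel code: C11 atomics and pthreads stand in for ACCESS_ONCE() and for the RCU-tasks grace-period kthread, and the names task_holdout, note_voluntary_context_switch() and gp_thread() are illustrative inventions. It models the point the commit message makes: the whole body of the helper runs at a voluntary-context-switch point, i.e. in a quiescent state, so the flag-clearing store needs no preemption exclusion and even a delayed store still reports a legitimate quiescent state.

/*
 * Hypothetical userspace model of the RCU-tasks holdout handshake.
 * Build with: cc -pthread holdout.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool task_holdout;	/* stands in for t->rcu_tasks_holdout */
static atomic_bool stop;

/*
 * Stand-in for rcu_note_voluntary_context_switch(): both the check and
 * the store happen at a quiescent point, so no exclusion is needed.
 * The "if" only avoids a store when no grace period is waiting on us.
 */
static void note_voluntary_context_switch(void)
{
	if (atomic_load_explicit(&task_holdout, memory_order_relaxed))
		atomic_store_explicit(&task_holdout, false,
				      memory_order_relaxed);
}

/* The "task": periodically reaches a voluntary context-switch point. */
static void *task_thread(void *arg)
{
	(void)arg;
	while (!atomic_load(&stop)) {
		/* ... do work ... */
		note_voluntary_context_switch();
		usleep(1000);
	}
	return NULL;
}

/*
 * The "grace-period kthread": marks the task as a holdout, then waits
 * for the task to report a quiescent state by clearing the flag.
 */
static void *gp_thread(void *arg)
{
	(void)arg;
	atomic_store(&task_holdout, true);
	while (atomic_load(&task_holdout))
		usleep(1000);
	printf("task reported a voluntary-context-switch quiescent state\n");
	atomic_store(&stop, true);
	return NULL;
}

int main(void)
{
	pthread_t task, gp;

	pthread_create(&task, NULL, task_thread, NULL);
	pthread_create(&gp, NULL, gp_thread, NULL);
	pthread_join(gp, NULL);
	pthread_join(task, NULL);
	return 0;
}

In this model, delaying note_voluntary_context_switch() between the load and the store changes nothing: the waiting thread only ever observes the flag transition from true to false, which is exactly the quiescent-state report the grace-period machinery is waiting for.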