author | Peter Zijlstra <peterz@infradead.org> | 2016-05-24 13:17:12 +0200
committer | Ingo Molnar <mingo@kernel.org> | 2016-06-14 11:55:14 +0200
commit | 33ac279677dcc2441cb93d8cb9cf7a74df62814d (patch)
tree | bcf918587b2425d1294b780b31ab67761e068e26 /ipc/sem.c
parent | locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire() (diff)
locking/barriers: Introduce smp_acquire__after_ctrl_dep()
Introduce smp_acquire__after_ctrl_dep(). The construct it names is not
uncommon, but a dedicated barrier for it has been lacking.
Use it to better express smp_rmb() uses in WRITE_ONCE(), the IPC
semaphore code and the qspinlock code.
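As an illustration (not part of this patch), here is a minimal kernel-style sketch of the pattern the new barrier targets; the struct, the function name, and the assumption that the helper is available via <linux/compiler.h> are mine, not the patch's:

```c
#include <linux/compiler.h>	/* READ_ONCE(); smp_acquire__after_ctrl_dep() (assumed to live here as of this series) */
#include <asm/processor.h>	/* cpu_relax() */

/* Hypothetical shared object: a producer writes ->data, then sets ->ready. */
struct item {
	int ready;
	int data;
};

static int consume(struct item *it)
{
	/* Spin until the producer publishes the item. */
	while (!READ_ONCE(it->ready))
		cpu_relax();

	/*
	 * Leaving the loop depends on the value loaded from ->ready, which
	 * is only a control dependency (LOAD->STORE order). Upgrade it to
	 * ACQUIRE so the ->data load below cannot be reordered before the
	 * ->ready load.
	 */
	smp_acquire__after_ctrl_dep();

	return it->data;
}
```

The ipc/sem.c hunks below use the same idea, with spin_unlock_wait() supplying the control-dependent loads.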
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'ipc/sem.c')
-rw-r--r-- | ipc/sem.c | 14 |
1 file changed, 2 insertions(+), 12 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index b3757ea0694b..84dff3df11a4 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -260,16 +260,6 @@ static void sem_rcu_free(struct rcu_head *head)
 }
 
 /*
- * spin_unlock_wait() and !spin_is_locked() are not memory barriers, they
- * are only control barriers.
- * The code must pair with spin_unlock(&sem->lock) or
- * spin_unlock(&sem_perm.lock), thus just the control barrier is insufficient.
- *
- * smp_rmb() is sufficient, as writes cannot pass the control barrier.
- */
-#define ipc_smp_acquire__after_spin_is_unlocked()	smp_rmb()
-
-/*
  * Wait until all currently ongoing simple ops have completed.
  * Caller must own sem_perm.lock.
  * New simple ops cannot start, because simple ops first check
@@ -292,7 +282,7 @@ static void sem_wait_array(struct sem_array *sma)
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);
 	}
-	ipc_smp_acquire__after_spin_is_unlocked();
+	smp_acquire__after_ctrl_dep();
 }
 
 /*
@@ -350,7 +340,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
	 *	complex_count++;
	 *	spin_unlock(sem_perm.lock);
	 */
-	ipc_smp_acquire__after_spin_is_unlocked();
+	smp_acquire__after_ctrl_dep();
 
	/*
	 * Now repeat the test of complex_count:
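For context, the IPC-private macro removed above was just an smp_rmb() layered on top of the control dependency, and the generic helper is expected to reduce to the same barrier on architectures that speculate loads. A rough sketch of the two spellings (the generic definition is an assumption here; it is not part of this hunk):

```c
/* Old, ipc/sem.c-private spelling, removed by this patch. */
#define ipc_smp_acquire__after_spin_is_unlocked()	smp_rmb()

/*
 * Generic spelling introduced by this series (assumed definition): the
 * control dependency already gives LOAD->STORE order, and the read
 * barrier adds LOAD->LOAD, together yielding (load-)ACQUIRE order.
 */
#define smp_acquire__after_ctrl_dep()			smp_rmb()
```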