author     Waiman Long <longman@redhat.com>          2021-04-26 20:50:17 +0200
committer  Peter Zijlstra <peterz@infradead.org>     2021-05-06 15:33:49 +0200
commit     28ce0e70ecc30cc7d558a0304e6b816d70848f9a
tree       cf69b286807b78a02a43efbca299420f57cdef1b
parent     smp: Fix smp_call_function_single_async prototype
locking/qrwlock: Cleanup queued_write_lock_slowpath()
Make the code more readable by replacing the atomic_cmpxchg_acquire()
with an equivalent atomic_try_cmpxchg_acquire() and changing atomic_add()
to atomic_or().
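
For illustration, here is a minimal user-space sketch of the two cmpxchg
idioms, using C11 atomics as stand-ins for the kernel helpers. The my_*()
wrappers and the 0xff "locked" value are assumptions of this sketch, not
kernel code; they only mirror the shape of the kernel API.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { atomic_int counter; } my_atomic_t;

/* Kernel-style cmpxchg: returns the value observed before the exchange. */
static int my_cmpxchg_acquire(my_atomic_t *v, int old, int new)
{
	atomic_compare_exchange_strong_explicit(&v->counter, &old, new,
						memory_order_acquire,
						memory_order_relaxed);
	return old;		/* holds the observed value on failure */
}

/* Kernel-style try_cmpxchg: returns a bool, updates *old on failure. */
static bool my_try_cmpxchg_acquire(my_atomic_t *v, int *old, int new)
{
	return atomic_compare_exchange_strong_explicit(&v->counter, old, new,
						       memory_order_acquire,
						       memory_order_relaxed);
}

int main(void)
{
	my_atomic_t lock = { 0 };
	int cnts;

	/* Old idiom: acquire only if the observed old value was 0. */
	if (!atomic_load(&lock.counter) &&
	    my_cmpxchg_acquire(&lock, 0, 0xff) == 0)
		puts("acquired via cmpxchg idiom");

	atomic_store(&lock.counter, 0);

	/* New idiom: the bool return states the intent directly and the
	 * freshly read value is passed in as the expected old value. */
	if (!(cnts = atomic_load(&lock.counter)) &&
	    my_try_cmpxchg_acquire(&lock, &cnts, 0xff))
		puts("acquired via try_cmpxchg idiom");

	return 0;
}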
Among the architectures that use qrwlock, I could not find one that
defines atomic_add() but not atomic_or(), so changing atomic_add() to
atomic_or() should be fine.
Note that the previous use of atomic_add() isn't wrong: only one writer,
the wait_lock owner, can set the waiting flag, and the flag is cleared
later when the write lock is acquired.
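
As a small sanity check of that equivalence, the sketch below assumes a
qrwlock-style word where the waiting flag is bit 8 and each reader adds a
bias starting at bit 9; the QW_WAITING and QR_BIAS constants are
assumptions of the sketch, not the kernel's definitions.

#include <assert.h>
#include <stdio.h>

#define QW_WAITING	0x100U		/* waiting flag, assumed to be bit 8  */
#define QR_BIAS		(1U << 9)	/* per-reader increment, assumed       */

int main(void)
{
	unsigned int cnts = 3 * QR_BIAS;	/* three readers, flag clear */

	/* With the flag guaranteed clear, add and or yield the same word. */
	assert((cnts + QW_WAITING) == (cnts | QW_WAITING));

	/* or is also idempotent, so it cannot carry into the reader count
	 * even if the flag were already set; add would. */
	unsigned int once  = cnts | QW_WAITING;
	unsigned int twice = once | QW_WAITING;
	assert(once == twice);

	printf("cnts with waiting flag: 0x%x\n", once);
	return 0;
}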
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20210426185017.19815-1-longman@redhat.com
-rw-r--r--	kernel/locking/qrwlock.c	6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index b94f3831e963..ec36b73f4733 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -66,12 +66,12 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 	arch_spin_lock(&lock->wait_lock);
 
 	/* Try to acquire the lock directly if no reader is present */
-	if (!atomic_read(&lock->cnts) &&
-	    (atomic_cmpxchg_acquire(&lock->cnts, 0, _QW_LOCKED) == 0))
+	if (!(cnts = atomic_read(&lock->cnts)) &&
+	    atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED))
 		goto unlock;
 
 	/* Set the waiting flag to notify readers that a writer is pending */
-	atomic_add(_QW_WAITING, &lock->cnts);
+	atomic_or(_QW_WAITING, &lock->cnts);
 
 	/* When no more readers or writers, set the locked flag */
 	do {