| author | Andrea Parri <andrea.parri@amarulasolutions.com> | 2018-05-15 01:01:29 +0200 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2018-05-15 08:11:15 +0200 |
| commit | 1362ae43c503a4e333ab6948fc4c6e0e794e1558 (patch) | |
| tree | 13b7e8dc16be2111d352a59d4594cd57a67b7d6a /include/asm-generic | |
| parent | locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked() (diff) | |
| download | linux-1362ae43c503a4e333ab6948fc4c6e0e794e1558.tar.xz linux-1362ae43c503a4e333ab6948fc4c6e0e794e1558.zip | |
locking/spinlocks: Clean up comment and #ifndef for {,queued_}spin_is_locked()
Removes "#ifndef queued_spin_is_locked" from the generic code: this is
unused and it's reasonable to conclude that it will continue to be unused.
Also removes the comment about spin_is_locked() from mutex_is_locked():
the comment remains valid but not particularly useful.
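For context, the removed guard follows the usual asm-generic override idiom:
an architecture header included earlier may pre-define the symbol and thereby
suppress the generic fallback. Below is a minimal userspace sketch of that
idiom only; arch_feature_is_enabled() is a hypothetical name for illustration,
not a kernel interface.

```c
/*
 * Sketch of the asm-generic override idiom the removed "#ifndef" enabled
 * (illustrative name; not the kernel's actual code).
 *
 * An architecture-specific header included before this point could supply
 * its own arch_feature_is_enabled() plus
 * "#define arch_feature_is_enabled arch_feature_is_enabled", and the
 * generic fallback below would then be skipped.  No architecture does this
 * for queued_spin_is_locked(), which is why its guard was dead code.
 */
#include <stdio.h>

#ifndef arch_feature_is_enabled
static inline int arch_feature_is_enabled(void)
{
	return 1;	/* generic fallback: used when no arch override exists */
}
#endif

int main(void)
{
	printf("feature enabled: %d\n", arch_feature_is_enabled());
	return 0;
}
```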
Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Cc: stern@rowland.harvard.edu
Link: http://lkml.kernel.org/r/1526338889-7003-3-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/asm-generic')
-rw-r--r-- | include/asm-generic/qspinlock.h | 2 |
1 file changed, 0 insertions, 2 deletions
```diff
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index a8ed0a352d75..9cc457597ddf 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -26,7 +26,6 @@
  * @lock: Pointer to queued spinlock structure
  * Return: 1 if it is locked, 0 otherwise
  */
-#ifndef queued_spin_is_locked
 static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 {
 	/*
@@ -35,7 +34,6 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 	 */
 	return atomic_read(&lock->val);
 }
-#endif
 
 /**
  * queued_spin_value_unlocked - is the spinlock structure unlocked?
```
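As a rough illustration of the check the generic implementation performs, here
is a userspace C11 sketch; toy_qspinlock and toy_spin_is_locked() are made-up
names, not the kernel API. It only mirrors the idea behind
atomic_read(&lock->val): the lock word is read atomically and any nonzero
value is treated as "locked".

```c
/*
 * Userspace sketch (C11 atomics, not kernel code) of the generic
 * queued_spin_is_locked() check: the lock is considered held whenever its
 * value word is nonzero, regardless of how that state is encoded.
 */
#include <stdatomic.h>
#include <stdio.h>

struct toy_qspinlock {
	atomic_int val;		/* 0 = unlocked, any nonzero state = locked */
};

static inline int toy_spin_is_locked(struct toy_qspinlock *lock)
{
	return atomic_load_explicit(&lock->val, memory_order_relaxed) != 0;
}

int main(void)
{
	struct toy_qspinlock lock = { .val = 0 };

	printf("locked? %d\n", toy_spin_is_locked(&lock));	/* prints 0 */
	atomic_store(&lock.val, 1);				/* pretend the lock was acquired */
	printf("locked? %d\n", toy_spin_is_locked(&lock));	/* prints 1 */
	return 0;
}
```

The kernel's queued spinlock packs locked, pending, and tail state into that
single word; the sketch only reproduces the "any nonzero value means locked"
test shown in the diff above.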