author		Michel Lespinasse <walken@google.com>		2010-08-10 02:21:19 +0200
committer	Linus Torvalds <torvalds@linux-foundation.org>	2010-08-10 05:45:11 +0200
commit		424acaaeb3a3932d64a9b4bd59df6cf72c22d8f3 (patch)
tree		c3c55028aa6eff578bc6d4d984796c7ea1379061 /lib/rwsem.c
parent		rwsem: let RWSEM_WAITING_BIAS represent any number of waiting threads (diff)
download	linux-424acaaeb3a3932d64a9b4bd59df6cf72c22d8f3.tar.xz
		linux-424acaaeb3a3932d64a9b4bd59df6cf72c22d8f3.zip
rwsem: wake queued readers when writer blocks on active read lock
This change addresses the following situation:
- Thread A acquires the rwsem for read.
- Thread B tries to acquire the rwsem for write, notices there is already
an active owner for the rwsem.
- Thread C tries to acquire the rwsem for read, notices that thread B already
tried to acquire it.
- Thread C grabs the spinlock and queues itself on the wait queue.
- Thread B grabs the spinlock and queues itself behind C. At this point A is
the only remaining active owner on the rwsem.
In this situation thread B could notice that it was the last active writer
on the rwsem, and decide to wake C to let it proceed in parallel with A
since they both only want the rwsem for read.
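
To make the count arithmetic behind this concrete, here is a minimal
user-space sketch of the scenario, assuming the 32-bit x86 bias constants
and an x86-style xadd fast path; the walkthrough in main() is illustrative,
not an excerpt from the kernel:

/* User-space model of the rwsem count in the scenario above. The bias
 * constants mirror the 32-bit x86 definitions; everything else is a
 * hypothetical illustration, not kernel code. */
#include <assert.h>
#include <stdio.h>

#define RWSEM_ACTIVE_BIAS	0x00000001L
#define RWSEM_WAITING_BIAS	(-0x00010000L)
#define RWSEM_ACTIVE_READ_BIAS	RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS	(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

int main(void)
{
	long count = 0;			/* RWSEM_UNLOCKED_VALUE */
	long adjustment;

	/* Thread A: read fast path succeeds. */
	count += RWSEM_ACTIVE_READ_BIAS;

	/* Thread B: write fast path adds its bias and fails; the bias
	 * stays on the count until B queues itself in the slow path. */
	count += RWSEM_ACTIVE_WRITE_BIAS;

	/* Thread C: read fast path adds its bias and fails... */
	count += RWSEM_ACTIVE_READ_BIAS;
	/* ...then C queues first: the wait list was empty, so its
	 * slow-path adjustment also applies RWSEM_WAITING_BIAS. */
	count += -RWSEM_ACTIVE_READ_BIAS + RWSEM_WAITING_BIAS;

	/* Thread B queues behind C; the list is no longer empty. */
	adjustment = -RWSEM_ACTIVE_WRITE_BIAS;
	count += adjustment;

	/* A is now the only active owner: exactly one reader above the
	 * waiting bias, which is the new wakeup condition B checks. */
	assert(count == RWSEM_WAITING_BIAS + RWSEM_ACTIVE_READ_BIAS);
	assert(count > RWSEM_WAITING_BIAS &&
	       adjustment == -RWSEM_ACTIVE_WRITE_BIAS);
	printf("B wakes C with RWSEM_WAKE_READ_OWNED\n");
	return 0;
}

The final count of RWSEM_WAITING_BIAS + 1 is what lets B, a failed writer,
deduce that the only remaining active owner is a reader.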
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'lib/rwsem.c')
-rw-r--r--	lib/rwsem.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/lib/rwsem.c b/lib/rwsem.c
index a3e68bf5932e..318d435dcebb 100644
--- a/lib/rwsem.c
+++ b/lib/rwsem.c
@@ -67,6 +67,9 @@ __rwsem_do_wake(struct rw_semaphore *sem, int wake_type)
 		goto readers_only;
 
 	if (wake_type == RWSEM_WAKE_READ_OWNED)
+		/* Another active reader was observed, so wakeup is not
+		 * likely to succeed. Save the atomic op.
+		 */
 		goto out;
 
 	/* There's a writer at the front of the queue - try to grant it the
@@ -111,8 +114,8 @@ __rwsem_do_wake(struct rw_semaphore *sem, int wake_type)
	 * exclusively since we expect to succeed and run the final rwsem
 	 * count adjustment pretty soon. */
 	if (wake_type == RWSEM_WAKE_ANY &&
-	    (rwsem_atomic_update(0, sem) & RWSEM_ACTIVE_MASK))
-		/* Someone grabbed the sem already */
+	    rwsem_atomic_update(0, sem) < RWSEM_WAITING_BIAS)
+		/* Someone grabbed the sem for write already */
 		goto out;
 
 	/* Grant an infinite number of read locks to the readers at the front
@@ -187,9 +190,17 @@ rwsem_down_failed_common(struct rw_semaphore *sem,
 	/* we're now waiting on the lock, but no longer actively locking */
 	count = rwsem_atomic_update(adjustment, sem);
 
-	/* if there are no active locks, wake the front queued process(es) up */
-	if (!(count & RWSEM_ACTIVE_MASK))
+	/* If there are no active locks, wake the front queued process(es) up.
+	 *
+	 * Alternatively, if we're called from a failed down_write(), there
+	 * were already threads queued before us and there are no active
+	 * writers, the lock must be read owned; so we try to wake any read
+	 * locks that were queued ahead of us. */
+	if (count == RWSEM_WAITING_BIAS)
 		sem = __rwsem_do_wake(sem, RWSEM_WAKE_NO_ACTIVE);
+	else if (count > RWSEM_WAITING_BIAS &&
+		 adjustment == -RWSEM_ACTIVE_WRITE_BIAS)
+		sem = __rwsem_do_wake(sem, RWSEM_WAKE_READ_OWNED);
 
 	spin_unlock_irq(&sem->wait_lock);
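
The new test against RWSEM_WAITING_BIAS can classify the owner because,
once waiters are queued, the count carries a single RWSEM_WAITING_BIAS plus
the active biases: each active reader adds +1, while an active writer adds
RWSEM_ACTIVE_WRITE_BIAS and pulls the count below RWSEM_WAITING_BIAS. The
following user-space sketch assumes the 32-bit x86 constants; classify() is
a hypothetical helper, not kernel code:

/* Why comparing the count against RWSEM_WAITING_BIAS classifies the
 * owner. Only the constants mirror the (32-bit x86) kernel definitions. */
#include <stdio.h>

#define RWSEM_ACTIVE_BIAS	0x00000001L
#define RWSEM_WAITING_BIAS	(-0x00010000L)
#define RWSEM_ACTIVE_WRITE_BIAS	(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

/* For a count that already includes the one-time RWSEM_WAITING_BIAS
 * (i.e. waiters are queued): an active writer drags the count below
 * RWSEM_WAITING_BIAS, active readers lift it above. */
static const char *classify(long count)
{
	if (count == RWSEM_WAITING_BIAS)
		return "no active lockers";
	if (count > RWSEM_WAITING_BIAS)
		return "read owned";
	return "write owned";
}

int main(void)
{
	printf("%s\n", classify(RWSEM_WAITING_BIAS));
	printf("%s\n", classify(RWSEM_WAITING_BIAS + 2 * RWSEM_ACTIVE_BIAS));
	printf("%s\n", classify(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_WRITE_BIAS));
	return 0;
}

So when a failed down_write() (adjustment == -RWSEM_ACTIVE_WRITE_BIAS)
observes count > RWSEM_WAITING_BIAS, the remaining active owners can only
be readers, and waking the readers queued ahead of it is safe.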