author | Nick Piggin <nickpiggin@yahoo.com.au> | 2008-01-08 08:20:27 +0100 |
---|---|---|
committer | Christoph Lameter <christoph@stapp.engr.sgi.com> | 2008-02-08 02:47:42 +0100 |
commit | a76d354629ea46c449705970a2c0b9e9090d6f03 (patch) | |
tree | b2e1b9db59125e9e9a7866a8aff58165ac2ea1fd | |
parent | SLUB: Support for performance statistics (diff) | |
Use non-atomic unlock
SLUB can use the non-atomic version to unlock because the other page flags will not
be modified while the lock is held.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-rw-r--r-- | mm/slub.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index ac836d31e3be..bccfb6a17864 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1219,7 +1219,7 @@ static __always_inline void slab_lock(struct page *page)
 
 static __always_inline void slab_unlock(struct page *page)
 {
-	bit_spin_unlock(PG_locked, &page->flags);
+	__bit_spin_unlock(PG_locked, &page->flags);
 }
 
 static __always_inline int slab_trylock(struct page *page)
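For illustration only, here is a minimal user-space sketch of the difference the one-line change relies on; it is not the kernel's bit_spinlock.h implementation, and the demo_* names are hypothetical. The atomic unlock clears the lock bit with a read-modify-write, which tolerates concurrent updates to other bits of the same word. The non-atomic unlock is a plain load/store pair with release ordering, which is only correct under the changelog's assumption that nobody modifies the other flags while the lock is held.

```c
#include <stdatomic.h>
#include <stdio.h>

#define LOCK_BIT 0	/* stands in for PG_locked in page->flags */

/* Acquire: atomically set the lock bit, spinning while it is already set. */
static void demo_bit_spin_lock(atomic_ulong *flags)
{
	while (atomic_fetch_or_explicit(flags, 1UL << LOCK_BIT,
					memory_order_acquire) & (1UL << LOCK_BIT))
		;	/* spin: another owner holds the bit */
}

/* Atomic unlock: read-modify-write clear; safe even if other bits of
 * *flags are updated concurrently by other CPUs. */
static void demo_bit_spin_unlock_atomic(atomic_ulong *flags)
{
	atomic_fetch_and_explicit(flags, ~(1UL << LOCK_BIT),
				  memory_order_release);
}

/* Non-atomic unlock: plain load plus a releasing store.  If another CPU
 * changed a different bit between the load and the store, that update
 * would be lost -- which is exactly why this is only valid when the
 * other flags are never modified while the lock is held. */
static void demo_bit_spin_unlock_nonatomic(atomic_ulong *flags)
{
	unsigned long v = atomic_load_explicit(flags, memory_order_relaxed);

	atomic_store_explicit(flags, v & ~(1UL << LOCK_BIT),
			      memory_order_release);
}

int main(void)
{
	atomic_ulong flags = 0;

	demo_bit_spin_lock(&flags);
	/* ... critical section: nothing else touches the flags word ... */
	demo_bit_spin_unlock_nonatomic(&flags);

	demo_bit_spin_lock(&flags);
	demo_bit_spin_unlock_atomic(&flags);

	printf("flags = %lx\n", (unsigned long)atomic_load(&flags));
	return 0;
}
```

The practical upside is that the unlock path avoids an atomic read-modify-write, which is comparatively expensive on many architectures, so every slab_unlock() becomes a little cheaper.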