author		Minchan Kim <minchan@kernel.org>	2016-07-27 00:23:14 +0200
committer	Linus Torvalds <torvalds@linux-foundation.org>	2016-07-27 01:19:19 +0200
commit		1b8320b620d6caa5879380f83f3884908ceedd4a (patch)
tree		fc896cc89721fa4da16c0378271a791bd6b6bd33 /mm
parent		zsmalloc: keep max_object in size_class (diff)
zsmalloc: use bit_spin_lock
Use the kernel's standard bit spin-lock instead of the custom open-coded version. The custom version even has a bug: it does not disable preemption. The only reason this never caused a problem is that it was always used inside a section that already had preemption disabled by the class->lock spinlock, so there is no need to send this to stable.
Link: http://lkml.kernel.org/r/1464736881-24886-6-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
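For context on why the switch also closes the preemption gap: bit_spin_lock() disables preemption before spinning on test_and_set_bit_lock(), which the open-coded pin_tag() loop never did. Below is a simplified sketch of that behaviour, paraphrased from include/linux/bit_spinlock.h rather than quoted verbatim (the UP and lockdep variants are omitted):

static inline void bit_spin_lock(int bitnum, unsigned long *addr)
{
	/*
	 * Preemption is disabled before the bit is taken, so the lock
	 * holder cannot be scheduled away while others spin on the bit.
	 */
	preempt_disable();
	while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
		/* Back off with preemption enabled until the bit clears. */
		preempt_enable();
		do {
			cpu_relax();
		} while (test_bit(bitnum, addr));
		preempt_disable();
	}
}

bit_spin_trylock() and bit_spin_unlock() wrap test_and_set_bit_lock() and clear_bit_unlock() in the same preempt_disable()/preempt_enable() pairing, so trypin_tag() and unpin_tag() inherit the same guarantee.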
Diffstat (limited to 'mm')
-rw-r--r--	mm/zsmalloc.c	10
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 79295c73dc9f..39f29aedd5d6 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -868,21 +868,17 @@ static unsigned long obj_idx_to_offset(struct page *page,
 
 static inline int trypin_tag(unsigned long handle)
 {
-	unsigned long *ptr = (unsigned long *)handle;
-
-	return !test_and_set_bit_lock(HANDLE_PIN_BIT, ptr);
+	return bit_spin_trylock(HANDLE_PIN_BIT, (unsigned long *)handle);
 }
 
 static void pin_tag(unsigned long handle)
 {
-	while (!trypin_tag(handle));
+	bit_spin_lock(HANDLE_PIN_BIT, (unsigned long *)handle);
 }
 
 static void unpin_tag(unsigned long handle)
 {
-	unsigned long *ptr = (unsigned long *)handle;
-
-	clear_bit_unlock(HANDLE_PIN_BIT, ptr);
+	bit_spin_unlock(HANDLE_PIN_BIT, (unsigned long *)handle);
 }
 
 static void reset_page(struct page *page)