commit		2406b76fe815dbc7f97a19995b8d80eacc887895 (patch)
author		Wei Yang <richard.weiyang@linux.alibaba.com>	2020-04-02 06:06:16 +0200
committer	Linus Torvalds <torvalds@linux-foundation.org>	2020-04-02 18:35:27 +0200
tree		6b29743a8b0556347e9e188f7dbb4dd6d10359d7 /mm/swap_slots.c
parent		mm/swapfile: fix data races in try_to_unuse() (diff)
download	linux-2406b76fe815dbc7f97a19995b8d80eacc887895.tar.xz
mm/swap_slots.c: assign|reset cache slot by value directly
Currently we use a temporary pointer, pentry, to transfer and reset the swap
cache slot, which is a little redundant.  The swap cache slot stores the entry
value directly, so assigning and resetting it by value is more straightforward.

Also merge the else and the inner if into an else-if, since this is the only
case in which we refill and retry the swap cache.
Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Link: http://lkml.kernel.org/r/20200311055352.50574-1-richard.weiyang@linux.alibaba.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/swap_slots.c')
 mm/swap_slots.c | 12 ++++++------
 1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 63a7b4563a57..0975adc72253 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -309,7 +309,7 @@ direct_free:
 swp_entry_t get_swap_page(struct page *page)
 {
-	swp_entry_t entry, *pentry;
+	swp_entry_t entry;
 	struct swap_slots_cache *cache;
 
 	entry.val = 0;
 
@@ -336,13 +336,11 @@ swp_entry_t get_swap_page(struct page *page)
 	if (cache->slots) {
 repeat:
 		if (cache->nr) {
-			pentry = &cache->slots[cache->cur++];
-			entry = *pentry;
-			pentry->val = 0;
+			entry = cache->slots[cache->cur];
+			cache->slots[cache->cur++].val = 0;
 			cache->nr--;
-		} else {
-			if (refill_swap_slots_cache(cache))
-				goto repeat;
+		} else if (refill_swap_slots_cache(cache)) {
+			goto repeat;
 		}
 	}
 	mutex_unlock(&cache->alloc_lock);