author | Hou Tao <houtao1@huawei.com> | 2020-09-15 16:07:50 +0200
---|---|---
committer | Peter Zijlstra <peterz@infradead.org> | 2020-09-16 16:26:56 +0200
commit | e6b1a44eccfcab5e5e280be376f65478c3b2c7a2 (patch) |
tree | 2175cb1bc02e8b795a2ebb5d3fe5263539e5cb4f /kernel/locking |
parent | locking/lockdep: Fix "USED" <- "IN-NMI" inversions (diff) |
locking/percpu-rwsem: Use this_cpu_{inc,dec}() for read_count
The __this_cpu*() accessors are (in general) IRQ-unsafe which, given
that percpu-rwsem is a blocking primitive, should be just fine.
However, file_end_write() is used from IRQ context and will cause
load-store issues (lost updates to read_count) on architectures where
the per-cpu accessors are not natively IRQ-safe.
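To illustrate the failure mode, here is a hedged sketch (not part of the patch; the helper name is made up and the expansion is simplified from the kernel's generic per-cpu fallbacks):

```c
/*
 * Illustrative sketch only: on an architecture without natively IRQ-safe
 * per-cpu ops, __this_cpu_inc(*sem->read_count) is roughly a plain
 * read-modify-write with no interrupt protection.
 */
static inline void sketch_raw_inc(unsigned int *cnt)
{
	*cnt += 1;		/* load cnt; add 1; store cnt */
}

/*
 * Lost update when an interrupt lands inside that sequence:
 *
 *   task context:  loads read_count == N
 *     <IRQ>        file_end_write() -> __this_cpu_dec(): stores N - 1
 *   task resumes:  stores N + 1     -> the IRQ's decrement is lost
 */
```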
Fix it by using the IRQ-safe this_cpu_*() for operations on
read_count. This will generate more expensive code on a number of
platforms, which might cause a performance regression for some of the
other percpu-rwsem users.
If any such regression is reported, we can consider alternative solutions.
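For context on the cost: architectures with a native single-instruction per-cpu increment (x86, for instance) pay nothing extra for this_cpu_inc(), while the generic fallback has to bracket the read-modify-write with interrupt disabling. A hedged sketch of that fallback (simplified from include/asm-generic/percpu.h; the helper name is made up):

```c
/*
 * Sketch of the generic IRQ-safe fallback behind this_cpu_inc();
 * simplified, name made up for illustration.
 */
static inline void sketch_irqsafe_inc(unsigned int *cnt)
{
	unsigned long flags;

	raw_local_irq_save(flags);	/* the extra cost vs. __this_cpu_inc() */
	*cnt += 1;			/* no IRQ can now split the load/store */
	raw_local_irq_restore(flags);
}
```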
Fixes: 70fe2f48152e ("aio: fix freeze protection of aio writes")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Link: https://lkml.kernel.org/r/20200915140750.137881-1-houtao1@huawei.com
Diffstat (limited to 'kernel/locking')
-rw-r--r-- | kernel/locking/percpu-rwsem.c | 4 |
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index 8bbafe3e5203..70a32a576f3f 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -45,7 +45,7 @@ EXPORT_SYMBOL_GPL(percpu_free_rwsem);
 
 static bool __percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
 {
-	__this_cpu_inc(*sem->read_count);
+	this_cpu_inc(*sem->read_count);
 
 	/*
 	 * Due to having preemption disabled the decrement happens on
@@ -71,7 +71,7 @@ static bool __percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
 	if (likely(!atomic_read_acquire(&sem->block)))
 		return true;
 
-	__this_cpu_dec(*sem->read_count);
+	this_cpu_dec(*sem->read_count);
 
 	/* Prod writer to re-evaluate readers_active_check() */
 	rcuwait_wake_up(&sem->writer);