author    Will Deacon <will.deacon@arm.com>         2016-09-05 12:56:05 +0200
committer Catalin Marinas <catalin.marinas@arm.com> 2016-09-09 13:33:48 +0200
commit    872c63fbf9e153146b07f0cece4da0d70b283eeb (patch)
tree      86880c47da4f3557f5673fdb1cb3e00a660fff7d /arch/arm64/include/asm/percpu.h
parent    Linux 4.8-rc5 (diff)
arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()
smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation
to a full barrier, such that prior stores are ordered with respect to
loads and stores occurring inside the critical section.
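The intended ordering can be sketched as follows (an illustrative
kernel-style fragment, not from the patch; the variables x, y, lock and r
are hypothetical):

```c
/* CPU 0 */
WRITE_ONCE(x, 1);               /* prior store                          */
smp_mb__before_spinlock();      /* intended: full barrier, so the store
                                 * to x is ordered before everything in
                                 * the critical section                 */
spin_lock(&lock);
r = READ_ONCE(y);               /* load inside the critical section must
                                 * not be satisfied before the store to
                                 * x is visible                         */
spin_unlock(&lock);
```

With only smp_wmb() here, stores are ordered against stores, but nothing
orders the prior store against the load of y.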
Unfortunately, the core code defines the barrier as smp_wmb(), which
is insufficient to provide the required ordering guarantees when used in
conjunction with our load-acquire-based spinlock implementation.
This patch overrides the arm64 definition of smp_mb__before_spinlock()
to map to a full smp_mb().
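The override described above presumably amounts to a one-line macro
definition in the arm64 spinlock header (a sketch; the exact file and
placement are assumptions, not shown in this view of the patch):

```c
/* arch/arm64/include/asm/spinlock.h (sketch) */
#define smp_mb__before_spinlock()	smp_mb()
```

This replaces the generic smp_wmb()-based definition, which orders only
store-store, with a full barrier that also orders prior stores against
the load-acquire used to take the lock.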
Cc: <stable@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Diffstat (limited to 'arch/arm64/include/asm/percpu.h')
0 files changed, 0 insertions, 0 deletions