author     Peter Zijlstra <peterz@infradead.org>   2019-06-13 15:43:20 +0200
committer  Paul Burton <paul.burton@mips.com>      2019-08-31 12:06:02 +0200
commit     42344113ba7a1ed7b5654cd5270af0d5698d8521 (patch)
tree       7ac40f4f47e97e27ff2909c0a58023ba6fb41b63 /arch/mips/include/asm/atomic.h
parent     mips/atomic: Fix loongson_llsc_mb() wreckage (diff)
mips/atomic: Fix smp_mb__{before,after}_atomic()
Recent probing at the Linux Kernel Memory Model uncovered a
'surprise'. Strongly ordered architectures where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() are a simple barrier() (such as MIPS
without WEAK_REORDERING_BEYOND_LLSC) fail for:
*x = 1;
atomic_inc(u);
smp_mb__after_atomic();
r0 = *y;
Because, while the atomic_inc() implies memory order, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:
atomic_inc(u);
*x = 1;
smp_mb__after_atomic();
r0 = *y;
Which the CPU is then allowed to re-order (under TSO rules) like:
atomic_inc(u);
r0 = *y;
*x = 1;
And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.
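
For context, the snippet above is one half of a classic store-buffering pattern. A minimal two-CPU sketch of the guarantee smp_mb__after_atomic() is meant to provide (illustrative only; the p0/p1 names, the variables and the noted forbidden outcome are not part of the original message):

	/* Illustrative sketch only; names and layout are made up for this example. */
	#include <linux/atomic.h>
	#include <linux/compiler.h>

	static int x, y, r0, r1;
	static atomic_t u = ATOMIC_INIT(0);

	static void p0(void)		/* the CPU from the example above */
	{
		WRITE_ONCE(x, 1);
		atomic_inc(&u);
		smp_mb__after_atomic();	/* store to x must be ordered before the load of y */
		r0 = READ_ONCE(y);
	}

	static void p1(void)		/* the other CPU */
	{
		WRITE_ONCE(y, 1);
		smp_mb();
		r1 = READ_ONCE(x);
	}

	/* If both barriers are effective, the outcome r0 == 0 && r1 == 0 is
	 * forbidden; with the compiler reordering shown above it no longer is. */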
Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Diffstat (limited to 'arch/mips/include/asm/atomic.h')
 -rw-r--r--  arch/mips/include/asm/atomic.h | 14
 1 file changed, 7 insertions, 7 deletions
diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 190f1b615122..bb8658cc7f12 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -68,7 +68,7 @@ static __inline__ void atomic_##op(int i, atomic_t * v)                       \
 		"\t" __scbeqz "	%0, 1b				\n"	\
 		"	.set	pop				\n"	\
 		: "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter)	\
-		: "Ir" (i));						\
+		: "Ir" (i) : __LLSC_CLOBBER);				\
 	} else {							\
 		unsigned long flags;					\
 									\
@@ -98,7 +98,7 @@ static __inline__ int atomic_##op##_return_relaxed(int i, atomic_t * v)       \
 		"	.set	pop				\n"	\
 		: "=&r" (result), "=&r" (temp),				\
 		  "+" GCC_OFF_SMALL_ASM() (v->counter)			\
-		: "Ir" (i));						\
+		: "Ir" (i) : __LLSC_CLOBBER);				\
 	} else {							\
 		unsigned long flags;					\
 									\
@@ -132,7 +132,7 @@ static __inline__ int atomic_fetch_##op##_relaxed(int i, atomic_t * v)        \
 		"	move	%0, %1				\n"	\
 		: "=&r" (result), "=&r" (temp),				\
 		  "+" GCC_OFF_SMALL_ASM() (v->counter)			\
-		: "Ir" (i));						\
+		: "Ir" (i) : __LLSC_CLOBBER);				\
 	} else {							\
 		unsigned long flags;					\
 									\
@@ -210,7 +210,7 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v)
 		"	.set	pop				\n"
 		: "=&r" (result), "=&r" (temp),
 		  "+" GCC_OFF_SMALL_ASM() (v->counter)
-		: "Ir" (i));
+		: "Ir" (i) : __LLSC_CLOBBER);
 	} else {
 		unsigned long flags;
 
@@ -270,7 +270,7 @@ static __inline__ void atomic64_##op(s64 i, atomic64_t * v)                    \
 		"\t" __scbeqz "	%0, 1b				\n"	\
 		"	.set	pop				\n"	\
 		: "=&r" (temp), "+" GCC_OFF_SMALL_ASM() (v->counter)	\
-		: "Ir" (i));						\
+		: "Ir" (i) : __LLSC_CLOBBER);				\
 	} else {							\
 		unsigned long flags;					\
 									\
@@ -300,7 +300,7 @@ static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v)   \
 		"	.set	pop				\n"	\
 		: "=&r" (result), "=&r" (temp),				\
 		  "+" GCC_OFF_SMALL_ASM() (v->counter)			\
-		: "Ir" (i));						\
+		: "Ir" (i) : __LLSC_CLOBBER);				\
 	} else {							\
 		unsigned long flags;					\
 									\
@@ -334,7 +334,7 @@ static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v)    \
 		"	.set	pop				\n"	\
 		: "=&r" (result), "=&r" (temp),				\
 		  "+" GCC_OFF_SMALL_ASM() (v->counter)			\
-		: "Ir" (i));						\
+		: "Ir" (i) : __LLSC_CLOBBER);				\
 	} else {							\
 		unsigned long flags;					\
 									\
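
The change itself is mechanical: each LL/SC asm block gains __LLSC_CLOBBER in its clobber list. As a hedged illustration of why that supplies the missing compiler barrier (assuming __LLSC_CLOBBER expands to a "memory" clobber on the affected configurations), the following sketch, which is not the kernel's actual code, contrasts an asm with and without the clobber:

	/* Sketch only; function names are invented for illustration. */
	static inline void fake_rmw_no_clobber(int *counter)
	{
		/* The compiler only knows about *counter here, so it may move
		 * unrelated loads and stores across this statement. */
		asm volatile("" : "+m" (*counter));
	}

	static inline void fake_rmw_with_clobber(int *counter)
	{
		/* A "memory" clobber makes the asm a compiler barrier as well:
		 * surrounding plain accesses stay on their side of it. */
		asm volatile("" : "+m" (*counter) : : "memory");
	}

With the first variant, accesses to other variables around the operation may be reordered across it by the compiler, exactly the re-order shown in the commit message; with the second they may not.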