commit 42344113ba7a1ed7b5654cd5270af0d5698d8521
Author:    Peter Zijlstra <peterz@infradead.org>  2019-06-13 15:43:20 +0200
Committer: Paul Burton <paul.burton@mips.com>  2019-08-31 12:06:02 +0200
Parent:    mips/atomic: Fix loongson_llsc_mb() wreckage
mips/atomic: Fix smp_mb__{before,after}_atomic()
Recent probing at the Linux Kernel Memory Model uncovered a
'surprise': strongly ordered architectures, where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() reduces to a simple barrier() (such as
MIPS without WEAK_REORDERING_BEYOND_LLSC), fail for:
    *x = 1;
    atomic_inc(u);
    smp_mb__after_atomic();
    r0 = *y;
Because, while the atomic_inc() implies memory order, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:
    atomic_inc(u);
    *x = 1;
    smp_mb__after_atomic();
    r0 = *y;
Which the CPU is then allowed to re-order (under TSO rules) like:
    atomic_inc(u);
    r0 = *y;
    *x = 1;
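The compiler's latitude here comes from the asm constraints of the
atomic op itself; barrier() cannot help once the preceding store has
already been moved past the atomic. A minimal sketch of the
distinction (x86 encoding chosen for brevity; the function names are
illustrative, not the kernel's):

    /* Sketch only: illustrative names, x86 asm for brevity. */
    static int u;

    /* Only 'u' is declared as written, so the compiler may move
     * unrelated loads and stores across this statement. */
    static inline void inc_no_compiler_barrier(void)
    {
            __asm__ __volatile__("lock incl %0" : "+m" (u));
    }

    /* The "memory" clobber marks the asm as potentially reading or
     * writing any memory, pinning surrounding accesses in place. */
    static inline void inc_with_compiler_barrier(void)
    {
            __asm__ __volatile__("lock incl %0" : "+m" (u) : : "memory");
    }

With the first variant, moving *x = 1 below the atomic_inc() is a
perfectly legal compiler transformation, and it is what enables the
CPU re-ordering shown above.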
And this reordering very much was not intended. Therefore strengthen
the atomic RmW ops to include a compiler barrier.
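For the MIPS ll/sc-based atomics, strengthening amounts to adding
"memory" to the asm clobber list. A rough sketch of the resulting
shape, simplified from the kernel's ATOMIC_OP macro (the real code
uses different operand constraints and .set directives; the function
name here is illustrative):

    typedef struct { int counter; } atomic_t;

    static inline void atomic_inc_sketch(atomic_t *v)
    {
            int temp;

            __asm__ __volatile__(
            "1:     ll      %0, %1          # load-linked\n"
            "       addiu   %0, %0, 1       # temp = temp + 1\n"
            "       sc      %0, %1          # store-conditional\n"
            "       beqz    %0, 1b          # retry if sc failed\n"
            : "=&r" (temp), "+m" (v->counter)
            : /* no inputs */
            : "memory");    /* <-- the added compiler barrier */
    }

With the clobber in place the compiler can no longer sink *x = 1 past
the atomic, so the full memory ordering implied by the RmW (or an
explicit smp_mb__after_atomic()) covers the store again.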
Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>