author		Boqun Feng <boqun.feng@gmail.com>	2020-03-26 03:40:22 +0100
committer	Paul E. McKenney <paulmck@kernel.org>	2020-06-29 21:05:18 +0200
commit		e30d02355536e9678ab8a4dfcd6e90a86479b10f
tree		dd40f5e1dde3b1b009987450cab9bb19e45ce07f /Documentation/atomic_t.txt
parent		Documentation/litmus-tests/atomic: Add a test for atomic_set()
Documentation/litmus-tests/atomic: Add a test for smp_mb__after_atomic()
We already use a litmus test in atomic_t.txt to show that an atomic RMW
followed by smp_mb__after_atomic() is stronger than ACQUIRE (both the
read and the write parts of the RMW are ordered). So add it as a litmus
test in the Documentation/litmus-tests/atomic directory, so that people
can find and run it easily.
Additionally, change the processor numbers "P1, P2" to "P0, P1" in
atomic_t.txt for consistency with the processor numbering in the litmus
test, which is the numbering herd7 expects.
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Acked-by: Andrea Parri <parri.andrea@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
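
For reference, the new test added under Documentation/litmus-tests/atomic/
should look roughly like the sketch below. It is reconstructed from the
example in atomic_t.txt as updated by the diff that follows; the header
comment, the explicit local-variable declarations, and the WRITE_ONCE(*x, 1)
in P1 (which falls between the two hunks shown) are filled in from context
rather than copied from the patch.

  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire

  (*
   * Result: Never
   *
   * An atomic RMW followed by smp_mb__after_atomic() orders both the read
   * and the write parts of the RMW against later accesses, so P0 cannot
   * observe P1's WRITE_ONCE() to *x while still reading the old value of y.
   *)

  {
  }

  P0(int *x, atomic_t *y)
  {
  	int r0;
  	int r1;

  	r0 = READ_ONCE(*x);
  	smp_rmb();
  	r1 = atomic_read(y);
  }

  P1(int *x, atomic_t *y)
  {
  	atomic_inc(y);
  	smp_mb__after_atomic();
  	WRITE_ONCE(*x, 1);
  }

  exists
  (0:r0=1 /\ 0:r1=0)

Assuming the usual tools/memory-model setup, the test can be run with
something like "herd7 -conf linux-kernel.cfg <file>.litmus", which should
report that the exists clause is never satisfied.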
Diffstat (limited to 'Documentation/atomic_t.txt')
 Documentation/atomic_t.txt | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index 67d1d99f8589..0f1fdedf36bb 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -233,19 +233,19 @@ as well. Similarly, something like:
 is an ACQUIRE pattern (though very much not typical), but again the barrier is
 strictly stronger than ACQUIRE. As illustrated:
 
-  C strong-acquire
+  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
 
   {
   }
 
-  P1(int *x, atomic_t *y)
+  P0(int *x, atomic_t *y)
   {
     r0 = READ_ONCE(*x);
     smp_rmb();
     r1 = atomic_read(y);
   }
 
-  P2(int *x, atomic_t *y)
+  P1(int *x, atomic_t *y)
   {
     atomic_inc(y);
     smp_mb__after_atomic();
@@ -253,14 +253,14 @@ strictly stronger than ACQUIRE. As illustrated:
   }
 
   exists
-  (r0=1 /\ r1=0)
+  (0:r0=1 /\ 0:r1=0)
 
 This should not happen; but a hypothetical atomic_inc_acquire() --
 (void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
 because it would not order the W part of the RMW against the following
 WRITE_ONCE. Thus:
 
-  P1			P2
+  P0			P1
 
   t = LL.acq *y (0)
 			t++;
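
To make the contrast drawn in the last paragraph of the hunk above concrete,
here is a hedged sketch of the same test with the RMW-plus-barrier pair in P1
replaced by the acquire-only RMW that the text mentions. The test name is
invented, and atomic_fetch_inc_acquire() is used (with its return value
assigned to a scratch register for the benefit of herd7's parser) because the
text offers it as the example; it is assumed to be available in the LKMM's
linux-kernel.def, as it is in current kernels. Because an ACQUIRE RMW orders
only its read part, herd7 should report the exists clause as sometimes
satisfied for this variant:

  C Atomic-RMW-acquire-is-weaker-than-mb__after_atomic

  (*
   * Hypothetical variant, not part of this patch: with only an acquire
   * RMW, the write part of the RMW is not ordered against the later
   * WRITE_ONCE(), so the outcome forbidden above becomes reachable.
   *)

  {
  }

  P0(int *x, atomic_t *y)
  {
  	int r0;
  	int r1;

  	r0 = READ_ONCE(*x);
  	smp_rmb();
  	r1 = atomic_read(y);
  }

  P1(int *x, atomic_t *y)
  {
  	int r2;

  	r2 = atomic_fetch_inc_acquire(y);
  	WRITE_ONCE(*x, 1);
  }

  exists
  (0:r0=1 /\ 0:r1=0)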