author | Will Deacon <will@kernel.org> | 2019-10-30 18:15:01 +0100 |
---|---|---|
committer | Will Deacon <will@kernel.org> | 2020-07-21 11:50:36 +0200 |
commit | bb7cdd38185a4f9fa32e62db115c2c6dceb2b621 (patch) | |
tree | e595c58c6d7cdb11ba37afcf5e1b35dc3708b0ad /mm/memory.c | |
parent | vhost: Remove redundant use of read_barrier_depends() barrier (diff) | |
alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb()
In preparation for removing smp_read_barrier_depends() altogether,
move the Alpha code over to using smp_rmb() and smp_mb() directly.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
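The shape of the conversion, as a minimal sketch: the names struct foo, gp, publish() and consume() are made up for illustration and are not code touched by this series; only the barrier placement mirrors the change. The Alpha-only dependency barrier after a dependent pointer load becomes an ordinary read barrier, which is at least as strong on every architecture.

struct foo {
	int data;
};

static struct foo *gp;

/* Writer: initialise the object, then publish the pointer. */
static void publish(struct foo *p)
{
	p->data = 42;
	smp_wmb();		/* order the data store before the pointer store */
	WRITE_ONCE(gp, p);
}

/* Reader: a data-dependent load chain, gp and then gp->data. */
static int consume(void)
{
	struct foo *q = READ_ONCE(gp);

	if (!q)
		return -1;
	/*
	 * Pre-series Alpha callers used smp_read_barrier_depends() here
	 * (a no-op on every other architecture); after this series the
	 * same spot uses smp_rmb() instead.
	 */
	smp_rmb();
	return q->data;
}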
Diffstat (limited to 'mm/memory.c')
-rw-r--r-- | mm/memory.c | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/mm/memory.c b/mm/memory.c
index 87ec87cdc1ff..e1f2c730d8bb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -437,7 +437,7 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
 	 * of a chain of data-dependent loads, meaning most CPUs (alpha
 	 * being the notable exception) will already guarantee loads are
 	 * seen in-order. See the alpha page table accessors for the
-	 * smp_read_barrier_depends() barriers in page table walking code.
+	 * smp_rmb() barriers in page table walking code.
 	 */
 	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
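For the walker side that the updated comment points at, here is a hedged sketch of the data-dependent load chain. walk_to_pte() is a hypothetical helper, not one of the real accessors, and real lockless walkers (e.g. the GUP fast path) carry far more checks; the real Alpha accessors provide the ordering internally, it is only written out here to show where it is needed.

/*
 * Lockless walk from a pmd entry to a pte entry.  The second load is
 * data-dependent on the first: the address of the pte table comes from
 * the pmd value just read.  Most CPUs order such a chain automatically;
 * Alpha does not, which is why its accessors now issue smp_rmb()
 * (formerly smp_read_barrier_depends()) between the two loads.
 */
static pte_t walk_to_pte(pmd_t *pmd, unsigned long addr)
{
	pmd_t pmdval = READ_ONCE(*pmd);
	pte_t *ptep;

	/*
	 * Pairs with the smp_wmb() in __pte_alloc() above, so the pte
	 * page contents read below are at least as new as the pmd entry
	 * that published them.
	 */
	smp_rmb();
	ptep = (pte_t *)pmd_page_vaddr(pmdval) + pte_index(addr);
	return READ_ONCE(*ptep);
}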