author     Michael S. Tsirkin <mst@redhat.com>  2015-12-27 17:23:01 +0100
committer  Michael S. Tsirkin <mst@redhat.com>  2016-01-12 19:46:59 +0100
commit     6a65d26385bf487926a0616650927303058551e3 (patch)
tree       5255f100d5688d41d47ab86a2fd4a6933d3325d9  /include/asm-generic/barrier.h
parent     x86: define __smp_xxx (diff)
download   linux-6a65d26385bf487926a0616650927303058551e3.tar.xz
           linux-6a65d26385bf487926a0616650927303058551e3.zip
asm-generic: implement virt_xxx memory barriers
Guests running within virtual machines might be affected by SMP effects even if the guest itself is compiled without SMP support. This is an artifact of interfacing with an SMP host while running a UP kernel. Using mandatory barriers for this use case would be possible but is often suboptimal.

In particular, virtio uses a bunch of confusing ifdefs to work around this, while xen just uses the mandatory barriers.

To better handle this case, low-level virt_mb() etc. macros are made available. These are implemented trivially using the low-level __smp_xxx macros; the purpose of these wrappers is to annotate those specific cases.

These have the same effect as smp_mb() etc. when SMP is enabled, but generate identical code for SMP and non-SMP systems. For example, virtual machine guests should use virt_mb() rather than smp_mb() when synchronizing against a (possibly SMP) host.

Suggested-by: David Miller <davem@davemloft.net>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
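As a usage illustration (not part of this patch), here is a minimal sketch of a guest driver publishing work to a ring shared with a possibly-SMP host. The demo_ring structure and helper functions are hypothetical stand-ins for what users such as virtio or the xen ring code would do:

/*
 * Hypothetical example: a guest publishing entries to a ring that a
 * (possibly SMP) host consumes.  demo_ring and these helpers are
 * illustrative only; they are not part of this patch.
 */
#include <linux/types.h>
#include <asm/barrier.h>

struct demo_ring {
	u32 entries[256];
	u32 guest_idx;	/* written by the guest, read by the host */
	u32 host_idx;	/* written by the host, read by the guest */
};

static void demo_ring_publish(struct demo_ring *ring, u32 val)
{
	u32 idx = ring->guest_idx;

	/* Fill the slot before making it visible to the host. */
	ring->entries[idx & 255] = val;

	/*
	 * On a !CONFIG_SMP guest, smp_store_release() reduces to a
	 * compiler barrier, which is not enough against an SMP host on
	 * weakly ordered hardware; a mandatory wmb() would work but is
	 * often stronger than needed.  virt_store_release() expands to
	 * __smp_store_release() and so keeps CPU-level ordering in both
	 * SMP and non-SMP builds.
	 */
	virt_store_release(&ring->guest_idx, idx + 1);
}

static u32 demo_ring_host_progress(struct demo_ring *ring)
{
	/* Pairs with the host's release store (or full barrier) to host_idx. */
	return virt_load_acquire(&ring->host_idx);
}

The host side pairs these with its own release/acquire (or full barrier) accesses; since the host kernel is built with SMP awareness, its ordinary smp_* barriers remain appropriate there.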
Diffstat (limited to 'include/asm-generic/barrier.h')
-rw-r--r--  include/asm-generic/barrier.h  11
1 file changed, 11 insertions(+), 0 deletions(-)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 8752964f8ff7..1cceca146905 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -196,5 +196,16 @@ do { \
#endif
+/* Barriers for virtual machine guests when talking to an SMP host */
+#define virt_mb() __smp_mb()
+#define virt_rmb() __smp_rmb()
+#define virt_wmb() __smp_wmb()
+#define virt_read_barrier_depends() __smp_read_barrier_depends()
+#define virt_store_mb(var, value) __smp_store_mb(var, value)
+#define virt_mb__before_atomic() __smp_mb__before_atomic()
+#define virt_mb__after_atomic() __smp_mb__after_atomic()
+#define virt_store_release(p, v) __smp_store_release(p, v)
+#define virt_load_acquire(p) __smp_load_acquire(p)
+
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_GENERIC_BARRIER_H */