author     Joern Engel <joern@logfs.org>  2014-09-16 22:23:12 +0200
committer  Nicholas Bellinger <nab@linux-iscsi.org>  2014-10-01 23:39:06 +0200
commit     33940d09937276cd3c81f2874faf43e37c2db0e2 (patch)
tree       2c3043e6902ee4e8e23b947f2e30e50967e99c5b /include/target
parent     target: remove some smp_mb__after_atomic()s (diff)
download   linux-33940d09937276cd3c81f2874faf43e37c2db0e2.tar.xz
           linux-33940d09937276cd3c81f2874faf43e37c2db0e2.zip
target: encapsulate smp_mb__after_atomic()
The target code has a rather generous helping of smp_mb__after_atomic()
throughout the code base. Most atomic operations were followed by one, none
were preceded by smp_mb__before_atomic(), and none were accompanied by a
comment explaining the need for a barrier.

Instead of trying to prove for every case whether or not it is needed, this
patch introduces atomic_inc_mb() and atomic_dec_mb(), which explicitly
include the memory barriers before and after the atomic operation. For now
they are defined in a target header, although they could be of general use.

Most of the existing atomic/mb combinations were replaced by the new
helpers. In a few cases the atomic was sandwiched in spin_lock/spin_unlock
and I simply removed the barrier.

I suspect that in most cases the correct conversion would have been to drop
the barrier. I also suspect that a few cases exist where a) the barrier was
necessary and b) a second barrier before the atomic would have been
necessary and got added by this patch.

Signed-off-by: Joern Engel <joern@logfs.org>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Diffstat (limited to 'include/target')
-rw-r--r--  include/target/target_core_base.h | 14
1 file changed, 14 insertions(+), 0 deletions(-)
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 9ec9864ecf38..b106240d8385 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -903,4 +903,18 @@ struct se_wwn {
struct config_group fabric_stat_group;
};
+static inline void atomic_inc_mb(atomic_t *v)
+{
+ smp_mb__before_atomic();
+ atomic_inc(v);
+ smp_mb__after_atomic();
+}
+
+static inline void atomic_dec_mb(atomic_t *v)
+{
+ smp_mb__before_atomic();
+ atomic_dec(v);
+ smp_mb__after_atomic();
+}
+
#endif /* TARGET_CORE_BASE_H */