author     Mark Rutland <mark.rutland@arm.com>      2023-06-05 09:01:01 +0200
committer  Peter Zijlstra <peterz@infradead.org>    2023-06-05 09:57:14 +0200
commit     d12157efc8e083c77d054675fcdd594f54cc7e2b
tree       9be23f46b4b8db9d3e36a8b551d8961d8d41822d /scripts
parent     locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg
locking/atomic: make atomic*_{cmp,}xchg optional
Most architectures define the atomic/atomic64 xchg and cmpxchg operations in terms of arch_xchg and arch_cmpxchg respectively. Add fallbacks for these cases and remove the trivial cases from arch code. On some architectures the existing definitions are kept, as these are used to build other arch_atomic*() operations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-5-mark.rutland@arm.com
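As an illustration of the "trivial cases" being removed, a typical per-architecture definition did nothing more than forward to the plain xchg/cmpxchg primitives on the counter field. A minimal sketch (not taken from any particular architecture's header) looks like this:

/*
 * Illustrative sketch only: the kind of trivial per-arch definition this
 * patch makes redundant, forwarding straight to the generic primitives.
 */
#define arch_atomic_xchg(v, new) \
	arch_xchg(&((v)->counter), (new))
#define arch_atomic_cmpxchg(v, old, new) \
	arch_cmpxchg(&((v)->counter), (old), (new))

With the new fallbacks below, such architectures only need to provide arch_xchg()/arch_cmpxchg(); the atomic wrappers are generated for them.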
Diffstat (limited to 'scripts')
-rw-r--r--  scripts/atomic/fallbacks/cmpxchg  7
-rw-r--r--  scripts/atomic/fallbacks/xchg     7
2 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/scripts/atomic/fallbacks/cmpxchg b/scripts/atomic/fallbacks/cmpxchg
new file mode 100644
index 000000000000..87cd010f98d5
--- /dev/null
+++ b/scripts/atomic/fallbacks/cmpxchg
@@ -0,0 +1,7 @@
+cat <<EOF
+static __always_inline ${int}
+arch_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
+{
+ return arch_cmpxchg${order}(&v->counter, old, new);
+}
+EOF
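Expanded by the fallback generator, the template above produces a plain C wrapper. As a sketch, assuming ${atomic} expands to "atomic", ${int} to "int", and ${order} is empty for the fully ordered variant, the emitted code would look roughly like:

/* Generated fallback (sketch): forwards to the generic cmpxchg primitive. */
static __always_inline int
arch_atomic_cmpxchg(atomic_t *v, int old, int new)
{
	return arch_cmpxchg(&v->counter, old, new);
}

The fallback only applies when an architecture does not supply its own arch_atomic_cmpxchg${order}(), which is why some architectures keep their definitions to build other arch_atomic*() operations.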
diff --git a/scripts/atomic/fallbacks/xchg b/scripts/atomic/fallbacks/xchg
new file mode 100644
index 000000000000..733b8980b2f3
--- /dev/null
+++ b/scripts/atomic/fallbacks/xchg
@@ -0,0 +1,7 @@
+cat <<EOF
+static __always_inline ${int}
+arch_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
+{
+ return arch_xchg${order}(&v->counter, new);
+}
+EOF
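The same mechanical substitution covers every atomic type and ordering variant. For example, assuming ${atomic} expands to "atomic64", ${int} to "s64", and ${order} to "_relaxed", this template would emit roughly:

/* Generated fallback (sketch): relaxed 64-bit exchange on the counter. */
static __always_inline s64
arch_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
	return arch_xchg_relaxed(&v->counter, new);
}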