| author | Paul Mackerras <paulus@samba.org> | 2009-06-12 23:10:05 +0200 |
|---|---|---|
| committer | Benjamin Herrenschmidt <benh@kernel.crashing.org> | 2009-06-15 05:27:38 +0200 |
| commit | 09d4e0edd4614e787393acc582ac701c6ec3565b (patch) | |
| tree | 77f3b85e0f59a168ac78639e510ebcbd3791b3d2 /include/asm-generic/atomic64.h | |
| parent | powerpc: Add compiler memory barrier to mtmsr macro (diff) | |
| download | linux-09d4e0edd4614e787393acc582ac701c6ec3565b.tar.xz, linux-09d4e0edd4614e787393acc582ac701c6ec3565b.zip | |
lib: Provide generic atomic64_t implementation
Many processor architectures have no 64-bit atomic instructions, but
we need atomic64_t in order to support the perf_counter subsystem.
This adds an implementation of 64-bit atomic operations using hashed
spinlocks to provide atomicity. For each atomic operation, the address
of the atomic64_t variable is hashed to an index into an array of 16
spinlocks. That spinlock is taken (with interrupts disabled) around the
operation, which can then be coded non-atomically within the lock.
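To make the hashed-spinlock scheme concrete, here is a minimal userspace sketch of the same technique. It is not the patch's lib/atomic64.c: pthread mutexes stand in for the kernel's interrupt-disabling spinlocks, and NR_LOCKS, lock_addr() and the address shift are illustrative choices rather than the exact code this commit adds.

```c
/* Userspace sketch of hashed-lock 64-bit atomics; build with: cc -pthread demo.c */
#include <pthread.h>
#include <stdio.h>

#define NR_LOCKS 16				/* number of hash buckets / locks */

typedef struct { long long counter; } atomic64_t;

/* One lock per bucket (GCC/Clang range-designator initializer). */
static pthread_mutex_t atomic64_lock[NR_LOCKS] = {
	[0 ... NR_LOCKS - 1] = PTHREAD_MUTEX_INITIALIZER,
};

/* Hash the variable's address to pick which of the 16 locks protects it. */
static pthread_mutex_t *lock_addr(const atomic64_t *v)
{
	unsigned long addr = (unsigned long)v;

	addr >>= 6;			/* fold the address down to a bucket index; the shift is a tuning choice */
	return &atomic64_lock[addr & (NR_LOCKS - 1)];
}

/* The operation itself is plain 64-bit arithmetic, done under the bucket lock. */
static long long atomic64_add_return(long long a, atomic64_t *v)
{
	pthread_mutex_t *lock = lock_addr(v);
	long long val;

	pthread_mutex_lock(lock);	/* the kernel version also disables interrupts here */
	val = v->counter += a;
	pthread_mutex_unlock(lock);
	return val;
}

/* Even a read takes the lock: a 64-bit load is not atomic on 32-bit hardware. */
static long long atomic64_read(const atomic64_t *v)
{
	pthread_mutex_t *lock = lock_addr(v);
	long long val;

	pthread_mutex_lock(lock);
	val = v->counter;
	pthread_mutex_unlock(lock);
	return val;
}

int main(void)
{
	atomic64_t v = { 0 };

	atomic64_add_return(5, &v);
	printf("%lld\n", atomic64_read(&v));	/* prints 5 */
	return 0;
}
```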
On UP, all the spinlock manipulation goes away and we simply disable
interrupts around each operation. In fact, gcc eliminates the whole
atomic64_lock array as well.
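The UP simplification can be pictured with a pair of hypothetical wrapper macros; these are not from the patch or from the kernel's spinlock headers, they only illustrate why the lock array becomes dead code on uniprocessor builds:

```c
#ifdef CONFIG_SMP
/* SMP: take the hashed spinlock with interrupts disabled. */
#define lock_bucket(lock, flags)	spin_lock_irqsave(lock, flags)
#define unlock_bucket(lock, flags)	spin_unlock_irqrestore(lock, flags)
#else
/*
 * UP: no other CPU can race with us, so disabling interrupts is enough;
 * the lock argument is never used, which lets gcc drop the array entirely.
 */
#define lock_bucket(lock, flags)	local_irq_save(flags)
#define unlock_bucket(lock, flags)	local_irq_restore(flags)
#endif
```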
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Diffstat (limited to 'include/asm-generic/atomic64.h')
-rw-r--r-- | include/asm-generic/atomic64.h | 42
1 file changed, 42 insertions, 0 deletions
diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
new file mode 100644
index 000000000000..b18ce4f9ee3d
--- /dev/null
+++ b/include/asm-generic/atomic64.h
@@ -0,0 +1,42 @@
+/*
+ * Generic implementation of 64-bit atomics using spinlocks,
+ * useful on processors that don't have 64-bit atomic instructions.
+ *
+ * Copyright © 2009 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#ifndef _ASM_GENERIC_ATOMIC64_H
+#define _ASM_GENERIC_ATOMIC64_H
+
+typedef struct {
+	long long counter;
+} atomic64_t;
+
+#define ATOMIC64_INIT(i) { (i) }
+
+extern long long atomic64_read(const atomic64_t *v);
+extern void atomic64_set(atomic64_t *v, long long i);
+extern void atomic64_add(long long a, atomic64_t *v);
+extern long long atomic64_add_return(long long a, atomic64_t *v);
+extern void atomic64_sub(long long a, atomic64_t *v);
+extern long long atomic64_sub_return(long long a, atomic64_t *v);
+extern long long atomic64_dec_if_positive(atomic64_t *v);
+extern long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n);
+extern long long atomic64_xchg(atomic64_t *v, long long new);
+extern int atomic64_add_unless(atomic64_t *v, long long a, long long u);
+
+#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0)
+#define atomic64_inc(v) atomic64_add(1LL, (v))
+#define atomic64_inc_return(v) atomic64_add_return(1LL, (v))
+#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0)
+#define atomic64_sub_and_test(a, v) (atomic64_sub_return((a), (v)) == 0)
+#define atomic64_dec(v) atomic64_sub(1LL, (v))
+#define atomic64_dec_return(v) atomic64_sub_return(1LL, (v))
+#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0)
+#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1LL, 0LL)
+
+#endif /* _ASM_GENERIC_ATOMIC64_H */
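For context, a caller of this interface might look like the sketch below. The counter name and helper functions are invented for illustration and are not part of the patch; the point is that each call stays atomic even on 32-bit processors that cannot load, store or add a 64-bit value in a single instruction.

```c
/* Hypothetical usage sketch -- not code from this commit. */
#include <asm-generic/atomic64.h>

static atomic64_t event_count = ATOMIC64_INIT(0);

static void account_events(long long n)
{
	atomic64_add(n, &event_count);		/* 64-bit add under a hashed spinlock */
}

static long long read_events(void)
{
	return atomic64_read(&event_count);	/* consistent 64-bit snapshot */
}

static int get_ref_if_live(atomic64_t *refcount)
{
	/* take a reference only if the count has not already dropped to zero */
	return atomic64_inc_not_zero(refcount);
}
```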