| author | Chen Gang <gang.chen@asianux.com> | 2013-09-25 06:14:08 +0200 |
|---|---|---|
| committer | Chris Metcalf <cmetcalf@tilera.com> | 2013-09-27 22:08:56 +0200 |
| commit | b924a69067b00d3121debae5a738fb0bcbbbb03c (patch) | |
| tree | 574ab34819b91df7e7b4eb9cf750d3c6853033f3 /arch/tile/lib | |
| parent | Linux 3.12-rc2 (diff) | |
| download | linux-b924a69067b00d3121debae5a738fb0bcbbbb03c.tar.xz, linux-b924a69067b00d3121debae5a738fb0bcbbbb03c.zip | |
tile: include: asm: use 'long long' instead of 'u64' for atomic64_t and its related functions
An atomic* value is a signed value, and the atomic* functions must likewise operate on signed values (both parameters and return values), so use 'long long' instead of 'u64'.
The replacement also fixes a bug in atomic64_add_negative(): a u64 is never less than 0, so its negativity test could never succeed.
The modifications are:
- in vim, use the "1,% s/\<u64\>/long long/g" command;
- remove the now-redundant '__aligned(8)';
- keep within the 80-column limit (and re-align macro '\' continuations) after the replacement.
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com> [re-instated const cast]
Diffstat (limited to 'arch/tile/lib')
-rw-r--r-- | arch/tile/lib/atomic_32.c | 8 |
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/tile/lib/atomic_32.c b/arch/tile/lib/atomic_32.c
index 759efa337be8..c89b211fd9e7 100644
--- a/arch/tile/lib/atomic_32.c
+++ b/arch/tile/lib/atomic_32.c
@@ -107,19 +107,19 @@ unsigned long _atomic_xor(volatile unsigned long *p, unsigned long mask)
 EXPORT_SYMBOL(_atomic_xor);
 
-u64 _atomic64_xchg(u64 *v, u64 n)
+long long _atomic64_xchg(long long *v, long long n)
 {
 	return __atomic64_xchg(v, __atomic_setup(v), n);
 }
 EXPORT_SYMBOL(_atomic64_xchg);
 
-u64 _atomic64_xchg_add(u64 *v, u64 i)
+long long _atomic64_xchg_add(long long *v, long long i)
 {
 	return __atomic64_xchg_add(v, __atomic_setup(v), i);
 }
 EXPORT_SYMBOL(_atomic64_xchg_add);
 
-u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u)
+long long _atomic64_xchg_add_unless(long long *v, long long a, long long u)
 {
 	/*
 	 * Note: argument order is switched here since it is easier
@@ -130,7 +130,7 @@ u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u)
 }
 EXPORT_SYMBOL(_atomic64_xchg_add_unless);
 
-u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n)
+long long _atomic64_cmpxchg(long long *v, long long o, long long n)
 {
 	return __atomic64_cmpxchg(v, __atomic_setup(v), o, n);
 }
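For readers unfamiliar with the add-unless pattern touched above, the following is a hedged userspace sketch of the compare-and-exchange retry loop that gives _atomic64_xchg_add_unless() its observable semantics. It uses a GCC builtin and an invented function name, not the tile-specific __atomic64_cmpxchg() assembly:

```c
/*
 * Illustrative only: add 'a' to *v unless *v == 'u', returning the
 * old value so callers can test (old != u) for success.
 */
static long long add_unless_sketch(long long *v, long long a, long long u)
{
	long long old = *v;

	while (old != u) {
		long long prev = __sync_val_compare_and_swap(v, old, old + a);
		if (prev == old)
			break;		/* swap succeeded */
		old = prev;		/* lost a race: retry with fresh value */
	}
	return old;
}
```

Note that with the old u64 signatures this return value would also have lost its sign, which is another reason the signed long long prototypes are the right fit for the atomic64_t API.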