path: root/arch/mips/include/asm/atomic.h
Commit message | Author | Age | Files | Lines
* MIPS: retire "asm/llsc.h" (Huang Pei, 2022-01-05; 1 file, -6/+5)

    All that "asm/llsc.h" does is help inline asm, which can be stringified
    from "asm/asm.h":

    + Since "asm/asm.h" has all we need, retire "asm/llsc.h".
    + Remove the now-unused header file.

    Inspired-by: Maciej W. Rozycki <macro@orcam.me.uk>
    Signed-off-by: Huang Pei <huangpei@loongson.cn>
    Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
* MIPS: locking/atomic: Fix atomic{_64,}_sub_if_positive (Rui Wang, 2021-08-05; 1 file, -1/+1)

    This looks like a typo, and it caused the atomic64 test to fail.

    Signed-off-by: Rui Wang <wangrui@loongson.cn>
    Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
* locking/atomic: mips: move to ARCH_ATOMIC (Mark Rutland, 2021-05-26; 1 file, -26/+29)

    We'd like all architectures to convert to ARCH_ATOMIC, as once all
    architectures are converted it will be possible to make significant
    cleanups to the atomics headers, and this will make it much easier to
    generically enable atomic functionality (e.g. debug logic in the
    instrumented wrappers).

    As a step towards that, this patch migrates mips to ARCH_ATOMIC. The
    arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
    code wraps these with optional instrumentation to provide the regular
    functions.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210525140232.53872-23-mark.rutland@arm.com
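A minimal sketch of the split this entry describes, assuming the
instrumented-wrapper pattern of the generated asm-generic headers
(condensed, not the exact generated code):

    /* arch code (arch/mips/include/asm/atomic.h) provides the raw op: */
    static __always_inline void arch_atomic_add(int i, atomic_t *v);

    /* common code wraps it, optionally adding KASAN/KCSAN instrumentation: */
    static __always_inline void atomic_add(int i, atomic_t *v)
    {
            instrument_atomic_read_write(v, sizeof(*v));
            arch_atomic_add(i, v);
    }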
* MIPS: Compare __SYNC_loongson3_war against 0 (Nathan Chancellor, 2021-01-15; 1 file, -1/+1)

    When building with clang with CONFIG_CPU_LOONGSON3_WORKAROUNDS enabled:

        In file included from lib/errseq.c:4:
        In file included from ./include/linux/atomic.h:7:
        ./arch/mips/include/asm/atomic.h:52:1: warning: converting the result
        of '<<' to a boolean always evaluates to true
        [-Wtautological-constant-compare]
        ATOMIC_OPS(atomic64, s64)
        ^
        ./arch/mips/include/asm/atomic.h:40:9: note: expanded from macro 'ATOMIC_OPS'
                return cmpxchg(&v->counter, o, n);
                       ^
        ./arch/mips/include/asm/cmpxchg.h:194:7: note: expanded from macro 'cmpxchg'
                if (!__SYNC_loongson3_war)
                     ^
        ./arch/mips/include/asm/sync.h:147:34: note: expanded from macro '__SYNC_loongson3_war'
        # define __SYNC_loongson3_war   (1 << 31)
                                        ^

    While it is not wrong that the result of this shift is always true in a
    boolean context, it is not a problem here. Regardless, the warning is
    really noisy, so rather than making the shift a boolean implicitly, use
    it in an equality comparison so the shift is used as an integer value.

    Fixes: 4d1dbfe6cbec ("MIPS: atomic: Emit Loongson3 sync workarounds within asm")
    Fixes: a91f2a1dba44 ("MIPS: cmpxchg: Omit redundant barriers for Loongson3")
    Reported-by: kernel test robot <lkp@intel.com>
    Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
    Acked-by: Nick Desaulniers <ndesaulniers@google.com>
    Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
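A condensed sketch of the change, reconstructed from the diagnostic above
(surrounding context assumed):

    #define __SYNC_loongson3_war	(1 << 31)

    /* Before: the '<<' result is used implicitly as a boolean, which
     * clang flags with -Wtautological-constant-compare: */
    int war_old = !__SYNC_loongson3_war;

    /* After: an equality comparison keeps the shift an integer value
     * while computing the same result: */
    int war_new = (__SYNC_loongson3_war == 0);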
* locking/atomic: Move ATOMIC_INIT into linux/types.h (Herbert Xu, 2020-07-29; 1 file, -1/+0)

    This patch moves ATOMIC_INIT from asm/atomic.h into linux/types.h. This
    allows users of atomic_t to use ATOMIC_INIT without having to include
    atomic.h, as that way may lead to header loops.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Waiman Long <longman@redhat.com>
    Link: https://lkml.kernel.org/r/20200729123105.GB7047@gondor.apana.org.au
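The moved definition is a one-liner; a sketch of what linux/types.h ends
up holding (atomic64_t omitted):

    typedef struct {
            int counter;
    } atomic_t;

    #define ATOMIC_INIT(i)	{ (i) }

    /* now usable without pulling in atomic.h: */
    static atomic_t users = ATOMIC_INIT(0);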
* MIPS: atomic: Deduplicate 32b & 64b read, set, xchg, cmpxchg (Paul Burton, 2019-10-07; 1 file, -43/+27)

    Remove the remaining duplication between 32b & 64b in asm/atomic.h by
    making use of an ATOMIC_OPS() macro to generate:

      - atomic_read()/atomic64_read()
      - atomic_set()/atomic64_set()
      - atomic_cmpxchg()/atomic64_cmpxchg()
      - atomic_xchg()/atomic64_xchg()

    This is consistent with the way all other functions in asm/atomic.h are
    generated, and ensures consistency between the 32b & 64b functions. Of
    note is that this results in the above now being static inline functions
    rather than macros.

    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Cc: linux-mips@vger.kernel.org
    Cc: Huacai Chen <chenhc@lemote.com>
    Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Cc: linux-kernel@vger.kernel.org
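A condensed sketch of the generation pattern described above (only read &
set shown; the real macro also stamps out cmpxchg & xchg):

    #define ATOMIC_OPS(pfx, type)					\
    static __always_inline type pfx##_read(const pfx##_t *v)		\
    {									\
            return READ_ONCE(v->counter);				\
    }									\
    									\
    static __always_inline void pfx##_set(pfx##_t *v, type i)		\
    {									\
            WRITE_ONCE(v->counter, i);					\
    }

    ATOMIC_OPS(atomic, int)
    #ifdef CONFIG_64BIT
    ATOMIC_OPS(atomic64, s64)
    #endif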
* MIPS: atomic: Unify 32b & 64b sub_if_positive (Paul Burton, 2019-10-07; 1 file, -106/+58)

    Unify the definitions of atomic_sub_if_positive() &
    atomic64_sub_if_positive() using a macro, as we do for most other atomic
    functions. This allows us to share the implementation, ensuring
    consistency between the two. Notably this provides the appropriate
    loongson3_war barriers in the atomic64_sub_if_positive() case, which
    were previously missing.

    The code is rearranged a little to handle the !kernel_uses_llsc case
    first, in order to de-indent the LL/SC case & allow us not to go over 80
    characters per line.

    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Cc: linux-mips@vger.kernel.org
    Cc: Huacai Chen <chenhc@lemote.com>
    Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Cc: linux-kernel@vger.kernel.org
* MIPS: atomic: Use _atomic barriers in atomic_sub_if_positive() (Paul Burton, 2019-10-07; 1 file, -2/+2)

    Use smp_mb__before_atomic() & smp_mb__after_atomic() in
    atomic_sub_if_positive() rather than the equivalent
    smp_mb__before_llsc() & smp_llsc_mb(). The former are more standard &
    this preps us for avoiding redundant duplicate barriers on Loongson3 in
    a later patch.

    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Cc: linux-mips@vger.kernel.org
    Cc: Huacai Chen <chenhc@lemote.com>
    Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Cc: linux-kernel@vger.kernel.org
* MIPS: atomic: Emit Loongson3 sync workarounds within asm (Paul Burton, 2019-10-07; 1 file, -6/+14)

    Generate the sync instructions required to work around Loongson3 LL/SC
    errata within inline asm blocks. This feels a little safer than doing it
    from C, where strictly speaking the compiler would be well within its
    rights to insert a memory access between the separate asm statements we
    previously had, containing sync & ll instructions respectively.

    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Cc: linux-mips@vger.kernel.org
    Cc: Huacai Chen <chenhc@lemote.com>
    Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Cc: linux-kernel@vger.kernel.org
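A sketch of the post-patch shape, assuming the __SYNC(full, loongson3_war)
helper from asm/sync.h and simplified operand constraints; previously the
sync was a separate loongson_llsc_mb() statement ahead of the asm block:

    static inline void atomic_add_sketch(int i, atomic_t *v)
    {
            int temp;

            __asm__ __volatile__(
            "	" __SYNC(full, loongson3_war) "		\n"
            "1:	ll	%0, %1		# atomic_add	\n"
            "	addu	%0, %2				\n"
            "	sc	%0, %1				\n"
            "	beqz	%0, 1b				\n"
            : "=&r" (temp), "+m" (v->counter)
            : "Ir" (i) : "memory");
    }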
* MIPS: atomic: Use one macro to generate 32b & 64b functions (Paul Burton, 2019-10-07; 1 file, -151/+45)

    Cut down on duplication by generalizing the ATOMIC_OP(),
    ATOMIC_OP_RETURN() & ATOMIC_FETCH_OP() macros to work for both 32b & 64b
    atomics, and removing the ATOMIC64_ variants. This ensures consistency
    between our atomic_* & atomic64_* functions.

    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Cc: linux-mips@vger.kernel.org
    Cc: Huacai Chen <chenhc@lemote.com>
    Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Cc: linux-kernel@vger.kernel.org
* MIPS: atomic: Handle !kernel_uses_llsc first (Paul Burton, 2019-10-07; 1 file, -50/+49)

    Handle the !kernel_uses_llsc path first in our ATOMIC_OP(),
    ATOMIC_OP_RETURN() & ATOMIC_FETCH_OP() macros & return from within the
    block. This allows us to de-indent the kernel_uses_llsc path by one
    level, which will be useful when making further changes.

    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Cc: linux-mips@vger.kernel.org
    Cc: Huacai Chen <chenhc@lemote.com>
    Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Cc: linux-kernel@vger.kernel.org
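A sketch of the resulting control flow, assuming the usual MIPS fallback
of a plain read-modify-write under raw_local_irq_save() (LL/SC loop
elided):

    static __inline__ void atomic_add(int i, atomic_t *v)
    {
            if (!kernel_uses_llsc) {
                    unsigned long flags;

                    raw_local_irq_save(flags);
                    v->counter += i;
                    raw_local_irq_restore(flags);
                    return;
            }

            /* LL/SC loop follows here, now at the outer indent level */
    }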
* MIPS: atomic: Fix whitespace in ATOMIC_OP macros (Paul Burton, 2019-10-07; 1 file, -92/+92)

    We define macros in asm/atomic.h which end each line with space
    characters before a backslash to continue on the next line. Remove the
    space characters, leaving tabs as the whitespace used, for conformity
    with coding convention.

    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Cc: linux-mips@vger.kernel.org
    Cc: Huacai Chen <chenhc@lemote.com>
    Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Cc: linux-kernel@vger.kernel.org
* MIPS: Unify sc beqz definition (Paul Burton, 2019-10-07; 1 file, -19/+9)

    We currently duplicate the definition of __scbeqz in asm/atomic.h &
    asm/cmpxchg.h. Move it to asm/llsc.h & rename it to __SC_BEQZ to fit
    better with the existing __SC macro provided there.

    We include a tab in the string in order to avoid the need for users to
    indent code any further to include whitespace of their own after the
    instruction mnemonic.

    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Cc: linux-mips@vger.kernel.org
    Cc: Huacai Chen <chenhc@lemote.com>
    Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Cc: linux-kernel@vger.kernel.org
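A sketch of the unified asm/llsc.h definition, per the description above
(note the trailing tab baked into the string):

    #if R10000_LLSC_WAR
    # define __SC_BEQZ "beqzl	"
    #else
    # define __SC_BEQZ "beqz	"
    #endif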
* mips/atomic: Fix smp_mb__{before,after}_atomic() (Peter Zijlstra, 2019-08-31; 1 file, -7/+7)

    Recent probing at the Linux Kernel Memory Model uncovered a 'surprise'.
    Strongly ordered architectures, where the atomic RmW primitive implies
    full memory ordering and smp_mb__{before,after}_atomic() are a simple
    barrier() (such as MIPS without WEAK_REORDERING_BEYOND_LLSC), fail for:

        *x = 1;
        atomic_inc(u);
        smp_mb__after_atomic();
        r0 = *y;

    Because, while the atomic_inc() implies memory order, it (surprisingly)
    does not provide a compiler barrier. This then allows the compiler to
    re-order like so:

        atomic_inc(u);
        *x = 1;
        smp_mb__after_atomic();
        r0 = *y;

    Which the CPU is then allowed to re-order (under TSO rules) like:

        atomic_inc(u);
        r0 = *y;
        *x = 1;

    And this very much was not intended. Therefore strengthen the atomic RmW
    ops to include a compiler barrier.

    Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Paul Burton <paul.burton@mips.com>
* mips/atomic: Fix loongson_llsc_mb() wreckage (Peter Zijlstra, 2019-08-31; 1 file, -2/+3)

    The comment describing the loongson_llsc_mb() reorder case doesn't make
    any sense whatsoever. Instruction re-ordering is not an SMP artifact,
    but rather a CPU-local phenomenon. Clarify the comment by explaining
    that these issues cause a coherence failure.

    For the branch speculation case: if futex_atomic_cmpxchg_inatomic()
    needs one at the bne branch target, then surely the normal
    __cmpxchg_asm() implementation does too. We cannot rely on the barriers
    from cmpxchg() because cmpxchg_local() is implemented with the same
    macro, and branch prediction and speculation are, too, CPU local.

    Fixes: e02e07e3127d ("MIPS: Loongson: Introduce and use loongson_llsc_mb()")
    Cc: Huacai Chen <chenhc@lemote.com>
    Cc: Huang Pei <huangpei@loongson.cn>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Paul Burton <paul.burton@mips.com>
* locking/atomic, mips: Use s64 for atomic64 (Mark Rutland, 2019-06-03; 1 file, -11/+11)

    As a step towards making the atomic64 API use consistent types treewide,
    let's have the mips atomic64 implementation use s64 as the underlying
    type for atomic64_t, rather than long or __s64, matching the generated
    headers.

    As atomic64_read() depends on the generic definition of atomic64_t, this
    still returns long on 64-bit. This will be converted in a subsequent
    patch.

    Otherwise, there should be no functional change as a result of this
    patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: James Hogan <jhogan@kernel.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul Burton <paul.burton@mips.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: aou@eecs.berkeley.edu
    Cc: arnd@arndb.de
    Cc: bp@alien8.de
    Cc: catalin.marinas@arm.com
    Cc: davem@davemloft.net
    Cc: fenghua.yu@intel.com
    Cc: heiko.carstens@de.ibm.com
    Cc: herbert@gondor.apana.org.au
    Cc: ink@jurassic.park.msu.ru
    Cc: linux@armlinux.org.uk
    Cc: mattst88@gmail.com
    Cc: mpe@ellerman.id.au
    Cc: palmer@sifive.com
    Cc: paulus@samba.org
    Cc: rth@twiddle.net
    Cc: tony.luck@intel.com
    Cc: vgupta@synopsys.com
    Link: https://lkml.kernel.org/r/20190522132250.26499-10-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* MIPS: Loongson: Introduce and use loongson_llsc_mb() (Huacai Chen, 2019-02-04; 1 file, -0/+6)

    On the Loongson-2G/2H/3A/3B there is a hardware flaw that makes ll/sc
    and lld/scd very weakly ordered. We should add sync instructions "before
    each ll/lld" and "at the branch-target between ll/sc" to work around it.
    Otherwise, this flaw will cause deadlock occasionally (e.g. when doing
    heavy load tests with LTP).

    Below is the explanation of the CPU designer:

    "For Loongson 3 family, when a memory access instruction (load, store,
    or prefetch)'s executing occurs between the execution of LL and SC, the
    success or failure of SC is not predictable. Although programmer would
    not insert memory access instructions between LL and SC, the memory
    instructions before LL in program-order, may dynamically executed
    between the execution of LL/SC, so a memory fence (SYNC) is needed
    before LL/LLD to avoid this situation.

    Since Loongson-3A R2 (3A2000), we have improved our hardware design to
    handle this case. But we later deduce a rarely circumstance that some
    speculatively executed memory instructions due to branch misprediction
    between LL/SC still fall into the above case, so a memory fence (SYNC)
    at branch-target (if its target is not between LL/SC) is needed for
    Loongson 3A1000, 3B1500, 3A2000 and 3A3000. Our processor is continually
    evolving and we aim to remove all these workaround-SYNCs around LL/SC
    for new-come processor."

    Here is an example: both cpu1 and cpu2 simultaneously run atomic_add by
    1 on the same atomic var. This bug causes both 'sc' runs by the two cpus
    (in atomic_add) to succeed at the same time ('sc' returns 1), and the
    variable is sometimes only *added by 1*, which is wrong and unacceptable
    (it should be added by 2).

    Why is fix-loongson3-llsc disabled in the compiler? Because the compiler
    fix would cause problems in the kernel's __ex_table section.

    This patch fixes all the cases in the kernel, but:

    + The fix at the end of futex_atomic_cmpxchg_inatomic is for the
      branch-target of 'bne'; there are other cases in which
      smp_mb__before_llsc() and smp_llsc_mb() fix the ll and branch-target
      coincidently, such as atomic_sub_if_positive/cmpxchg/xchg, just like
      this one.
    + Loongson 3 does support CONFIG_EDAC_ATOMIC_SCRUB, so there is no need
      to touch edac.h.
    + local_ops and cmpxchg_local should not be affected by this bug since
      only the owner can write.
    + mips_atomic_set for syscall.c is deprecated and rarely used, so just
      let it go.

    Signed-off-by: Huacai Chen <chenhc@lemote.com>
    Signed-off-by: Huang Pei <huangpei@loongson.cn>
    [paul.burton@mips.com:
      - Simplify the addition of -mno-fix-loongson3-llsc to cflags, and add
        a comment describing why it's there.
      - Make loongson_llsc_mb() a no-op when
        CONFIG_CPU_LOONGSON3_WORKAROUNDS=n, rather than a compiler memory
        barrier.
      - Add a comment describing the bug & how loongson_llsc_mb() helps in
        asm/barrier.h.]
    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: ambrosehua@gmail.com
    Cc: Steven J. Hill <Steven.Hill@cavium.com>
    Cc: linux-mips@linux-mips.org
    Cc: Fuxin Zhang <zhangfx@lemote.com>
    Cc: Zhangjin Wu <wuzhangjin@gmail.com>
    Cc: Li Xuefeng <lixuefeng@loongson.cn>
    Cc: Xu Chenghua <xuchenghua@loongson.cn>
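A sketch of the barrier this introduces, following the bracketed note
above (a full `sync` when the workaround config is enabled, a true no-op
otherwise):

    #ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS
    #define loongson_llsc_mb()	__asm__ __volatile__("sync" : : : "memory")
    #else
    #define loongson_llsc_mb()	do { } while (0)
    #endif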
* MIPS: Fix a R10000_LLSC_WAR logic in atomic.h (Huacai Chen, 2018-12-31; 1 file, -1/+1)

    Commit 4936084c2ee2 ("MIPS: Cleanup R10000_LLSC_WAR logic in atomic.h")
    introduced a mistake in atomic64_fetch_##op##_relaxed(), because it
    forgot to delete R10000_LLSC_WAR in the if-condition. So fix it.

    Fixes: 4936084c2ee2 ("MIPS: Cleanup R10000_LLSC_WAR logic in atomic.h")
    Signed-off-by: Huacai Chen <chenhc@lemote.com>
    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Cc: Joshua Kinard <kumba@gentoo.org>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Steven J. Hill <Steven.Hill@cavium.com>
    Cc: Fuxin Zhang <zhangfx@lemote.com>
    Cc: Zhangjin Wu <wuzhangjin@gmail.com>
    Cc: linux-mips@linux-mips.org
    Cc: stable@vger.kernel.org # 4.19+
* MIPS: Avoid using .set mips0 to restore ISA (Paul Burton, 2018-11-09; 1 file, -9/+18)

    We currently have 2 commonly used methods for switching ISA within
    assembly code, then restoring the original ISA.

    1) Using a pair of .set push & .set pop directives. For example:

        .set	push
        .set	mips32r2
        <some_insn>
        .set	pop

    2) Using .set mips0 to restore the ISA originally specified on the
       command line. For example:

        .set	mips32r2
        <some_insn>
        .set	mips0

    Unfortunately method 2 does not work with nanoMIPS toolchains, where the
    assembler rejects the .set mips0 directive like so:

        Error: cannot change ISA from nanoMIPS to mips0

    In preparation for supporting nanoMIPS builds, switch all instances of
    method 2 in generic non-platform-specific code to use push & pop as in
    method 1 instead. The .set push & .set pop is arguably cleaner anyway,
    and if nothing else it's good to consistently use one method.

    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Patchwork: https://patchwork.linux-mips.org/patch/21037/
    Cc: linux-mips@linux-mips.org
* Merge tag 'mips_4.19_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux (Linus Torvalds, 2018-08-23; 1 file, -1/+3)

    Pull MIPS fixes from Paul Burton:

     - Fix microMIPS build failures by adding a .insn directive to the
       barrier_before_unreachable() asm statement in order to convince the
       toolchain that the asm statement is a valid branch target rather than
       a bogus attempt to switch ISA.

     - Clean up our declarations of TLB functions that we overwrite with
       generated code in order to prevent the compiler making assumptions
       about alignment that cause microMIPS kernels built with GCC 7 & above
       to die early during boot.

     - Fix up a regression for MIPS32 kernels which slipped into the main
       MIPS pull for 4.19, causing CONFIG_32BIT=y kernels to contain
       inappropriate MIPS64 instructions.

     - Extend our existing workaround for MIPSr6 builds that end up using
       the __multi3 intrinsic to GCC 7 & below, rather than just GCC 7.

    * tag 'mips_4.19_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
      MIPS: lib: Provide MIPS64r6 __multi3() for GCC < 7
      MIPS: Workaround GCC __builtin_unreachable reordering bug
      compiler.h: Allow arch-specific asm/compiler.h
      MIPS: Avoid move pseudo-instruction whilst using MIPS_ISA_LEVEL
      MIPS: Consistently declare TLB functions
      MIPS: Export tlbmiss_handler_setup_pgd near its definition
| * MIPS: Avoid move pseudo-instruction whilst using MIPS_ISA_LEVEL (Paul Burton, 2018-08-18; 1 file, -1/+3)

    MIPS_ISA_LEVEL is always defined as the 64 bit ISA that is a compatible
    superset of the ISA that the kernel build is targeting, and is used to
    allow us to emit instructions that we may detect support for at runtime.

    When we use a .set MIPS_ISA_LEVEL directive & are building a 32-bit
    kernel, we are therefore temporarily allowing the assembler to generate
    MIPS64 instructions. Using the move pseudo-instruction whilst this is
    the case is problematic because the assembler is likely to emit a daddu
    instruction, which will generate a reserved instruction exception when
    executed on a MIPS32 machine.

    Unfortunately the combination of commit a0a5ac3ce8fe ("MIPS: Fix delay
    slot bug in `atomic*_sub_if_positive' for R10000_LLSC_WAR") and commit
    4936084c2ee2 ("MIPS: Cleanup R10000_LLSC_WAR logic in atomic.h") causes
    us to do exactly this in atomic_sub_if_positive(), and the result is
    MIPS64 daddu instructions in 32-bit kernels.

    Fix this by using .set mips0 to restore the default ISA after the ll
    instruction, and use .set MIPS_ISA_LEVEL again prior to the sc. This
    ensures everything but the ll & sc are assembled using the default ISA
    for the kernel build & the move pseudo-instruction is emitted as a
    MIPS32 addu instruction.

    We appear to have another pre-existing instance of the same issue in our
    atomic_fetch_*_relaxed() functions, and fix that up too by moving our
    .set mips0 such that it occurs prior to the use of the move
    pseudo-instruction.

    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Fixes: a0a5ac3ce8fe ("MIPS: Fix delay slot bug in `atomic*_sub_if_positive' for R10000_LLSC_WAR")
    Fixes: 4936084c2ee2 ("MIPS: Cleanup R10000_LLSC_WAR logic in atomic.h")
    Patchwork: https://patchwork.linux-mips.org/patch/20253/
    Cc: James Hogan <jhogan@kernel.org>
    Cc: Joshua Kinard <kumba@gentoo.org>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: linux-mips@linux-mips.org
* | Merge tag 'mips_4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux (Linus Torvalds, 2018-08-14; 1 file, -159/+36)

    Pull MIPS updates from Paul Burton:
    "Here are the main MIPS changes for 4.19. An overview of the general
    architecture changes:

     - Massive DMA ops refactoring from Christoph Hellwig (huzzah for
       deleting crufty code!).

     - We introduce NT_MIPS_DSP & NT_MIPS_FP_MODE ELF notes & corresponding
       regsets to expose DSP ASE & floating point mode state respectively,
       both for live debugging & core dumps.

     - We better optimize our code by hard-coding cpu_has_* macros at
       compile time where their values are known due to the ISA revision
       that the kernel build is targeting.

     - The EJTAG exception handler now better handles SMP systems, where it
       was previously possible for CPUs to clobber a register value saved by
       another CPU.

     - Our implementation of memset() gained a couple of fixes for MIPSr6
       systems to return correct values in some cases where stores fault.

     - We now implement ioremap_wc() using the uncached-accelerated cache
       coherency attribute where supported, which is detected during boot,
       and fall back to plain uncached access where necessary. The
       MIPS-specific (and unused in tree) ioremap_uncached_accelerated() &
       ioremap_cacheable_cow() are removed.

     - The prctl(PR_SET_FP_MODE, ...) syscall is better supported for SMP
       systems by reworking the way we ensure remote CPUs that may be
       running threads within the affected process switch mode.

     - Systems using the MIPS Coherence Manager will now set the
       MIPS_IC_SNOOPS_REMOTE flag to avoid some unnecessary cache
       maintenance overhead when flushing the icache.

     - A few fixes were made for building with clang/LLVM, which now
       successfully builds kernels for many of our platforms.

     - Miscellaneous cleanups all over.

    And some platform-specific changes:

     - ar7 gained stubs for a few clock API functions to fix build failures
       for some drivers.

     - ath79 gained support for a few new SoCs, a few fixes & better
       gpio-keys support.

     - Ci20 now exposes its SPI bus using the spi-gpio driver.

     - The generic platform can now auto-detect a suitable value for
       PHYS_OFFSET based upon the memory map described by the device tree,
       allowing us to avoid wasting memory on page book-keeping for systems
       where RAM starts at a non-zero physical address.

     - Ingenic systems using the jz4740 platform code now link their vmlinuz
       higher to allow for kernels of a realistic size.

     - Loongson32 now builds the kernel targeting MIPSr1 rather than MIPSr2
       to avoid CPU errata.

     - Loongson64 gains a couple of fixes, a workaround for a write
       buffering issue & support for the Loongson 3A R3.1 CPU.

     - Malta now uses the piix4-poweroff driver to handle powering down.

     - Microsemi Ocelot gained support for its SPI bus & NOR flash, its
       second MDIO bus and can now be supported by a FIT/.itb image.

     - Octeon saw a bunch of header cleanups which remove a lot of duplicate
       or unused code"

    * tag 'mips_4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: (123 commits)
      MIPS: Remove remnants of UASM_ISA
      MIPS: netlogic: xlr: Remove erroneous check in nlm_fmn_send()
      MIPS: VDSO: Force link endianness
      MIPS: Always specify -EB or -EL when using clang
      MIPS: Use dins to simplify __write_64bit_c0_split()
      MIPS: Use read-write output operand in __write_64bit_c0_split()
      MIPS: Avoid using array as parameter to write_c0_kpgd()
      MIPS: vdso: Allow clang's --target flag in VDSO cflags
      MIPS: genvdso: Remove GOT checks
      MIPS: Remove obsolete MIPS checks for DST node "chosen@0"
      MIPS: generic: Remove input symbols from defconfig
      MIPS: Delete unused code in linux32.c
      MIPS: Remove unused sys_32_mmap2
      MIPS: Remove nabi_no_regargs
      mips: dts: mscc: enable spi and NOR flash support on ocelot PCB123
      mips: dts: mscc: Add spi on Ocelot
      MIPS: Loongson: Merge load addresses
      MIPS: Loongson: Set Loongson32 to MIPS32R1
      MIPS: mscc: ocelot: add interrupt controller properties to GPIO controller
      MIPS: generic: Select MIPS_AUTO_PFN_OFFSET
      ...
| * MIPS: Cleanup R10000_LLSC_WAR logic in atomic.h (Joshua Kinard, 2018-07-12; 1 file, -147/+32)

    This patch reduces down the conditionals in MIPS atomic code that deal
    with a silicon bug in early R10000 cpus that required a workaround of a
    branch-likely instruction following a store-conditional in order to
    guarantee the whole ll/sc sequence is atomic. As the only real
    difference is a branch-likely instruction (beqzl) over a standard branch
    (beqz), the conditional is reduced down to a single preprocessor check
    at the top to pick the required instruction.

    This requires writing the uses in assembler, thus we discard the
    non-R10000 case that uses a mixture of a C do...while loop with embedded
    assembler that was added back in commit 7837314d141c ("MIPS: Get rid of
    branches to .subsections."). A note found in the git log for commit
    5999eca25c1f ("[MIPS] Improve branch prediction in ll/sc atomic
    operations.") is also addressed.

    The macro definition for the branch instruction and the code comment
    derives from a patch sent in earlier by Paul Burton for various cmpxchg
    cleanups.

    [paul.burton@mips.com:
      - Minor whitespace fix for checkpatch.]
    Signed-off-by: Joshua Kinard <kumba@gentoo.org>
    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Patchwork: https://patchwork.linux-mips.org/patch/17736/
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: James Hogan <james.hogan@mips.com>
    Cc: "Maciej W. Rozycki" <macro@mips.com>
    Cc: linux-mips@linux-mips.org
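A sketch of the single preprocessor check described above:

    /* beqzl is the branch-likely workaround for early R10000;
     * beqz is the standard branch used everywhere else. */
    #if R10000_LLSC_WAR
    # define __scbeqz "beqzl"
    #else
    # define __scbeqz "beqz"
    #endif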
| * MIPS: Fix delay slot bug in `atomic*_sub_if_positive' for R10000_LLSC_WAR (Joshua Kinard, 2018-07-12; 1 file, -20/+12)

    This patch fixes an old bug in MIPS ll/sc atomics, in the
    `atomic_sub_if_positive' and `atomic64_sub_if_positive' functions, for
    the R10000_LLSC_WAR case, where the result of the subu/dsubu instruction
    would potentially not be made available to the sc/scd instruction due to
    being in the delay slot of the branch-likely (beqzl) instruction.

    This also removes the need for the `noreorder' directive, allowing GAS
    to use delay slot scheduling as needed.

    The same fix is also applied to the standard branch (beqz) case in
    preparation for a follow-up patch that will cleanup/merge the
    R10000_LLSC_WAR and non-R10K sections together.

    Signed-off-by: Joshua Kinard <kumba@gentoo.org>
    Signed-off-by: Paul Burton <paul.burton@mips.com>
    Tested-by: Joshua Kinard <kumba@gentoo.org>
    Patchwork: https://patchwork.linux-mips.org/patch/17735/
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: James Hogan <james.hogan@mips.com>
    Cc: "Maciej W. Rozycki" <macro@mips.com>
    Cc: linux-mips@linux-mips.org
* | atomics/treewide: Make unconditional inc/dec ops optional (Mark Rutland, 2018-06-21; 1 file, -38/+0)

    Many of the inc/dec ops are mandatory, but for most architectures
    inc/dec are simply trivial wrappers around their corresponding add/sub
    ops. Let's make all the inc/dec ops optional, so that we can get rid of
    these boilerplate wrappers. The instrumented atomics are updated
    accordingly.

    There should be no functional change as a result of this patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Palmer Dabbelt <palmer@sifive.com>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/lkml/20180621121321.4761-17-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
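A sketch of the generic fallbacks that replace the per-arch wrappers:

    #ifndef atomic_inc
    #define atomic_inc(v)	atomic_add(1, (v))
    #endif

    #ifndef atomic_dec
    #define atomic_dec(v)	atomic_sub(1, (v))
    #endif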
* | atomics/treewide: Make test ops optional (Mark Rutland, 2018-06-21; 1 file, -84/+0)

    Some of the atomics return the result of a test applied after the atomic
    operation, and almost all architectures implement these as trivial
    wrappers around the underlying atomic. Specifically:

      * <atomic>_inc_and_test(v) is (<atomic>_inc_return(v) == 0)
      * <atomic>_dec_and_test(v) is (<atomic>_dec_return(v) == 0)
      * <atomic>_sub_and_test(i, v) is (<atomic>_sub_return(i, v) == 0)
      * <atomic>_add_negative(i, v) is (<atomic>_add_return(i, v) < 0)

    Rather than have these definitions duplicated in all architectures, with
    minor inconsistencies in formatting and documentation, let's make these
    operations optional, with default fallbacks as above. Implementations
    must now provide a preprocessor symbol.

    The instrumented atomics are updated accordingly.

    Both x86 and m68k have custom implementations, which are left as-is,
    given preprocessor symbols to avoid being overridden.

    There should be no functional change as a result of this patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Palmer Dabbelt <palmer@sifive.com>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/lkml/20180621121321.4761-16-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* | atomics/treewide: Make atomic64_fetch_add_unless() optional (Mark Rutland, 2018-06-21; 1 file, -24/+0)

    Architectures with atomic64_fetch_add_unless() provide a preprocessor
    symbol if they do so, and all other architectures have trivial C
    implementations of atomic64_add_unless() which are near-identical.

    Let's unify the trivial definitions of atomic64_fetch_add_unless() in
    <linux/atomic.h>, so that we always have both
    atomic64_fetch_add_unless() and atomic64_add_unless() with less
    boilerplate code. This means that atomic64_add_unless() is always
    implemented in core code, and the instrumented atomics are updated
    accordingly.

    There should be no functional change as a result of this patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/lkml/20180621121321.4761-15-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* | atomics/treewide: Make atomic_fetch_add_unless() optional (Mark Rutland, 2018-06-21; 1 file, -24/+0)

    Several architectures have a near-identical implementation based on
    atomic_read() and atomic_cmpxchg() which we can instead define in
    <linux/atomic.h>, so let's do so, using something close to the existing
    x86 implementation with try_cmpxchg().

    Where an architecture provides its own atomic_fetch_add_unless(), it
    must define a preprocessor symbol for it. The instrumented atomics are
    updated accordingly.

    Note that arch/arc's existing atomic_fetch_add_unless() had redundant
    barriers, as these are already present in its atomic_cmpxchg()
    implementation.

    There should be no functional change as a result of this patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Palmer Dabbelt <palmer@sifive.com>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Link: https://lore.kernel.org/lkml/20180621121321.4761-7-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
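A sketch of the generic fallback this describes, close to the x86
try_cmpxchg() pattern the message mentions:

    static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
    {
            int c = atomic_read(v);

            do {
                    if (unlikely(c == u))
                            break;
            } while (!atomic_try_cmpxchg(v, &c, c + a));

            return c;	/* value observed before any addition */
    }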
* | atomics/treewide: Make atomic64_inc_not_zero() optional (Mark Rutland, 2018-06-21; 1 file, -2/+0)

    We define a trivial fallback for atomic_inc_not_zero(), but don't do the
    same for atomic64_inc_not_zero(), leading most architectures to define
    the same boilerplate.

    Let's add a fallback in <linux/atomic.h>, and remove the redundant
    implementations. Note that atomic64_add_unless() is always defined in
    <linux/atomic.h>, and promotes its arguments to the requisite types, so
    we need not do this explicitly.

    There should be no functional change as a result of this patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Palmer Dabbelt <palmer@sifive.com>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/lkml/20180621121321.4761-6-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* | atomics/treewide: Rename __atomic_add_unless() => atomic_fetch_add_unless() (Mark Rutland, 2018-06-21; 1 file, -2/+2)

    While __atomic_add_unless() was originally intended as a building-block
    for atomic_add_unless(), it's now used in a number of places around the
    kernel. It's the only common atomic operation named __atomic*(), rather
    than atomic_*(), and for consistency it would be better named
    atomic_fetch_add_unless().

    This lack of consistency is slightly confusing, and gets in the way of
    scripting atomics. Given that, let's clean things up and promote it to
    an official part of the atomics API, in the form of
    atomic_fetch_add_unless().

    This patch converts definitions and invocations over to the new name,
    including the instrumented version, using the following script:

        git grep -w __atomic_add_unless | while read line; do
          sed -i '{s/\<__atomic_add_unless\>/atomic_fetch_add_unless/}' "${line%%:*}";
        done
        git grep -w __arch_atomic_add_unless | while read line; do
          sed -i '{s/\<__arch_atomic_add_unless\>/arch_atomic_fetch_add_unless/}' "${line%%:*}";
        done

    Note that we do not have atomic{64,_long}_fetch_add_unless(), which will
    be introduced by later patches.

    There should be no functional change as a result of this patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Palmer Dabbelt <palmer@sifive.com>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/lkml/20180621121321.4761-2-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* locking/atomic, arch/mips: Convert to _relaxed atomics (Peter Zijlstra, 2016-06-16; 1 file, -20/+22)

    Generic code will construct the {,_acquire,_release} versions by adding
    the required smp_mb__{before,after}_atomic() calls.

    XXX: if/when MIPS starts using its new SYNCxx instructions, it can
    provide custom __atomic_op_{acquire,release}() macros as per the powerpc
    example.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: linux-mips@linux-mips.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* locking/atomic: Remove linux/atomic.h:atomic_fetch_or() (Peter Zijlstra, 2016-06-16; 1 file, -2/+0)

    Since all architectures now have this implemented natively, remove this
    dead code.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* locking/atomic, arch/mips: Implement atomic{,64}_fetch_{add,sub,and,or,xor}() (Peter Zijlstra, 2016-06-16; 1 file, -9/+129)

    Implement FETCH-OP atomic primitives. These are very similar to the
    existing OP-RETURN primitives we already have, except they return the
    value of the atomic variable _before_ modification.

    This is especially useful for irreversible operations -- such as bitops
    (because it becomes impossible to reconstruct the state prior to
    modification).

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-arch@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: linux-mips@linux-mips.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
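The semantic difference in a small usage sketch (variable names
hypothetical):

    atomic_t v = ATOMIC_INIT(1);

    int after  = atomic_add_return(1, &v);	/* 2: value after the add    */
    int before = atomic_fetch_add(1, &v);	/* 2: value before this add  */

    /* for irreversible ops, only the fetch form recovers prior state: */
    int oldbits = atomic_fetch_or(0x4, &v);	/* bits set before the OR    */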
* Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus (Linus Torvalds, 2015-11-15; 1 file, -1/+1)

    Pull MIPS updates from Ralf Baechle:
    "These are the highlights of the main MIPS pull request for 4.4:

     - Add latencytop support
     - Support appended DTBs
     - VDSO support and initially use it for gettimeofday.
     - Drop the .MIPS.abiflags and ELF NOTE sections from vmlinux
     - Support for the 5KE, an internal test core.
     - Switch all MIPS platforms to libata drivers.
     - Improved support, cleanups for ralink and Lantiq platforms.
     - Support for the new xilfpga platform.
     - A number of DTB improvements for BMIPS.
     - Improved support for CM and CPS.
     - Minor JZ4740 and BCM47xx enhancements"

    * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (120 commits)
      MIPS: idle: add case for CPU_5KE
      MIPS: Octeon: Support APPENDED_DTB
      MIPS: vmlinux: create a section for appended DTB
      MIPS: Clean up compat_siginfo_t
      MIPS: Fix PAGE_MASK definition
      MIPS: BMIPS: Enable GZIP ramdisk and timed printks
      MIPS: Add xilfpga defconfig
      MIPS: xilfpga: Add mipsfpga platform code
      MIPS: xilfpga: Add xilfpga device tree files.
      dt-bindings: MIPS: Document xilfpga bindings and boot style
      MIPS: Make MIPS_CMDLINE_DTB default
      MIPS: Make the kernel arguments from dtb available
      MIPS: Use USE_OF as the guard for appended dtb
      MIPS: BCM63XX: Use pr_* instead of printk
      MIPS: Loongson: Cleanup CONFIG_LOONGSON_SUSPEND.
      MIPS: lantiq: Disable xbar fpi burst mode
      MIPS: lantiq: Force the crossbar to big endian
      MIPS: lantiq: Initialize the USB core on boot
      MIPS: lantiq: Return correct value for fpi clock on ar9
      MIPS: ralink: Add missing clock on rt305x
      ...
| * MIPS: atomic: Fix comment describing atomic64_add_unless's return value. (Ralf Baechle, 2015-10-26; 1 file, -1/+1)

    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    Fixes: f24219b4e90cf70ec4a211b17fbabc725a0ddf3c
    (cherry picked from commit f0a232cde7be18a207fd057dd79bbac8a0a45dec)
* | atomic, arch: Audit atomic_{read,set}() (Peter Zijlstra, 2015-09-23; 1 file, -4/+4)

    This patch makes sure that atomic_{read,set}() are at least
    {READ,WRITE}_ONCE().

    We already had the 'requirement' that atomic_read() should use
    ACCESS_ONCE(), and most archs had this, but a few were lacking. All are
    now converted to use READ_ONCE().

    And, by a symmetry and general paranoia argument, upgrade atomic_set()
    to use WRITE_ONCE().

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: james.hogan@imgtec.com
    Cc: linux-kernel@vger.kernel.org
    Cc: oleg@redhat.com
    Cc: will.deacon@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
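On MIPS the upgraded accessors come out as (a sketch of the post-patch
definitions):

    #define atomic_read(v)		READ_ONCE((v)->counter)
    #define atomic_set(v, i)	WRITE_ONCE((v)->counter, (i))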
* atomic: Provide atomic_{or,xor,and} (Peter Zijlstra, 2015-07-27; 1 file, -2/+0)

    Implement atomic logic ops -- atomic_{or,xor,and}. These will replace
    the atomic_{set,clear}_mask functions that are available on some archs.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* mips: Provide atomic_{or,xor,and} (Peter Zijlstra, 2015-07-27; 1 file, -0/+9)

    Implement atomic logic ops -- atomic_{or,xor,and}. These will replace
    the atomic_{set,clear}_mask functions that are available on some archs.

    Acked-by: Ralf Baechle <ralf@linux-mips.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* MIPS: asm: atomic: Update ISA constraints for MIPS R6 support (Markos Chandras, 2015-02-17; 1 file, -6/+6)

    MIPS R6 changed the opcodes for LL/SC instructions, so we need to set
    the correct ISA level.

    Cc: Matthew Fortune <Matthew.Fortune@imgtec.com>
    Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: asm: Rename GCC_OFF12_ASM to GCC_OFF_SMALL_ASM (Markos Chandras, 2015-02-17; 1 file, -15/+15)

    The GCC_OFF12_ASM macro is used for 12-bit immediate constraints, but we
    will also use it for 9-bit constraints on MIPS R6, so we rename it to
    something more appropriate.

    Cc: Maciej W. Rozycki <macro@linux-mips.org>
    Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
* MIPS: atomic.h: Reformat to fit in 79 columns (Maciej W. Rozycki, 2014-11-24; 1 file, -180/+181)

    Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/8484/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: Fix microMIPS LL/SC immediate offsets (Maciej W. Rozycki, 2014-11-24; 1 file, -15/+24)

    In the microMIPS encoding some memory access instructions have their
    immediate offset reduced to 12 bits only. That does not match the GCC
    `R' constraint we use in some places to satisfy the requirement,
    resulting in build failures like this:

        {standard input}: Assembler messages:
        {standard input}:720: Error: macro used $at after ".set noat"
        {standard input}:720: Warning: macro instruction expanded into
        multiple instructions

    Fix the problem by defining a macro, `GCC_OFF12_ASM', that expands to
    the right constraint depending on whether microMIPS or standard MIPS
    code is produced. Also apply the fix where `m' is used, as in the worst
    case this change does nothing, e.g. where the pointer was already in a
    register such as a function argument and no further offset was
    requested, and in the best case it avoids an extraneous sequence of up
    to two instructions to load the high 20 bits of the address in the LL/SC
    loop. This reduces the risk of lock contention, which is the higher the
    more instructions there are in the critical section between LL and SC.

    Strictly speaking we could just bulk-replace `R' with `ZC', as the
    latter constraint adjusts automatically depending on the ISA selected.
    However it was only introduced with GCC 4.9 and we keep supporting older
    compilers for the standard MIPS configuration, hence the slightly more
    complicated approach I chose.

    The choice of a zero-argument function-like rather than an object-like
    macro was made so that it does not look like a function call taking the
    C expression used for the constraint as an argument. This is so as not
    to confuse the reader or formatting checkers like `checkpatch.pl', and
    follows previous practice.

    Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
    Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/8482/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
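A sketch of the selector macro described above (`ZC' adjusts with the
selected ISA but needs GCC 4.9, so plain MIPS keeps `R'):

    #ifdef CONFIG_CPU_MICROMIPS
    #define GCC_OFF12_ASM()	"ZC"
    #else
    #define GCC_OFF12_ASM()	"R"
    #endif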
* locking,arch: Use ACCESS_ONCE() instead of cast to volatile in atomic_read() (Pranith Kumar, 2014-10-03; 1 file, -2/+2)

    Use the much more reader-friendly ACCESS_ONCE() instead of the cast to
    volatile. This is purely a stylistic change.

    Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
    Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
    Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
    Acked-by: Max Filippov <jcmvbkbc@gmail.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: linux-arch@vger.kernel.org
    Link: http://lkml.kernel.org/r/1411482607-20948-1-git-send-email-bobby.prani@gmail.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* locking, mips: Fix atomics (Peter Zijlstra, 2014-09-10; 1 file, -3/+3)

    The patch folding the atomic ops had two silly fails in the _return
    primitives.

    Fixes: ef31563e950c ("locking,arch,mips: Fold atomic_ops")
    Reported-by: Guenter Roeck <linux@roeck-us.net>
    Tested-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Bart Van Assche <bvanassche@acm.org>
    Cc: Hannes Reinecke <hare@suse.de>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: "Maciej W. Rozycki" <macro@codesourcery.com>
    Cc: Markos Chandras <markos.chandras@imgtec.com>
    Cc: Paul Burton <paul.burton@imgtec.com>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Stephen Rothwell <sfr@canb.auug.org.au>
    Link: http://lkml.kernel.org/r/20140902202126.GA3190@worktop.ger.corp.intel.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* locking,arch,mips: Fold atomic_ops (Peter Zijlstra, 2014-08-14; 1 file, -370/+187)

    Many of the atomic op implementations are the same except for one
    instruction; fold the lot into a few CPP macros and reduce LoC. This
    also prepares for easy addition of new ops.

    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: "Maciej W. Rozycki" <macro@codesourcery.com>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: linux-mips@linux-mips.org
    Link: http://lkml.kernel.org/r/20140508135852.521548500@infradead.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
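A condensed sketch of the fold: one macro parameterized by the C operator
(used by the non-LL/SC fallback, the only path shown here) and the MIPS
instruction (used by the LL/SC loop, elided):

    #define ATOMIC_OP(op, c_op, asm_op)					\
    static __inline__ void atomic_##op(int i, atomic_t *v)		\
    {									\
            unsigned long flags;					\
    									\
            raw_local_irq_save(flags);					\
            v->counter c_op i;	/* LL/SC variants use asm_op */		\
            raw_local_irq_restore(flags);				\
    }

    ATOMIC_OP(add, +=, addu)
    ATOMIC_OP(sub, -=, subu)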
* arch,mips: Convert smp_mb__*() (Peter Zijlstra, 2014-04-18; 1 file, -9/+0)

    MIPS is interesting and has hardware variants that reorder over ll/sc as
    well as those that do not. Implement the 2 new barrier functions as per
    the old barriers.

    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Link: http://lkml.kernel.org/n/tip-9ph49jbae3hol9v721sbc2g6@git.kernel.org
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: "Maciej W. Rozycki" <macro@codesourcery.com>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: linux-kernel@vger.kernel.org
    Cc: linux-mips@linux-mips.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
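Per the description, a sketch of the two new barriers defined in terms of
the old LL/SC barriers, so both hardware variants keep their existing
semantics:

    #define smp_mb__before_atomic()	smp_mb__before_llsc()
    #define smp_mb__after_atomic()	smp_llsc_mb()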
* MIPS: Fix gigaton of warning building with microMIPS. (Ralf Baechle, 2014-03-31; 1 file, -20/+20)

    With binutils 2.24, the attempt to switch from microMIPS mode to MIPS
    III mode through .set mips3 results in *lots* of warnings like

        {standard input}: Assembler messages:
        {standard input}:397: Warning: the 64-bit MIPS architecture does not
        support the `smartmips' extension

    during a kernel build. Fixed by using .set arch=r4000 instead.

    This breaks support for building the kernel with binutils 2.13, which
    was supported for 32 bit kernels only anyway, and 2.14, which was a bad
    vintage for MIPS anyway.

    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
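The directive swap in a before/after sketch (fragment from inside an asm
string):

    "	.set	mips3		\n"	/* binutils 2.24 + microMIPS: warns */
    "	.set	arch=r4000	\n"	/* accepted; selects the same ISA   */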
* MIPS: Random whitespace clean-ups (Maciej W. Rozycki, 2013-11-04; 1 file, -1/+1)

    Another whitespace clean-up; this removes tabs from between sentences in
    some comments.

    Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/6103/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: Whitespace cleanup. (Ralf Baechle, 2013-02-01; 1 file, -2/+2)

    Having received another series of whitespace patches I decided to do
    this once and for all, rather than dealing with this kind of patch
    trickling in forever.

    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* Improve atomic.h robustness (Joshua Kinard, 2012-10-11; 1 file, -35/+29)

    I've maintained this patch, originally from Thiemo Seufer in 2004, for a
    really long time, but I think it's time for it to get a look at for
    possible inclusion. I have had no problems with it across various SGI
    systems over the years.

    To quote the post here:
    http://www.linux-mips.org/archives/linux-mips/2004-12/msg00000.html

    "the atomic functions use so far memory references for the inline
    assembler to access the semaphore. This can lead to additional
    instructions in the ll/sc loop, because newer compilers don't expand the
    memory reference any more but leave it to the assembler. The appended
    patch uses registers instead, and makes the ll/sc arguments more
    explicit. In some cases it will lead also to better register scheduling
    because the register isn't bound to an output any more."

    Signed-off-by: Joshua Kinard <kumba@gentoo.org>
    Cc: linux-mips@linux-mips.org
    Patchwork: https://patchwork.linux-mips.org/patch/4029/
    Signed-off-by: Ralf Baechle <ralf@linux-mips.org>