author     Will Deacon <will.deacon@arm.com>            2014-02-07 19:12:32 +0100
committer  Russell King <rmk+kernel@arm.linux.org.uk>   2014-02-10 12:44:50 +0100
commit     7c8746a9eb287642deaad0e7c2cdf482dce5e4be (patch)
tree       d8740bb7222926df4bae9f93a6564839b08dc6c3 /arch/arm
parent     ARM: 7953/1: mm: ensure TLB invalidation is complete before enabling MMU (diff)
ARM: 7955/1: spinlock: ensure we have a compiler barrier before sev
When unlocking a spinlock, we require the following, strictly ordered
sequence of events:
<barrier> /* dmb */
<unlock>
<barrier> /* dsb */
<sev>
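For context, this sequence is split across the unlock path and dsb_sev() on ARM. A rough, paraphrased sketch of the ticket-spinlock unlock path of this era (illustrative only, not part of the patch) shows where each step lives:

/* Paraphrased sketch of arch_spin_unlock() from this era (illustrative only). */
static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	smp_mb();		/* <barrier> (dmb): orders the critical section */
	lock->tickets.owner++;	/* <unlock>: hand the ticket to the next waiter */
	dsb_sev();		/* <barrier> (dsb) + <sev>: wake CPUs sleeping in wfe */
}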
Whilst the code does indeed reflect this in terms of the architecture, the
final <barrier> + <sev> have been contracted into a single inline asm
without a "memory" clobber, so the compiler is at liberty to reorder the
unlock to the end of the above sequence. In such a case, a waiting CPU may
be woken up before the lock has been unlocked, leading to extremely poor
performance.
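To illustrate the compiler-barrier point in isolation: a volatile asm without a "memory" clobber does not order ordinary memory accesses around it, so the store that releases the lock may legally be sunk past the sev. The following stand-alone, ARM-only sketch (hypothetical names, not kernel code) contrasts the two forms:

int lock_word;	/* stands in for the spinlock owner field */

void unlock_no_clobber(void)
{
	lock_word = 0;
	/* No "memory" clobber: the compiler may move the store to
	 * lock_word after this asm, i.e. after the sev. */
	__asm__ __volatile__("dsb ishst\n\tsev");
}

void unlock_with_clobber(void)
{
	lock_word = 0;
	/* "memory" clobber acts as a compiler barrier: the store is
	 * guaranteed to be emitted before the dsb/sev. */
	__asm__ __volatile__("dsb ishst\n\tsev" : : : "memory");
}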
This patch reworks the dsb_sev() function to make use of the dsb()
macro and ensure ordering against the unlock.
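The relevant property of dsb() here is that it already carries a "memory" clobber, i.e. the compiler barrier the open-coded asm lacked. Approximately as defined in arch/arm/include/asm/barrier.h around this time (shown for illustration; check the tree for the exact forms):

#if __LINUX_ARM_ARCH__ >= 7
#define dsb(option)	__asm__ __volatile__ ("dsb " #option : : : "memory")
#else
#define dsb(x)		__asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
					      : : "r" (0) : "memory")
#endif

Because the SEV is then issued in a separate volatile asm that follows the dsb(), the unlock store can no longer drift past the wakeup.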
Cc: <stable@vger.kernel.org>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Diffstat (limited to 'arch/arm')
-rw-r--r--  arch/arm/include/asm/spinlock.h | 15
1 file changed, 3 insertions(+), 12 deletions(-)
diff --git a/arch/arm/include/asm/spinlock.h b/arch/arm/include/asm/spinlock.h
index ef3c6072aa45..ac4bfae26702 100644
--- a/arch/arm/include/asm/spinlock.h
+++ b/arch/arm/include/asm/spinlock.h
@@ -37,18 +37,9 @@
 
 static inline void dsb_sev(void)
 {
-#if __LINUX_ARM_ARCH__ >= 7
-	__asm__ __volatile__ (
-		"dsb ishst\n"
-		SEV
-	);
-#else
-	__asm__ __volatile__ (
-		"mcr p15, 0, %0, c7, c10, 4\n"
-		SEV
-		: : "r" (0)
-	);
-#endif
+
+	dsb(ishst);
+	__asm__(SEV);
 }
 
 /*