author     Ingo Molnar <mingo@kernel.org>   2015-01-01 22:21:22 +0100
committer  Ingo Molnar <mingo@kernel.org>   2015-01-01 22:21:22 +0100
commit     2aba73a6146bb85d4a42386ca41dec0f4aa4b3ad
tree       434ff593bdf040c890427599dc4c1ecfc1377b3c
parent     x86/build: Clean auto-generated processor feature files
parent     x86, vdso: Use asm volatile in __getcpu
Merge tag 'pr-20141223-x86-vdso' of git://git.kernel.org/pub/scm/linux/kernel/git/luto/linux into x86/urgent
Pull VDSO fix from Andy Lutomirski:
"This is hopefully the last vdso fix for 3.19. It should be very
safe (it just adds a volatile).
I don't think it fixes an actual bug (the __getcpu calls in the
pvclock code may not have been needed in the first place), but
discussion on that point is ongoing.
It also fixes a big performance issue in 3.18 and earlier in which
the lsl instructions in vclock_gettime got hoisted so far up the
function that they happened even when the function they were in was
never called. In 3.19, the performance issue seems to be gone due to
the whims of my compiler and some interaction with a branch that's
now gone.
I'll hopefully have a much bigger overhaul of the pvclock code
for 3.20, but it needs careful review."
Signed-off-by: Ingo Molnar <mingo@kernel.org>
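The hoisting Andy describes is a consequence of how GCC models inline asm: an asm without "volatile" that has output operands is treated as a pure function of its inputs, so the optimizer may delete unused copies, merge duplicates, or speculate the instruction ahead of the branch that guards it. Below is a minimal, hypothetical sketch, not from the kernel; the 0x7b selector is an assumed stand-in for __PER_CPU_SEG. Compile with "cc -O2 -S sketch.c" on x86_64 and compare where the two LSL instructions land.

/* sketch.c - compare placement of a plain vs. volatile asm.
 * Build: cc -O2 -S sketch.c   (x86_64 only)
 */
#define SEL 0x7b   /* assumed stand-in for __PER_CPU_SEG */

static unsigned int cpu_plain(void)
{
        unsigned int p;

        /* No volatile: GCC treats this as a pure value and may
         * hoist it above the branch in caller() below.
         */
        asm("lsl %1,%0" : "=r" (p) : "r" (SEL));
        return p;
}

static unsigned int cpu_volatile(void)
{
        unsigned int p;

        /* volatile: GCC must assume a side effect, so it will not
         * delete, merge, or speculatively hoist the instruction.
         */
        asm volatile("lsl %1,%0" : "=r" (p) : "r" (SEL));
        return p;
}

unsigned int caller(int want_cpu)
{
        if (want_cpu)
                return cpu_plain() + cpu_volatile();
        return 0;
}

In the generated assembly, the plain LSL is a candidate for speculation above the want_cpu test, which is the pre-3.19 behaviour the quoted message describes; the volatile LSL stays behind the branch.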
 arch/x86/include/asm/vgtod.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/vgtod.h b/arch/x86/include/asm/vgtod.h
index e7e9682a33e9..f556c4843aa1 100644
--- a/arch/x86/include/asm/vgtod.h
+++ b/arch/x86/include/asm/vgtod.h
@@ -80,9 +80,11 @@ static inline unsigned int __getcpu(void)
 
 	/*
 	 * Load per CPU data from GDT.  LSL is faster than RDTSCP and
-	 * works on all CPUs.
+	 * works on all CPUs.  This is volatile so that it orders
+	 * correctly wrt barrier() and to keep gcc from cleverly
+	 * hoisting it out of the calling function.
 	 */
-	asm("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
+	asm volatile ("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
 
 	return p;
 }
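For context on the new comment's reference to barrier(): the kernel's compiler barrier is itself an empty volatile asm with a "memory" clobber. For reference, its definition at the time (from include/linux/compiler-gcc.h, not part of this patch):

#define barrier() __asm__ __volatile__("" : : : "memory")

GCC does not reorder volatile asm statements with respect to one another, so once __getcpu's asm is volatile it is ordered against barrier(). The old non-volatile form had only register operands, so nothing tied it to the "memory" clobber and it was free to drift across the barrier.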