author      Andy Lutomirski <luto@kernel.org>	2015-07-03 21:44:30 +0200
committer   Ingo Molnar <mingo@kernel.org>	2015-07-07 10:59:08 +0200
commit      a586f98e9767fb0dfdb989002866b4024f00ce08
tree        31931a3784c55b4cffd79c094055900bddd232fc /arch/x86/entry/entry_64.S
parent      x86/asm/entry/64: Save all regs on interrupt entry
x86/asm/entry/64: Simplify IRQ stack pt_regs handling
There's no need for both RSI and RDI to point to the original stack.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Denys Vlasenko <vda.linux@googlemail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/3a0481f809dd340c7d3f54ce3fd6d66ef2a578cd.1435952415.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/x86/entry/entry_64.S')
 arch/x86/entry/entry_64.S | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 65029f48bcc4..83eb63d31da4 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -506,8 +506,6 @@ END(irq_entries_start)
 	SAVE_C_REGS
 	SAVE_EXTRA_REGS
 
-	movq %rsp,%rdi	/* arg1 for \func (pointer to pt_regs) */
-
 	testb	$3, CS(%rsp)
 	jz	1f
 	SWAPGS
@@ -519,14 +517,14 @@ END(irq_entries_start)
 	 * a little cheaper to use a separate counter in the PDA (short of
 	 * moving irq_enter into assembly, which would be too much work)
 	 */
-	movq %rsp, %rsi
+	movq %rsp, %rdi
 	incl	PER_CPU_VAR(irq_count)
 	cmovzq PER_CPU_VAR(irq_stack_ptr), %rsp
-	pushq %rsi
+	pushq %rdi
 	/* We entered an interrupt context - irqs are off: */
 	TRACE_IRQS_OFF
 
-	call	\func
+	call	\func	/* rdi points to pt_regs */
 .endm
 
 /*
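
For readers skimming the hunks, the resulting interrupt-entry sequence reads roughly as below (a sketch reconstructed from the diff above, with the SWAPGS/CS(%rsp) check and the rest of the macro omitted). The point of the patch is that %rdi alone does double duty as both the saved original stack pointer and the pt_regs argument for \func, so the extra copy into %rsi is no longer needed:

	movq	%rsp, %rdi			/* %rdi = pt_regs on the original stack (arg1 for \func) */
	incl	PER_CPU_VAR(irq_count)		/* result is 0 only for the outermost interrupt... */
	cmovzq	PER_CPU_VAR(irq_stack_ptr), %rsp /* ...and only then switch to the IRQ stack */
	pushq	%rdi				/* remember the original stack pointer on the new stack */
	TRACE_IRQS_OFF				/* we entered an interrupt context - irqs are off */
	call	\func				/* %rdi still points to pt_regs */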