author | Steven Rostedt <srostedt@redhat.com> | 2009-10-13 22:33:50 +0200 |
---|---|---|
committer | Ingo Molnar <mingo@elte.hu> | 2009-10-14 08:13:53 +0200 |
commit | 194ec34184869f0de1cf255c924fc5299e1b3d27 (patch) | |
tree | 7eb411e56f381b65bcafd0aa750f6f6705f3e451 /arch/x86/kernel/entry_64.S | |
parent | Merge branch 'tracing/core' of git://git.kernel.org/pub/scm/linux/kernel/git/... (diff) | |
function-graph/x86: Replace unbalanced ret with jmp
The function graph tracer replaces the return address with a hook
to trace the exit of the function call. This hook will finish by
returning to the real location the function should return to.

But the current implementation uses a ret to jump to the real
return location. This causes an imbalance between calls and rets.
That is, the original function does a call, the ret goes to the
handler, and then the handler does a ret without a matching call.

Although the function graph tracer itself already breaks the branch
predictor by replacing the original ret, using a second ret and
causing an imbalance breaks the predictor even more.
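
To make the imbalance concrete, here is a schematic sketch of the
control flow with the tracer active; the labels and comments are
illustrative only, not actual kernel code, and the real trampoline
is in the diff below:

```asm
caller:
	call	traced_func		# pushes return address A; at function
					# entry the tracer rewrites that stack
					# slot to point at return_to_handler

traced_func:
	# ... function body ...
	ret				# pops the rewritten slot and lands in
					# return_to_handler, not back in caller

return_to_handler:			# schematic trampoline
	call	ftrace_return_to_handler # report the exit, get A back
	retq				# current behaviour: a ret with no
					# matching call, so the return-stack
					# predictor is now off by one entry
```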

This patch replaces the ret with a jmp to keep calls and rets
balanced. I tested this on one box and it showed a 1.7% increase in
performance. Another box showed only a small 0.3% increase. But no
box that I tested showed a decrease in performance from this change.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20091013203425.042034383@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'arch/x86/kernel/entry_64.S')
-rw-r--r-- | arch/x86/kernel/entry_64.S | 6 |
1 file changed, 3 insertions, 3 deletions
```diff
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index b5c061f8f358..bd5bbddddf91 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -155,11 +155,11 @@ GLOBAL(return_to_handler)
 	call ftrace_return_to_handler
 
-	movq %rax, 16(%rsp)
+	movq %rax, %rdi
 	movq 8(%rsp), %rdx
 	movq (%rsp), %rax
-	addq $16, %rsp
-	retq
+	addq $24, %rsp
+	jmp *%rdi
 #endif
```
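
For reference, an annotated sketch of the new trampoline tail. The
comments are mine, not part of the patch; they assume the 24-byte
scratch frame implied by the addq above, with the return-value
registers saved at (%rsp) and 8(%rsp) by the trampoline's prologue:

```asm
	call	ftrace_return_to_handler	# hands back the original
						# return address in %rax
	movq	%rax, %rdi		# keep it in a scratch register instead
					# of spilling it back onto the stack
	movq	8(%rsp), %rdx		# restore the saved return values
	movq	(%rsp), %rax
	addq	$24, %rsp		# drop the whole three-slot frame; the
					# old code dropped only $16 because the
					# top slot held the address for retq
	jmp	*%rdi			# indirect jump: no extra, unmatched ret
```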