author:    Heiko Carstens <heiko.carstens@de.ibm.com>  2014-09-03 13:26:23 +0200
committer: Martin Schwidefsky <schwidefsky@de.ibm.com> 2014-09-09 08:53:30 +0200
commit:    3d1e220d08c6a00ffa83d39030b8162f66665b2b
tree:      4529f0d568ef53d296476a640d26ae0128bcbacf /arch/s390/include/asm/ftrace.h
parent:    s390/kprobes: remove unused jprobe_return_end()
s390/ftrace: optimize mcount code
Reduce the number of executed instructions within the mcount block when
function tracing is enabled. We achieve that by using a non-standard
C function call ABI. Since the called function is also written in
assembler, this is not a problem.
This also allows us to replace the unconditional store at the beginning
of the mcount block with a larl instruction, which doesn't touch
memory.
In theory we could also patch the first instruction of the mcount block
to enable and disable function tracing. However, this would break kprobes.
This could be fixed by implementing the "kprobes_on_ftrace" feature;
however, keeping the odd jprobes working does not seem possible without
a lot of code churn. Therefore keep the code simple and accept one
wasted 1-cycle "larl" instruction per function prologue.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Diffstat (limited to 'arch/s390/include/asm/ftrace.h')
 arch/s390/include/asm/ftrace.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h
index 1759d73fb95b..d419362dc231 100644
--- a/arch/s390/include/asm/ftrace.h
+++ b/arch/s390/include/asm/ftrace.h
@@ -19,7 +19,7 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
 #endif /* __ASSEMBLY__ */
 
 #ifdef CONFIG_64BIT
-#define MCOUNT_INSN_SIZE  12
+#define MCOUNT_INSN_SIZE  18
 #else
 #define MCOUNT_INSN_SIZE  22
 #endif