author | Josh Poimboeuf <jpoimboe@redhat.com> | 2017-06-28 17:11:06 +0200 |
---|---|---|
committer | Ingo Molnar <mingo@kernel.org> | 2017-06-30 10:19:19 +0200 |
commit | c207aee48037abca71c669cbec407b9891965c34 (patch) | |
tree | 449b9f19161959d2eff847d27d20211022df1210 /arch/x86/lib | |
parent | objtool: Move checking code to check.c (diff) | |
objtool, x86: Add several functions and files to the objtool whitelist
In preparation for an objtool rewrite which will have broader checks,
whitelist functions and files which cause problems because they do
unusual things with the stack.
These whitelists serve as a TODO list for which functions and files
don't yet have undwarf unwinder coverage. Eventually most of the
whitelists can be removed in favor of manual CFI hint annotations or
objtool improvements.
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: live-patching@vger.kernel.org
Link: http://lkml.kernel.org/r/7f934a5d707a574bda33ea282e9478e627fb1829.1498659915.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
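For context on the whitelist mechanism the message refers to: in the kernel tree, C functions are typically whitelisted for objtool's stack validation with the STACK_FRAME_NON_STANDARD() macro from <linux/frame.h>. The sketch below is illustrative only and not part of this diff; the function name is made up.

```c
/*
 * Minimal sketch (not part of this diff): whitelisting a C function
 * for objtool.  STACK_FRAME_NON_STANDARD() records the function's
 * address in a special discarded section that objtool reads, telling
 * it to skip stack validation for that function.  The function below
 * is a hypothetical example.
 */
#include <linux/frame.h>

static void hypothetical_stack_trickery(void)
{
	/* ... code that sets up its stack in a non-standard way ... */
}
STACK_FRAME_NON_STANDARD(hypothetical_stack_trickery);
```

Whole object files (typically assembly that can't carry such annotations) can instead be skipped by setting OBJECT_FILES_NON_STANDARD for the object in the Makefile.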
Diffstat (limited to 'arch/x86/lib')
-rw-r--r-- | arch/x86/lib/msr-reg.S | 8 |
1 file changed, 4 insertions, 4 deletions
diff --git a/arch/x86/lib/msr-reg.S b/arch/x86/lib/msr-reg.S
index c81556409bbb..10ffa7e8519f 100644
--- a/arch/x86/lib/msr-reg.S
+++ b/arch/x86/lib/msr-reg.S
@@ -13,14 +13,14 @@
 .macro op_safe_regs op
 ENTRY(\op\()_safe_regs)
 	pushq %rbx
-	pushq %rbp
+	pushq %r12
 	movq %rdi, %r10		/* Save pointer */
 	xorl %r11d, %r11d	/* Return value */
 	movl (%rdi), %eax
 	movl 4(%rdi), %ecx
 	movl 8(%rdi), %edx
 	movl 12(%rdi), %ebx
-	movl 20(%rdi), %ebp
+	movl 20(%rdi), %r12d
 	movl 24(%rdi), %esi
 	movl 28(%rdi), %edi
 1:	\op
@@ -29,10 +29,10 @@ ENTRY(\op\()_safe_regs)
 	movl %ecx, 4(%r10)
 	movl %edx, 8(%r10)
 	movl %ebx, 12(%r10)
-	movl %ebp, 20(%r10)
+	movl %r12d, 20(%r10)
 	movl %esi, 24(%r10)
 	movl %edi, 28(%r10)
-	popq %rbp
+	popq %r12
 	popq %rbx
 	ret
 3:
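For background on why the diff swaps %rbp for %r12: with frame pointers enabled, %rbp anchors each stack frame, and an unwinder recovers call sites by following the chain of saved %rbp values, so using %rbp as a scratch register breaks stack traces through this code. A rough, purely illustrative sketch of such a frame-pointer walk (the struct and function names are hypothetical, not kernel code):

```c
#include <stdio.h>

/*
 * Purely illustrative frame-pointer walk (hypothetical, not kernel
 * code).  On x86-64 with frame pointers, each frame starts with the
 * caller's saved %rbp, followed by the return address pushed by
 * "call"; an unwinder simply follows the saved-%rbp chain.
 */
struct frame {
	struct frame *next;		/* caller's saved %rbp */
	unsigned long return_addr;	/* return address pushed by "call" */
};

static void walk_stack(const struct frame *fp)
{
	while (fp) {
		printf("call site: %#lx\n", fp->return_addr);
		fp = fp->next;		/* follow the saved-%rbp chain */
	}
}
```

A function that temporarily repurposes %rbp for its own data, as msr-reg.S did before this patch, leaves a bogus value in that chain for as long as it runs.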