path: root/arch
* Merge tag 'x86_core_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
  Linus Torvalds, 2022-12-15 · 79 files changed, -399/+1902

  Pull x86 core updates from Borislav Petkov:

   - Add the call depth tracking mitigation for Retbleed, which has been long
     in the making. It is a lighter-weight, software-only fix for
     Skylake-based cores, where enabling IBRS is a big hammer and causes a
     significant performance impact.

     What it basically does is align all kernel functions to a 16-byte
     boundary and add 16 bytes of padding before each function. objtool
     collects all function locations and, when the mitigation gets applied,
     patches in a call accounting thunk which is used to track the call depth
     of the stack at any time.

     When that call depth reaches a magical, microarchitecture-specific value
     for the Return Stack Buffer, the code stuffs that RSB and avoids its
     underflow, which could otherwise lead to the Intel variant of Retbleed.

     This software-only solution brings a lot of the lost performance back,
     as the benchmarks suggest:

       https://lore.kernel.org/all/20220915111039.092790446@infradead.org/

     That page also contains a much more detailed explanation of the whole
     mechanism.

   - Implement a new control flow integrity scheme called FineIBT, which is
     based on the software kCFI implementation and uses hardware IBT support,
     where present, to annotate and track indirect branches using a hash to
     validate them.

   - Other misc fixes and cleanups

  * tag 'x86_core_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (80 commits)
    x86/paravirt: Use common macro for creating simple asm paravirt functions
    x86/paravirt: Remove clobber bitmask from .parainstructions
    x86/debug: Include percpu.h in debugreg.h to get DECLARE_PER_CPU() et al
    x86/cpufeatures: Move X86_FEATURE_CALL_DEPTH from bit 18 to bit 19 of word 11, to leave space for WIP X86_FEATURE_SGX_EDECCSSA bit
    x86/Kconfig: Enable kernel IBT by default
    x86,pm: Force out-of-line memcpy()
    objtool: Fix weak hole vs prefix symbol
    objtool: Optimize elf_dirty_reloc_sym()
    x86/cfi: Add boot time hash randomization
    x86/cfi: Boot time selection of CFI scheme
    x86/ibt: Implement FineIBT
    objtool: Add --cfi to generate the .cfi_sites section
    x86: Add prefix symbols for function padding
    objtool: Add option to generate prefix symbols
    objtool: Avoid O(bloody terrible) behaviour -- an ode to libelf
    objtool: Slice up elf_create_section_symbol()
    kallsyms: Revert "Take callthunks into account"
    x86: Unconfuse CONFIG_ and X86_FEATURE_ namespaces
    x86/retpoline: Fix crash printing warning
    x86/paravirt: Fix a !PARAVIRT build warning
    ...
* x86/paravirt: Use common macro for creating simple asm paravirt functions
  Juergen Gross, 2022-11-24 · 4 files changed, -61/+40

  There are some paravirt assembler functions which share a common pattern.
  Introduce a macro, DEFINE_PARAVIRT_ASM(), for creating them.

  Note that this macro includes explicit alignment of the generated functions,
  leading to __raw_callee_save___kvm_vcpu_is_preempted(), _paravirt_nop() and
  paravirt_ret0() now being aligned at 4-byte boundaries.

  The explicit _paravirt_nop() prototype in paravirt.c isn't needed, as it is
  already included in paravirt_types.h.

  Signed-off-by: Juergen Gross <jgross@suse.com>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Reviewed-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
  Link: https://lkml.kernel.org/r/20221109134418.6516-1-jgross@suse.com
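  As an illustration of the kind of pattern such a macro captures -- a minimal
  stand-alone sketch, not the kernel's actual DEFINE_PARAVIRT_ASM() definition,
  with made-up names -- one could write:

      /* Emit a tiny, explicitly aligned asm function from a macro. */
      #define DEFINE_SIMPLE_ASM_FUNC(func, instr)                 \
              asm(".pushsection .text, \"ax\"\n"                  \
                  ".global " #func "\n"                           \
                  ".type " #func ", @function\n"                  \
                  ".align 4\n"          /* explicit alignment */  \
                  #func ":\n"                                     \
                  "\t" instr "\n"                                 \
                  "\tret\n"                                       \
                  ".size " #func ", . - " #func "\n"              \
                  ".popsection")

      /* Roughly what a paravirt nop and a return-0 stub boil down to. */
      DEFINE_SIMPLE_ASM_FUNC(example_nop,  "nop");
      DEFINE_SIMPLE_ASM_FUNC(example_ret0, "xor %eax, %eax");

  Centralizing the boilerplate in one macro is what makes it cheap to add the
  alignment and size directives consistently across all such stubs.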
* x86/paravirt: Remove clobber bitmask from .parainstructions
  Kees Cook, 2022-11-24 · 1 file changed, -49/+12

  The u16 "clobber" value has not been used in .parainstructions since commit
  27876f3882fd ("x86/paravirt: Remove clobbers from struct paravirt_patch_site").

  Remove the u16 from the section macro, the argument from all macros, and all
  now-unused CLBR_* macros.

  Signed-off-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Borislav Petkov <bp@suse.de>
  Signed-off-by: Peter Zijlstra <peterz@infradead.org>
  Reviewed-by: Juergen Gross <jgross@suse.com>
  Link: https://lore.kernel.org/r/20220903073706.3193746-1-keescook@chromium.org
* Merge tag 'v6.1-rc6' into x86/core, to resolve conflicts
  Ingo Molnar, 2022-11-21 · 213 files changed, -915/+1794

  Resolve conflicts between these commits in arch/x86/kernel/asm-offsets.c:

    # upstream: debc5a1ec0d1 ("KVM: x86: use a separate asm-offsets.c file")
    # retbleed work in x86/core: 5d8213864ade ("x86/retbleed: Add SKL return thunk")

  ... and these commits in include/linux/bpf.h:

    # upstream: 18acb7fac22f ("bpf: Revert ("Fix dispatcher patchable function entry to 5 bytes nop")")
    # x86/core commits:
    #   931ab63664f0 ("x86/ibt: Implement FineIBT")
    #   bea75b33895f ("x86/Kconfig: Introduce function padding")

  The latter two modify BPF_DISPATCHER_ATTRIBUTES(), which was removed
  upstream.

  Conflicts:
    arch/x86/kernel/asm-offsets.c
    include/linux/bpf.h

  Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/debug: Include percpu.h in debugreg.h to get DECLARE_PER_CPU() et al
  Sean Christopherson, 2022-11-16 · 1 file changed, -1/+1

  Include percpu.h to pick up the definition of DECLARE_PER_CPU() and friends
  instead of relying on the parent to provide the #include. E.g. swapping the
  order of includes in arch/x86/kvm/vmx/nested.c (simulating KVM code movement
  being done for other purposes) results in build errors:

    In file included from arch/x86/kvm/vmx/nested.c:3:
    arch/x86/include/asm/debugreg.h:9:32: error: unknown type name 'cpu_dr7'
        9 | DECLARE_PER_CPU(unsigned long, cpu_dr7);
          |                                ^~~~~~~

  Signed-off-by: Sean Christopherson <seanjc@google.com>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/20221110201707.1976032-1-seanjc@google.com
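  The fix is the usual include-what-you-use rule applied to a header; a
  minimal sketch of the shape (the actual header may pull in a more specific
  header such as percpu-defs.h):

      /* arch/x86/include/asm/debugreg.h (sketch): include what the header
       * itself uses instead of relying on every includer to do it. */
      #include <linux/percpu.h>               /* DECLARE_PER_CPU() */

      DECLARE_PER_CPU(unsigned long, cpu_dr7);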
* x86/cpufeatures: Move X86_FEATURE_CALL_DEPTH from bit 18 to bit 19 of word 11, to leave space for WIP X86_FEATURE_SGX_EDECCSSA bit
  Ingo Molnar, 2022-11-06 · 1 file changed, -1/+2

  Reallocate a soft-cpufeatures bit allocated for call-depth tracking code,
  which clashes with this recent KVM/SGX patch being worked on:

    KVM/VMX: Allow exposing EDECCSSA user leaf function to KVM guest

  Instead of reallocating cpufeatures bits in evil merges, make the allocation
  explicit.

  Acked-by: Peter Zijlstra <peterz@infradead.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Borislav Petkov <bp@alien8.de>
  Cc: Dave Hansen <dave.hansen@linux.intel.com>
  Cc: x86@kernel.org
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: linux-kernel@vger.kernel.org
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/Kconfig: Enable kernel IBT by default
  Kees Cook, 2022-11-05 · 1 file changed, -1/+1

  The kernel IBT defense strongly mitigates the common "first step" of ROP
  attacks by eliminating arbitrary stack pivots (which appear either at the
  end of a function or in immediate values): those gadgets cannot be reached
  when indirect calls must target marked function entry addresses.

  IBT is also required to be enabled to gain the FineIBT feature when built
  with kernel Control Flow Integrity.

  Additionally, given that this feature is runtime enabled via CPUID, it
  clearly should be built in by default; it will only be enabled if the CPU
  supports it. The build takes 2 seconds longer, which seems a small price to
  pay for gaining this coverage by default.

  Suggested-by: Sami Tolvanen <samitolvanen@google.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/20221101172503.gonna.094-kees@kernel.org
* x86,pm: Force out-of-line memcpy()
  Peter Zijlstra, 2022-11-05 · 1 file changed, -1/+1

  GCC fancies inlining memcpy(), and because it cannot prove the destination
  is page-aligned (it is) it ends up generating atrocious code like:

    19e:  48 8b 15 00 00 00 00  mov  0x0(%rip),%rdx   # 1a5 <relocate_restore_code+0x25>
              1a1: R_X86_64_PC32  core_restore_code-0x4
    1a5:  48 8d 78 08           lea  0x8(%rax),%rdi
    1a9:  48 89 c1              mov  %rax,%rcx
    1ac:  48 c7 c6 00 00 00 00  mov  $0x0,%rsi
              1af: R_X86_64_32S   core_restore_code
    1b3:  48 83 e7 f8           and  $0xfffffffffffffff8,%rdi
    1b7:  48 29 f9              sub  %rdi,%rcx
    1ba:  48 89 10              mov  %rdx,(%rax)
    1bd:  48 8b 15 00 00 00 00  mov  0x0(%rip),%rdx   # 1c4 <relocate_restore_code+0x44>
              1c0: R_X86_64_PC32  core_restore_code+0xff4
    1c4:  48 29 ce              sub  %rcx,%rsi
    1c7:  81 c1 00 10 00 00     add  $0x1000,%ecx
    1cd:  48 89 90 f8 0f 00 00  mov  %rdx,0xff8(%rax)
    1d4:  c1 e9 03              shr  $0x3,%ecx
    1d7:  f3 48 a5              rep movsq %ds:(%rsi),%es:(%rdi)

  Notably, the alignment code generates a text reference to
  core_restore_code+0xff8, for which objtool raises the objection:

    vmlinux.o: warning: objtool: relocate_restore_code+0x3d: relocation to !ENDBR: next_arg+0x18

  Applying some __assume_aligned(PAGE_SIZE) improves code generation to:

    19e:  48 89 c7              mov  %rax,%rdi
    1a1:  48 c7 c6 00 00 00 00  mov  $0x0,%rsi
              1a4: R_X86_64_32S   core_restore_code
    1a8:  b9 00 02 00 00        mov  $0x200,%ecx
    1ad:  f3 48 a5              rep movsq %ds:(%rsi),%es:(%rdi)

  ... and resolves the problem. However, none of this is important code, and a
  much simpler solution still is to force a memcpy() call:

    1a1:  ba 00 10 00 00        mov  $0x1000,%edx
    1a6:  48 c7 c6 00 00 00 00  mov  $0x0,%rsi
              1a9: R_X86_64_32S   core_restore_code
    1ad:  e8 00 00 00 00        call 1b2 <relocate_restore_code+0x32>
              1ae: R_X86_64_PLT32  __memcpy-0x4

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
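  For reference, the alignment-hint approach mentioned above can be expressed
  with GCC's __builtin_assume_aligned(); a stand-alone sketch with illustrative
  names, not the kernel code itself:

      #include <string.h>

      void *copy_restore_code(void *dst, const void *src, size_t len)
      {
              /* Promise the compiler both pointers are page aligned, so it
               * can emit a straight rep movsq instead of the fix-up code. */
              void *d       = __builtin_assume_aligned(dst, 4096);
              const void *s = __builtin_assume_aligned(src, 4096);

              return memcpy(d, s, len);
      }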
* x86/cfi: Add boot time hash randomization
  Peter Zijlstra, 2022-11-01 · 1 file changed, -12/+108

  In order to avoid known hashes (from knowing the boot image), randomize the
  CFI hashes with a per-boot random seed.

  Suggested-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Reviewed-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221027092842.765195516@infradead.org
* x86/cfi: Boot time selection of CFI scheme
  Peter Zijlstra, 2022-11-01 · 1 file changed, -18/+81

  Add the "cfi=" boot parameter to allow people to select a CFI scheme at boot
  time. Mostly useful for development / debugging.

  Requested-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Reviewed-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221027092842.699804264@infradead.org
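  A hedged sketch of how such a boot parameter is typically wired up with
  early_param() -- the mode names follow this series, but the actual kernel
  code may differ in detail:

      #include <linux/init.h>
      #include <linux/string.h>
      #include <linux/errno.h>

      enum cfi_mode { CFI_OFF, CFI_KCFI, CFI_FINEIBT };
      static enum cfi_mode cfi_selected = CFI_KCFI;

      static int __init cfi_parse_cmdline(char *str)
      {
              if (!str)
                      return -EINVAL;

              if (!strcmp(str, "off"))
                      cfi_selected = CFI_OFF;
              else if (!strcmp(str, "kcfi"))
                      cfi_selected = CFI_KCFI;
              else if (!strcmp(str, "fineibt"))
                      cfi_selected = CFI_FINEIBT;

              return 0;
      }
      early_param("cfi", cfi_parse_cmdline);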
* x86/ibt: Implement FineIBT
  Peter Zijlstra, 2022-11-01 · 9 files changed, -20/+292

  Implement an alternative CFI scheme that combines the fine-grained nature of
  kCFI with the coarse-grained hardware CFI provided by IBT.

  To contrast: kCFI is a pure software CFI scheme and relies on being able to
  read text -- specifically the instruction *before* the target symbol -- and
  does the hash validation *before* doing the call (otherwise control flow is
  compromised already).

  FineIBT is a software and hardware hybrid scheme; by ensuring every branch
  target starts with a hash validation it is possible to place the hash
  validation after the branch. This has several advantages:

   o The (hash) load is avoided; no memop; no RX requirement.

   o The IBT WAIT-FOR-ENDBR state is a speculation stop; by placing the hash
     validation in the instruction immediately after the branch target there
     is a minimal speculation window and the whole is a viable defence against
     SpectreBHB.

   o Kees feels obliged to mention it is slightly more vulnerable when the
     attacker can write code.

  Obviously this patch relies on kCFI, but additionally it also relies on the
  padding from the call-depth-tracking patches. It uses this padding to place
  the hash validation, while the call sites are rewritten to point the
  indirect target 16 bytes in front of the original target, thus hitting this
  new preamble.

  Notably, there is no hardware that needs call-depth tracking (Skylake) and
  also supports IBT (Tigerlake and onwards).

  Suggested-by: Joao Moreira (Intel) <joao@overdrivepizza.com>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Reviewed-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221027092842.634714496@infradead.org
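  Very roughly, and purely as an illustration of the shape described above
  (not the exact code the kernel generates; hash value, labels and offsets are
  made up), a FineIBT-style preamble and call site look like:

      /* The callee's padding holds an ENDBR plus a hash check; the caller
       * loads the expected hash and targets the padding, not the symbol. */
      asm(
      "       .text\n"
      "example_fineibt_preamble:\n"           /* lives in the padding    */
      "       endbr64\n"                      /* hardware IBT check      */
      "       subl    $0x12345678, %r10d\n"   /* hash check after branch */
      "       je      example_func\n"
      "       ud2\n"                          /* hash mismatch: trap     */
      "example_func:\n"
      "       ret\n"
      "example_caller:\n"                     /* %r11 = example_func     */
      "       movl    $0x12345678, %r10d\n"   /* expected hash           */
      "       subq    $16, %r11\n"            /* retarget into padding   */
      "       call    *%r11\n"
      "       ret\n"
      );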
* x86: Add prefix symbols for function padding
  Peter Zijlstra, 2022-11-01 · 1 file changed, -0/+4

  When code is compiled with:

    -fpatchable-function-entry=${PADDING_BYTES},${PADDING_BYTES}

  functions will have PADDING_BYTES of NOP in front of them. Unwinders and
  other things that symbolize code locations will typically attribute these
  bytes to the preceding function.

  Given that these bytes nominally belong to the following symbol, this
  mis-attribution is confusing.

  Inspired by the fact that CFI_CLANG emits __cfi_##name symbols to claim
  these bytes, use objtool to emit __pfx_##name symbols to do the same when
  CFI_CLANG is not used.

  This then shows the callthunk for symbol 'name' as:

    __pfx_##name+0x6/0x10

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Tested-by: Yujie Liu <yujie.liu@intel.com>
  Link: https://lkml.kernel.org/r/20221028194453.592512209@infradead.org
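  For reference, the padding itself comes from the compiler flag quoted above;
  a small illustrative sketch with PADDING_BYTES=16 (file name invented):

      /* Compile with:  gcc -O2 -fpatchable-function-entry=16,16 -c pfx.c
       * Every function is then preceded by 16 bytes of NOPs; a __pfx_<name>
       * symbol can be attached to that padding so profilers and unwinders
       * stop charging those bytes to the previous function. */
      int example_fn(int x)
      {
              return x + 1;
      }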
* x86: Unconfuse CONFIG_ and X86_FEATURE_ namespaces
  Peter Zijlstra, 2022-11-01 · 1 file changed, -2/+2

  Lukas reported that someone fat-fingered the CONFIG_ symbol; fix it up.

  Fixes: 5d8213864ade ("x86/retbleed: Add SKL return thunk")
  Reported-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/Y1+fL4qQEIGZEEKB@hirez.programming.kicks-ass.net
* x86/retpoline: Fix crash printing warning
  Dan Carpenter, 2022-10-25 · 1 file changed, -1/+1

  The first argument of WARN() is a condition, so this will use "addr" as the
  format string and possibly crash.

  Fixes: 3b6c1747da48 ("x86/retpoline: Add SKL retthunk retpolines")
  Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
  Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
  Link: https://lore.kernel.org/all/Y1gBoUZrRK5N%2FlCB@kili/
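  The pitfall in miniature -- a hedged sketch, not the exact code touched by
  the fix:

      #include <linux/bug.h>

      static void report_bad_target(void *addr)
      {
              /* Buggy: the string lands in WARN()'s condition slot, so
               * "addr" ends up being used as the format string. */
              WARN("retpoline: unexpected target %pS\n", addr);

              /* Correct: condition first, then the format and arguments. */
              WARN(1, "retpoline: unexpected target %pS\n", addr);
      }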
* Merge branch 'x86/urgent' into x86/core, to resolve conflict
  Ingo Molnar, 2022-10-22 · 9 files changed, -52/+67

  There's a conflict between the call-depth tracking commits in x86/core:

    ee3e2469b346 ("x86/ftrace: Make it call depth tracking aware")
    36b64f101219 ("x86/ftrace: Rebalance RSB")
    eac828eaef29 ("x86/ftrace: Remove ftrace_epilogue()")

  ... and these fixes in x86/urgent:

    883bbbffa5a4 ("ftrace,kcfi: Separate ftrace_stub() and ftrace_stub_graph()")
    b5f1fc318440 ("x86/ftrace: Remove ftrace_epilogue()")

  It's a non-trivial set of overlapping modifications - resolve them.

  Conflicts:
    arch/x86/kernel/ftrace_64.S

  Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/paravirt: Fix a !PARAVIRT build warning
  Borislav Petkov, 2022-10-20 · 1 file changed, -7/+7

  Fix

    ./include/trace/events/xen.h:28:31: warning: 'enum paravirt_lazy_mode' \
      declared inside parameter list will not be visible outside of this definition or declaration

  which turns into a build error:

    ./include/trace/events/xen.h:28:50: error: parameter 1 ('mode') has incomplete type
       28 |        TP_PROTO(enum paravirt_lazy_mode mode), \

  due to enum paravirt_lazy_mode being visible only under CONFIG_PARAVIRT.
  Just pull it up to where it is unconditionally visible.

  Signed-off-by: Borislav Petkov <bp@suse.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/Y1AtAXM8YjtBm2cj@zn.tnic
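  The shape of the fix, sketched (the enum values shown are illustrative of
  the existing paravirt lazy modes, not a definitive copy of the header):

      /* Declared unconditionally, so tracepoint prototypes always see a
       * complete type ... */
      enum paravirt_lazy_mode {
              PARAVIRT_LAZY_NONE,
              PARAVIRT_LAZY_MMU,
              PARAVIRT_LAZY_CPU,
      };

      #ifdef CONFIG_PARAVIRT
      /* ... while the code that actually uses it stays guarded. */
      enum paravirt_lazy_mode paravirt_get_lazy_mode(void);
      #endif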
* x86/bugs: Add retbleed=force
  Peter Zijlstra (Intel), 2022-10-17 · 1 file changed, -0/+2

  Debug aid; allows running retbleed=force,stuff on non-affected uarchs.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
* x86/retbleed: Add call depth tracking mitigation
  Thomas Gleixner, 2022-10-17 · 1 file changed, -2/+30

  The fully secure mitigation for RSB underflow on Intel SKL CPUs is IBRS,
  which inflicts up to a 30% penalty for pathological, syscall-heavy
  workloads.

  Software-based call depth tracking and RSB refill is not perfect, but it
  reduces the attack surface massively. The penalty for the pathological case
  is about 8%, which is still annoying but definitely more palatable than
  IBRS.

  Add a retbleed=stuff command line option to enable the call depth tracking
  and software refill of the RSB.

  This gives admins a choice: IBeeRS are safe and cause headaches; call depth
  tracking is considered to be s(t)ufficiently safe.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111149.029587352@infradead.org
* x86/ftrace: Make it call depth tracking aware
  Peter Zijlstra, 2022-10-17 · 5 files changed, -7/+48

  Since ftrace has trampolines, don't use thunks for the __fentry__ site but
  instead require that every function called from there includes accounting.
  This very much includes all the direct-call functions.

  Additionally, ftrace uses ROP tricks in two places:

   - return_to_handler(), and
   - ftrace_regs_caller() when pt_regs->orig_ax is set by a direct-call.

  return_to_handler() already uses a retpoline to replace an indirect-jump to
  defeat IBT; since this is a jump-type retpoline, make sure there is no
  accounting done and ALTERNATIVE the RET into a ret.

  ftrace_regs_caller() does much the same and gets the same treatment.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111148.927545073@infradead.org
* x86/ftrace: Rebalance RSB
  Peter Zijlstra, 2022-10-17 · 1 file changed, -0/+11

  ftrace_regs_caller() uses a PUSH;RET pattern to tail-call into a direct-call
  function; this unbalances the RSB. Fix that.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111148.823216933@infradead.org
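  Why a PUSH;RET tail call unbalances the RSB, in isolation (illustrative asm,
  not the ftrace code itself):

      asm(
      "       .text\n"
      "example_tail_call:\n"
      "       leaq    example_target(%rip), %rax\n"
      "       pushq   %rax\n"
      "       ret\n"          /* jumps to example_target: a RET with no   */
      "example_target:\n"     /* matching CALL, so the RSB is off by one  */
      "       ret\n"
      );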
* x86/ftrace: Remove ftrace_epilogue()
  Peter Zijlstra, 2022-10-17 · 1 file changed, -15/+6

  Remove the weird jumps to RET and simply use RET. This then promotes
  ftrace_stub() to a real function, which becomes important for kcfi.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111148.719080593@infradead.org
* x86/bpf: Emit call depth accounting if required
  Thomas Gleixner, 2022-10-17 · 3 files changed, -9/+48

  Ensure that calls in BPF-jitted programs emit call depth accounting when it
  is enabled, to keep the calls and returns balanced. The return thunk jump is
  already injected due to the earlier retbleed mitigations.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111148.615413406@infradead.org
* x86/orc: Make it callthunk aware
  Peter Zijlstra, 2022-10-17 · 3 files changed, -1/+38

  Callthunk addresses on the stack would confuse the ORC unwinder. Handle them
  correctly and tell ORC to proceed further down the stack.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111148.511637628@infradead.org
* static_call: Add call depth tracking support
  Peter Zijlstra, 2022-10-17 · 3 files changed, -0/+24

  When indirect calls are switched to direct calls and call depth tracking is
  enabled, it has to be ensured that the call target is not the function
  itself but its call thunk. However, static calls are available before call
  thunks have been set up.

  Ensure a second run through the static call patching code after call thunks
  have been created. When call thunks are not enabled this has no side
  effects.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111148.306100465@infradead.org
* x86/calldepth: Add ret/call counting for debug
  Thomas Gleixner, 2022-10-17 · 3 files changed, -5/+91

  Add a debugfs mechanism to validate the accounting, e.g. vs. call/ret
  balance, and to gather statistics about the stuffing-to-call ratio.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111148.204285506@infradead.org
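  A hedged sketch of such a debugfs counter file (names and layout are
  illustrative, not the kernel's actual debug interface):

      #include <linux/debugfs.h>
      #include <linux/percpu.h>
      #include <linux/cpumask.h>
      #include <linux/seq_file.h>
      #include <linux/init.h>
      #include <linux/types.h>

      static DEFINE_PER_CPU(u64, example_calls);
      static DEFINE_PER_CPU(u64, example_rets);

      static int calldepth_counters_show(struct seq_file *m, void *p)
      {
              int cpu;

              for_each_possible_cpu(cpu)
                      seq_printf(m, "cpu%d: calls=%llu rets=%llu\n", cpu,
                                 per_cpu(example_calls, cpu),
                                 per_cpu(example_rets, cpu));
              return 0;
      }
      DEFINE_SHOW_ATTRIBUTE(calldepth_counters);

      static int __init calldepth_debugfs_init(void)
      {
              debugfs_create_file("calldepth_counters", 0444, NULL, NULL,
                                  &calldepth_counters_fops);
              return 0;
      }
      late_initcall(calldepth_debugfs_init);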
* x86/retbleed: Add SKL call thunk
  Thomas Gleixner, 2022-10-17 · 1 file changed, -5/+20

  Add the actual SKL call thunk for call depth accounting.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111148.101125588@infradead.org
* x86/retpoline: Add SKL retthunk retpolines
  Peter Zijlstra, 2022-10-17 · 4 files changed, -12/+135

  Ensure that retpolines do the proper call accounting so that the return
  accounting works correctly.

  Specifically, retpolines are used to replace both 'jmp *%reg' and
  'call *%reg'; however, these two cases do not have the same accounting
  requirements. Therefore split things up and provide two different retpoline
  arrays for SKL.

  The 'jmp *%reg' case needs no accounting; the
  __x86_indirect_jump_thunk_array[] covers this. The retpoline is changed to
  not use the return thunk; it's a simple call;ret construct.

  [ Strictly speaking it should do:
      andq $(~0x1f), PER_CPU_VAR(__x86_call_depth)
    but we can argue this is covered by the fuzz we already have in the
    accounting depth (12) vs the RSB depth (16). ]

  The 'call *%reg' case does need accounting; the
  __x86_indirect_call_thunk_array[] covers this. Again, this retpoline avoids
  the use of the return thunk, in this case to avoid double accounting.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111147.996634749@infradead.org
* x86/retbleed: Add SKL return thunk
  Thomas Gleixner, 2022-10-17 · 6 files changed, -10/+159

  To address the Intel SKL RSB underflow issue in software it's required to do
  call depth tracking. Provide a return thunk for call depth tracking on Intel
  SKL CPUs.

  The tracking does not use a counter. It uses arithmetic shift right on call
  entry and logical shift left on return.

  The depth tracking variable is initialized to 0x8000.... when the call depth
  is zero. The arithmetic shift right sign-extends the MSB and saturates after
  the 12th call. The shift count is 5, so the tracking covers 12 nested calls.
  On return the variable is shifted left logically so it becomes zero again.

       CALL                    RET
   0:  0x8000000000000000      0x0000000000000000
   1:  0xfc00000000000000      0xf000000000000000
  ...
  11:  0xfffffffffffffff8      0xfffffffffffffc00
  12:  0xffffffffffffffff      0xffffffffffffffe0

  After a return buffer fill the depth is credited 12 calls before the next
  stuffing has to take place.

  There is an inaccuracy for situations like this:

     10 calls
      5 returns
      3 calls
      4 returns
      3 calls
      ....

  The shift count might cause this to be off by one in either direction, but
  there is still a cushion vs. the RSB depth. The algorithm does not claim to
  be perfect, but it should obfuscate the problem enough to make exploitation
  extremely difficult.

  The theory behind this is:

  The RSB is a stack with depth 16 which is filled on every call. On the
  return path speculation "pops" entries to speculate down the call chain.
  Once the speculative RSB is empty it switches to other predictors, e.g. the
  Branch History Buffer, which can be mistrained by user space and misguide
  the speculation path to a gadget.

  Call depth tracking is designed to break this speculation path by stuffing
  speculation trap calls into the RSB which never get a corresponding return
  executed. This stalls the prediction path until it gets resteered.

  The assumption is that stuffing at the 12th return is sufficient to break
  the speculation before it hits the underflow and the fallback to the other
  predictors. Testing confirms that it works. Johannes, one of the retbleed
  researchers, tried to attack this approach but failed.

  There is obviously no scientific proof that this will withstand future
  research progress, but all we can do right now is to speculate about it.

  The SAR/SHL usage was suggested by Andi Kleen.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111147.890071690@infradead.org
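  The shift arithmetic can be played with in a stand-alone userspace demo
  (purely illustrative, not kernel code): arithmetic shift right by 5 on CALL
  sign-extends and eventually saturates towards all ones, logical shift left
  by 5 on RET walks the value back towards zero.

      #include <stdio.h>
      #include <stdint.h>

      #define CALL_DEPTH_SHIFT 5

      static uint64_t track_call(uint64_t d)
      {
              return (uint64_t)((int64_t)d >> CALL_DEPTH_SHIFT);  /* SAR */
      }

      static uint64_t track_ret(uint64_t d)
      {
              return d << CALL_DEPTH_SHIFT;                       /* SHL */
      }

      int main(void)
      {
              uint64_t depth = 0x8000000000000000ULL;     /* depth == 0 */

              for (int i = 1; i <= 12; i++) {
                      depth = track_call(depth);
                      printf("after %2d calls: %#018llx\n", i,
                             (unsigned long long)depth);
              }
              depth = track_ret(depth);
              printf("after one ret:  %#018llx\n",
                     (unsigned long long)depth);
              return 0;
      }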
* x86/asm: Provide ALTERNATIVE_3
  Peter Zijlstra, 2022-10-17 · 1 file changed, -3/+30

  Fairly straightforward adaptation/extension of ALTERNATIVE_2. Required for
  call depth tracking.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111147.787711192@infradead.org
* x86/returnthunk: Allow different return thunks
  Peter Zijlstra, 2022-10-17 · 5 files changed, -7/+22

  In preparation for call depth tracking on Intel SKL CPUs, make it possible
  to patch in a SKL-specific return thunk.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111147.680469665@infradead.org
* x86/modules: Add call patching
  Thomas Gleixner, 2022-10-17 · 3 files changed, -1/+43

  As for the builtins, create call thunks and patch the call sites to call the
  thunk on Intel SKL CPUs for retbleed mitigation.

  Note that module init functions are ignored for the sake of simplicity,
  because loading modules is not something which is done in high-frequency
  loops and the attacker does not really have a handle on when this happens in
  order to launch a matching attack. The depth tracking will still work for
  calls into the builtins, and because the call is not accounted it will
  underflow faster and overstuff, but that's mitigated by the saturating
  counter and the side effect is only temporary.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111147.575673066@infradead.org
* x86/callthunks: Add call patching for call depth tracking
  Thomas Gleixner, 2022-10-17 · 8 files changed, -10/+287

  Mitigating the Intel SKL RSB underflow issue in software requires tracking
  the call depth. That is, every CALL and every RET need to be intercepted and
  additional code injected.

  The existing retbleed mitigations already include means of redirecting RET
  to __x86_return_thunk; this can be re-purposed and RET can be redirected to
  another function doing RET accounting.

  CALL accounting will use the function padding introduced in prior patches.
  For each CALL instruction, the destination symbol's padding is rewritten to
  do the accounting and the CALL instruction is adjusted to call into the
  padding.

  This ensures only affected CPUs pay the overhead of this accounting.
  Unaffected CPUs will leave the padding unused and have their
  'JMP __x86_return_thunk' replaced with an actual 'RET' instruction.

  Objtool has been modified to supply a .call_sites section that lists all the
  'CALL' instructions. Additionally, the paravirt instruction sites are
  iterated since they will have been patched from an indirect call to direct
  calls (or direct instructions, in which case they'll be ignored).

  Module handling and the actual thunk code for SKL will be added in
  subsequent steps.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111147.470877038@infradead.org
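  Roughly, the patched padding looks like the sketch below (illustrative only;
  the real template uses a per-CPU depth variable and is emitted via
  alternatives): the accounting lands in the padding and falls through into
  the function, and call sites are re-pointed at the padding.

      asm(
      "       .data\n"
      "example_call_depth:\n"
      "       .quad   0x8000000000000000\n"           /* depth == 0       */
      "       .text\n"
      "example_accounting_thunk:\n"                   /* in func's padding */
      "       sarq    $5, example_call_depth(%rip)\n" /* account the call  */
      "example_patched_func:\n"                       /* fall through       */
      "       ret\n"
      );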
* x86/paravirt: Make struct paravirt_call_site unconditionally available
  Thomas Gleixner, 2022-10-17 · 2 files changed, -10/+14

  For the upcoming call thunk patching there is less ifdeffery when the data
  structure is unconditionally available. The code can then be trivially
  fenced off with IS_ENABLED().

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111147.367853167@infradead.org
* x86/entry: Make some entry symbols global
  Thomas Gleixner, 2022-10-17 · 1 file changed, -3/+6

  paranoid_entry(), error_entry() and xen_error_entry() have to be exempted
  from call accounting by thunk patching because they are before UNTRAIN_RET.

  Expose them so they are available in the alternative code.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111147.265598113@infradead.org
* x86/alternatives: Provide text_poke_copy_locked()
  Thomas Gleixner, 2022-10-17 · 2 files changed, -15/+23

  The upcoming call thunk patching must hold text_mutex and needs access to
  text_poke_copy(), which takes text_mutex. Provide a _locked postfixed
  variant to expose the inner workings.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111147.159977224@infradead.org
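  The "_locked" split is a generic pattern; a sketch with illustrative names
  (not the alternatives code itself): the outer helper takes the lock, the
  _locked variant assumes the caller already holds it and asserts that.

      #include <linux/mutex.h>
      #include <linux/lockdep.h>
      #include <linux/types.h>

      static DEFINE_MUTEX(example_mutex);

      static void example_patch_locked(void *dst, const void *src, size_t len)
      {
              lockdep_assert_held(&example_mutex);
              /* ... do the actual patching ... */
      }

      static void example_patch(void *dst, const void *src, size_t len)
      {
              mutex_lock(&example_mutex);
              example_patch_locked(dst, src, len);
              mutex_unlock(&example_mutex);
      }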
* x86/retbleed: Add X86_FEATURE_CALL_DEPTH
  Thomas Gleixner, 2022-10-17 · 3 files changed, -1/+28

  Intel SKL CPUs fall back to other predictors when the RSB underflows. The
  only microcode mitigation is IBRS, which is insanely expensive. It comes
  with performance drops of up to 30% depending on the workload.

  A far less expensive, but nevertheless still horrible, mitigation is to
  track the call depth in software and overeagerly fill the RSB when returns
  underflow the software counter.

  Provide a configuration symbol and a CPU misfeature bit.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111147.056176424@infradead.org
* x86/Kconfig: Introduce function padding
  Thomas Gleixner, 2022-10-17 · 4 files changed, -5/+75

  Now that all functions are 16-byte aligned, add 16 bytes of NOP padding in
  front of each function. This prepares things for software call stack
  tracking and kCFI/FineIBT.

  This significantly increases kernel .text size, around 5.1% on an
  x86_64-defconfig-ish build. However, per the random-access argument used for
  alignment, these 16 extra bytes are code that wouldn't be used anyway.
  Performance measurements back this up by showing no significant performance
  regressions.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111146.950884492@infradead.org
* x86/Kconfig: Add CONFIG_CALL_THUNKS
  Thomas Gleixner, 2022-10-17 · 1 file changed, -0/+8

  In preparation for mitigating the Intel SKL RSB underflow issue in software,
  add a new configuration symbol which allows building the required call thunk
  infrastructure conditionally.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111146.849523555@infradead.org
* x86/putuser: Provide room for padding
  Thomas Gleixner, 2022-10-17 · 1 file changed, -13/+49

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111146.746429822@infradead.org
* x86/entry: Make sync_regs() invocation a tail call
  Peter Zijlstra, 2022-10-17 · 1 file changed, -5/+2

  No point in having a call there. Spare the call/ret overhead.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111146.539578813@infradead.org
* objtool: Allow symbol range comparisons for IBT/ENDBR
  Peter Zijlstra, 2022-10-17 · 1 file changed, -1/+0

  A semi-common pattern is code that checks whether a code address is within a
  specific range. All text addresses require either ENDBR or ANNOTATE_ENDBR;
  however, an ANNOTATE_NOENDBR past the range is unnatural.

  Instead, suppress this warning when the address is exactly at the end of a
  symbol that itself starts with either ENDBR or ANNOTATE_ENDBR.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111146.434642471@infradead.org
* objtool: Add .call_sites section
  Peter Zijlstra, 2022-10-17 · 1 file changed, -0/+7

  In preparation for call depth tracking, provide a section which collects all
  direct calls.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111146.016511961@infradead.org
* x86: Fixup asm-offsets duplicate
  Peter Zijlstra (Intel), 2022-10-17 · 2 files changed, -2/+2

  It turns out that 'stack_canary_offset' is a variable name; shadowing that
  with a #define is ripe for failure when the asm-offsets.h header gets
  included. Rename the thing.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
* x86/softirq: Move softirq pending next to current task
  Thomas Gleixner, 2022-10-17 · 2 files changed, -1/+3

  Another hot variable which is strictly per-CPU and benefits from being in
  the same cache line.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111145.702133710@infradead.org
* x86/percpu: Move irq_stack variables next to current_task
  Thomas Gleixner, 2022-10-17 · 9 files changed, -28/+24

  Further extend struct pcpu_hot with the hard and soft irq stack pointers.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111145.599170752@infradead.org
* x86/percpu: Move current_top_of_stack next to current_task
  Thomas Gleixner, 2022-10-17 · 11 files changed, -28/+19

  Extend the struct pcpu_hot cacheline with current_top_of_stack, another very
  frequently used value.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111145.493038635@infradead.org
* x86/percpu: Move cpu_number next to current_task
  Thomas Gleixner, 2022-10-17 · 3 files changed, -11/+7

  Also add cpu_number to the pcpu_hot structure; it is often referenced and
  this cacheline is there anyway.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111145.387678283@infradead.org
* x86/percpu: Move preempt_count next to current_task
  Thomas Gleixner, 2022-10-17 · 3 files changed, -20/+16

  Add preempt_count to pcpu_hot, since it is one of the most used per-CPU
  variables.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111145.284170644@infradead.org
* x86: Put hot per CPU variables into a struct
  Thomas Gleixner, 2022-10-17 · 5 files changed, -15/+24

  The layout of per-cpu variables is at the mercy of the compiler. This can
  lead to random performance fluctuations from build to build.

  Create a structure to hold some of the hottest per-cpu variables, starting
  with current_task.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111145.179707194@infradead.org
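  A hedged sketch of the idea (field names follow the later commits in this
  series, but this is not the kernel's exact definition): gather the hottest
  per-CPU variables so they share a cache line regardless of how the compiler
  would otherwise lay them out.

      #include <linux/percpu.h>
      #include <linux/cache.h>
      #include <linux/types.h>

      struct task_struct;

      struct pcpu_hot_example {
              struct task_struct      *current_task;
              int                     preempt_count;
              int                     cpu_number;
              unsigned long           top_of_stack;
              u16                     softirq_pending;
      } ____cacheline_aligned;

      DECLARE_PER_CPU_ALIGNED(struct pcpu_hot_example, pcpu_hot_example);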
* crypto: x86/poly1305: Remove custom function alignment
  Thomas Gleixner, 2022-10-17 · 1 file changed, -1/+0

  SYM_FUNC_START*() and friends already imply alignment; remove the custom
  alignment hacks to make the code consistent. This prepares for future
  function-call ABI changes.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lore.kernel.org/r/20220915111145.073285765@infradead.org