| author | Thomas Gleixner <tglx@linutronix.de> | 2022-09-15 13:11:23 +0200 |
|---|---|---|
| committer | Peter Zijlstra <peterz@infradead.org> | 2022-10-17 16:41:13 +0200 |
| commit | e81dc127ef69887c72735a3e3868930e2bf313ed (patch) | |
| tree | ab2f1fd791dba8e3d3db75b441b962ab2dc2cb5a /arch/x86/kernel/module.c | |
| parent | x86/paravirt: Make struct paravirt_call_site unconditionally available (diff) | |
x86/callthunks: Add call patching for call depth tracking
Mitigating the Intel SKL RSB underflow issue in software requires tracking
the call depth: every CALL and every RET must be intercepted and
additional accounting code injected.
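The accounting itself can be illustrated with a small userspace model. This is a hedged sketch, not the kernel's implementation (which keeps the depth in a per-CPU variable and updates it from asm thunks); the constant and function names here are assumptions for illustration only. The RSB on the affected parts holds 16 return predictions, so a RET taken at depth zero is the underflow case that must be handled in software:

```c
#include <assert.h>

/*
 * Hypothetical model of call depth accounting (illustration only,
 * not the kernel's actual implementation).  The RSB holds 16
 * entries; a RET at depth zero would underflow it and mispredict.
 */
#define RSB_DEPTH 16

static int call_depth;

/* Called from the per-function padding on every CALL. */
static void account_call(void)
{
	if (call_depth < RSB_DEPTH)
		call_depth++;
}

/*
 * Called from the RET thunk.  Returns 1 when the RSB would
 * underflow and must be refilled ("stuffed") in software.
 */
static int account_ret(void)
{
	if (call_depth == 0)
		return 1;
	call_depth--;
	return 0;
}
```

Interposing this bookkeeping on every CALL and RET is exactly why the mitigation needs a patchable location at both instruction types.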
The existing retbleed mitigations already include means of redirecting
RET to __x86_return_thunk; this can be re-purposed and RET can be
redirected to another function doing RET accounting.
CALL accounting will use the function padding introduced in prior
patches. For each CALL instruction, the destination symbol's padding
is rewritten to do the accounting and the CALL instruction is adjusted
to call into the padding.
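Retargeting a direct CALL at the padding amounts to recomputing the rel32 displacement so it lands a fixed number of bytes before the destination symbol. The following is a sketch under assumed names; the padding size and the helper are illustrative, not taken from the patch:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Assumed per-function padding size (illustrative). */
#define PADDING_BYTES 16

/*
 * Sketch: rewrite a 5-byte rel32 CALL (E8 <disp32>) so it targets
 * the padding in front of 'dest' instead of 'dest' itself.  The
 * displacement is relative to the end of the CALL instruction.
 */
static void retarget_call(uint8_t *insn, uint64_t dest)
{
	uint64_t next = (uint64_t)(uintptr_t)insn + 5;
	int32_t disp = (int32_t)(dest - PADDING_BYTES - next);

	insn[0] = 0xE8;
	memcpy(&insn[1], &disp, sizeof(disp));
}
```

The net effect is that the accounting code in the padding runs first and then falls through into the function proper, so no extra jump is needed on the way in.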
This ensures only affected CPUs pay the overhead of this accounting.
Unaffected CPUs will leave the padding unused and have their 'JMP
__x86_return_thunk' replaced with an actual 'RET' instruction.
Objtool has been modified to emit a .call_sites section that lists
all the CALL instructions. Additionally, the paravirt patch sites are
iterated, since by that point they will have been patched from
indirect calls into direct calls (or into direct instructions, in
which case the site is ignored).
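Sections like .call_sites are conventionally stored as arrays of 32-bit self-relative offsets, each slot encoding the distance to the instruction it describes. A minimal sketch of decoding one such slot, with an illustrative helper name:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: a .call_sites-style entry is an s32 offset relative to
 * the entry's own location; adding the offset to the slot address
 * yields the address of the CALL instruction it describes.
 * (Helper name is illustrative, not the kernel's.)
 */
static void *call_site_addr(int32_t *slot)
{
	return (uint8_t *)slot + *slot;
}
```

Self-relative offsets keep the section position-independent, so no relocation of the entries is needed when the kernel image is loaded at a randomized address.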
Module handling and the actual thunk code for SKL will be added in
subsequent steps.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220915111147.470877038@infradead.org