path: root/arch/x86/kvm/kvm-asm-offsets.c
author     Paolo Bonzini <pbonzini@redhat.com>    2022-11-07 11:14:27 +0100
committer  Paolo Bonzini <pbonzini@redhat.com>    2022-11-09 18:25:06 +0100
commit     e61ab42de874c5af8c5d98b327c77a374d9e7da1 (patch)
tree       19fc4417cd0b452e2f8fe523495b5b36025ede22 /arch/x86/kvm/kvm-asm-offsets.c
parent     KVM: SVM: do not allocate struct svm_cpu_data dynamically (diff)
download   linux-e61ab42de874c5af8c5d98b327c77a374d9e7da1.tar.xz
           linux-e61ab42de874c5af8c5d98b327c77a374d9e7da1.zip
KVM: SVM: move guest vmsave/vmload back to assembly
It is error-prone that code after vmexit cannot access percpu data
because GSBASE has not been restored yet.  It forces MSR_IA32_SPEC_CTRL
save/restore to happen very late, after the predictor untraining
sequence, and it gets in the way of return stack depth tracking
(a retbleed mitigation that is in linux-next as of 2022-11-09).

As a first step towards fixing that, move the VMCB VMSAVE/VMLOAD
to assembly, essentially undoing commit fb0c4a4fee5a ("KVM: SVM:
move VMLOAD/VMSAVE to C code", 2021-03-15).  The reason for that
commit was that it made it simpler to use a different VMCB for
VMLOAD/VMSAVE versus VMRUN; but that is not a big hassle anymore
thanks to the kvm-asm-offsets machinery and other related cleanups.

The idea on how to number the exception tables is stolen from
a prototype patch by Peter Zijlstra.

Cc: stable@vger.kernel.org
Fixes: a149180fbcf3 ("x86: Add magic AMD return-thunk")
Link: <https://lore.kernel.org/all/f571e404-e625-bae1-10e9-449b2eb4cbd8@citrix.com/>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
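For readers unfamiliar with the kvm-asm-offsets machinery referenced above: it follows the same pattern as the kernel's generic asm-offsets generation, where C code emits structure-member offsets as marker strings in the compiler's assembly output and a build-time script converts them into #define constants that assembly files can include. A minimal sketch of that pattern (paraphrased from the kbuild helpers, not the verbatim kernel source):

/*
 * Sketch of kbuild-style offset generation (paraphrased, illustrative).
 * Each invocation emits a "->SYM value" marker into the generated
 * assembly; a build-time script then turns every marker into
 * "#define SYM value" in the generated header (here,
 * arch/x86/kvm/kvm-asm-offsets.h).
 */
#include <linux/stddef.h>               /* offsetof() */

#define DEFINE(sym, val) \
        asm volatile("\n.ascii \"->" #sym " %0 " #val "\"" : : "i" (val))
#define OFFSET(sym, str, mem)  DEFINE(sym, offsetof(struct str, mem))
#define BLANK()                asm volatile("\n.ascii \"->\"" : : )

With SVM_vmcb01 generated this way, svm/vmenter.S can reach the vmcb01 field of struct vcpu_svm (and, via KVM_VMCB_pa, the VMCB physical address) without a C helper, which is what makes moving VMSAVE/VMLOAD back to assembly manageable.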
Diffstat (limited to 'arch/x86/kvm/kvm-asm-offsets.c')
-rw-r--r--   arch/x86/kvm/kvm-asm-offsets.c   1 +
1 file changed, 1 insertion, 0 deletions
diff --git a/arch/x86/kvm/kvm-asm-offsets.c b/arch/x86/kvm/kvm-asm-offsets.c
index f1b694e431ae..f83e88b85bf2 100644
--- a/arch/x86/kvm/kvm-asm-offsets.c
+++ b/arch/x86/kvm/kvm-asm-offsets.c
@@ -16,6 +16,7 @@ static void __used common(void)
 		BLANK();
 		OFFSET(SVM_vcpu_arch_regs, vcpu_svm, vcpu.arch.regs);
 		OFFSET(SVM_current_vmcb, vcpu_svm, current_vmcb);
+		OFFSET(SVM_vmcb01, vcpu_svm, vmcb01);
 		OFFSET(KVM_VMCB_pa, kvm_vmcb_info, pa);
 	}
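For illustration only, the generated arch/x86/kvm/kvm-asm-offsets.h then carries one #define per OFFSET() above. The numeric values below are made up (they depend on the real structure layouts), but the shape of the output is along these lines:

/* Hypothetical generated kvm-asm-offsets.h; offset values are illustrative. */
#define SVM_vcpu_arch_regs 16   /* offsetof(struct vcpu_svm, vcpu.arch.regs) */
#define SVM_current_vmcb 4664   /* offsetof(struct vcpu_svm, current_vmcb)   */
#define SVM_vmcb01 4672         /* offsetof(struct vcpu_svm, vmcb01)         */
#define KVM_VMCB_pa 8           /* offsetof(struct kvm_vmcb_info, pa)        */

The new SVM_vmcb01 constant is what lets the assembly entry code combine SVM_vmcb01 and KVM_VMCB_pa to reach the VMCB physical address handed to VMSAVE/VMLOAD, as described in the commit message.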