* KVM: SEV-ES: Delegate LBR virtualization to the processor (Ravi Bangoria, 2024-06-03; 3 files, -7/+17)

    As documented in APM [1], LBR Virtualization must be enabled for SEV-ES
    guests. Although KVM currently enforces LBRV for SEV-ES guests, there are
    multiple issues with it:

    o MSR_IA32_DEBUGCTLMSR is still intercepted. Since MSR_IA32_DEBUGCTLMSR
      interception is used to dynamically toggle LBRV for performance reasons,
      this can be fatal for SEV-ES guests. For example, an SEV-ES guest on
      Zen3:

        [guest ~]# wrmsr 0x1d9 0x4
        KVM: entry failed, hardware error 0xffffffff
        EAX=00000004 EBX=00000000 ECX=000001d9 EDX=00000000

      Fix this by never intercepting MSR_IA32_DEBUGCTLMSR for SEV-ES guests.
      No additional save/restore logic is required since MSR_IA32_DEBUGCTLMSR
      is of swap type A.

    o KVM will disable LBRV if userspace sets MSR_IA32_DEBUGCTLMSR before the
      VMSA is encrypted. Fix this by moving the LBRV enablement code to after
      VMSA encryption.

    [1] AMD64 Architecture Programmer's Manual, Pub. 40332, Rev. 4.07 (June
        2023), Vol 2, 15.35.2 "Enabling SEV-ES".
        https://bugzilla.kernel.org/attachment.cgi?id=304653

    Fixes: 376c6d285017 ("KVM: SVM: Provide support for SEV-ES vCPU creation/loading")
    Co-developed-by: Nikunj A Dadhania <nikunj@amd.com>
    Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
    Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
    Message-ID: <20240531044644.768-4-ravi.bangoria@amd.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
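
    For illustration, the failing guest-side write above can be reproduced
    with a minimal userspace sketch (assumes root and the 'msr' kernel
    module loaded in the guest; this is a repro aid, not the kernel fix):

        /* wrmsr 0x1d9 0x4 via the msr driver, which uses the file offset
         * as the MSR index. 0x1d9 is MSR_IA32_DEBUGCTLMSR. */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                uint64_t val = 0x4;
                int fd = open("/dev/cpu/0/msr", O_WRONLY);

                if (fd < 0) {
                        perror("open /dev/cpu/0/msr");
                        return 1;
                }
                if (pwrite(fd, &val, sizeof(val), 0x1d9) != sizeof(val))
                        perror("wrmsr MSR_IA32_DEBUGCTLMSR");
                close(fd);
                return 0;
        }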
* KVM: SEV-ES: Disallow SEV-ES guests when X86_FEATURE_LBRV is absent (Ravi Bangoria, 2024-06-03; 3 files, -9/+14)

    As documented in APM [1], LBR Virtualization must be enabled for SEV-ES
    guests. So, prevent SEV-ES guests when LBRV support is missing.

    [1] AMD64 Architecture Programmer's Manual, Pub. 40332, Rev. 4.07 (June
        2023), Vol 2, 15.35.2 "Enabling SEV-ES".
        https://bugzilla.kernel.org/attachment.cgi?id=304653

    Fixes: 376c6d285017 ("KVM: SVM: Provide support for SEV-ES vCPU creation/loading")
    Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
    Message-ID: <20240531044644.768-3-ravi.bangoria@amd.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* KVM: SEV-ES: Prevent MSR access post VMSA encryption (Nikunj A Dadhania, 2024-06-03; 1 file, -0/+18)

    KVM currently allows userspace to read/write MSRs even after the VMSA is
    encrypted. This can cause unintentional issues if MSR access has side
    effects. For example, while migrating a guest, userspace could attempt to
    migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on
    the target. Fix this by preventing access to those MSRs which are context
    switched via the VMSA, once the VMSA is encrypted.

    Suggested-by: Sean Christopherson <seanjc@google.com>
    Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
    Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
    Message-ID: <20240531044644.768-2-ravi.bangoria@amd.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
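
    For illustration, a sketch of the shape of the guard (helper names
    follow KVM's SVM code, but this is not necessarily the exact patch):

        /* Reject host accesses to MSRs that are context switched via the
         * VMSA once the VMSA is encrypted; the VMSA copy is authoritative
         * and KVM's shadow state is stale. */
        static bool sev_es_prevent_msr_access(struct kvm_vcpu *vcpu,
                                              struct msr_data *msr_info)
        {
                return sev_es_guest(vcpu->kvm) &&
                       vcpu->arch.guest_state_protected &&
                       !msr_write_intercepted(vcpu, msr_info->index);
        }

    with svm_get_msr()/svm_set_msr() failing the access when the helper
    returns true.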
* KVM: SVM: WARN on vNMI + NMI window iff NMIs are outright masked (Sean Christopherson, 2024-05-23; 1 file, -8/+19)

    When requesting an NMI window, WARN on vNMI support being enabled if and
    only if NMIs are actually masked, i.e. if the vCPU is already handling an
    NMI. KVM's ABI for NMIs that arrive simultaneously (from KVM's point of
    view) is to inject one NMI and pend the other. When using vNMI, KVM pends
    the second NMI simply by setting V_NMI_PENDING, and lets the CPU do the
    rest (hardware automatically sets V_NMI_BLOCKING when an NMI is injected).

    However, if KVM can't immediately inject an NMI, e.g. because the vCPU is
    in an STI shadow or is running with GIF=0, then KVM will request an NMI
    window and trigger the WARN (but still function correctly).

    Whether or not the GIF=0 case makes sense is debatable, as the intent of
    KVM's behavior is to provide functionality that is as close to real
    hardware as possible. E.g. if two NMIs are sent in quick succession, the
    probability of both NMIs arriving in an STI shadow is infinitesimally low
    on real hardware, but significantly larger in a virtual environment, e.g.
    if the vCPU is preempted in the STI shadow. For GIF=0, the argument isn't
    as clear cut, because the window where two NMIs can collide is much larger
    in bare metal (though still small).

    That said, KVM should not have divergent behavior for the GIF=0 case based
    on whether or not vNMI support is enabled. And KVM has allowed
    simultaneous NMIs with GIF=0 for over a decade, since commit 7460fb4a3400
    ("KVM: Fix simultaneous NMIs"). I.e. KVM's GIF=0 handling shouldn't be
    modified without a *really* good reason to do so, and if KVM's behavior
    were to be modified, it should be done irrespective of vNMI support.

    Fixes: fa4c027a7956 ("KVM: x86: Add support for SVM's Virtual NMI")
    Cc: stable@vger.kernel.org
    Cc: Santosh Shukla <Santosh.Shukla@amd.com>
    Cc: Maxim Levitsky <mlevitsk@redhat.com>
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-ID: <20240522021435.1684366-1-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
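
    For illustration, a sketch of the tightened assertion in the NMI window
    path (the actual patch may differ in details):

        /* WARN iff vNMI is enabled *and* NMIs are currently masked, i.e.
         * the vCPU is already handling an NMI. An NMI window request with
         * vNMI enabled is otherwise legitimate, e.g. due to an STI shadow
         * or GIF=0 blocking immediate injection. */
        WARN_ON_ONCE(is_vnmi_enabled(svm) && svm_get_nmi_mask(&svm->vcpu));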
* KVM: x86: Force KVM_WERROR if the global WERROR is enabled (Sean Christopherson, 2024-05-23; 1 file, -1/+2)

    Force KVM_WERROR if the global WERROR is enabled to avoid pestering the
    user about a Kconfig that will ultimately be ignored. Force KVM_WERROR
    instead of making it mutually exclusive with WERROR to avoid generating a
    .config that builds KVM with -Werror, but has KVM_WERROR=n.

    Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-ID: <20240517180341.974251-1-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* KVM: x86: Disable KVM_INTEL_PROVE_VE by default (Sean Christopherson, 2024-05-23; 1 file, -3/+5)

    Disable KVM's "prove #VE" support by default, as it provides no functional
    value, and even its sanity checking benefits are relatively limited. I.e.
    it should be fully opt-in even on debug kernels, especially since EPT
    Violation #VE suppression appears to be buggy on some CPUs.

    Opportunistically add a line in the help text to make it abundantly clear
    that KVM_INTEL_PROVE_VE should never be enabled in a production
    environment.

    Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-ID: <20240518000430.1118488-10-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* KVM: VMX: Enumerate EPT Violation #VE support in /proc/cpuinfo (Sean Christopherson, 2024-05-23; 1 file, -1/+1)

    Don't suppress printing EPT_VIOLATION_VE in /proc/cpuinfo, as knowing
    whether or not KVM_INTEL_PROVE_VE actually does anything is extremely
    valuable. A privileged user can get at the information by reading the raw
    MSR, but the whole point of the VMX flags is to avoid needing to glean
    information from raw MSR reads.

    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-ID: <20240518000430.1118488-9-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* KVM: x86/mmu: Print SPTEs on unexpected #VE (Sean Christopherson, 2024-05-23; 3 files, -8/+38)

    Print the SPTEs that correspond to the faulting GPA on an unexpected EPT
    Violation #VE to help the user debug failures, e.g. to pinpoint which SPTE
    didn't have SUPPRESS_VE set.

    Opportunistically assert that the underlying exit reason was indeed an EPT
    Violation, as the CPU has *really* gone off the rails if a #VE occurs due
    to a completely unexpected exit reason.

    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-ID: <20240518000430.1118488-7-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* KVM: VMX: Dump VMCS on unexpected #VE (Sean Christopherson, 2024-05-23; 1 file, -1/+3)

    Dump the VMCS on an unexpected #VE, otherwise it's practically impossible
    to figure out why the #VE occurred.

    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-ID: <20240518000430.1118488-6-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* KVM: x86/mmu: Add sanity checks that KVM doesn't create EPT #VE SPTEs (Sean Christopherson, 2024-05-23; 3 files, -0/+14)

    Assert that KVM doesn't set a SPTE to a value that could trigger an EPT
    Violation #VE on a non-MMIO SPTE, e.g. to help detect bugs even without
    KVM_INTEL_PROVE_VE enabled, and to help debug actual #VE failures.

    Note, this will run afoul of TDX support, which needs to reflect emulated
    MMIO accesses into the guest as #VEs (which was the whole point of adding
    EPT Violation #VE support in KVM). The obvious fix for that is to exempt
    MMIO SPTEs, but that's annoyingly difficult now that is_mmio_spte() relies
    on a per-VM value. However, resolving that conundrum is a future problem,
    whereas getting KVM_INTEL_PROVE_VE healthy is a current problem.

    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-ID: <20240518000430.1118488-5-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
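
    For illustration, a sketch of the shape of such an assertion (names and
    placement are illustrative; VMX_EPT_SUPPRESS_VE_BIT is bit 63 of an EPT
    PTE):

        /* A present, non-MMIO SPTE must never be eligible to generate an
         * EPT Violation #VE, i.e. must have the suppress-#VE bit set. */
        WARN_ON_ONCE(is_shadow_present_pte(spte) &&
                     !is_mmio_spte(kvm, spte) &&
                     !(spte & VMX_EPT_SUPPRESS_VE_BIT));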
* KVM: nVMX: Always handle #VEs in L0 (never forward #VEs from L2 to L1) (Sean Christopherson, 2024-05-23; 1 file, -0/+2)

    Always handle #VEs, e.g. those due to "prove EPT Violation #VE" failures,
    in L0, as KVM does not expose any #VE capabilities to L1, i.e. any and all
    #VEs are KVM's responsibility.

    Fixes: 8131cf5b4fd8 ("KVM: VMX: Introduce test mode related to EPT violation VE")
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-ID: <20240518000430.1118488-4-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* KVM: nVMX: Initialize #VE info page for vmcs02 when proving #VE support (Sean Christopherson, 2024-05-23; 1 file, -0/+3)

    Point vmcs02.VE_INFORMATION_ADDRESS at the vCPU's #VE info page when
    initializing vmcs02, otherwise KVM will run L2 with EPT Violation #VE
    enabled and a VE info address pointing at pfn 0.

    Fixes: 8131cf5b4fd8 ("KVM: VMX: Introduce test mode related to EPT violation VE")
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-ID: <20240518000430.1118488-3-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
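
    For illustration, a sketch of the vmcs02 initialization (the exact
    condition and field usage in the patch may differ):

        /* Propagate the vCPU's #VE info page to vmcs02; KVM never exposes
         * EPT Violation #VE to L1, so L0's page is always the right one. */
        if (vmx->ve_info)
                vmcs_write64(VE_INFORMATION_ADDRESS, __pa(vmx->ve_info));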
* KVM: VMX: Don't kill the VM on an unexpected #VE (Sean Christopherson, 2024-05-23; 1 file, -2/+2)

    Don't terminate the VM on an unexpected #VE, as it's extremely unlikely
    the #VE is fatal to the guest, and even less likely that it presents a
    danger to the host. Simply resume the guest on "failure", as the #VE info
    page's BUSY field will prevent converting any more EPT Violations to #VEs
    for the vCPU (at least, that's what the BUSY field is supposed to do).

    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-ID: <20240518000430.1118488-8-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* KVM: x86/mmu: Use SHADOW_NONPRESENT_VALUE for atomic zap in TDP MMU (Isaku Yamahata, 2024-05-23; 1 file, -1/+1)

    Use SHADOW_NONPRESENT_VALUE when zapping TDP MMU SPTEs with mmu_lock held
    for read; tdp_mmu_zap_spte_atomic() was simply missed during the initial
    development.

    Fixes: 7f01cab84928 ("KVM: x86/mmu: Allow non-zero value for non-present SPTE and removed SPTE")
    Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
    [sean: write changelog]
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Reviewed-by: Kai Huang <kai.huang@intel.com>
    Message-ID: <20240518000430.1118488-2-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
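
    For illustration, the fix amounts to zapping to the non-present value
    that keeps the suppress-#VE bit set, rather than literal '0' (a sketch
    assuming KVM's existing TDP MMU helpers):

        /* In tdp_mmu_zap_spte_atomic(), when finalizing the zap: write
         * SHADOW_NONPRESENT_VALUE, not 0, so the suppress-#VE bit stays
         * set in the non-present SPTE. */
        __kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);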
* selftests/kvm: remove dead file (Paolo Bonzini, 2024-05-15; 1 file, -1135/+0)

    This file was supposed to be removed in commit 2b7deea3ec7c ("Revert "kvm:
    selftests: move base kvm_util.h declarations to kvm_util_base.h""), but it
    survived. Remove it now.

    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Merge tag 'kvm-x86-misc-6.10' of https://github.com/kvm-x86/linux into HEAD (Paolo Bonzini, 2024-05-12; 7 files, -31/+53)

    KVM x86 misc changes for 6.10:

    - Advertise the max mappable GPA in the "guest MAXPHYADDR" CPUID field,
      which is unused by hardware, so that KVM can communicate its inability
      to map GPAs that set bits 51:48 due to lack of 5-level paging. Guest
      firmware is expected to use the information to safely remap BARs in the
      uppermost GPA space, i.e. to avoid placing a BAR at a legal, but
      unmappable, GPA.

    - Use vfree() instead of kvfree() for allocations that always use
      vcalloc() or __vcalloc().

    - Don't completely ignore same-value writes to immutable feature MSRs, as
      doing so results in KVM failing to reject accesses to MSRs that aren't
      supposed to exist given the vCPU model and/or KVM configuration.

    - Don't mark APICv as being inhibited due to ABSENT if APICv is disabled
      KVM-wide, to avoid confusing debuggers (KVM will never bother clearing
      the ABSENT inhibit, even if userspace enables an in-kernel local APIC).
| * KVM: x86: Remove VT-d mention in posted interrupt tracepoint (Alejandro Jimenez, 2024-05-02; 1 file, -2/+2)

    The kvm_pi_irte_update tracepoint is called from both SVM and VMX vendor
    code, and while the "posted interrupt" naming is also adopted by SVM in
    several places, VT-d specifically refers to Intel's "Virtualization
    Technology for Directed I/O".

    Signed-off-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
    Link: https://lore.kernel.org/r/20240418021823.1275276-3-alejandro.j.jimenez@oracle.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * KVM: x86: Only set APICV_INHIBIT_REASON_ABSENT if APICv is enabled (Alejandro Jimenez, 2024-05-02; 1 file, -7/+4)

    Use the APICv enablement status to determine if APICV_INHIBIT_REASON_ABSENT
    needs to be set, instead of unconditionally setting the reason during
    initialization.

    Specifically, in cases where AVIC is disabled via module parameter or lack
    of hardware support, unconditionally setting an inhibit reason due to the
    absence of an in-kernel local APIC can lead to a scenario where the reason
    incorrectly remains set after a local APIC has been created by either
    KVM_CREATE_IRQCHIP or the enabling of KVM_CAP_IRQCHIP_SPLIT. This is
    because the helpers in charge of removing the inhibit return early if
    enable_apicv is not true, and therefore the bit remains set.

    This leads to confusion as to why APICv is not active, since an incorrect
    reason will be reported by tracepoints and/or a debugging tool that
    examines the currently set inhibit reasons.

    Fixes: ef8b4b720368 ("KVM: ensure APICv is considered inactive if there is no APIC")
    Signed-off-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
    Link: https://lore.kernel.org/r/20240418021823.1275276-2-alejandro.j.jimenez@oracle.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * KVM: x86: Allow, don't ignore, same-value writes to immutable MSRs (Sean Christopherson, 2024-05-02; 1 file, -7/+4)

    When handling userspace writes to immutable feature MSRs for a vCPU that
    has already run, fall through into the normal code to set the MSR instead
    of immediately returning '0'. I.e. allow such writes, instead of ignoring
    such writes. This fixes a bug where KVM incorrectly allows writes to the
    VMX MSRs that enumerate which CR{0,4} bits can be set, but only if the
    vCPU has already run.

    The intent of returning '0' and thus ignoring the write was to avoid any
    side effects, e.g. refreshing the PMU and thus doing weird things with
    perf events while the vCPU is running. That approach sounds nice in
    theory, but in practice it makes it all but impossible to maintain a sane
    ABI, e.g. all VMX MSRs return -EBUSY if the CPU is post-VMXON, and the VMX
    MSRs for fixed-1 CR bits are never writable, etc.

    As for refreshing the PMU, kvm_set_msr_common() explicitly skips the PMU
    refresh if MSR_IA32_PERF_CAPABILITIES is being written with the current
    value, specifically to avoid unwanted side effects. And if necessary,
    adding similar logic for other MSRs is not difficult.

    Fixes: 0094f62c7eaa ("KVM: x86: Disallow writes to immutable feature MSRs after KVM_RUN")
    Reported-by: Jim Mattson <jmattson@google.com>
    Cc: Raghavendra Rao Ananta <rananta@google.com>
    Link: https://lore.kernel.org/r/20240408231500.1388122-1-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
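
    For illustration, a sketch of the reworked flow (helper names follow
    KVM's x86 code, but the exact diff may differ):

        if (kvm_vcpu_has_run(vcpu) && kvm_is_immutable_feature_msr(index)) {
                /* Changing the value is still disallowed post-KVM_RUN. */
                if (do_get_msr(vcpu, index, &val) || val != data)
                        return -EINVAL;
                /* Same value: fall through to the normal MSR handling
                 * instead of silently ignoring the write. */
        }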
| * KVM: Use vfree for memory allocated by vcalloc()/__vcalloc() (Li RongQing, 2024-04-09; 3 files, -5/+5)

    Commit 37b2a6510a48 ("KVM: use __vcalloc for very large allocations")
    replaced kvzalloc()/kvcalloc() with vcalloc(), but didn't replace kvfree()
    with vfree().

    Signed-off-by: Li RongQing <lirongqing@baidu.com>
    Link: https://lore.kernel.org/r/20240131012357.53563-1-lirongqing@baidu.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * KVM: x86: Advertise max mappable GPA in CPUID.0x80000008.GuestPhysBits (Gerd Hoffmann, 2024-04-09; 3 files, -3/+32)

    Use the GuestPhysBits field in CPUID.0x80000008 to communicate the max
    mappable GPA to userspace, i.e. the max GPA that is addressable by the CPU
    itself. Typically this is identical to the max effective GPA, except in
    the case where the CPU supports MAXPHYADDR > 48 but does not support
    5-level TDP (the CPU consults bits 51:48 of the GPA only when walking the
    fifth level TDP page table entry).

    Enumerating the max mappable GPA via CPUID will allow guest firmware to
    map resources like PCI bars in the highest possible address space, while
    ensuring that the GPA is addressable by the CPU. Without precise knowledge
    about the max mappable GPA, the guest must assume that 5-level paging is
    unsupported and thus restrict its mappings to the lower 48 bits.

    Advertise the max mappable GPA via KVM_GET_SUPPORTED_CPUID as userspace
    doesn't have easy access to whether or not 5-level paging is supported,
    and to play nice with userspace VMMs that reflect the supported CPUID
    directly into the guest.

    AMD's APM (3.35) defines GuestPhysBits (EAX[23:16]) as:

      Maximum guest physical address size in bits. This number applies only
      to guests using nested paging. When this field is zero, refer to the
      PhysAddrSize field for the maximum guest physical address size.

    Tom Lendacky confirmed that the purpose of GuestPhysBits is software use
    and KVM can use it as described above. Real hardware always returns zero.

    Leave GuestPhysBits as '0' when TDP is disabled in order to comply with
    the APM's statement that GuestPhysBits "applies only to guest using nested
    paging". As above, guest firmware will likely create suboptimal mappings,
    but that is a very minor issue and not a functional concern.

    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
    Link: https://lore.kernel.org/r/20240313125844.912415-3-kraxel@redhat.com
    [sean: massage changelog]
    Signed-off-by: Sean Christopherson <seanjc@google.com>
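
    For illustration, a sketch of the enumeration logic (tdp_enabled is
    KVM's existing knob; the 5-level TDP check is an illustrative
    placeholder):

        static u32 kvm_mappable_gpa_bits(void)
        {
                u32 phys_bits = boot_cpu_data.x86_phys_bits;

                /* APM: GuestPhysBits applies only to guests using nested
                 * paging, so leave it '0' when TDP is disabled. */
                if (!tdp_enabled)
                        return 0;

                /* Without 5-level TDP, bits 51:48 of the GPA are never
                 * consulted, so the max mappable GPA is 2^48 - 1. */
                if (phys_bits > 48 && !tdp_5level_supported)
                        return 48;

                return phys_bits;
        }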
| * KVM: x86: Don't advertise guest.MAXPHYADDR as host.MAXPHYADDR in CPUID (Gerd Hoffmann, 2024-04-09; 1 file, -11/+10)

    Drop KVM's propagation of GuestPhysBits (CPUID leaf 80000008, EAX[23:16])
    to HostPhysBits (same leaf, EAX[7:0]) when advertising the address widths
    to userspace via KVM_GET_SUPPORTED_CPUID.

    Per AMD, GuestPhysBits is intended for software use, and physical CPUs do
    not set that field. I.e. GuestPhysBits will be non-zero if and only if KVM
    is running as a nested hypervisor, and in that case, GuestPhysBits is NOT
    guaranteed to capture the CPU's effective MAXPHYADDR when running with TDP
    enabled.

    E.g. KVM will soon use GuestPhysBits to communicate the CPU's maximum
    *addressable* guest physical address, which would result in KVM
    under-reporting PhysBits when running as an L1 on a CPU with
    MAXPHYADDR=52, but without 5-level paging.

    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Cc: stable@vger.kernel.org
    Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
    Link: https://lore.kernel.org/r/20240313125844.912415-2-kraxel@redhat.com
    [sean: rewrite changelog with --verbose, Cc stable@]
    Signed-off-by: Sean Christopherson <seanjc@google.com>
* | Merge tag 'kvm-x86-mmu-6.10' of https://github.com/kvm-x86/linux into HEAD (Paolo Bonzini, 2024-05-12; 2 files, -29/+66)

    KVM x86 MMU changes for 6.10:

    - Process TDP MMU SPTEs that are zapped while holding mmu_lock for read
      after replacing REMOVED_SPTE with '0' and flushing remote TLBs, which
      allows vCPU tasks to repopulate the zapped region while the zapper
      finishes tearing down the old, defunct page tables.

    - Fix a longstanding, likely benign-in-practice race where KVM could fail
      to detect a write from kvm_mmu_track_write() to a shadowed GPTE if the
      GPTE is in the first page table being shadowed.
| * | KVM: x86/mmu: Fix a largely theoretical race in kvm_mmu_track_write() (Sean Christopherson, 2024-05-02; 1 file, -3/+17)

    Add full memory barriers in kvm_mmu_track_write() and account_shadowed()
    to plug a (very, very theoretical) race where kvm_mmu_track_write() could
    miss a 0->1 transition of indirect_shadow_pages and fail to zap relevant,
    *stale* SPTEs.

    Without the barriers, because modern x86 CPUs allow (per the SDM):

      Reads may be reordered with older writes to different locations but not
      with older writes to the same location.

    it's possible that the following could happen (in terms of values being
    visible/resolved):

      CPU0                               CPU1
      read memory[gfn] (=Y)
                                         memory[gfn] Y=>X
                                         read indirect_shadow_pages (=0)
      indirect_shadow_pages 0=>1

    or conversely:

      CPU0                               CPU1
      indirect_shadow_pages 0=>1
                                         read indirect_shadow_pages (=0)
      read memory[gfn] (=Y)
                                         memory[gfn] Y=>X

    E.g. in the below scenario, CPU0 could fail to zap SPTEs, and CPU1 could
    fail to retry the faulting instruction, resulting in KVM entering the
    guest with a stale SPTE (map PTE=X instead of PTE=Y).

      PTE = X;

      CPU0:
        emulator_write_phys()
          PTE = Y
          kvm_page_track_write()
            kvm_mmu_track_write()
              // memory barrier missing here
              if (indirect_shadow_pages)
                zap();

      CPU1:
        FNAME(page_fault)
          FNAME(walk_addr)
            FNAME(walk_addr_generic)
              gw->pte = PTE; // X

          FNAME(fetch)
            kvm_mmu_get_child_sp
              kvm_mmu_get_shadow_page
                __kvm_mmu_get_shadow_page
                  kvm_mmu_alloc_shadow_page
                    account_shadowed
                      indirect_shadow_pages++
                      // memory barrier missing here
            if (FNAME(gpte_changed)) // if (PTE == X)
              return RET_PF_RETRY;

    In practice, this bug likely cannot be observed as both the 0=>1
    transition and reordering of this scope are extremely rare occurrences.

    Note, if the cost of the barrier (which is simply a locked ADD, see commit
    450cbdd0125c ("locking/x86: Use LOCK ADD for smp_mb() instead of MFENCE"))
    is problematic, KVM could avoid the barrier by bailing earlier if checking
    kvm_memslots_have_rmaps() is false. But the odds of the barrier being
    problematic is extremely low, *and* the odds of the extra checks being
    meaningfully faster overall is also low.

    Link: https://lore.kernel.org/r/20240423193114.2887673-1-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
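
    For illustration, the fix pairs a full barrier on each side so that at
    least one CPU is guaranteed to observe the other's write (a sketch of the
    shape of the fix; exact placement and comments in the patch may differ):

        /* account_shadowed(): order the 0->1 transition of
         * indirect_shadow_pages before the subsequent (re)read of the
         * guest PTE in FNAME(gpte_changed). */
        kvm->arch.indirect_shadow_pages++;
        smp_mb();

        /* kvm_mmu_track_write(): order the guest PTE write (done by the
         * caller) before the read of indirect_shadow_pages. */
        smp_mb();
        if (!READ_ONCE(kvm->arch.indirect_shadow_pages))
                return;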
| * | KVM: x86/mmu: Process atomically-zapped SPTEs after TLB flush (David Matlack, 2024-04-09; 1 file, -26/+49)

    When zapping TDP MMU SPTEs under read-lock, process zapped SPTEs *after*
    flushing TLBs and after replacing the special REMOVED_SPTE with '0'.

    When zapping an SPTE that points to a page table, processing SPTEs after
    flushing TLBs minimizes contention on the child SPTEs (e.g. vCPUs won't
    hit write-protection faults via stale, read-only child SPTEs), and
    processing after replacing REMOVED_SPTE with '0' minimizes the amount of
    time vCPUs will be blocked by the REMOVED_SPTE.

    Processing SPTEs after setting the SPTE to '0', i.e. in parallel with the
    SPTE potentially being replaced with a new SPTE, is safe because KVM does
    not depend on completing the processing before a new SPTE is installed,
    and the processing is done on a subset of the page tables that is
    disconnected from the root, and thus unreachable by other tasks (after the
    TLB flush). KVM already relies on similar logic, as kvm_mmu_zap_all_fast()
    can result in KVM processing all SPTEs in a given root after vCPUs create
    mappings in a new root.

    In VMs with a large (400+) number of vCPUs, it can take KVM multiple
    seconds to process a 1GiB region mapped with 4KiB entries, e.g. when
    disabling dirty logging in a VM backed by 1GiB HugeTLB. During those
    seconds, if a vCPU accesses the 1GiB region being zapped it will be
    stalled until KVM finishes processing the SPTE and replaces the
    REMOVED_SPTE with 0. Re-ordering the processing does speed up the
    atomic-zaps somewhat, but the main benefit is avoiding blocking vCPU
    threads.

    Before:

      $ ./dirty_log_perf_test -s anonymous_hugetlb_1gb -v 416 -b 1G -e
      ...
      Disabling dirty logging time: 509.765146313s

      $ ./funclatency -m tdp_mmu_zap_spte_atomic

      msec                : count    distribution
          0 -> 1          : 0        |                                        |
          2 -> 3          : 0        |                                        |
          4 -> 7          : 0        |                                        |
          8 -> 15         : 0        |                                        |
         16 -> 31         : 0        |                                        |
         32 -> 63         : 0        |                                        |
         64 -> 127        : 0        |                                        |
        128 -> 255        : 8        |**                                      |
        256 -> 511        : 68       |******************                      |
        512 -> 1023       : 129      |**********************************      |
       1024 -> 2047       : 151      |****************************************|
       2048 -> 4095       : 60       |***************                         |

    After:

      $ ./dirty_log_perf_test -s anonymous_hugetlb_1gb -v 416 -b 1G -e
      ...
      Disabling dirty logging time: 336.516838548s

      $ ./funclatency -m tdp_mmu_zap_spte_atomic

      msec                : count    distribution
          0 -> 1          : 0        |                                        |
          2 -> 3          : 0        |                                        |
          4 -> 7          : 0        |                                        |
          8 -> 15         : 0        |                                        |
         16 -> 31         : 0        |                                        |
         32 -> 63         : 0        |                                        |
         64 -> 127        : 0        |                                        |
        128 -> 255        : 12       |**                                      |
        256 -> 511        : 166      |****************************************|
        512 -> 1023       : 101      |************************                |
       1024 -> 2047       : 137      |*********************************       |

    Note, KVM's processing of collapsible SPTEs is still extremely slow and
    can be improved. For example, a significant amount of time is spent
    calling kvm_set_pfn_{accessed,dirty}() for every last-level SPTE, even
    when processing SPTEs that all map the same folio. But avoiding blocking
    vCPUs and contending SPTEs is valuable regardless of how fast KVM can
    process collapsible SPTEs.

    Link: https://lore.kernel.org/all/20240320005024.3216282-1-seanjc@google.com
    Cc: Vipin Sharma <vipinsh@google.com>
    Suggested-by: Sean Christopherson <seanjc@google.com>
    Signed-off-by: David Matlack <dmatlack@google.com>
    Reviewed-by: Vipin Sharma <vipinsh@google.com>
    Link: https://lore.kernel.org/r/20240307194059.1357377-1-dmatlack@google.com
    [sean: massage changelog]
    Signed-off-by: Sean Christopherson <seanjc@google.com>
* | Merge tag 'kvm-x86-selftests_utils-6.10' of https://github.com/kvm-x86/linux into HEAD (Paolo Bonzini, 2024-05-12; 86 files, -447/+1420)

    KVM selftests treewide updates for 6.10:

    - Define _GNU_SOURCE for all selftests to fix a warning that was
      introduced by a change to kselftest_harness.h late in the 6.9 cycle, and
      because forcing every test to #define _GNU_SOURCE is painful.

    - Provide a global pseudo-RNG instance for all tests, so that library code
      can generate random, but deterministic, numbers.

    - Use the global pRNG to randomly force emulation of select writes from
      guest code on x86, e.g. to help validate KVM's emulation of locked
      accesses.

    - Rename kvm_util_base.h back to kvm_util.h, as the weird layer of
      indirection was added purely to avoid manually #including ucall_common.h
      in a handful of locations.

    - Allocate and initialize x86's GDT, IDT, TSS, segments, and default
      exception handlers at VM creation, instead of forcing tests to manually
      trigger the related setup.
| * | KVM: selftests: Drop @selector from segment helpers (Sean Christopherson, 2024-04-29; 1 file, -15/+14)

    Drop the @selector from the kernel code, data, and TSS builders and
    instead hardcode the respective selector in the helper. Accepting a
    selector but not a base makes the selector useless, e.g. the data helper
    can't create per-vCPU segments for FS or GS, and so loading GS with
    KERNEL_DS is the only logical choice. And for code and TSS, there is no
    known reason to ever want multiple segments, e.g. there are zero plans to
    support 32-bit kernel code (and again, that would require more than just
    the selector).

    If KVM selftests ever do add support for per-vCPU segments, it'd arguably
    be more readable to add a dedicated helper for building/setting the
    per-vCPU segment, and move the common data segment code to an inner
    helper.

    Lastly, hardcoding the selector reduces the probability of setting the
    wrong selector in the vCPU versus what was created by the VM in the GDT.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-19-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Init x86's segments during VM creation (Sean Christopherson, 2024-04-29; 1 file, -48/+20)

    Initialize x86's various segments in the GDT during creation of relevant
    VMs instead of waiting until vCPUs come along. Re-installing the segments
    for every vCPU is both wasteful and confusing, as is installing KERNEL_DS
    multiple times; NOT installing KERNEL_DS for GS is icing on the cake.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-18-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Add macro for TSS selector, rename code/data macros (Sean Christopherson, 2024-04-29; 1 file, -9/+9)

    Add a proper #define for the TSS selector instead of open coding 0x18 and
    hoping future developers don't use that selector for something else.

    Opportunistically rename the code and data selector macros to shorten the
    names, align the naming with the kernel's scheme, and capture that they
    are *kernel* segments.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-17-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Allocate x86's TSS at VM creation (Sean Christopherson, 2024-04-29; 1 file, -3/+2)

    Allocate x86's per-VM TSS at creation of a non-barebones VM. Like the GDT,
    the TSS is needed to actually run vCPUs, i.e. every non-barebones VM is
    all but guaranteed to allocate the TSS sooner or later.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-16-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Fold x86's descriptor tables helpers into vcpu_init_sregs() (Sean Christopherson, 2024-04-29; 1 file, -25/+5)

    Now that the per-VM, on-demand allocation logic in kvm_setup_gdt() and
    vcpu_init_descriptor_tables() is gone, fold them into vcpu_init_sregs().

    Note, both kvm_setup_gdt() and vcpu_init_descriptor_tables() configured
    the GDT, which is why it looks like kvm_setup_gdt() disappears.

    Opportunistically delete the pointless zeroing of the IDT limit (it was
    being unconditionally overwritten by vcpu_init_descriptor_tables()).

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-15-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Drop superfluous switch() on vm->mode in vcpu_init_sregs() (Sean Christopherson, 2024-04-29; 1 file, -16/+11)

    Replace the switch statement on vm->mode in x86's vcpu_init_sregs() with a
    simple assert that the VM has a 48-bit virtual address space. A switch
    statement is both overkill and misleading, as the existing code
    incorrectly implies that VMs with LA57 would need a different
    configuration for the LDT, TSS, and flat segments. In all likelihood, the
    only difference that would be needed for selftests is CR4.LA57 itself.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-14-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Allocate x86's GDT during VM creation (Sean Christopherson, 2024-04-29; 1 file, -3/+1)

    Allocate the GDT during creation of non-barebones VMs instead of waiting
    until the first vCPU is created, as the whole point of non-barebones VMs
    is to be able to run vCPUs, i.e. the GDT is going to get allocated no
    matter what.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-13-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Map x86's exception_handlers at VM creation, not vCPU setup (Sean Christopherson, 2024-04-29; 1 file, -1/+2)

    Map x86's exception handlers at VM creation, not vCPU setup, as the
    mapping is per-VM, i.e. doesn't need to be (re)done for every vCPU.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-12-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Init IDT and exception handlers for all VMs/vCPUs on x86 (Sean Christopherson, 2024-04-29; 23 files, -69/+6)

    Initialize the IDT and exception handlers for all non-barebones VMs and
    vCPUs on x86. Forcing tests to manually configure the IDT just to save
    8KiB of memory is a terrible tradeoff, and also leads to weird tests
    (multiple tests have deliberately relied on shutdown to indicate success)
    and hard-to-debug failures, e.g. instead of a precise unexpected exception
    failure, tests see only shutdown.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-11-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Rename x86's vcpu_setup() to vcpu_init_sregs() (Sean Christopherson, 2024-04-29; 1 file, -2/+2)

    Rename vcpu_setup() to be more descriptive and precise; there is a whole
    lot of "setup" that is done for a vCPU that isn't in said helper.

    No functional change intended.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-10-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Move x86's descriptor table helpers "up" in processor.c (Sean Christopherson, 2024-04-29; 1 file, -96/+95)

    Move x86's various descriptor table helpers in processor.c up above
    kvm_arch_vm_post_create() and vcpu_setup() so that the helpers can be made
    static and invoked from the aforementioned functions.

    No functional change intended.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-9-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Explicitly clobber the IDT in the "delete memslot" testcase (Sean Christopherson, 2024-04-29; 1 file, -0/+12)

    Explicitly clobber the guest IDT in the "delete memslot" test, which
    expects the deleted memslot to result in either a KVM emulation error, or
    a triple fault shutdown. A future change to the core selftests library
    will configure the guest IDT and exception handlers by default, i.e. will
    install a guest #PF handler and put the guest into an infinite #NPF loop
    (the guest hits a !PRESENT SPTE when trying to vector a #PF, and KVM
    reinjects the #PF without fixing the #NPF, because there is no memslot).

    Note, it's not clear whether or not KVM's behavior is reasonable in this
    case, e.g. arguably KVM should try (and fail) to emulate in response to
    the #NPF. But barring a goofy/broken userspace, this scenario will likely
    never happen in practice. Punt the KVM investigation to the future.

    Link: https://lore.kernel.org/r/20240314232637.2538648-8-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Rework platform_info_test to actually verify #GP (Sean Christopherson, 2024-04-29; 1 file, -33/+33)

    Rework platform_info_test to actually handle and verify the expected #GP
    on RDMSR when the associated KVM capability is disabled. Currently, the
    test _deliberately_ doesn't handle the #GP, and instead lets it escalate
    to a triple fault shutdown.

    In addition to verifying that KVM generates the correct fault, handling
    the #GP will be necessary (without even more shenanigans) when a future
    change to the core KVM selftests library configures the IDT and exception
    handlers by default (the test subtly relies on the IDT limit being '0').

    Link: https://lore.kernel.org/r/20240314232637.2538648-7-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
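
    For illustration, a sketch of the reworked guest side (rdmsr_safe() and
    the GUEST_ASSERT*() macros are existing selftests helpers; the exact
    test code may differ):

        static void guest_code(bool expect_gp)
        {
                uint64_t msr;
                /* rdmsr_safe() returns the vector taken, or 0 on success. */
                uint8_t vector = rdmsr_safe(MSR_PLATFORM_INFO, &msr);

                if (expect_gp)
                        GUEST_ASSERT_EQ(vector, GP_VECTOR);
                else
                        GUEST_ASSERT(!vector);
                GUEST_DONE();
        }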
| * | KVM: selftests: Move platform_info_test's main assert into guest code (Sean Christopherson, 2024-04-29; 1 file, -8/+12)

    As a first step toward gracefully handling the expected #GP on RDMSR in
    platform_info_test, move the test's assert on the non-faulting RDMSR
    result into the guest itself. This will allow using a unified flow for the
    host userspace side of things.

    Link: https://lore.kernel.org/r/20240314232637.2538648-6-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Fix off-by-one initialization of GDT limit (Ackerley Tng, 2024-04-29; 1 file, -1/+1)

    Fix an off-by-one bug in the initialization of the GDT limit, which, as
    defined in the SDM, is inclusive, not exclusive.

    Note, vcpu_init_descriptor_tables() gets the limit correct, it's only
    vcpu_setup() that is broken, i.e. only tests that _don't_ invoke
    vcpu_init_descriptor_tables() can have problems. And the fact that KVM
    effectively initializes the GDT twice will be cleaned up in the near
    future.

    Signed-off-by: Ackerley Tng <ackerleytng@google.com>
    [sean: rewrite changelog]
    Link: https://lore.kernel.org/r/20240314232637.2538648-5-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
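
    Per the SDM, the GDTR/IDTR limit is the offset of the *last* valid byte,
    i.e. size - 1. A sketch of the corrected initialization (field names are
    illustrative of the selftests structures):

        /* One page of GDT => limit is getpagesize() - 1, not getpagesize(). */
        sregs.gdt.base  = vm->arch.gdt;
        sregs.gdt.limit = getpagesize() - 1;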
| * | KVM: selftests: Move GDT, IDT, and TSS fields to x86's kvm_vm_arch (Sean Christopherson, 2024-04-29; 5 files, -16/+18)

    Now that kvm_vm_arch exists, move the GDT, IDT, and TSS fields to x86's
    implementation, as the structures are firmly x86-only.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-4-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Add kvm_util_types.h to hold common types, e.g. vm_vaddr_t (Sean Christopherson, 2024-04-29; 2 files, -15/+21)

    Move the base types unique to KVM selftests out of kvm_util.h and into a
    new header, kvm_util_types.h. This will allow kvm_util_arch.h, i.e. core
    arch headers, to reference common types, e.g. vm_vaddr_t and vm_paddr_t.

    No functional change intended.

    Reviewed-by: Ackerley Tng <ackerleytng@google.com>
    Link: https://lore.kernel.org/r/20240314232637.2538648-3-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | Revert "kvm: selftests: move base kvm_util.h declarations to kvm_util_base.h"Sean Christopherson2024-04-2928-12/+1156
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Effectively revert the movement of code from kvm_util.h => kvm_util_base.h, as the TL;DR of the justification for the move was to avoid #idefs and/or circular dependencies between what ended up being ucall_common.h and what was (and now again, is), kvm_util.h. But avoiding #ifdef and circular includes is trivial: don't do that. The cost of removing kvm_util_base.h is a few extra includes of ucall_common.h, but that cost is practically nothing. On the other hand, having a "base" version of a header that is really just the header itself is confusing, and makes it weird/hard to choose names for headers that actually are "base" headers, e.g. to hold core KVM selftests typedefs. For all intents and purposes, this reverts commit 7d9a662ed9f0403e7b94940dceb81552b8edb931. Reviewed-by: Ackerley Tng <ackerleytng@google.com> Link: https://lore.kernel.org/r/20240314232637.2538648-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Randomly force emulation on x86 writes from guest code (Sean Christopherson, 2024-04-29; 1 file, -0/+21)

    Override vcpu_arch_put_guest() to randomly force emulation on supported
    accesses. Force emulation of LOCK CMPXCHG as well as a regular MOV to
    stress KVM's emulation of atomic accesses, which has a unique path in
    KVM's emulator.

    Arbitrarily give all the decisions 50/50 odds; absent much, much more
    sophisticated infrastructure for generating random numbers, it's highly
    unlikely that doing more than a coin flip will affect selftests' ability
    to find KVM bugs.

    This is effectively a regression test for commit 910c57dfa4d1 ("KVM: x86:
    Mark target gfn of emulated atomic instruction as dirty").

    Link: https://lore.kernel.org/r/20240314185459.2439072-6-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
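
    For illustration, a sketch of the x86 override (KVM_FEP is selftests'
    "forced emulation prefix"; the real macro also randomly picks LOCK
    CMPXCHG instead of MOV, omitted here for brevity):

        #define vcpu_arch_put_guest(mem, val)                                 \
        do {                                                                  \
                if (!is_forced_emulation_enabled ||                           \
                    guest_random_bool(&guest_rng))                            \
                        (mem) = (val);  /* plain, non-emulated store */       \
                else                                                          \
                        __asm__ __volatile__(KVM_FEP "mov %1, %0"             \
                                             : "+m" (mem)                     \
                                             : "r" (val));                    \
        } while (0)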
| * | KVM: selftests: Add vcpu_arch_put_guest() to do writes from guest code (Sean Christopherson, 2024-04-29; 3 files, -4/+9)

    Introduce a macro, vcpu_arch_put_guest(), for "putting" values to memory
    from guest code in "interesting" situations, e.g. when writing memory that
    is being dirty logged. Structure the macro so that arch code can provide a
    custom implementation, e.g. x86 will use the macro to force emulation of
    the access.

    Use the helper in dirty_log_test, which is of particular interest (see
    above), and in xen_shinfo_test, which isn't all that interesting, but
    provides a second usage of the macro with a different size operand
    (uint8_t versus uint64_t), i.e. to help verify that the macro works for
    more than just 64-bit values.

    Use "put" as the verb to align with the kernel's {get,put}_user()
    terminology.

    Link: https://lore.kernel.org/r/20240314185459.2439072-5-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Add global snapshot of kvm_is_forced_emulation_enabled() (Sean Christopherson, 2024-04-29; 4 files, -11/+7)

    Add a global snapshot of kvm_is_forced_emulation_enabled() and sync it to
    all VMs by default so that core library code can force emulation, e.g. to
    allow for easier testing of the intersections between emulation and other
    features in KVM.

    Link: https://lore.kernel.org/r/20240314185459.2439072-4-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
| * | KVM: selftests: Provide an API for getting a random bool from an RNG (Sean Christopherson, 2024-04-29; 2 files, -1/+12)

    Move memstress' random bool logic into common code to avoid reinventing
    the wheel for basic yes/no decisions. Provide an outer wrapper to handle
    the basic/common case of just wanting a 50/50 chance of something
    happening.

    Link: https://lore.kernel.org/r/20240314185459.2439072-3-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
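
    For illustration, a sketch of the API shape (signatures are assumptions
    based on the description above):

        static inline bool __guest_random_bool(struct guest_random_state *state,
                                               uint8_t percent)
        {
                return (guest_random_u32(state) % 100) < percent;
        }

        /* The common case: a 50/50 coin flip. */
        static inline bool guest_random_bool(struct guest_random_state *state)
        {
                return __guest_random_bool(state, 50);
        }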
| * | KVM: selftests: Provide a global pseudo-RNG instance for all tests (Sean Christopherson, 2024-04-29; 6 files, -29/+23)

    Add a global guest_random_state instance, i.e. a pseudo-RNG, so that an
    RNG is available for *all* tests. This will allow randomizing behavior in
    core library code, e.g. x86 will utilize the pRNG to conditionally force
    emulation of writes from within common guest code.

    To allow for deterministic runs, and to be compatible with existing tests,
    allow tests to override the seed used to initialize the pRNG. Note, the
    seed *must* be overwritten before a VM is created in order for the seed to
    take effect, though it's perfectly fine for a test to initialize multiple
    VMs with different seeds.

    And as evidenced by memstress_guest_code(), it's also a-ok to instantiate
    more RNGs using the global seed (or a modified version of it). The goal of
    the global RNG is purely to ensure that _a_ source of random numbers is
    available; it doesn't have to be the _only_ RNG.

    Link: https://lore.kernel.org/r/20240314185459.2439072-2-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>
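
    For illustration, a test wanting a deterministic run would do something
    like the following (the global's name is an assumption; per the note
    above, the override must happen before VM creation):

        /* Override the global pRNG seed, then create the VM. */
        guest_random_seed = 0x1234;
        vm = vm_create_with_one_vcpu(&vcpu, guest_code);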
| * | KVM: selftests: Define _GNU_SOURCE for all selftests code (Sean Christopherson, 2024-04-29; 57 files, -120/+17)

    Define _GNU_SOURCE in the base CFLAGS instead of relying on selftests to
    manually #define _GNU_SOURCE, which is repetitive and error prone. E.g.
    kselftest_harness.h requires _GNU_SOURCE for asprintf(), but if a selftest
    includes kvm_test_harness.h after stdio.h, the include guards result in
    the effective version of stdio.h consumed by kvm_test_harness.h not
    defining asprintf():

      In file included from x86_64/fix_hypercall_test.c:12:
      In file included from include/kvm_test_harness.h:11:
      ../kselftest_harness.h:1169:2: error: call to undeclared function
      'asprintf'; ISO C99 and later do not support implicit function
      declarations [-Wimplicit-function-declaration]
       1169 |         asprintf(&test_name, "%s%s%s.%s", f->name,
            |         ^

    When including the rseq selftest's "library" code, #undef _GNU_SOURCE so
    that rseq.c controls whether or not it wants to build with _GNU_SOURCE.

    Reported-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
    Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
    Acked-by: Oliver Upton <oliver.upton@linux.dev>
    Acked-by: Anup Patel <anup@brainfault.org>
    Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
    Link: https://lore.kernel.org/r/20240423190308.2883084-1-seanjc@google.com
    Signed-off-by: Sean Christopherson <seanjc@google.com>