author     Linus Torvalds <torvalds@linux-foundation.org>   2023-05-01 21:06:20 +0200
committer  Linus Torvalds <torvalds@linux-foundation.org>   2023-05-01 21:06:20 +0200
commit     c8c655c34e33544aec9d64b660872ab33c29b5f1 (patch)
tree       4aad88f698f04cef9e5d9d573a6df6283085dadd /virt/kvm
parent     Merge tag 'for-linus' of https://github.com/openrisc/linux (diff)
parent     Merge tag 'kvm-x86-vmx-6.4' of https://github.com/kvm-x86/linux into HEAD (diff)
download   linux-c8c655c34e33544aec9d64b660872ab33c29b5f1.tar.xz
           linux-c8c655c34e33544aec9d64b660872ab33c29b5f1.zip
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm updates from Paolo Bonzini:
"s390:
- More phys_to_virt conversions
- Improvement of AP management for VSIE (nested virtualization)
ARM64:
- Numerous fixes for the pathological lock inversion issue that
has plagued KVM/arm64 since... forever.
- New framework allowing SMCCC-compliant hypercalls to be forwarded
to userspace, hopefully paving the way for some more features being
moved to VMMs rather than being implemented in the kernel.
- Large rework of the timer code to allow a VM-wide offset to be
applied to both virtual and physical counters as well as a
per-timer, per-vcpu offset that complements the global one. This
last part allows the NV timer code to be implemented on top.
- A small set of fixes to make sure that we don't change anything
affecting the EL1&0 translation regime just after having taken an
exception to EL2 until we have executed a DSB. This ensures that
speculative walks started in EL1&0 have completed.
- The usual selftest fixes and improvements.
x86:
- Optimize CR0.WP toggling by avoiding an MMU reload when TDP is
enabled, and by giving the guest control of CR0.WP when EPT is
enabled on VMX (VMX-only because SVM doesn't support per-bit
controls)
- Add CR0/CR4 helpers to query single bits, and clean up related code
where KVM was interpreting kvm_read_cr4_bits()'s "unsigned long"
return as a bool
- Move AMD_PSFD to cpufeatures.h and purge KVM's definition
- Avoid unnecessary writes+flushes when the guest is only adding new
PTEs
- Overhaul .sync_page() and .invlpg() to utilize .sync_page()'s
optimizations when emulating invalidations
- Clean up the range-based flushing APIs
- Revamp the TDP MMU's reaping of Accessed/Dirty bits to clear a
single A/D bit using a LOCK AND instead of XCHG, and skip all of
the "handle changed SPTE" overhead associated with writing the
entire entry (the LOCK AND idea is sketched below, after the quoted message)
- Track the number of "tail" entries in a pte_list_desc to avoid
having to walk (potentially) all descriptors during insertion and
deletion, which gets quite expensive if the guest is spamming
fork() (the descriptor layout is sketched below, after the quoted message)
- Disallow virtualizing legacy LBRs if architectural LBRs are
available, as the two are mutually exclusive in hardware
- Disallow writes to immutable feature MSRs (notably
PERF_CAPABILITIES) after KVM_RUN, similar to CPUID features
- Overhaul the vmx_pmu_caps selftest to better validate
PERF_CAPABILITIES
- Apply PMU filters to emulated events and add test coverage to the
pmu_event_filter selftest
- AMD SVM:
- Add support for virtual NMIs
- Fixes for edge cases related to virtual interrupts
- Intel AMX:
- Don't advertise XTILE_CFG in KVM_GET_SUPPORTED_CPUID if
XTILE_DATA is not being reported due to userspace not opting in
via prctl() (the opt-in is sketched below, after the quoted message)
- Fix a bug in emulation of ENCLS in compatibility mode
- Allow emulation of NOP and PAUSE for L2
- AMX selftests improvements
- Misc cleanups
MIPS:
- Constify MIPS's internal callbacks (a leftover from the hardware
enabling rework that landed in 6.3)
Generic:
- Drop unnecessary casts from "void *" throughout kvm_main.c
- Tweak the layout of "struct kvm_mmu_memory_cache" to shrink the
struct size by 8 bytes on 64-bit kernels by utilizing a padding
hole (the padding-hole trick is sketched below, after the quoted message)
Documentation:
- Fix goof introduced by the conversion to rST"
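
To make the TDP MMU item above concrete: clearing a single Accessed/Dirty bit with an atomic AND leaves the rest of the SPTE untouched, so none of the "handle changed SPTE" bookkeeping triggered by rewriting the whole entry is needed. The sketch below is illustrative only; the helper names and the atomic64_t view of the SPTE are assumptions, not KVM's actual code.

#include <linux/atomic.h>
#include <linux/types.h>

/* Sketch: clear one Accessed/Dirty bit in place with an atomic AND. */
static inline void ad_bit_clear_sketch(atomic64_t *sptep, u64 ad_bit)
{
	/* Only the targeted bit changes; the rest of the SPTE is untouched. */
	atomic64_and(~ad_bit, sptep);
}

/* Sketch of the older pattern: xchg() writes the entire entry, and the
 * caller must then reconcile the old and new SPTE state. */
static inline u64 ad_bit_clear_xchg_sketch(atomic64_t *sptep, u64 new_spte)
{
	return atomic64_xchg(sptep, new_spte);
}

On x86-64, atomic64_and() compiles down to a LOCK-prefixed AND, which is the LOCK AND the changelog refers to.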
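
For the pte_list_desc item, the gain comes from the head descriptor caching how many SPTEs live in the tail of the chain, so counting entries and finding where to insert or delete no longer requires walking every descriptor. The field and macro names below are illustrative rather than the exact KVM definitions.

#include <linux/types.h>

#define SKETCH_PTES_PER_DESC	14	/* capacity per descriptor (illustrative) */

struct sketch_pte_list_desc {
	struct sketch_pte_list_desc *more;	/* next descriptor in the chain */
	u32 spte_count;				/* used slots in this descriptor */
	u32 tail_count;				/* head only: SPTEs in all tail descriptors */
	u64 *sptes[SKETCH_PTES_PER_DESC];
};

/* Total rmap entries without walking the chain: head count + cached tail count. */
static inline u32 sketch_rmap_count(const struct sketch_pte_list_desc *head)
{
	return head->spte_count + head->tail_count;
}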
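
For the Intel AMX item, the userspace opt-in mentioned there is an arch_prctl() request for the dynamically enabled XTILEDATA state; without it XTILE_DATA is not reported, and the fix above makes XTILE_CFG follow suit. Below is a minimal userspace sketch of that opt-in; the fallback #define covers older headers and the error handling is deliberately bare.

#include <err.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef ARCH_REQ_XCOMP_PERM
#define ARCH_REQ_XCOMP_PERM	0x1023	/* from <asm/prctl.h> */
#endif
#define XFEATURE_XTILEDATA	18	/* XSTATE component for AMX tile data */

int main(void)
{
	/* Ask the kernel for permission to use AMX tile-data state; a VMM
	 * that skips this will not see XTILE_DATA (or XTILE_CFG) in
	 * KVM_GET_SUPPORTED_CPUID. */
	if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA))
		err(1, "ARCH_REQ_XCOMP_PERM");

	puts("AMX tile data permission granted");
	return 0;
}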
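
Finally, the kvm_mmu_memory_cache shrink is the classic structure-packing trick: on 64-bit kernels, two 32-bit members placed next to each other can occupy what would otherwise be padding in front of an 8-byte-aligned pointer. The structs below are a generic illustration with made-up field names, not the real kvm_mmu_memory_cache layout.

#include <linux/types.h>

/* Before: each 4-byte member is followed by 4 bytes of padding so the
 * pointer after it stays 8-byte aligned -> 32 bytes total. */
struct sketch_cache_before {
	void *page;		/* offset  0, size 8 */
	u32 capacity;		/* offset  8, size 4 + 4 padding */
	void **objects;		/* offset 16, size 8 */
	u32 nobjs;		/* offset 24, size 4 + 4 trailing padding */
};

/* After: the two 32-bit members share one 8-byte slot between the
 * pointers -> 24 bytes total, i.e. 8 bytes saved. */
struct sketch_cache_after {
	void *page;		/* offset  0, size 8 */
	u32 capacity;		/* offset  8, size 4 */
	u32 nobjs;		/* offset 12, size 4 */
	void **objects;		/* offset 16, size 8 */
};

Tools like pahole report such holes directly, which is the usual way these layout tweaks are found.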
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (211 commits)
KVM: s390: pci: fix virtual-physical confusion on module unload/load
KVM: s390: vsie: clarifications on setting the APCB
KVM: s390: interrupt: fix virtual-physical confusion for next alert GISA
KVM: arm64: Have kvm_psci_vcpu_on() use WRITE_ONCE() to update mp_state
KVM: arm64: Acquire mp_state_lock in kvm_arch_vcpu_ioctl_vcpu_init()
KVM: selftests: Test the PMU event "Instructions retired"
KVM: selftests: Copy full counter values from guest in PMU event filter test
KVM: selftests: Use error codes to signal errors in PMU event filter test
KVM: selftests: Print detailed info in PMU event filter asserts
KVM: selftests: Add helpers for PMC asserts in PMU event filter test
KVM: selftests: Add a common helper for the PMU event filter guest code
KVM: selftests: Fix spelling mistake "perrmited" -> "permitted"
KVM: arm64: vhe: Drop extra isb() on guest exit
KVM: arm64: vhe: Synchronise with page table walker on MMU update
KVM: arm64: pkvm: Document the side effects of kvm_flush_dcache_to_poc()
KVM: arm64: nvhe: Synchronise with page table walker on TLBI
KVM: arm64: Handle 32bit CNTPCTSS traps
KVM: arm64: nvhe: Synchronise with page table walker on vcpu run
KVM: arm64: vgic: Don't acquire its_lock before config_lock
KVM: selftests: Add test to verify KVM's supported XCR0
...
Diffstat (limited to 'virt/kvm')
-rw-r--r--  virt/kvm/kvm_main.c  30
1 file changed, 14 insertions, 16 deletions
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 537f33ac49de..cb5c13eee193 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1301,7 +1301,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
 	 * At this point, pending calls to invalidate_range_start()
 	 * have completed but no more MMU notifiers will run, so
 	 * mn_active_invalidate_count may remain unbalanced.
-	 * No threads can be waiting in install_new_memslots as the
+	 * No threads can be waiting in kvm_swap_active_memslots() as the
 	 * last reference on KVM has been dropped, but freeing
 	 * memslots would deadlock without this manual intervention.
 	 */
@@ -1745,13 +1745,13 @@ static void kvm_invalidate_memslot(struct kvm *kvm,
 	kvm_arch_flush_shadow_memslot(kvm, old);
 	kvm_arch_guest_memory_reclaimed(kvm);
 
-	/* Was released by kvm_swap_active_memslots, reacquire. */
+	/* Was released by kvm_swap_active_memslots(), reacquire. */
 	mutex_lock(&kvm->slots_arch_lock);
 
 	/*
 	 * Copy the arch-specific field of the newly-installed slot back to the
 	 * old slot as the arch data could have changed between releasing
-	 * slots_arch_lock in install_new_memslots() and re-acquiring the lock
+	 * slots_arch_lock in kvm_swap_active_memslots() and re-acquiring the lock
 	 * above. Writers are required to retrieve memslots *after* acquiring
 	 * slots_arch_lock, thus the active slot's data is guaranteed to be fresh.
 	 */
@@ -1813,11 +1813,11 @@ static int kvm_set_memslot(struct kvm *kvm,
 	int r;
 
 	/*
-	 * Released in kvm_swap_active_memslots.
+	 * Released in kvm_swap_active_memslots().
 	 *
-	 * Must be held from before the current memslots are copied until
-	 * after the new memslots are installed with rcu_assign_pointer,
-	 * then released before the synchronize srcu in kvm_swap_active_memslots.
+	 * Must be held from before the current memslots are copied until after
+	 * the new memslots are installed with rcu_assign_pointer, then
+	 * released before the synchronize srcu in kvm_swap_active_memslots().
 	 *
 	 * When modifying memslots outside of the slots_lock, must be held
 	 * before reading the pointer to the current memslots until after all
@@ -3869,7 +3869,7 @@ static int create_vcpu_fd(struct kvm_vcpu *vcpu)
 #ifdef __KVM_HAVE_ARCH_VCPU_DEBUGFS
 static int vcpu_get_pid(void *data, u64 *val)
 {
-	struct kvm_vcpu *vcpu = (struct kvm_vcpu *) data;
+	struct kvm_vcpu *vcpu = data;
 	*val = pid_nr(rcu_access_pointer(vcpu->pid));
 	return 0;
 }
@@ -4470,7 +4470,7 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
 	return 0;
 }
 
-static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
+static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 {
 	switch (arg) {
 	case KVM_CAP_USER_MEMORY:
@@ -5047,7 +5047,7 @@ put_fd:
 static long kvm_dev_ioctl(struct file *filp,
 			  unsigned int ioctl, unsigned long arg)
 {
-	long r = -EINVAL;
+	int r = -EINVAL;
 
 	switch (ioctl) {
 	case KVM_GET_API_VERSION:
@@ -5574,8 +5574,7 @@ static int kvm_debugfs_open(struct inode *inode, struct file *file,
 			   const char *fmt)
 {
 	int ret;
-	struct kvm_stat_data *stat_data = (struct kvm_stat_data *)
-					  inode->i_private;
+	struct kvm_stat_data *stat_data = inode->i_private;
 
 	/*
 	 * The debugfs files are a reference to the kvm struct which
@@ -5596,8 +5595,7 @@ static int kvm_debugfs_open(struct inode *inode, struct file *file,
 
 static int kvm_debugfs_release(struct inode *inode, struct file *file)
 {
-	struct kvm_stat_data *stat_data = (struct kvm_stat_data *)
-					  inode->i_private;
+	struct kvm_stat_data *stat_data = inode->i_private;
 
 	simple_attr_release(inode, file);
 	kvm_put_kvm(stat_data->kvm);
@@ -5646,7 +5644,7 @@ static int kvm_clear_stat_per_vcpu(struct kvm *kvm, size_t offset)
 static int kvm_stat_data_get(void *data, u64 *val)
 {
 	int r = -EFAULT;
-	struct kvm_stat_data *stat_data = (struct kvm_stat_data *)data;
+	struct kvm_stat_data *stat_data = data;
 
 	switch (stat_data->kind) {
 	case KVM_STAT_VM:
@@ -5665,7 +5663,7 @@ static int kvm_stat_data_get(void *data, u64 *val)
 static int kvm_stat_data_clear(void *data, u64 val)
 {
 	int r = -EFAULT;
-	struct kvm_stat_data *stat_data = (struct kvm_stat_data *)data;
+	struct kvm_stat_data *stat_data = data;
 
 	if (val)
 		return -EINVAL;