author | Like Xu <like.xu@linux.intel.com> | 2019-10-27 11:52:43 +0100 |
---|---|---|
committer | Paolo Bonzini <pbonzini@redhat.com> | 2019-11-15 11:44:10 +0100 |
commit | b35e5548b41131eb06de041af2f5fb0890d96f96 (patch) | |
tree | 2ec144c378e67f57b87986d502aa4b7ed0733078 /arch/x86/kvm/pmu.h | |
parent | KVM: x86/vPMU: Reuse perf_event to avoid unnecessary pmc_reprogram_counter (diff) | |
KVM: x86/vPMU: Add lazy mechanism to release perf_event per vPMC
Currently, a host perf_event is created for each vPMC to emulate its
functionality. It is hard to predict whether a disabled perf_event
will ever be reused. If disabled perf_events are not reused for a
considerable period of time, these obsolete events add host
context-switch overhead that could have been avoided.
If the guest doesn't WRMSR any of the vPMC's MSRs during an entire vcpu
time slice, and the vPMC's individual enable bit is not set, we can
predict that the guest has finished using this vPMC. In that case,
request KVM_REQ_PMU in kvm_arch_sched_in and release those perf_events
in the first call of kvm_pmu_handle_event() after the vcpu is scheduled
in.
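A minimal sketch of that flow follows. Only the arch/x86/kvm/pmu.h
hunks appear in this diffstat, so the kvm_arch_sched_in() body and the
pmu->need_cleanup flag shown here are illustrative assumptions based
on the description above, not this file's diff:

```c
/*
 * Sketch of the sched-in side of the lazy release. The need_cleanup
 * flag is an assumption; only pmu.h changes are shown in the diff
 * below.
 */
void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
{
	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);

	/* Only flag vCPUs that still own host perf_events. */
	if (pmu->version && pmu->event_count) {
		pmu->need_cleanup = true;
		kvm_make_request(KVM_REQ_PMU, vcpu);
	}
}

void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
{
	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);

	/* ... reprogram the counters flagged in reprogram_pmi ... */

	/* Release perf_events flagged as unused at sched-in time. */
	if (unlikely(pmu->need_cleanup))
		kvm_pmu_cleanup(vcpu);
}
```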
This lazy mechanism delays the event release to the beginning of the
next scheduled time slice if the vPMC's MSRs aren't changed during the
current one. If the guest uses this vPMC again in the next time slice,
a new perf_event is re-created via perf_event_create_kernel_counter()
as usual.
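A sketch of what the kvm_pmu_cleanup() declared below might do,
assuming a pmc_in_use bitmap that records which vPMC MSRs the guest
touched during the last time slice (the bitmap and helper names such
as all_valid_pmc_idx and pmc_speculative_in_use are illustrative; this
diffstat only shows the pmu.h declaration):

```c
/* Release perf_events of vPMCs untouched for a full time slice. */
void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
{
	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
	struct kvm_pmc *pmc;
	DECLARE_BITMAP(bitmask, X86_PMC_IDX_MAX);
	int i;

	pmu->need_cleanup = false;

	/* Valid vPMCs whose MSRs were not accessed last time slice. */
	bitmap_andnot(bitmask, pmu->all_valid_pmc_idx,
		      pmu->pmc_in_use, X86_PMC_IDX_MAX);

	for_each_set_bit(i, bitmask, X86_PMC_IDX_MAX) {
		pmc = kvm_x86_ops->pmu_ops->pmc_idx_to_pmc(pmu, i);

		/* pmc_stop_counter() ends in pmc_release_perf_event(). */
		if (pmc && pmc->perf_event && !pmc_speculative_in_use(pmc))
			pmc_stop_counter(pmc);
	}

	bitmap_zero(pmu->pmc_in_use, X86_PMC_IDX_MAX);
}
```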
Suggested-by: Wei Wang <wei.w.wang@intel.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86/kvm/pmu.h')
-rw-r--r-- | arch/x86/kvm/pmu.h | 2 |
1 file changed, 2 insertions, 0 deletions
```diff
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 7eba298587dc..b7a625874203 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -62,6 +62,7 @@ static inline void pmc_release_perf_event(struct kvm_pmc *pmc)
 		perf_event_release_kernel(pmc->perf_event);
 		pmc->perf_event = NULL;
 		pmc->current_config = 0;
+		pmc_to_pmu(pmc)->event_count--;
 	}
 }
@@ -126,6 +127,7 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
 void kvm_pmu_refresh(struct kvm_vcpu *vcpu);
 void kvm_pmu_reset(struct kvm_vcpu *vcpu);
 void kvm_pmu_init(struct kvm_vcpu *vcpu);
+void kvm_pmu_cleanup(struct kvm_vcpu *vcpu);
 void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
```
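For symmetry, the event_count decrement in the first hunk pairs with
an increment where the host event is created. A sketch of that
counterpart in pmc_reprogram_counter() in arch/x86/kvm/pmu.c (pmu.c is
outside this diffstat, so the exact placement is an assumption):

```c
	/* In pmc_reprogram_counter(): sketch of the matching increment. */
	event = perf_event_create_kernel_counter(&attr, -1, current,
						 intr ? kvm_perf_overflow_intr :
						 kvm_perf_overflow, pmc);
	if (IS_ERR(event)) {
		pr_debug_ratelimited("kvm_pmu: event creation failed %ld\n",
				     PTR_ERR(event));
		return;
	}

	pmc->perf_event = event;
	pmc_to_pmu(pmc)->event_count++;	/* balanced by event_count-- above */
```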