* x86/kvm: Resolve shadow warnings in macro expansion (Mark D Rustad, 2014-07-31, 1 file, -2/+2)
    Resolve shadow warnings that appear in W=2 builds. Instead of using ret
    to hold the return pointer, save the length in a new variable saved_len
    and compute the pointer on exit. This also resolves a very technical
    error, in that ret was declared as a const char *, when it really was a
    char * const.
    Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
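    The pattern described above can be illustrated with a hypothetical
    helper; this is a sketch of the idea, not the kernel code the commit
    touches. Tracking a length in a variable that is not named ret, and
    deriving the returned pointer only at the end, keeps a macro-expanded
    caller from shadowing its own ret local under -Wshadow.

        #include <stddef.h>

        /* Hypothetical example: find the end of a string within a bounded
         * buffer. The length lives in saved_len; the pointer is computed
         * on exit, so no local named "ret" is introduced. */
        static inline char *find_terminator(char *buf, size_t n)
        {
            size_t saved_len = 0;

            while (saved_len < n && buf[saved_len] != '\0')
                saved_len++;
            return buf + saved_len;
        }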
* Merge tag 'kvm-s390-20140730' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-next (Paolo Bonzini, 2014-07-31, 2 files, -0/+4)
    Two fixes for recently introduced regressions:
    - a memory leak on busy SIGP
    - a potentially lost SIGP STOP in rare situations (shutdown loops)
    The first issue is not part of a released kernel. The second issue is
    present in all KVM versions, but did not trigger before commit
    7dfc63cf977447e09b1072911c2 (KVM: s390: allow only one SIGP STOP (AND
    STORE STATUS) at a time) with Linux as a guest, so there is no need to
    cc stable.
| * KVM: s390: rework broken SIGP STOP interrupt handling (David Hildenbrand, 2014-07-31, 1 file, -0/+3)
    A VCPU might never stop if it intercepts (for whatever reason) between
    "fake interrupt delivery" and execution of the stop function. The heart
    of the problem is that SIGP STOP is an interrupt that has to be
    processed on every SIE entry until the VCPU finally executes the stop
    function. This problem was made apparent by commit
    7dfc63cf977447e09b1072911c2 (KVM: s390: allow only one SIGP STOP (AND
    STORE STATUS) at a time). With the old code, the guest could
    (incorrectly) inject SIGP STOPs multiple times. The bug of losing a
    SIGP STOP existed in KVM before 7dfc63cf97, but it was hidden by Linux
    guests doing a SIGP STOP loop. The new code (rightfully) returns CC=2
    and does not queue a new interrupt. This patch is a simple fix of the
    problem; long term we are going to rework that code, e.g. get rid of
    the action bits and so on.
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    [some additional patch description]
| * KVM: s390: Fix memory leak on busy SIGP stop (Christian Borntraeger, 2014-07-30, 1 file, -0/+1)
    Commit 7dfc63cf977447e09b1072911c22564f900fc578 (KVM: s390: allow only
    one SIGP STOP (AND STORE STATUS) at a time) introduced a memory leak if
    a SIGP STOP is already pending. Free the allocated inti structure.
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
* | KVM: x86: always exit on EOIs for interrupts listed in the IOAPIC redir table (Paolo Bonzini, 2014-07-30, 1 file, -4/+3)
    Currently, the EOI exit bitmap (used for APICv) does not include
    interrupts that are masked. However, this can cause a bug that
    manifests as an interrupt storm inside the guest. Alex Williamson
    reported the bug and is the one who really debugged this; I only wrote
    the patch. :)

    The scenario involves a multi-function PCI device with OHCI and EHCI
    USB functions and an audio function, all assigned to the guest, where
    both USB functions use legacy INTx interrupts. As soon as the guest
    boots, interrupts for these devices turn into an interrupt storm in the
    guest; the host does not see the interrupt storm. Basically the EOI
    path does not work, and the guest continues to see the interrupt over
    and over, even after it attempts to mask it at the APIC. The bug is
    only visible with older kernels (RHEL6.5, based on 2.6.32 with not many
    changes in the area of APIC/IOAPIC handling).

    Alex then tried forcing bit 59 (corresponding to the USB functions'
    IRQ) on in the eoi_exit_bitmap and TMR, and things then work. What
    happens is that VFIO asserts IRQ11, then KVM recomputes the EOI exit
    bitmap. It does not have bit 59 set because the RTE was masked, so the
    IOAPIC never sees the EOI and the interrupt continues to fire in the
    guest. My guess is that the guest is masking the interrupt in the
    redirection table in the interrupt routine, i.e. while the interrupt is
    set in a LAPIC's ISR.

    The simplest fix is to ignore the masking state; we would rather have
    an unnecessary exit than a missed IRQ ACK, and anyway IOAPIC interrupts
    are not as performance-sensitive as, for example, MSIs. Alex tested
    this patch and it fixed his bug.

    [Thanks to Alex for his precise description of the problem and initial
    debugging effort. A lot of the text above is based on emails exchanged
    with him.]
    Reported-by: Alex Williamson <alex.williamson@redhat.com>
    Tested-by: Alex Williamson <alex.williamson@redhat.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
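    A simplified sketch of the policy change described above follows. It is
    not the patch itself: the structure and helper names come from KVM's
    internal IOAPIC definitions and are used illustratively here, only to
    show where the mask test used to short-circuit the scan.

        /* Simplified sketch: record which IOAPIC vectors must force an
         * EOI-induced exit. The change is that masked entries are no
         * longer skipped. */
        static void scan_redir_table(struct kvm_ioapic *ioapic,
                                     u64 *eoi_exit_bitmap)
        {
            union kvm_ioapic_redirect_entry *e;
            int index;

            for (index = 0; index < IOAPIC_NUM_PINS; index++) {
                e = &ioapic->redirtbl[index];
                /* before: if (e->fields.mask) continue; */
                if (e->fields.trig_mode == IOAPIC_LEVEL_TRIG ||
                    kvm_irq_has_notifier(ioapic->kvm, KVM_IRQCHIP_IOAPIC, index))
                    __set_bit(e->fields.vector,
                              (unsigned long *)eoi_exit_bitmap);
            }
        }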
* | KVM: vmx: remove duplicate vmx_mpx_supported() prototype (Chris J Arges, 2014-07-30, 1 file, -1/+0)
    Remove a prototype which was added by both 93c4adc7afe and 36be0b9deb2.
    Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* x86/kvm: Resolve shadow warning from min macro (Mark Rustad, 2014-07-25, 1 file, -2/+1)
    Resolve a shadow warning generated in W=2 builds by the nested use of
    the min macro, by instead using the min3 macro for the minimum of 3
    values.
    Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
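    As a hedged illustration of this warning class (the function and
    variable names below are invented, not the code the commit changes):
    the kernel's min() macro declares temporaries inside a block, so
    nesting one min() inside another expands two sets of temporaries and
    the inner set shadows the outer one, which W=2 reports. min3()
    evaluates all three values in a single expansion.

        #include <linux/kernel.h>       /* min(), min3() */

        /* Hypothetical helper clamping a copy length. */
        static inline unsigned int clamp_copy_len(unsigned int len,
                                                  unsigned int left,
                                                  unsigned int limit)
        {
            /* was: return min(min(len, left), limit);  -- W=2 shadow warning */
            return min3(len, left, limit);
        }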
* kvm: Resolve missing-field-initializers warnings (Mark Rustad, 2014-07-25, 1 file, -2/+2)
    Resolve missing-field-initializers warnings seen in W=2 kernel builds
    by having macros generate more elaborate initializers. That is enough
    to silence the warnings.
    Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
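    A generic, hypothetical illustration of the fix (the struct and macro
    here are made up; the real change is to KVM's own initializer macros):
    -Wmissing-field-initializers fires when a brace initializer names only
    some fields, and letting the macro spell out every field silences it
    without changing the resulting values.

        struct stat_item {
            const char *name;
            unsigned long offset;
            int kind;
        };

        /* before: { .name = (n), .offset = (o) }  -- .kind left implicit, W=2 warns */
        #define STAT_ITEM(n, o) { .name = (n), .offset = (o), .kind = 0 }

        static struct stat_item stat_items[] = {
            STAT_ITEM("pf_fixed", 0),
            STAT_ITEM("exits", 8),
        };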
* Replace NR_VMX_MSR with its definition (Paolo Bonzini, 2014-07-24, 1 file, -4/+4)
    Using ARRAY_SIZE directly makes it easier to read the code. While
    touching the code, replace the division by a multiplication in the
    recently added BUILD_BUG_ON.
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
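    A minimal sketch of both ideas, with invented array and bound names
    (this is not the vmx.c code itself): expressing the bound with
    ARRAY_SIZE keeps it tied to the array it protects, and writing the
    compile-time check with a multiplication instead of a division keeps
    the comparison in the units of the original quantities.

        #include <linux/kernel.h>       /* ARRAY_SIZE() */
        #include <linux/bug.h>          /* BUILD_BUG_ON() */

        static const u32 msr_index[] = { 0x174, 0x175, 0x176 };
        #define SAVED_MSR_BYTES 64

        static void check_msr_table_fits(void)
        {
            /* was (roughly): BUILD_BUG_ON(NR_MSRS > SAVED_MSR_BYTES / sizeof(u64)); */
            BUILD_BUG_ON(ARRAY_SIZE(msr_index) * sizeof(u64) > SAVED_MSR_BYTES);
        }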
* KVM: x86: Assertions to check no overrun in MSR lists (Nadav Amit, 2014-07-24, 2 files, -0/+3)
    Currently there is no check whether the shared MSRs list overruns the
    allocated size, which can result in bugs. In addition there is no check
    that vmx->guest_msrs has sufficient space to accommodate all the VMX
    MSRs. This patch adds the assertions.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
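    The shape of such an assertion, sketched with hypothetical names (the
    actual patch adds its checks inside the shared-MSR and VMX setup code):

        /* Hypothetical check: refuse to register an MSR past the end of a
         * fixed-size table. */
        BUG_ON(nr_registered_msrs >= ARRAY_SIZE(msr_table));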
* KVM: x86: set rflags.rf during fault injection (Nadav Amit, 2014-07-24, 1 file, -0/+30)
    x86 does not automatically set RFLAGS.RF during event injection. This
    patch does a partial job, setting RFLAGS.RF upon fault injection. It
    does not handle the setting of RF upon interrupt injection on
    rep-string instructions.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
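    The architectural effect being emulated is small enough to show
    directly; this is an illustrative helper, not the KVM function itself
    (X86_EFLAGS_RF is the real flag bit, the helper name is invented):

        #include <asm/processor-flags.h>        /* X86_EFLAGS_RF */

        /* RF set in the RFLAGS image saved for a fault means that, when
         * the handler returns and the faulting instruction is restarted,
         * an instruction breakpoint on it is suppressed for that retry. */
        static inline unsigned long rflags_for_fault_entry(unsigned long rflags)
        {
            return rflags | X86_EFLAGS_RF;
        }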
* KVM: x86: Setting rflags.rf during rep-string emulation (Nadav Amit, 2014-07-24, 1 file, -1/+5)
    This patch updates RF for rep-string emulation. The flag is set upon
    the first iteration and cleared after the last (if emulated). It is
    intended to make sure that if a trap (in future data/io #DB emulation)
    or an interrupt is delivered to the guest during the rep-string
    instruction, RF will be set correctly. RF affects whether instruction
    breakpoints in the guest are masked.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Merge tag 'kvm-s390-20140721' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-next (Paolo Bonzini, 2014-07-22, 7 files, -104/+82)
    Bugfixes:
    - add IPTE to trace event decoder
    - document and advertise KVM_CAP_S390_IRQCHIP
    Cleanups:
    - reuse kvm_vcpu_block for s390
    - get rid of the tasklet for wakeup processing
| * KVM: s390: add ipte to trace event decoding (Christian Borntraeger, 2014-07-21, 1 file, -0/+1)
    IPTE intercepts can happen; let's decode them.
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
| * KVM: s390: advertise KVM_CAP_S390_IRQCHIP (Cornelia Huck, 2014-07-21, 1 file, -0/+1)
    We should advertise all capabilities, including those that can be
    enabled.
    Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
| * KVM: s390: document KVM_CAP_S390_IRQCHIP (Cornelia Huck, 2014-07-21, 1 file, -0/+9)
    Let's document that this is a capability that may be enabled per-vm.
    Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
| * KVM: document target of capability enablement (Cornelia Huck, 2014-07-21, 1 file, -3/+15)
    Capabilities can be enabled on a vcpu or (since recently) on a vm.
    Document this, and note for the existing capabilities whether they are
    per-vcpu or per-vm.
    Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
| * KVM: s390: remove the tasklet used by the hrtimer (David Hildenbrand, 2014-07-21, 4 files, -16/+1)
    We can get rid of the tasklet used for waking up a VCPU in the hrtimer
    code and wake up the VCPU directly.
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
| * KVM: s390: move vcpu wakeup code to a central point (David Hildenbrand, 2014-07-21, 3 files, -23/+22)
    Let's move the vcpu wakeup code to a central point. We should set the
    vcpu->preempted flag only if the target is actually sleeping, and
    before the real wakeup happens. Otherwise the preempted flag might be
    set when not necessary, which may result in immediate reschedules after
    schedule() in some scenarios. The wakeup code doesn't require the
    local_int.lock to be held.
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
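    A sketch of the ordering described above. The waitqueue and flag names
    follow the generic KVM vcpu structure of that era, but this is an
    illustration of the idea rather than the s390 function itself:

        #include <linux/kvm_host.h>
        #include <linux/wait.h>

        /* Only mark the target preempted if it is really sleeping, and do
         * so before issuing the wakeup, so the flag is never left set on a
         * vcpu that was running all along. */
        static void vcpu_wakeup(struct kvm_vcpu *vcpu)
        {
            if (waitqueue_active(&vcpu->wq)) {
                vcpu->preempted = true;
                wake_up_interruptible(&vcpu->wq);
            }
        }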
| * KVM: s390: remove _bh locking from start_stop_lock (David Hildenbrand, 2014-07-21, 1 file, -4/+4)
    The start_stop_lock is no longer acquired when in atomic context,
    therefore we can convert it into an ordinary spin_lock.
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
| * KVM: s390: remove _bh locking from local_int.lock (David Hildenbrand, 2014-07-21, 3 files, -28/+28)
    local_int.lock is not used in a bottom-half handler anymore, therefore
    we can turn it into an ordinary spin_lock at all occurrences.
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
| * KVM: s390: cleanup handle_wait by reusing kvm_vcpu_block (David Hildenbrand, 2014-07-21, 3 files, -37/+8)
    This patch cleans up the code in handle_wait by reusing the common code
    function kvm_vcpu_block. signal_pending(), kvm_cpu_has_pending_timer()
    and kvm_arch_vcpu_runnable() are sufficient for checking whether we
    need to wake up that VCPU. kvm_vcpu_block uses these functions, so no
    checks are lost. The flag "timer_due" can be removed:
    kvm_cpu_has_pending_timer() tests whether the timer is pending, thus
    the vcpu is correctly woken up.
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
* | KVM: x86: DR6/7.RTM cannot be written (Nadav Amit, 2014-07-21, 4 files, -11/+31)
    Haswell and newer Intel CPUs have support for RTM, and in that case
    DR6.RTM is not fixed to 1 and DR7.RTM is not fixed to zero. That is not
    the case in the current KVM implementation. This bug is apparent only
    if the MOV-DR instruction is emulated or the host also debugs the
    guest.

    This patch is a partial fix which enables DR6.RTM and DR7.RTM to be
    cleared and set respectively. It also sets DR6.RTM upon every debug
    exception. Obviously, it is not a complete fix, as debugging of RTM is
    still unsupported.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: nVMX: clean up nested_release_vmcs12 and code around it (Paolo Bonzini, 2014-07-21, 1 file, -21/+21)
    Make nested_release_vmcs12 idempotent.
    Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: nVMX: fix lifetime issues for vmcs02 (Paolo Bonzini, 2014-07-21, 1 file, -16/+33)
    free_nested needs the loaded_vmcs to be valid if it is a vmcs02, in
    order to detach it from the shadow vmcs. However, this is not available
    anymore after commit 26a865f4aa8e (KVM: VMX: fix use after free of
    vmx->loaded_vmcs, 2014-01-03). Revert that patch, and fix its problem
    by forcing a vmcs01 as the active VMCS before freeing all the nested
    VMX state.
    Reported-by: Wanpeng Li <wanpeng.li@linux.intel.com>
    Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: x86: Defining missing x86 vectors (Nadav Amit, 2014-07-21, 1 file, -0/+3)
    Defining XE, XM and VE vector numbers.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: x86: emulator injects #DB when RFLAGS.RF is set (Nadav Amit, 2014-07-21, 1 file, -1/+2)
    If RFLAGS.RF is set, then no #DB should occur on instruction
    breakpoints. However, the KVM emulator injects #DB regardless of
    RFLAGS.RF. This patch fixes this behavior. KVM, however, still appears
    not to update RFLAGS.RF correctly, regardless of this patch.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
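    The check being added is essentially a guard on the injection path; a
    hedged sketch with hypothetical helper names (only X86_EFLAGS_RF is a
    real identifier here):

        /* Instruction breakpoints may raise #DB only while RF is clear. */
        if (instruction_breakpoint_hit(ctxt) &&
            !(ctxt->eflags & X86_EFLAGS_RF))
            inject_db(ctxt);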
* | KVM: x86: Cleanup of rflags.rf cleaning (Nadav Amit, 2014-07-21, 1 file, -4/+4)
    RFLAGS.RF was cleared in several functions (e.g., syscall) in the x86
    emulator. Now that we clear it before the execution of an instruction
    in the emulator, we can remove the specific cleanup of RFLAGS.RF.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: x86: Clear rflags.rf on emulated instructions (Nadav Amit, 2014-07-21, 1 file, -0/+3)
    When an instruction is emulated, RFLAGS.RF should be cleared. KVM
    previously did not do so. This patch clears RFLAGS.RF after
    interception is done. If a fault occurs during the instruction,
    RFLAGS.RF will be set by a previous patch. This patch does not handle
    the case of traps/interrupts during rep-strings. Traps are only
    expected to occur on debug watchpoints, and those are anyhow not
    handled by the emulator.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: x86: popf emulation should not change RF (Nadav Amit, 2014-07-21, 1 file, -1/+1)
    RFLAGS.RF is always zero after popf. Therefore, popf should not update
    RF; emulating popf, just like any other instruction, should simply
    clear RFLAGS.RF.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: x86: Clearing rflags.rf upon skipped emulated instruction (Nadav Amit, 2014-07-21, 1 file, -0/+2)
    When skipping an emulated instruction, RFLAGS.RF should be cleared, as
    it would be on a real x86 CPU.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | Merge tag 'kvm-s390-20140715' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-next (Paolo Bonzini, 2014-07-21, 8 files, -45/+98)
    This series enables the KVM_(S|G)ET_MP_STATE ioctls on s390 to make the
    cpu state settable by user space. This is necessary to avoid races in
    s390 SIGP/reset handling, which happen because some SIGPs are handled
    in QEMU while others are handled in the kernel. Together with busy
    conditions as a SIGP return value, races happen especially in areas
    like starting and stopping of CPUs. (For example, there is a program
    'cpuplugd' that runs on several s390 distros and does automatic
    onlining and offlining of cpus.)

    As soon as the MPSTATE interface is used, user space takes complete
    control of the cpu states. Otherwise the kernel will use the old way.
    Therefore, the new kernel continues to work fine with old QEMUs.
| * KVM: s390: implement KVM_(S|G)ET_MP_STATE for user space state control (David Hildenbrand, 2014-07-10, 7 files, -8/+56)
    This patch
    - adds s390 specific MP states to linux headers and documents them
    - implements the KVM_{SET,GET}_MP_STATE ioctls
    - enables KVM_CAP_MP_STATE
    - allows user space to control the VCPU state on s390.
    If user space sets the VCPU state using the ioctl KVM_SET_MP_STATE, we
    can disable manual changing of the VCPU state and trust user space to
    do the right thing.
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
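    The ioctls named above are part of the regular KVM user-space API, so
    their use can be sketched from user space. The snippet below is a
    minimal illustration (error handling trimmed, vcpu_fd assumed to be an
    already-created VCPU file descriptor); it is not taken from QEMU or
    from the patch itself.

        #include <linux/kvm.h>
        #include <sys/ioctl.h>
        #include <stdio.h>

        /* Ask KVM to mark a VCPU stopped, then read the state back. */
        static int stop_vcpu(int vcpu_fd)
        {
            struct kvm_mp_state mp = { .mp_state = KVM_MP_STATE_STOPPED };

            if (ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp) < 0) {
                perror("KVM_SET_MP_STATE");
                return -1;
            }
            if (ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp) < 0) {
                perror("KVM_GET_MP_STATE");
                return -1;
            }
            printf("mp_state is now %u\n", mp.mp_state);
            return 0;
        }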
| * KVM: prepare for KVM_(S|G)ET_MP_STATE on other architectures (David Hildenbrand, 2014-07-10, 2 files, -10/+14)
    Highlight the aspects of the ioctls that are actually specific to x86
    and ia64. As defined restrictions (irqchip) and mp states may not apply
    to other architectures, these parts are flagged to belong to x86 and
    ia64. In preparation for the use of KVM_(S|G)ET_MP_STATE by s390. Fix a
    spelling error (KVM_SET_MP_STATE vs. KVM_SET_MPSTATE) on the way.
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
| * KVM: s390: remove __cpu_is_stopped and expose is_vcpu_stopped (David Hildenbrand, 2014-07-10, 2 files, -8/+3)
    The function "__cpu_is_stopped" is not used any more. Let's remove it
    and expose the function "is_vcpu_stopped" instead, which is actually
    what we want. This patch also converts an open coded check for
    CPUSTAT_STOPPED to is_vcpu_stopped().
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
| * KVM: s390: move finalization of SIGP STOP orders to kvm_s390_vcpu_stop (David Hildenbrand, 2014-07-10, 2 files, -19/+20)
    Let's move the finalization of SIGP STOP and SIGP STOP AND STORE STATUS
    orders to the point where the VCPU is actually stopped. This change is
    needed to prepare for a user space driven VCPU state change. The
    action_bits may only be cleared when setting the cpu state to STOPPED
    while holding the local irq lock.
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
| * KVM: s390: allow only one SIGP STOP (AND STORE STATUS) at a time (David Hildenbrand, 2014-07-10, 1 file, -1/+6)
    A SIGP STOP (AND STORE STATUS) order is complete as soon as the VCPU
    has been stopped. This patch makes sure that only one SIGP STOP (AND
    STORE STATUS) may be pending at a time (as defined by the
    architecture). If the action_bits are still set, a SIGP STOP has been
    issued but not completed yet. The VCPU is busy for further SIGP STOP
    orders.

    Also set the CPUSTAT_STOP_INT after the action_bits variable has been
    modified (the same order that is used when injecting a
    KVM_S390_SIGP_STOP from userspace).

    Both changes are needed in preparation for a user space driven VCPU
    state change (to avoid race conditions).
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
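    The "busy" behaviour described above boils down to a short guard at the
    start of the stop handler. The following is a sketch using the s390
    SIGP condition-code and action-bit names mentioned in this series, not
    a copy of the patch:

        /* A stop is already pending for this VCPU: report the busy
         * condition instead of queueing a second stop interrupt. */
        if (li->action_bits & ACTION_STOP_ON_STOP)
            return SIGP_CC_BUSY;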
* | KVM: nVMX: Fix virtual interrupt delivery injection (Wanpeng Li, 2014-07-17, 1 file, -1/+20)
    This patch fixes the bug reported in
    https://bugzilla.kernel.org/show_bug.cgi?id=73331. After the patch
    http://www.spinics.net/lists/kvm/msg105230.html is applied, there is
    some progress and L2 can boot up, although slowly. The original idea of
    this virtual-interrupt-delivery injection fix is from "Zhang, Yang Z"
    <yang.z.zhang@intel.com>.

    An interrupt delivered by virtual interrupt delivery should be injected
    into L1 by L0 if the CPU is currently running L1, or injected into L2
    by L0 through the old injection path if L1 does not have the
    External-interrupt exiting bit set. The current logic does not consider
    these cases. This patch fixes it by injecting such interrupts into L1
    when running in L1, or into L2 through the old injection path when L1
    does not have the External-interrupt exiting bit set.
    Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
    Signed-off-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: x86: Emulator support for #UD on CPL>0 (Nadav Amit, 2014-07-11, 1 file, -1/+5)
    Certain instructions (e.g., mwait and monitor) cause a #UD exception
    when they are executed in user mode. This is in contrast to the regular
    privileged instructions which cause #GP. In order not to mess with SVM
    interception of mwait and monitor, which assumes privilege level
    assertions take place before interception, a flag has been added.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: x86: Emulator flag for instructions that only support 16-bit addresses in real mode (Nadav Amit, 2014-07-11, 1 file, -1/+7)
    Certain instructions, such as monitor and xsave, do not support big
    real mode and cause a #GP exception if the effective address of any of
    the accessed bytes is not within [0, 0xffff]. This patch introduces a
    flag to mark these instructions, including the necessary checks.
    Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: x86: use kvm_read_guest_page for emulator accesses (Paolo Bonzini, 2014-07-11, 1 file, -4/+19)
    Emulator accesses are always done a page at a time, either by the
    emulator itself (for fetches) or because we need to query the MMU for
    address translations. Speed up these accesses by using
    kvm_read_guest_page and, in the case of fetches, by inlining
    kvm_read_guest_virt_helper and dropping the loop around
    kvm_read_guest_page.

    This final tweak saves 30-100 more clock cycles (4-10%), bringing the
    count (as measured by kvm-unit-tests) down to 720-1100 clock cycles on
    a Sandy Bridge Xeon host, compared to 2300-3200 before the whole series
    and 925-1700 after the first two low-hanging fruit changes.
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: x86: ensure emulator fetches do not span multiple pages (Paolo Bonzini, 2014-07-11, 1 file, -6/+7)
    When the CS base is not page-aligned, the linear address of the code
    could get close to the page boundary (e.g. 0x...ffe) even if the EIP
    value is not. So we need to first linearize the address, and only then
    compute the number of valid bytes that can be fetched. This happens
    relatively often when executing real mode code.
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
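    The computation being described can be written out as a small helper.
    This is an illustration of the order of operations (linearize, then
    bound the fetch), with an invented function name rather than the
    emulator's own, and ignoring segment limit checks:

        #include <linux/kernel.h>       /* min_t() */
        #include <linux/mm.h>           /* PAGE_SIZE, offset_in_page() */

        /* How many bytes may be fetched in one go: bounded by the distance
         * from the *linear* address to the end of its page, not by the
         * distance from EIP. */
        static unsigned int fetchable_bytes(unsigned long cs_base,
                                            unsigned long eip,
                                            unsigned int wanted)
        {
            unsigned long linear = cs_base + eip;   /* linearize first */

            return min_t(unsigned int, wanted,
                         PAGE_SIZE - offset_in_page(linear));
        }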
* | KVM: emulate: put pointers in the fetch_cache (Paolo Bonzini, 2014-07-11, 3 files, -24/+20)
    This simplifies the code a bit, especially the overflow checks.
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: emulate: avoid per-byte copying in instruction fetches (Paolo Bonzini, 2014-07-11, 1 file, -24/+22)
    We do not need a memory copying loop anymore in insn_fetch; we can use
    a byte-aligned pointer to access instruction fields directly from the
    fetch_cache. This eliminates 50-150 cycles (corresponding to a 5-10%
    improvement in performance) from each instruction.
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: emulate: avoid repeated calls to do_insn_fetch_bytes (Paolo Bonzini, 2014-07-11, 1 file, -9/+17)
    do_insn_fetch_bytes will only be called once in a given insn_fetch and
    insn_fetch_arr, because in fact it will only be called at most twice
    for any instruction and the first call is explicit in x86_decode_insn.
    This observation lets us hoist the call out of the memory copying loop.
    It does not buy performance, because most fetches are one byte long
    anyway, but it prepares for the next patch.

    The overflow check is tricky, but correct. Because do_insn_fetch_bytes
    has already been called once, we know that fc->end is at least 15. So
    it is okay to subtract the number of bytes we want to read.
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: emulate: speed up do_insn_fetch (Paolo Bonzini, 2014-07-11, 1 file, -31/+36)
    Hoist the common case up from do_insn_fetch_byte to do_insn_fetch, and
    prime the fetch_cache in x86_decode_insn. This helps the compiler and
    the branch predictor a bit, but above all it lays the ground for
    further changes in the next few patches.
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: emulate: do not initialize memopp (Bandan Das, 2014-07-11, 2 files, -4/+9)
    rip_relative is only set if decode_modrm runs, and if you have ModRM
    you will also have a memopp. We can then access memopp unconditionally.
    Note that rip_relative cannot be hoisted up to decode_modrm, or you
    break "mov $0, xyz(%rip)". Also, move the typecast on the "out of range
    value" of mem.ea to decode_modrm.

    Together, all these optimizations save about 50 cycles on each emulated
    instruction (4-6%).
    Signed-off-by: Bandan Das <bsd@redhat.com>
    [Fix immediate operands with rip-relative addressing. - Paolo]
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: emulate: rework seg_override (Bandan Das, 2014-07-11, 2 files, -27/+17)
    x86_decode_insn already sets a default for seg_override, so remove it
    from the zeroed area. Also replace the set/get functions with direct
    access to the field.
    Signed-off-by: Bandan Das <bsd@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: emulate: clean up initializations in init_decode_cache (Bandan Das, 2014-07-11, 2 files, -14/+13)
    A lot of initializations are unnecessary, as the fields get set to
    appropriate values before actually being used. Also optimize the
    placement of fields in x86_emulate_ctxt.
    Signed-off-by: Bandan Das <bsd@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | KVM: emulate: cleanup decode_modrm (Bandan Das, 2014-07-11, 1 file, -8/+6)
    Remove the if conditional - that will help us avoid an "else initialize
    to 0". Also, rearrange operators for slightly better code.
    Signed-off-by: Bandan Das <bsd@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>