path: root/drivers/kvm
* KVM: disable writeback for 0x0f 0x01 instructions. (Aurelien Jarno, 2007-07-25, 1 file, -0/+2)
  0x0f 0x01 instructions (i.e. lgdt, lidt, smsw, lmsw and invlpg) do not use writeback. This patch sets no_wb=1 when emulating those instructions. This fixes a regression booting the FreeBSD kernel on AMD. Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Avi Kivity <avi@qumranet.com>
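  A minimal sketch of the idea (struct, field and function names are hypothetical, not the actual kvm emulator code): the decoder marks the 0x0f 0x01 group so the writeback stage becomes a no-op.

      /* Sketch: mark the 0x0f 0x01 group (lgdt/lidt/smsw/lmsw/invlpg) so the
       * emulator's writeback stage is skipped. Names are hypothetical. */
      #include <stdbool.h>
      #include <stdint.h>

      struct emu_ctxt {
          uint8_t opcode[2];   /* primary + secondary opcode byte */
          bool    no_wb;       /* when set, the writeback stage does nothing */
      };

      static void decode_twobyte(struct emu_ctxt *c)
      {
          if (c->opcode[0] == 0x0f && c->opcode[1] == 0x01)
              c->no_wb = true; /* these insns have no ordinary destination to commit */
      }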
* KVM: Fix removal of nx capability from guest cpuid (Avi Kivity, 2007-07-25, 1 file, -2/+2)
  Testing the wrong bit caused kvm not to disable nx on the guest when it is disabled on the host (an mmu optimization relies on the nx bits being the same in the guest and host). This allows Windows to boot when nx is disabled on the host (e.g. when host pae is disabled). Signed-off-by: Avi Kivity <avi@qumranet.com>
* Revert "KVM: Avoid useless memory write when possible"Avi Kivity2007-07-251-4/+2
| | | | | | | | This reverts commit a3c870bdce4d34332ebdba7eb9969592c4c6b243. While it does save useless updates, it (probably) defeats the fork detector, causing a massive performance loss. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Fix unlikely kvm_create vs decache_vcpus_on_cpu race (Rusty Russell, 2007-07-25, 1 file, -3/+3)
  We add the kvm to the vm_list before initializing the vcpu mutexes, which can be mutex_trylock()'ed by decache_vcpus_on_cpu(). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Correctly handle writes crossing a page boundary (Avi Kivity, 2007-07-25, 1 file, -4/+24)
  Writes that are contiguous in virtual memory may not be contiguous in physical memory, so split writes that straddle a page boundary. Thanks to Aurelien for reporting the bug, patient testing, and a fix to this very patch. Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Avi Kivity <avi@qumranet.com>
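  A minimal sketch of the splitting logic (the page-size constant and the per-page writer callback are illustrative stand-ins, not kvm's actual write path): each chunk stays inside one page so it can be translated and written on its own.

      #include <stddef.h>
      #include <stdint.h>

      #define PAGE_SIZE 4096ULL

      typedef int (*write_page_fn)(uint64_t gva, const void *data, size_t len);

      /* Split a virtually-contiguous write into chunks that never cross a
       * page boundary. */
      static int emulator_write_split(uint64_t gva, const void *data, size_t len,
                                      write_page_fn write_one_page)
      {
          const uint8_t *buf = data;

          while (len) {
              size_t chunk = PAGE_SIZE - (gva & (PAGE_SIZE - 1)); /* room left in this page */
              if (chunk > len)
                  chunk = len;

              int ret = write_one_page(gva, buf, chunk);
              if (ret)
                  return ret;

              gva += chunk;
              buf += chunk;
              len -= chunk;
          }
          return 0;
      }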
* KVM: Require CONFIG_ANON_INODES (Avi Kivity, 2007-07-22, 1 file, -0/+1)
  Found by Sebastian Siewior and randconfig. Signed-off-by: Avi Kivity <avi@qumranet.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* KVM: MMU: Fix cleaning up the shadow page allocation cache (Avi Kivity, 2007-07-21, 1 file, -1/+1)
  __free_page() wants a struct page, not a virtual address. Signed-off-by: Avi Kivity <avi@qumranet.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* KVM: MMU: Fix oopses with SLUB (Avi Kivity, 2007-07-20, 1 file, -13/+26)
  The kvm mmu uses page->private on shadow page tables; so does slub, and an oops results. Fix by allocating regular pages for shadows instead of using slub. Tested-by: S.Çağlar Onur <caglar@pardus.org.tr> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: x86 emulator: implement rdmsr and wrmsr (Avi Kivity, 2007-07-20, 3 files, -5/+31)
  Allow real-mode emulation of rdmsr and wrmsr. This allows smp Windows to boot, presumably for its sipi trampoline. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Fix memory slot management functions for guest smp (Avi Kivity, 2007-07-20, 3 files, -123/+52)
  The memory slot management functions were oriented against vcpu 0, whereas they should be kvm-wide. This causes hangs when starting X on an smp guest. Fix by making the functions (and the resultant tail in the mmu) non-vcpu-specific. Unfortunately this reduces the efficiency of the mmu object cache a bit. We may have to revisit this later. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Store nx bit for large page shadows (Avi Kivity, 2007-07-20, 2 files, -2/+4)
  We need to distinguish between large page shadows which have the nx bit set and those which don't. The problem shows up when booting a newer smp Linux kernel, where the trampoline page (which is in real mode, and so uses the same shadow pages as large pages) shares a mapping with a kernel data page that is mapped with nx, causing kvm to spin on that page. Signed-off-by: Avi Kivity <avi@qumranet.com>
* mm: Remove slab destructors from kmem_cache_create(). (Paul Mundt, 2007-07-20, 1 file, -4/+4)
  Slab destructors were no longer supported after Christoph's c59def9f222d44bb7e2f0a559f2906191a0862d7 change. They've been BUGs for both slab and slub, and slob never supported them either. This rips out support for the dtor pointer from kmem_cache_create() completely and fixes up every single callsite in the kernel (there were about 224, not including the slab allocator definitions themselves, or the documentation references). Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* i386: Allow KVM on i386 nonpae (Avi Kivity, 2007-07-19, 1 file, -1/+0)
  Currently, CONFIG_X86_CMPXCHG64 both enables boot-time checking of the cmpxchg64b feature and enables compilation of the set_64bit() family. Since the option is dependent on PAE, and since KVM depends on set_64bit(), this effectively disables KVM on i386 nonpae. Simplify by removing the config option altogether: the boot check is made dependent on CONFIG_X86_PAE directly, and the set_64bit() family is exposed without constraints. It is up to users to check for the feature flag (KVM does not, as virtualization extensions imply its existence). Signed-off-by: Avi Kivity <avi@qumranet.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* KVM: Use CPU_DYING for disabling virtualization (Avi Kivity, 2007-07-16, 1 file, -2/+2)
  Only at the CPU_DYING stage can we be sure that no user process will be scheduled onto the cpu and oops when trying to use virtualization extensions. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Tune hotplug/suspend IPIs (Avi Kivity, 2007-07-16, 1 file, -2/+2)
  The hotplug IPIs can be called from the cpu we are currently running on, so use on_cpu(). Similarly, drop on_each_cpu() for the suspend/resume callbacks, as we're in atomic context here and only one cpu is up anyway. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Keep track of which cpus have virtualization enabled (Avi Kivity, 2007-07-16, 1 file, -12/+33)
  By keeping track of which cpus have virtualization enabled, we prevent double-enable or double-disable during hotplug, either of which causes a fatal oops. Signed-off-by: Avi Kivity <avi@qumranet.com>
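  A minimal sketch of the bookkeeping (a plain per-cpu flag array stands in for the kernel's cpumask API; the arch-specific enable/disable bodies are elided):

      #include <stdbool.h>

      #define NR_CPUS 64
      static bool virt_enabled[NR_CPUS];   /* which cpus currently have VT/SVM enabled */

      static void hardware_enable(int cpu)
      {
          if (virt_enabled[cpu])
              return;                      /* already enabled: avoid a double enable */
          virt_enabled[cpu] = true;
          /* ... arch-specific enable (VMXON / EFER.SVME) ... */
      }

      static void hardware_disable(int cpu)
      {
          if (!virt_enabled[cpu])
              return;                      /* never enabled, or already disabled */
          virt_enabled[cpu] = false;
          /* ... arch-specific disable ... */
      }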
* KVM: Clean up #includes (Avi Kivity, 2007-07-16, 4 files, -20/+20)
  Remove unnecessary ones, and rearrange the remaining in the standard order. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Remove kvmfs in favor of the anonymous inodes source (Avi Kivity, 2007-07-16, 1 file, -132/+11)
  kvm uses a pseudo filesystem, kvmfs, to generate inodes, a job that the new anonymous inodes source does much better. Cc: Davide Libenzi <davidel@xmailserver.org> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: SVM: Reliably detect if SVM was disabled by BIOS (Joerg Roedel, 2007-07-16, 2 files, -0/+9)
  This patch adds an implementation to the svm is_disabled function to detect reliably if the BIOS disabled the SVM feature in the CPU. This fixes the issues with kernel panics when loading the kvm-amd module on machines where SVM is available but disabled. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
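  A minimal sketch of the underlying check: AMD's VM_CR MSR has an SVMDIS bit that the BIOS can set to lock out SVM. The MSR read helper below is a hypothetical stand-in for the kernel's rdmsrl(), and the exact kvm code may differ; the MSR number and bit follow the AMD manual.

      #include <stdbool.h>
      #include <stdint.h>

      #define MSR_VM_CR     0xC0010114u   /* AMD VM_CR MSR */
      #define VM_CR_SVMDIS  (1u << 4)     /* SVM disabled (and typically locked) by BIOS */

      /* Hypothetical stand-in for rdmsrl(): returns the 64-bit MSR value. */
      extern uint64_t read_msr(uint32_t msr);

      static bool svm_disabled_by_bios(void)
      {
          return read_msr(MSR_VM_CR) & VM_CR_SVMDIS;
      }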
* KVM: VMX: Remove unnecessary code in vmx_tlb_flush() (Avi Kivity, 2007-07-16, 1 file, -1/+0)
  A vmexit implicitly flushes the tlb; the code is bogus. Noted by Shaohua Li. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Fix Wrong tlb flush order (Shaohua Li, 2007-07-16, 1 file, -1/+1)
  Need to flush the tlb after updating a pte, not before. Signed-off-by: Shaohua Li <shaohua.li@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: VMX: Reinitialize the real-mode tss when entering real mode (Avi Kivity, 2007-07-16, 1 file, -0/+4)
  Protected mode code may have corrupted the real-mode tss, so re-initialize it when switching to real mode. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Avoid useless memory write when possible (Luca Tettamanti, 2007-07-16, 1 file, -2/+4)
  When writing to normal memory and the memory area is unchanged, the write can be safely skipped, avoiding the costly kvm_mmu_pte_write. Signed-Off-By: Luca Tettamanti <kronos.it@gmail.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Fix x86 emulator writeback (Luca Tettamanti, 2007-07-16, 1 file, -4/+5)
  When the old value and the new one are the same, the emulator skips the write; this is undesirable when the destination is an MMIO area and the write shall be performed regardless of the previous value. This optimization breaks e.g. a Linux guest APIC compiled without X86_GOOD_APIC. Remove the check and perform the writeback stage in the emulation unless it's explicitly disabled (currently push and some two-byte instructions may disable the writeback). Signed-Off-By: Luca Tettamanti <kronos.it@gmail.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
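  A minimal sketch of a writeback stage that is gated only by an explicit no_wb flag (destination kinds and field names are illustrative, not the kvm emulator's structures): the store to memory always happens, so MMIO sees the write even when the value is unchanged.

      #include <stdbool.h>
      #include <stdint.h>
      #include <string.h>

      enum dst_type { DST_NONE, DST_REG, DST_MEM };

      struct decoded_insn {
          bool no_wb;            /* set by decode for push and some two-byte insns */
          enum dst_type dst;
          uint64_t dst_val;
          uint64_t *reg_ptr;     /* destination register, if DST_REG */
          void *mem;             /* destination memory/MMIO buffer, if DST_MEM */
          unsigned bytes;
      };

      static void writeback(struct decoded_insn *d)
      {
          if (d->no_wb || d->dst == DST_NONE)
              return;                             /* explicitly disabled or no destination */
          if (d->dst == DST_REG)
              *d->reg_ptr = d->dst_val;
          else
              /* no "unchanged value" shortcut: MMIO must observe the store */
              memcpy(d->mem, &d->dst_val, d->bytes);
      }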
* KVM: Add support for in-kernel pio handlers (Eddie Dong, 2007-07-16, 2 files, -1/+37)
  Useful for the PIC and PIT. Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
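  A minimal sketch of the dispatch idea (the structures are illustrative, not kvm's actual io-device API): in-kernel devices register a port range, and a PIO exit is first offered to a matching device before falling back to a userspace exit.

      #include <stddef.h>
      #include <stdint.h>

      struct pio_device {
          uint16_t base, len;
          void     (*write)(struct pio_device *dev, uint16_t port, uint32_t val, int size);
          uint32_t (*read)(struct pio_device *dev, uint16_t port, int size);
      };

      #define MAX_PIO_DEVS 8
      static struct pio_device *pio_devs[MAX_PIO_DEVS];
      static int nr_pio_devs;

      static int register_pio_dev(struct pio_device *dev)
      {
          if (nr_pio_devs >= MAX_PIO_DEVS)
              return -1;
          pio_devs[nr_pio_devs++] = dev;
          return 0;
      }

      static struct pio_device *find_pio_dev(uint16_t port)
      {
          for (int i = 0; i < nr_pio_devs; i++) {
              struct pio_device *dev = pio_devs[i];
              if (port >= dev->base && port - dev->base < dev->len)
                  return dev;   /* handled in the kernel */
          }
          return NULL;          /* no in-kernel handler: exit to userspace */
      }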
* KVM: VMX: Fix interrupt checking on lightweight exit (Gregory Haskins, 2007-07-16, 1 file, -3/+3)
  With kernel-injected interrupts, we need to check for interrupts on lightweight exits too. Signed-off-by: Gregory Haskins <ghaskins@novell.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Adds support for in-kernel mmio handlers (Gregory Haskins, 2007-07-16, 2 files, -12/+142)
  Signed-off-by: Gregory Haskins <ghaskins@novell.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Implement emulation of instruction "ret" (opcode 0xc3) (Nitin A Kamble, 2007-07-16, 1 file, -4/+8)
  Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Implement emulation of "pop reg" instruction (opcode 0x58-0x5f)Nitin A Kamble2007-07-161-2/+15
| | | | | | | For use in real mode. Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
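  A minimal sketch of the real-mode semantics (the guest-memory reader and register array are hypothetical; 16-bit operand and address size are assumed): load the word at SS:SP into the register encoded in the low three opcode bits, then bump SP.

      #include <stdint.h>

      /* Hypothetical guest-memory accessor: returns 0 on success. */
      extern int read_guest(uint32_t addr, void *dst, unsigned len);

      static int emulate_pop_reg(uint8_t opcode, uint16_t ss, uint16_t gpr[8])
      {
          uint16_t val;

          /* SP is gpr[4]; real-mode linear address is segment * 16 + offset */
          int rc = read_guest((uint32_t)ss * 16 + gpr[4], &val, sizeof(val));
          if (rc)
              return rc;

          gpr[4] += sizeof(val);   /* bump SP first, so "pop sp" ends up holding the popped value */
          gpr[opcode & 7] = val;   /* 0x58+r encodes the destination register */
          return 0;
      }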
* KVM: VMX: Ensure vcpu time stamp counter is monotonous (Avi Kivity, 2007-07-16, 1 file, -0/+9)
  If the time stamp counter goes backwards, a guest delay loop can become infinite. This can happen if a vcpu is migrated to another cpu, where the counter has a lower value than on the first cpu. Since we're doing an IPI to the first cpu anyway, we can use that to pick up the old tsc, and use that to calculate the adjustment we need to make to the tsc offset. Signed-off-by: Avi Kivity <avi@qumranet.com>
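  A minimal sketch of the adjustment (the helpers are illustrative stand-ins; in kvm the old-cpu TSC would be sampled from the existing IPI handler and the offset lives in the VMCS TSC_OFFSET field):

      #include <stdint.h>

      extern uint64_t rdtsc_on_cpu(int cpu);     /* hypothetical: sample TSC on a given cpu */
      extern uint64_t get_tsc_offset(void);      /* current guest TSC offset */
      extern void     set_tsc_offset(uint64_t offset);

      static void adjust_tsc_on_migration(int old_cpu, int new_cpu)
      {
          uint64_t old_tsc = rdtsc_on_cpu(old_cpu);   /* piggybacks on the IPI we send anyway */
          uint64_t new_tsc = rdtsc_on_cpu(new_cpu);

          /* guest_tsc = host_tsc + offset; adding (old - new) keeps the
           * guest-visible counter from jumping backwards after migration */
          set_tsc_offset(get_tsc_offset() + (old_tsc - new_tsc));
      }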
* KVM: Initialize the BSP bit in the APIC_BASE msr correctly (Avi Kivity, 2007-07-16, 2 files, -6/+6)
  Needs to be set on vcpu 0 only. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: VMX: Replace memset(<addr>, 0, PAGESIZE) with clear_page(<addr>) (Shani Moideen, 2007-07-16, 1 file, -3/+3)
  Signed-off-by: Shani Moideen <shani.moideen@wipro.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: SVM: Replace memset(<addr>, 0, PAGESIZE) with clear_page(<addr>) (Shani Moideen, 2007-07-16, 1 file, -2/+2)
  Signed-off-by: Shani Moideen <shani.moideen@wipro.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Flush remote tlbs when reducing shadow pte permissions (Avi Kivity, 2007-07-16, 5 files, -15/+84)
  When a vcpu causes a shadow tlb entry to have reduced permissions, it must also clear the tlb on remote vcpus. We do that by:
  - setting a bit on the vcpu that requests a tlb flush before the next entry
  - if the vcpu is currently executing, we send an ipi to make sure it exits before we continue
  Signed-off-by: Avi Kivity <avi@qumranet.com>
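  A minimal sketch of the request-bit plus IPI scheme (the vcpu fields and the IPI helper are illustrative):

      #include <stdbool.h>

      struct vcpu {
          bool tlb_flush_requested;   /* checked (and cleared) before the next guest entry */
          int  running_on_cpu;        /* -1 if not currently in guest mode */
      };

      /* Hypothetical helper: forces the target cpu out of guest mode. */
      extern void kick_cpu(int cpu);

      static void flush_remote_tlbs(struct vcpu *vcpus, int n, struct vcpu *self)
      {
          for (int i = 0; i < n; i++) {
              struct vcpu *v = &vcpus[i];
              if (v == self)
                  continue;
              v->tlb_flush_requested = true;   /* flush before the next entry */
              if (v->running_on_cpu >= 0)
                  kick_cpu(v->running_on_cpu); /* make sure it exits before we continue */
          }
      }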
* KVM: Keep an upper bound of initialized vcpus (Avi Kivity, 2007-07-16, 2 files, -0/+6)
  That way, we don't need to loop over all KVM_MAX_VCPUS slots for a single-vcpu vm. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Emulate hlt on real mode for Intel (Avi Kivity, 2007-07-16, 3 files, -2/+12)
  This has two use cases: the bios can't boot from disk, and guest smp bootstrap. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Move duplicate halt handling code into kvm_main.c (Avi Kivity, 2007-07-16, 4 files, -12/+14)
  It will soon have a third user. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Enable guest smp (Avi Kivity, 2007-07-16, 1 file, -1/+1)
  As we don't support guest tlb shootdown yet, this is only reliable for real-mode guests. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Fix adding an smp virtual machine to the vm list (Avi Kivity, 2007-07-16, 1 file, -3/+3)
  If we add the vm once per vcpu, we corrupt the list if the guest has multiple vcpus. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Fix vcpu freeing for guest smp (Avi Kivity, 2007-07-16, 2 files, -2/+17)
  A vcpu can pin up to four mmu shadow pages, which means the freeing loop will never terminate. Fix by first unpinning shadow pages on all vcpus, then freeing shadow pages. Signed-off-by: Avi Kivity <avi@qumranet.com>
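  A minimal sketch of the two-pass teardown (structure and helper names are illustrative, not kvm's mmu primitives):

      struct vm_sketch {
          int nr_vcpus;
          /* ... per-vcpu state, shadow page list, ... */
      };

      /* Hypothetical helpers standing in for the mmu teardown primitives. */
      extern void unpin_vcpu_shadow_roots(struct vm_sketch *vm, int vcpu_id);
      extern void free_all_shadow_pages(struct vm_sketch *vm);

      static void destroy_vm_mmu(struct vm_sketch *vm)
      {
          /* pass 1: drop every vcpu's pinned root shadow pages */
          for (int i = 0; i < vm->nr_vcpus; i++)
              unpin_vcpu_shadow_roots(vm, i);

          /* pass 2: nothing is pinned any more, so this terminates */
          free_all_shadow_pages(vm);
      }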
* KVM: Remove unnecessary initialization and checks in mark_page_dirty() (Nguyen Anh Quynh, 2007-07-16, 1 file, -2/+2)
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Replace C code with call to ARRAY_SIZE() macro. (Robert P. J. Day, 2007-07-16, 1 file, -1/+1)
  Signed-off-by: Robert P. J. Day <rpjday@mindspring.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Lazy guest cr3 switching (Avi Kivity, 2007-07-16, 4 files, -21/+40)
  Switching the guest paging context may require us to allocate memory, which might fail. Instead of wiring up error paths everywhere, make context switching lazy and actually do the switch before the next guest entry, where we can return an error if allocation fails. Signed-off-by: Avi Kivity <avi@qumranet.com>
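  A minimal sketch of the lazy scheme (field and function names are illustrative): a cr3 load only marks the mmu context stale, and the possibly-failing reload runs just before guest entry, where the error can be propagated.

      #include <stdbool.h>
      #include <stdint.h>

      struct vcpu_mmu {
          uint64_t guest_cr3;    /* last cr3 the guest loaded */
          bool     needs_reload; /* set on cr3 writes / paging mode changes */
      };

      /* Hypothetical: allocates and loads a new shadow root; may return -ENOMEM. */
      extern int mmu_alloc_and_load_root(struct vcpu_mmu *mmu);

      static void mmu_set_cr3(struct vcpu_mmu *mmu, uint64_t cr3)
      {
          mmu->guest_cr3 = cr3;              /* cheap: no allocation on this path */
          mmu->needs_reload = true;
      }

      static int mmu_prepare_guest_entry(struct vcpu_mmu *mmu)
      {
          if (!mmu->needs_reload)
              return 0;
          int r = mmu_alloc_and_load_root(mmu);  /* failure propagates to the caller */
          if (!r)
              mmu->needs_reload = false;         /* switch completed */
          return r;
      }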
* KVM: MMU: Remove unused large page marker (Avi Kivity, 2007-07-16, 2 files, -3/+0)
  This has not been used for some time, as the same information is available in the page header. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Don't cache guest access bits in the shadow page table (Avi Kivity, 2007-07-16, 2 files, -9/+0)
  This was once used to avoid accessing the guest pte when upgrading the shadow pte from read-only to read-write. But usually we need to set the guest pte dirty or accessed bits anyway, so this wasn't really exploited. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Simplify accessed/dirty/present/nx bit handling (Avi Kivity, 2007-07-16, 2 files, -10/+2)
  Always set the accessed and dirty bits (since having them cleared causes a read-modify-write cycle), always set the present bit, and copy the nx bit from the guest. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Remove cr0.wp tricks (Avi Kivity, 2007-07-16, 1 file, -11/+0)
  No longer needed as we do everything in one place. Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Make setting shadow ptes atomic on i386 (Avi Kivity, 2007-07-16, 3 files, -4/+15)
  Signed-off-by: Avi Kivity <avi@qumranet.com>
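  A minimal sketch of an atomic 64-bit shadow pte store on a 32-bit host, using a compare-and-swap loop in the spirit of set_64bit() (a GCC builtin stands in for the kernel primitive; this is not the actual kvm code):

      #include <stdint.h>

      static void set_shadow_pte_atomic(uint64_t *sptep, uint64_t new_spte)
      {
          uint64_t old = *sptep;

          /* retry until all 8 bytes are swapped in as one atomic operation
           * (typically a cmpxchg8b loop on 32-bit x86) */
          while (!__atomic_compare_exchange_n(sptep, &old, new_spte, 0,
                                              __ATOMIC_SEQ_CST,
                                              __ATOMIC_SEQ_CST))
              ;  /* 'old' is reloaded with the current value on failure */
      }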
* KVM: Make shadow pte updates atomic (Avi Kivity, 2007-07-16, 1 file, -17/+20)
  With guest smp, a second vcpu might see partial updates when the first vcpu services a page fault. So delay all updates until we have figured out what the pte should look like. Note that on i386, this is still not completely atomic as a 64-bit write will be split into two on a 32-bit machine. Signed-off-by: Avi Kivity <avi@qumranet.com>
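  A minimal sketch of the delayed-update pattern: the new pte is assembled in a local variable and installed with a single store, so another vcpu never observes a half-built entry (the bit layout and helper are illustrative):

      #include <stdint.h>

      #define PT_PRESENT  (1ULL << 0)
      #define PT_WRITABLE (1ULL << 1)
      #define PT_USER     (1ULL << 2)

      /* Single visible store (atomic on 64-bit hosts; see the sketch above
       * for the 32-bit case). */
      extern void set_shadow_pte_atomic(uint64_t *sptep, uint64_t spte);

      static void install_spte(uint64_t *sptep, uint64_t pfn, int writable, int user)
      {
          uint64_t spte = (pfn << 12) | PT_PRESENT;   /* build the full value first */
          if (writable)
              spte |= PT_WRITABLE;
          if (user)
              spte |= PT_USER;
          set_shadow_pte_atomic(sptep, spte);          /* one update, never partial */
      }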
* KVM: Move shadow pte modifications from set_pte/set_pde to set_pde_common() (Avi Kivity, 2007-07-16, 1 file, -2/+1)
  We want all shadow pte modifications in one place. Signed-off-by: Avi Kivity <avi@qumranet.com>