path: root/virt
* KVM: Fix __set_bit() race in mark_page_dirty() during dirty logging (Takuya Yoshikawa, 2012-02-01; 1 file, -1/+1)

    It is possible that the __set_bit() in mark_page_dirty() is called simultaneously on the same region of memory, which may result in only one bit being set, because some callers do not take mmu_lock before mark_page_dirty().

    This problem is hard to produce because when we reach mark_page_dirty() beginning from, e.g., tdp_page_fault(), mmu_lock is being held during __direct_map(): making kvm-unit-tests' dirty log api test write to two pages concurrently was not useful for this reason.

    So we have confirmed that there can actually be a race condition by checking if some callers really reach there without holding mmu_lock using spin_is_locked(): probably they were from kvm_write_guest_page().

    To fix this race, this patch changes the bit operation to the atomic version: note that nr_dirty_pages also suffers from the race but we do not need exactly correct numbers for now.

    Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
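    A minimal sketch of the kind of change described above (illustrative only, not the actual patch; the bitmap and gfn arguments stand in for the real memslot fields):

        #include <linux/bitops.h>

        /* Non-atomic: __set_bit() is a plain read-modify-write, so two CPUs
         * hitting the same long word concurrently can lose one of the bits. */
        static void mark_dirty_racy(unsigned long *dirty_bitmap, unsigned long rel_gfn)
        {
                __set_bit(rel_gfn, dirty_bitmap);
        }

        /* Atomic: set_bit() uses a locked read-modify-write, so concurrent
         * callers cannot clobber each other's updates. */
        static void mark_dirty_safe(unsigned long *dirty_bitmap, unsigned long rel_gfn)
        {
                set_bit(rel_gfn, dirty_bitmap);
        }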
* module_param: make bool parameters really bool (drivers & misc) (Rusty Russell, 2012-01-13; 1 file, -1/+1)

    module_param(bool) used to counter-intuitively take an int. In fddd5201 (mid-2009) we allowed bool or int/unsigned int using a messy trick. It's time to remove the int/unsigned int option. For this version it'll simply give a warning, but it'll break in the next kernel version.

    Acked-by: Mauro Carvalho Chehab <mchehab@redhat.com>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
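    A small illustrative sketch of the before/after pattern (hypothetical parameter names, not any specific driver):

        #include <linux/module.h>

        /* Old style, now warned about: an int variable backing a bool parameter. */
        static int old_flag = 1;
        /* module_param(old_flag, bool, 0444);   <-- triggers the new warning */

        /* New style: the backing variable really is a bool. */
        static bool new_flag = true;
        module_param(new_flag, bool, 0444);
        MODULE_PARM_DESC(new_flag, "example boolean parameter");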
* Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu (Linus Torvalds, 2012-01-10; 1 file, -4/+4)

    * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (53 commits)
        iommu/amd: Set IOTLB invalidation timeout
        iommu/amd: Init stats for iommu=pt
        iommu/amd: Remove unnecessary cache flushes in amd_iommu_resume
        iommu/amd: Add invalidate-context call-back
        iommu/amd: Add amd_iommu_device_info() function
        iommu/amd: Adapt IOMMU driver to PCI register name changes
        iommu/amd: Add invalid_ppr callback
        iommu/amd: Implement notifiers for IOMMUv2
        iommu/amd: Implement IO page-fault handler
        iommu/amd: Add routines to bind/unbind a pasid
        iommu/amd: Implement device aquisition code for IOMMUv2
        iommu/amd: Add driver stub for AMD IOMMUv2 support
        iommu/amd: Add stat counter for IOMMUv2 events
        iommu/amd: Add device errata handling
        iommu/amd: Add function to get IOMMUv2 domain for pdev
        iommu/amd: Implement function to send PPR completions
        iommu/amd: Implement functions to manage GCR3 table
        iommu/amd: Implement IOMMUv2 TLB flushing routines
        iommu/amd: Add support for IOMMUv2 domain mode
        iommu/amd: Add amd_iommu_domain_direct_map function
        ...
| *-. Merge branches 'iommu/fixes', 'arm/omap' and 'x86/amd' into next (Joerg Roedel, 2012-01-09; 1 file, -4/+4)

    Conflicts:
        drivers/pci/hotplug/acpiphp_glue.c
| | | * iommu/core: split mapping to page sizes as supported by the hardware (Ohad Ben-Cohen, 2011-11-10; 1 file, -4/+4)

    When mapping a memory region, split it to page sizes as supported by the iommu hardware. Always prefer bigger pages, when possible, in order to reduce the TLB pressure.

    The logic to do that is now added to the IOMMU core, so neither the iommu drivers themselves nor users of the IOMMU API have to duplicate it.

    This allows a more lenient granularity of mappings; traditionally the IOMMU API took 'order' (of a page) as a mapping size, and directly let the low level iommu drivers handle the mapping, but now that the IOMMU core can split arbitrary memory regions into pages, we can remove this limitation, so users don't have to split those regions by themselves.

    Currently the supported page sizes are advertised once and they then remain static. That works well for OMAP and MSM but it would probably not fly well with Intel's hardware, where the page size capabilities seem to have the potential to be different between several DMA remapping devices.

    register_iommu() currently sets a default pgsize behavior, so we can convert the IOMMU drivers in subsequent patches. After all the drivers are converted, the temporary default settings will be removed.

    Mainline users of the IOMMU API (kvm and omap-iovmm) are adapted to deal with bytes instead of page order.

    Many thanks to Joerg Roedel <Joerg.Roedel@amd.com> for significant review!

    Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
    Cc: David Brown <davidb@codeaurora.org>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: Joerg Roedel <Joerg.Roedel@amd.com>
    Cc: Stepan Moskovchenko <stepanm@codeaurora.org>
    Cc: KyongHo Cho <pullip.cho@samsung.com>
    Cc: Hiroshi DOYU <hdoyu@nvidia.com>
    Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    Cc: kvm@vger.kernel.org
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* | | KVM: ensure that debugfs entries have been created (Hamo, 2011-12-27; 1 file, -3/+23)

    By checking the return value from kvm_init_debug, we can ensure that the entries under debugfs for KVM have been created correctly.

    Signed-off-by: Yang Bai <hamo.by@gmail.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* | | KVM: drop bsp_vcpu pointer from kvm struct (Gleb Natapov, 2011-12-27; 2 files, -5/+1)

    Drop bsp_vcpu pointer from kvm struct since its only use is incorrect anyway.

    Signed-off-by: Gleb Natapov <gleb@redhat.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* | | KVM: Use memdup_user instead of kmalloc/copy_from_user (Sasha Levin, 2011-12-27; 1 file, -17/+12)

    Switch to using memdup_user when possible. This makes the code smaller and more compact, and prevents errors.

    Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
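    A hedged sketch of the pattern being replaced (hypothetical helper names, not the actual KVM call sites):

        #include <linux/slab.h>
        #include <linux/string.h>
        #include <linux/uaccess.h>
        #include <linux/err.h>

        /* Before: open-coded allocate + copy, with two error paths to get right. */
        static void *dup_from_user_old(const void __user *uptr, size_t len)
        {
                void *buf = kmalloc(len, GFP_KERNEL);

                if (!buf)
                        return ERR_PTR(-ENOMEM);
                if (copy_from_user(buf, uptr, len)) {
                        kfree(buf);
                        return ERR_PTR(-EFAULT);
                }
                return buf;
        }

        /* After: memdup_user() folds both steps and returns an ERR_PTR() on failure. */
        static void *dup_from_user_new(const void __user *uptr, size_t len)
        {
                return memdup_user(uptr, len);
        }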
* | | KVM: Use kmemdup() instead of kmalloc/memcpy (Sasha Levin, 2011-12-27; 1 file, -4/+3)

    Switch to kmemdup() in two places to shorten the code and avoid possible bugs.

    Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* | | KVM: Allow aligned byte and word writes to IOAPIC registers. (Julian Stecklina, 2011-12-27; 1 file, -3/+12)

    This fixes byte accesses to IOAPIC_REG_SELECT as mandated by at least the ICH10 and Intel Series 5 chipset specs. It also makes ioapic_mmio_write consistent with ioapic_mmio_read, which also allows byte and word accesses.

    Signed-off-by: Julian Stecklina <js@alien8.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* | | KVM: introduce a table to map slot id to index in memslots array (Xiao Guangrong, 2011-12-27; 1 file, -1/+6)

    The operation of getting the dirty log is frequent when framebuffer-based displays are used (for example, Xwindow), so we introduce a mapping table to speed up id_to_memslot().

    Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
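    A rough sketch of the table-based lookup (structure and field names are simplified for illustration; see include/linux/kvm_host.h for the real definitions): keep an id-to-array-index table next to the memslot array and refresh it whenever the array is rewritten, so a slot id resolves in O(1) instead of a scan.

        #include <linux/kvm_host.h>

        struct example_memslots {
                struct kvm_memory_slot memslots[KVM_MEM_SLOTS_NUM];
                int id_to_index[KVM_MEM_SLOTS_NUM];
        };

        static inline struct kvm_memory_slot *
        example_id_to_memslot(struct example_memslots *slots, int id)
        {
                return &slots->memslots[slots->id_to_index[id]];
        }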
* | | KVM: sort memslots by its size and use line search (Xiao Guangrong, 2011-12-27; 1 file, -22/+57)

    Sort memslots by size and use a linear search to find a slot, so that the larger memslots are checked first and the common case is found quickly.

    The idea is from Avi.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* | | KVM: introduce id_to_memslot function (Xiao Guangrong, 2011-12-27; 1 file, -4/+9)

    Introduce id_to_memslot() to get a memslot by its slot id.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* | | KVM: introduce kvm_for_each_memslot macro (Xiao Guangrong, 2011-12-27; 2 files, -16/+15)

    Introduce kvm_for_each_memslot() to walk all valid memslots.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
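    Roughly what such an iterator looks like (an illustrative re-creation, not the exact macro): walk the memslot array and stop at the first unused, zero-sized slot, since valid slots are kept packed at the front.

        #include <linux/kvm_host.h>

        #define example_for_each_memslot(memslot, slots)                        \
                for ((memslot) = &(slots)->memslots[0];                         \
                     (memslot) < (slots)->memslots + KVM_MEM_SLOTS_NUM &&       \
                     (memslot)->npages;                                         \
                     (memslot)++)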
* | | KVM: introduce update_memslots function (Xiao Guangrong, 2011-12-27; 1 file, -7/+15)

    Introduce update_memslots() to update a slot that will then be committed to kvm->memslots.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* | | KVM: introduce KVM_MEM_SLOTS_NUM macro (Xiao Guangrong, 2011-12-27; 1 file, -1/+1)

    Introduce the KVM_MEM_SLOTS_NUM macro to replace open-coded uses of KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
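    The macro is just the sum the message describes; a one-line sketch:

        #define KVM_MEM_SLOTS_NUM (KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS)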
* | | KVM: Count the number of dirty pages for dirty logging (Takuya Yoshikawa, 2011-12-27; 1 file, -1/+3)

    Needed for the next patch which uses this number to decide how to write protect a slot.

    Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* | | KVM: Use kmemdup rather than duplicating its implementation (Thomas Meyer, 2011-12-27; 1 file, -6/+5)

    Use kmemdup rather than duplicating its implementation.

    The semantic patch that makes this change is available in scripts/coccinelle/api/memdup.cocci. More information about semantic patching is available at http://coccinelle.lip6.fr/

    Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* | | KVM: make checks stricter in coalesced_mmio_in_range() (Dan Carpenter, 2011-12-27; 1 file, -3/+9)

    My testing version of Smatch complains that addr and len come from the user and they can wrap. The path is:

        -> kvm_vm_ioctl()
           -> kvm_vm_ioctl_unregister_coalesced_mmio()
              -> coalesced_mmio_in_range()

    I don't know what the implications are of wrapping here, but we may as well fix it, if only to silence the warning.

    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
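    A hedged sketch of a wrap-safe containment check of the kind described (illustrative parameter names, not the actual kvm_coalesced_mmio code):

        #include <linux/types.h>

        /* Before testing that [addr, addr + len) lies inside the zone, reject a
         * negative length and an addr + len that wraps around, otherwise the
         * range comparisons below can be fooled by user-controlled values. */
        static bool example_in_range(u64 zone_addr, u64 zone_size, u64 addr, int len)
        {
                if (len < 0)
                        return false;
                if (addr + len < addr)          /* wrapped */
                        return false;
                if (addr < zone_addr)
                        return false;
                if (addr + len > zone_addr + zone_size)
                        return false;
                return true;
        }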
* | KVM: Device assignment permission checks (Alex Williamson, 2011-12-25; 1 file, -0/+75)

    Only allow KVM device assignment to attach to devices which:

        - Are not bridges
        - Have BAR resources (assume others are special devices)
        - The user has permissions to use

    Assigning a bridge is a configuration error, it's not supported, and typically doesn't result in the behavior the user is expecting anyway. Devices without BAR resources are typically chipset components that also don't have host drivers. We don't want users to hold such devices captive or cause system problems by fencing them off into an iommu domain.

    We determine "permission to use" by testing whether the user has access to the PCI sysfs resource files. By default a normal user will not have access to these files, so it provides a good indication that an administration agent has granted the user access to the device.

    [Yang Bai: add missing #include]
    [avi: fix comment style]

    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
    Signed-off-by: Yang Bai <hamo.by@gmail.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* | KVM: Remove ability to assign a device without iommu support (Alex Williamson, 2011-12-25; 1 file, -9/+9)

    This option has no users and it exposes a security hole in that it allows devices to be assigned without IOMMU protection. Make KVM_DEV_ASSIGN_ENABLE_IOMMU a mandatory option.

    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* kvm: iommu.c file requires the full module.h present. (Paul Gortmaker, 2011-11-01; 1 file, -0/+1)

    This file has things like module_param_named() and MODULE_PARM_DESC() so it needs the full module.h header present. Without it, you'll get:

        CC      arch/x86/kvm/../../../virt/kvm/iommu.o
        virt/kvm/iommu.c:37: error: expected ‘)’ before ‘bool’
        virt/kvm/iommu.c:39: error: expected ‘)’ before string constant
        make[3]: *** [arch/x86/kvm/../../../virt/kvm/iommu.o] Error 1
        make[2]: *** [arch/x86/kvm] Error 2

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* kvm: fix implicit use of stat.h header file (Paul Gortmaker, 2011-11-01; 1 file, -0/+1)

    This was coming in via an implicit module.h (and its sub-includes) before, but we'll be cleaning that up shortly. Call out the stat.h include requirement in advance.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu (Linus Torvalds, 2011-10-30; 1 file, -2/+2)

    * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (33 commits)
        iommu/core: Remove global iommu_ops and register_iommu
        iommu/msm: Use bus_set_iommu instead of register_iommu
        iommu/omap: Use bus_set_iommu instead of register_iommu
        iommu/vt-d: Use bus_set_iommu instead of register_iommu
        iommu/amd: Use bus_set_iommu instead of register_iommu
        iommu/core: Use bus->iommu_ops in the iommu-api
        iommu/core: Convert iommu_found to iommu_present
        iommu/core: Add bus_type parameter to iommu_domain_alloc
        Driver core: Add iommu_ops to bus_type
        iommu/core: Define iommu_ops and register_iommu only with CONFIG_IOMMU_API
        iommu/amd: Fix wrong shift direction
        iommu/omap: always provide iommu debug code
        iommu/core: let drivers know if an iommu fault handler isn't installed
        iommu/core: export iommu_set_fault_handler()
        iommu/omap: Fix build error with !IOMMU_SUPPORT
        iommu/omap: Migrate to the generic fault report mechanism
        iommu/core: Add fault reporting mechanism
        iommu/core: Use PAGE_SIZE instead of hard-coded value
        iommu/core: use the existing IS_ALIGNED macro
        iommu/msm: ->unmap() should return order of unmapped page
        ...

    Fixup trivial conflicts in drivers/iommu/Makefile: "move omap iommu to dedicated iommu folder" vs "Rename the DMAR and INTR_REMAP config options" just happened to touch lines next to each other.
| * iommu/core: Convert iommu_found to iommu_present (Joerg Roedel, 2011-10-21; 1 file, -1/+1)

    With per-bus iommu_ops the iommu_found function needs to work on a bus_type too. This patch adds a bus_type parameter to that function and converts all call sites. The function is also renamed to iommu_present because it now checks whether an iommu is present for a given bus and no longer checks for a global iommu.

    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * iommu/core: Add bus_type parameter to iommu_domain_alloc (Joerg Roedel, 2011-10-21; 1 file, -1/+1)

    This is necessary to store a pointer to the bus-specific iommu_ops in the iommu-domain structure. It will be used later to call into bus-specific iommu-ops.

    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* | Merge branch 'kvm-updates/3.2' of git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm (Linus Torvalds, 2011-10-30; 6 files, -118/+200)

    * 'kvm-updates/3.2' of git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm: (75 commits)
        KVM: SVM: Keep intercepting task switching with NPT enabled
        KVM: s390: implement sigp external call
        KVM: s390: fix register setting
        KVM: s390: fix return value of kvm_arch_init_vm
        KVM: s390: check cpu_id prior to using it
        KVM: emulate lapic tsc deadline timer for guest
        x86: TSC deadline definitions
        KVM: Fix simultaneous NMIs
        KVM: x86 emulator: convert push %sreg/pop %sreg to direct decode
        KVM: x86 emulator: switch lds/les/lss/lfs/lgs to direct decode
        KVM: x86 emulator: streamline decode of segment registers
        KVM: x86 emulator: simplify OpMem64 decode
        KVM: x86 emulator: switch src decode to decode_operand()
        KVM: x86 emulator: qualify OpReg inhibit_byte_regs hack
        KVM: x86 emulator: switch OpImmUByte decode to decode_imm()
        KVM: x86 emulator: free up some flag bits near src, dst
        KVM: x86 emulator: switch src2 to generic decode_operand()
        KVM: x86 emulator: expand decode flags to 64 bits
        KVM: x86 emulator: split dst decode to a generic decode_operand()
        KVM: x86 emulator: move memop, memopp into emulation context
        ...
| * | KVM: Split up MSI-X assigned device IRQ handler (Jan Kiszka, 2011-09-25; 1 file, -13/+19)

    The threaded IRQ handler for MSI-X has almost nothing in common with the INTx/MSI handler. Move its code into a dedicated handler.

    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: Avoid needless registrations of IRQ ack notifier for assigned devices (Jan Kiszka, 2011-09-25; 1 file, -10/+8)

    We only perform work in kvm_assigned_dev_ack_irq if the guest IRQ is of INTx type. This completely avoids the callback invocation in non-INTx cases by registering the IRQ ack notifier only for INTx.

    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: Clean up unneeded void pointer casts (Jan Kiszka, 2011-09-25; 1 file, -6/+6)

    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: Intelligent device lookup on I/O bus (Sasha Levin, 2011-09-25; 4 files, -15/+106)

    Currently the method of dealing with an IO operation on a bus (PIO/MMIO) is to call the read or write callback for each device registered on the bus until we find a device which handles it.

    Since the number of devices on a bus can be significant due to ioeventfds and coalesced MMIO zones, this leads to a lot of overhead on each IO operation.

    Instead of registering devices, we now register ranges which point to a device. Lookup is done using an efficient bsearch instead of a linear search.

    Performance test was conducted by comparing exit count per second with 200 ioeventfds created on one byte and the guest is trying to access a different byte continuously (triggering usermode exits). Before the patch the guest achieved 259k exits per second, after the patch the guest does 274k exits per second.

    Cc: Avi Kivity <avi@redhat.com>
    Cc: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
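    A hedged sketch of the range-table idea (illustrative structure names, not the actual kvm_io_bus/kvm_io_range definitions): keep (addr, len, dev) entries sorted by address and binary-search them instead of probing every registered device.

        #include <linux/bsearch.h>
        #include <linux/types.h>

        struct example_io_range {
                u64  addr;
                int  len;
                void *dev;      /* device that handles this range */
        };

        /* Comparator for bsearch(): returns 0 when the key range overlaps the entry. */
        static int example_range_cmp(const void *key, const void *elt)
        {
                const struct example_io_range *k = key, *r = elt;

                if (k->addr + k->len <= r->addr)
                        return -1;
                if (k->addr >= r->addr + r->len)
                        return 1;
                return 0;
        }

        static void *example_lookup_dev(struct example_io_range *ranges, size_t nr,
                                        u64 addr, int len)
        {
                struct example_io_range key = { .addr = addr, .len = len };
                struct example_io_range *hit;

                hit = bsearch(&key, ranges, nr, sizeof(*ranges), example_range_cmp);
                return hit ? hit->dev : NULL;
        }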
| * | KVM: Make coalesced mmio use a device per zone (Sasha Levin, 2011-09-25; 2 files, -75/+50)

    This patch changes coalesced mmio to create one mmio device per zone instead of handling all zones in one device. Doing so enables us to take advantage of existing locking and prevents a race condition between coalesced mmio registration/unregistration and lookups.

    Suggested-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: MMIO: Lock coalesced device when checking for available entry (Sasha Levin, 2011-09-25; 1 file, -15/+27)

    Move the check whether there are available entries to within the spinlock. This allows working with a larger number of VCPUs and reduces premature exits when using a large number of VCPUs.

    Cc: Avi Kivity <avi@redhat.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Marcelo Tosatti <mtosatti@redhat.com>
    Cc: Pekka Enberg <penberg@kernel.org>
    Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* / pci: Add flag indicating device has been assigned by KVM (Greg Rose, 2011-09-23; 2 files, -0/+6)

    Device drivers that create and destroy SR-IOV virtual functions via calls to pci_enable_sriov() and pci_disable_sriov() can cause catastrophic failures if they attempt to destroy VFs while they are assigned to guest virtual machines. By adding a flag for use by the KVM module to indicate that a device is assigned, a device driver can check that flag and avoid destroying VFs while they are assigned, avoiding system failures.

    CC: Ian Campbell <ijc@hellion.org.uk>
    CC: Konrad Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
    Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* KVM: IOMMU: Disable device assignment without interrupt remapping (Alex Williamson, 2011-07-24; 1 file, -0/+18)

    IOMMU interrupt remapping support provides a further layer of isolation for device assignment by preventing arbitrary interrupt block DMA writes by a malicious guest from reaching the host. By default, we should require that the platform provides interrupt remapping support, with an opt-in mechanism for existing behavior.

    Both AMD IOMMU and Intel VT-d2 hardware support interrupt remapping, however we currently only have software support on the Intel side.

    Users wishing to re-enable device assignment when interrupt remapping is not supported on the platform can use the "allow_unsafe_assigned_interrupts=1" module option.

    [avi: break long lines]

    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: mmio page fault support (Xiao Guangrong, 2011-07-24; 1 file, -0/+7)

    The idea is from Avi:

        | We could cache the result of a miss in an spte by using a reserved bit, and
        | checking the page fault error code (or seeing if we get an ept violation or
        | ept misconfiguration), so if we get repeated mmio on a page, we don't need to
        | search the slot list/tree.
        | (https://lkml.org/lkml/2011/2/22/221)

    When the page fault is caused by mmio, we cache the info in the shadow page table, and also set the reserved bits in the shadow page table, so if the mmio is caused again, we can quickly identify it and emulate it directly.

    Searching an mmio gfn in memslots is heavy since we need to walk all memslots; this feature reduces that cost, and it also avoids walking the guest page table for soft mmu.

    [jan: fix operator precedence issue]

    Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: filter out the mmio pfn from the fault pfn (Xiao Guangrong, 2011-07-24; 1 file, -2/+14)

    If the page fault is caused by mmio, the gfn can not be found in memslots, and 'bad_pfn' is returned on the gfn_to_hva path, so we can use 'bad_pfn' to identify the mmio page fault. And, to clarify the meaning of mmio pfn, we return the fault page instead of the bad page when the gfn is not allowed to prefetch.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: introduce kvm_read_guest_cached (Gleb Natapov, 2011-07-12; 1 file, -0/+20)

    Introduce the kvm_read_guest_cached() function in addition to the write one we already have.

    [ by glauber: export function signature in kvm header ]

    Signed-off-by: Gleb Natapov <gleb@redhat.com>
    Signed-off-by: Glauber Costa <glommer@redhat.com>
    Acked-by: Rik van Riel <riel@redhat.com>
    Tested-by: Eric Munson <emunson@mgebm.net>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Fix off-by-one in overflow check of KVM_ASSIGN_SET_MSIX_NR (Jan Kiszka, 2011-07-12; 1 file, -1/+1)

    KVM_MAX_MSIX_PER_DEV implies that up to that many MSI-X entries can be requested. But so far the kernel rejected even the upper limit itself.

    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
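    The shape of such an off-by-one fix, sketched with made-up variable names (not the actual assigned-device ioctl code): a request for exactly KVM_MAX_MSIX_PER_DEV entries is legal, so only values above it should be rejected.

        #include <linux/kvm_host.h>
        #include <linux/errno.h>

        static int example_check_msix_nr(u32 entries_requested)
        {
                /* '>' rather than '>=': the limit itself is an allowed value. */
                if (entries_requested == 0 || entries_requested > KVM_MAX_MSIX_PER_DEV)
                        return -EINVAL;
                return 0;
        }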
* KVM: Add compat ioctl for KVM_SET_SIGNAL_MASK (Alexander Graf, 2011-07-12; 1 file, -1/+51)

    KVM has an ioctl to define which signal mask should be used while running inside VCPU_RUN. At least for big endian systems, this mask is different on 32-bit and 64-bit systems (though the size is identical).

    Add a compat wrapper that converts the mask to whatever the kernel accepts, allowing 32-bit kvm user space to set signal masks.

    This patch fixes qemu with --enable-io-thread on ppc64 hosts when running 32-bit user land.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Clean up error handling during VCPU creation (Jan Kiszka, 2011-07-12; 1 file, -5/+6)

    So far kvm_arch_vcpu_setup is responsible for freeing the vcpu struct if it fails. Move this confusing responsibility back into the hands of kvm_vm_ioctl_create_vcpu. Only kvm_arch_vcpu_setup of x86 is affected; all other archs cannot fail.

    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: use __copy_to_user/__clear_user to write guest page (Xiao Guangrong, 2011-07-12; 1 file, -2/+2)

    Simply use __copy_to_user/__clear_user to write a guest page since we have already verified the user address when the memslot is set.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
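    A minimal sketch of the pattern (hypothetical helper, not the actual kvm_write_guest_page()): once the user address range was validated with access_ok() at memslot-registration time, the unchecked __ variants can be used on the hot path.

        #include <linux/uaccess.h>
        #include <linux/errno.h>

        static int example_write_guest_page(void __user *dst, const void *src,
                                            unsigned long len)
        {
                /* copy_to_user() would re-check the range; __copy_to_user()
                 * skips that check because it was done when the slot was set. */
                if (__copy_to_user(dst, src, len))
                        return -EFAULT;
                return 0;
        }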
* Merge branch 'kvm-updates/3.0' of git://git.kernel.org/pub/scm/virt/kvm/kvm (Linus Torvalds, 2011-06-08; 1 file, -6/+9)

    * 'kvm-updates/3.0' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
        KVM: Initialize kvm before registering the mmu notifier
        KVM: x86: use proper port value when checking io instruction permission
        KVM: add missing void __user * cast to access_ok() call
| * KVM: Initialize kvm before registering the mmu notifier (Mike Waychison, 2011-06-06; 1 file, -5/+6)

    It doesn't make sense to ever see a half-initialized kvm structure on mmu notifier callbacks. Previously, 85722cda changed the ordering to ensure that the mmu_lock was initialized before mmu notifier registration, but there is still a race where the mmu notifier could come in and try accessing other portions of struct kvm before they are initialized.

    Solve this by moving the mmu notifier registration to occur after the structure is completely initialized.

    Google-Bug-Id: 452199
    Signed-off-by: Mike Waychison <mikew@google.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
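    A sketch of the ordering this fix and the related mmu_lock fix further down the log rely on (illustrative, not the exact kvm_create_vm() code; kvm_init_mmu_notifier() is the helper named in kvm_main.c): the notifier may fire as soon as it is registered, so everything it can touch must already be set up.

        #include <linux/kvm_host.h>
        #include <linux/spinlock.h>

        static int example_create_vm(struct kvm *kvm)
        {
                /* Initialise every field the notifier callbacks may use... */
                spin_lock_init(&kvm->mmu_lock);
                /* ... plus memslots, buses, counters, and so on ... */

                /* ... and only then allow MMU-notifier callbacks in. */
                return kvm_init_mmu_notifier(kvm);
        }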
| * KVM: add missing void __user * cast to access_ok() call (Heiko Carstens, 2011-05-26; 1 file, -1/+3)

    fa3d315a "KVM: Validate userspace_addr of memslot when registered" introduced this new warning on s390:

        kvm_main.c: In function '__kvm_set_memory_region':
        kvm_main.c:654:7: warning: passing argument 1 of '__access_ok' makes pointer from integer without a cast
        arch/s390/include/asm/uaccess.h:53:19: note: expected 'const void *' but argument is of type '__u64'

    Add the missing cast to get rid of it again...

    Cc: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
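    A sketch of what such a fix looks like (illustrative wrapper; uses the pre-5.0 three-argument access_ok() form that was current at the time): userspace_addr is a __u64, so it has to be cast to a user pointer before the check, otherwise s390's __access_ok(), which expects a pointer, warns.

        #include <linux/uaccess.h>
        #include <linux/types.h>
        #include <linux/errno.h>

        static int example_check_userspace_addr(__u64 userspace_addr, __u64 size)
        {
                if (!access_ok(VERIFY_WRITE,
                               (void __user *)(unsigned long)userspace_addr,
                               size))
                        return -EINVAL;
                return 0;
        }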
* | Merge branch 'linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6 (Linus Torvalds, 2011-05-24; 1 file, -4/+14)

    * 'linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6: (27 commits)
        PCI: Don't use dmi_name_in_vendors in quirk
        PCI: remove unused AER functions
        PCI/sysfs: move bus cpuaffinity to class dev_attrs
        PCI: add rescan to /sys/.../pci_bus/.../
        PCI: update bridge resources to get more big ranges when allocating space (again)
        KVM: Use pci_store/load_saved_state() around VM device usage
        PCI: Add interfaces to store and load the device saved state
        PCI: Track the size of each saved capability data area
        PCI/e1000e: Add and use pci_disable_link_state_locked()
        x86/PCI: derive pcibios_last_bus from ACPI MCFG
        PCI: add latency tolerance reporting enable/disable support
        PCI: add OBFF enable/disable support
        PCI: add ID-based ordering enable/disable support
        PCI hotplug: acpiphp: assume device is in state D0 after powering on a slot.
        PCI: Set PCIE maxpayload for card during hotplug insertion
        PCI/ACPI: Report _OSC control mask returned on failure to get control
        x86/PCI: irq and pci_ids patch for Intel Panther Point DeviceIDs
        PCI: handle positive error codes
        PCI: check pci_vpd_pci22_wait() return
        PCI: Use ICH6_GPIO_EN in ich6_lpc_acpi_gpio
        ...

    Fix up trivial conflicts in include/linux/pci_ids.h: commit a6e5e2be4461 moved the Intel SMBus ID definitions to the i2c-i801.c driver.
| * KVM: Use pci_store/load_saved_state() around VM device usage (Alex Williamson, 2011-05-21; 1 file, -4/+14)

    Store the device saved state so that we can reload the device back to the original state when it's unassigned. This has the benefit that the state survives across pci_reset_function() calls via the PCI sysfs reset interface while the VM is using the device.

    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
    Acked-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
* | KVM: Fix kvm mmu_notifier initialization order (OGAWA Hirofumi, 2011-05-22; 1 file, -1/+1)

    As the trace below shows, the mmu_notifier can be called immediately after it is registered, so kvm has to initialize kvm->mmu_lock before registering it.

        BUG: spinlock bad magic on CPU#0, kswapd0/342
         lock: ffff8800af8c4000, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
        Pid: 342, comm: kswapd0 Not tainted 2.6.39-rc5+ #1
        Call Trace:
         [<ffffffff8118ce61>] spin_bug+0x9c/0xa3
         [<ffffffff8118ce91>] do_raw_spin_lock+0x29/0x13c
         [<ffffffff81024923>] ? flush_tlb_others_ipi+0xaf/0xfd
         [<ffffffff812e22f3>] _raw_spin_lock+0x9/0xb
         [<ffffffffa0582325>] kvm_mmu_notifier_clear_flush_young+0x2c/0x66 [kvm]
         [<ffffffff810d3ff3>] __mmu_notifier_clear_flush_young+0x2b/0x57
         [<ffffffff810c8761>] page_referenced_one+0x88/0xea
         [<ffffffff810c89bf>] page_referenced+0x1fc/0x256
         [<ffffffff810b2771>] shrink_page_list+0x187/0x53a
         [<ffffffff810b2ed7>] shrink_inactive_list+0x1e0/0x33d
         [<ffffffff810acf95>] ? determine_dirtyable_memory+0x15/0x27
         [<ffffffff812e90ee>] ? call_function_single_interrupt+0xe/0x20
         [<ffffffff810b3356>] shrink_zone+0x322/0x3de
         [<ffffffff810a9587>] ? zone_watermark_ok_safe+0xe2/0xf1
         [<ffffffff810b3928>] kswapd+0x516/0x818
         [<ffffffff810b3412>] ? shrink_zone+0x3de/0x3de
         [<ffffffff81053d17>] kthread+0x7d/0x85
         [<ffffffff812e9394>] kernel_thread_helper+0x4/0x10
         [<ffffffff81053c9a>] ? __init_kthread_worker+0x37/0x37
         [<ffffffff812e9390>] ? gs_change+0xb/0xb

    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* | KVM: Validate userspace_addr of memslot when registered (Takuya Yoshikawa, 2011-05-22; 1 file, -2/+5)

    This way, we can avoid checking the user space address many times when we read the guest memory. Although we could do the same for writes if we checked which slots are writable, we do not care about writes for now: reading the guest memory happens more often than writing.

    [avi: change VERIFY_READ to VERIFY_WRITE]

    Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* | KVM: ioapic: Fix an error field reference (Liu Yuan, 2011-05-22; 1 file, -1/+1)

    The ioapic_debug() call in ioapic_deliver() refers to one field by the wrong name. This patch corrects it.

    Signed-off-by: Liu Yuan <tailai.ly@taobao.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>