Date  Commit message  (Author, files changed, lines -/+)
2019-05-27  arm64: trim includes in dma-mapping.c  (Christoph Hellwig, 1 file, -10/+0)
With most of the previous functionality now elsewhere, a lot of the headers included in this file are no longer needed. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  arm64: switch copyright boilerplate to SPDX in dma-mapping.c  (Christoph Hellwig, 1 file, -14/+1)
Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Switch copyright boilerplate to SPDX  (Christoph Hellwig, 2 files, -24/+2)
Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Don't depend on CONFIG_DMA_DIRECT_REMAP  (Christoph Hellwig, 2 files, -8/+9)
For entirely dma coherent architectures there is no requirement to ever remap dma coherent allocations. Move all the remap and pool code under IS_ENABLED() checks and drop the Kconfig dependency. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
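For illustration, a minimal sketch of the pattern (the allocator is simplified and the non-remap helper name is a placeholder, not the code actually merged):

    static void *iommu_dma_alloc(struct device *dev, size_t size,
                                 dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
    {
        /*
         * Sketch only: on fully coherent architectures the branch below is
         * compiled away, so dma-iommu no longer needs to depend on
         * CONFIG_DMA_DIRECT_REMAP in Kconfig.
         */
        if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))
            return iommu_dma_alloc_remap(dev, size, handle, gfp, attrs);

        /* placeholder for the plain (non-remapped) allocation path */
        return iommu_dma_alloc_pages_sketch(dev, size, handle, gfp, attrs);
    }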
2019-05-27  iommu/dma: Refactor iommu_dma_mmap  (Christoph Hellwig, 1 file, -35/+11)
Inline __iommu_dma_mmap_pfn into the main function, and use the fact that __iommu_dma_get_pages returns NULL for remapped contiguous allocations to simplify the code flow a bit. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Refactor iommu_dma_get_sgtable  (Christoph Hellwig, 1 file, -28/+17)
Inline __iommu_dma_get_sgtable_page into the main function, and use the fact that __iommu_dma_get_pages returns NULL for remapped contiguous allocations to simplify the code flow a bit. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Refactor iommu_dma_alloc, part 2  (Christoph Hellwig, 1 file, -30/+35)
All the logic in iommu_dma_alloc that deals with page allocation from the CMA or page allocators can be split into a self-contained helper, and we can then map the result of that or the atomic pool allocation with the iommu later. This also allows reusing __iommu_dma_free to tear down the allocations and MMU mappings when the IOMMU mapping fails. Based on a patch from Robin Murphy. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Cleanup variable naming in iommu_dma_alloc  (Robin Murphy, 1 file, -23/+22)
Most importantly, clear up the size / iosize confusion. Also rename addr to cpu_addr to match the surrounding code and make the intention a little more clear. Signed-off-by: Robin Murphy <robin.murphy@arm.com> [hch: split from a larger patch] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Split iommu_dma_free  (Robin Murphy, 1 file, -4/+8)
Most of it can double up to serve the failure cleanup path for iommu_dma_alloc(). Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Merge the CMA and alloc_pages allocation paths  (Christoph Hellwig, 1 file, -20/+12)
Instead of having separate code paths for the non-blocking alloc_pages and CMA allocations, merge them into one. There is a slight behavior change here in that we try the page allocator if CMA fails. This matches what dma-direct and other iommu drivers do and will be needed to use the dma-iommu code on architectures without DMA remapping later on. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
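A rough sketch of the merged path (the helper name is illustrative, not the exact function added by the patch): try CMA first for blocking allocations, and fall back to the normal page allocator if that fails.

    static struct page *iommu_dma_alloc_page_sketch(struct device *dev,
                                                    size_t size, gfp_t gfp)
    {
        size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;
        struct page *page = NULL;

        /* CMA can only be used from blocking context */
        if (gfpflags_allow_blocking(gfp))
            page = dma_alloc_from_contiguous(dev, count, get_order(size),
                                             gfp & __GFP_NOWARN);
        /* the behavior change noted above: fall back to the page allocator */
        if (!page)
            page = alloc_pages(gfp, get_order(size));
        return page;
    }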
2019-05-27  iommu/dma: Don't remap CMA unnecessarily  (Robin Murphy, 1 file, -7/+12)
Always remapping CMA allocations was largely a bodge to keep the freeing logic manageable when it was split between here and an arch wrapper. Now that it's all together and streamlined, we can relax that limitation. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Refactor iommu_dma_alloc  (Robin Murphy, 1 file, -30/+30)
Shuffle around the self-contained atomic and non-contiguous cases to return early and get out of the way of the CMA case that we're about to work on next. Signed-off-by: Robin Murphy <robin.murphy@arm.com> [hch: slight changes to the code flow] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Refactor iommu_dma_free  (Robin Murphy, 1 file, -40/+33)
The freeing logic was made particularly horrible by part of it being opaque to the arch wrapper, which led to a lot of convoluted repetition to ensure each path did everything in the right order. Now that it's all private, we can pick apart and consolidate the logically-distinct steps of freeing the IOMMU mapping, the underlying pages, and the CPU remap (if necessary) into something much more manageable. Signed-off-by: Robin Murphy <robin.murphy@arm.com> [various cosmetic changes to the code flow] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Remove __iommu_dma_free  (Christoph Hellwig, 1 file, -19/+2)
We only have a single caller of this function left, so open code it there. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Refactor the page array remapping allocator  (Christoph Hellwig, 1 file, -28/+26)
Move the call to dma_common_pages_remap into __iommu_dma_alloc and rename it to iommu_dma_alloc_remap. This creates a self-contained helper for remapped pages allocation and mapping. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Factor out remapped pages lookup  (Robin Murphy, 1 file, -12/+20)
Since we duplicate the find_vm_area() logic a few times in places where we only care about the pages, factor out a helper to abstract it. Signed-off-by: Robin Murphy <robin.murphy@arm.com> [hch: don't warn when not finding a region, as we'll rely on that later] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
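The helper reads roughly like this sketch (based on the description above; the warning is deliberately omitted so callers can treat a NULL return as "not a remapped page-array buffer"):

    static struct page **__iommu_dma_get_pages(void *cpu_addr)
    {
        struct vm_struct *area = find_vm_area(cpu_addr);

        /* no vm area, or no page array attached: not a remapped page-array buffer */
        if (!area || !area->pages)
            return NULL;
        return area->pages;
    }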
2019-05-27  iommu/dma: Squash __iommu_dma_{map,unmap}_page helpers  (Robin Murphy, 1 file, -18/+7)
The remaining internal callsites don't care about having prototypes compatible with the relevant dma_map_ops callbacks, so the extra level of indirection just wastes space and complicates things. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Move domain lookup into __iommu_dma_{map,unmap}  (Robin Murphy, 1 file, -15/+14)
Most of the callers don't care, and the couple that do (and already have the domain to hand for other reasons) are in slow paths where the (trivial) overhead of a repeated lookup will be utterly immaterial. Signed-off-by: Robin Murphy <robin.murphy@arm.com> [hch: dropped the hunk touching iommu_dma_get_msi_page to avoid a conflict with another series] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
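As a sketch of the resulting shape (body elided, placeholder return value), the helper now derives the domain from the device instead of taking it as a parameter:

    static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
                                      size_t size, int prot)
    {
        struct iommu_domain *domain = iommu_get_dma_domain(dev);

        /* ... allocate an IOVA and iommu_map() phys into 'domain' as before ... */
        return DMA_MAPPING_ERROR;   /* placeholder body */
    }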
2019-05-27  iommu/dma: Move __iommu_dma_map  (Christoph Hellwig, 1 file, -23/+23)
Moving this function up to its unmap counterpart helps to keep related code together for the following changes. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: move the arm64 wrappers to common code  (Christoph Hellwig, 4 files, -457/+378)
There is nothing really arm64 specific in the iommu_dma_ops implementation, so move it to dma-iommu.c and keep a lot of symbols self-contained. Note the implementation does depend on the DMA_DIRECT_REMAP infrastructure for now, so we'll have to make the DMA_IOMMU support depend on it, but this will be relaxed soon. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Use for_each_sg in iommu_dma_alloc  (Christoph Hellwig, 1 file, -9/+5)
arch_dma_prep_coherent can handle physically contiguous ranges larger than PAGE_SIZE just fine, which means we don't need a page-based iterator. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
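The simplified flush loop looks roughly like this (the wrapper function is purely illustrative):

    static void iommu_dma_flush_sgt_sketch(struct sg_table *sgt)
    {
        struct scatterlist *sg;
        int i;

        /* each entry may cover more than one page; one call per entry is enough */
        for_each_sg(sgt->sgl, sg, sgt->orig_nents, i)
            arch_dma_prep_coherent(sg_page(sg), sg->length);
    }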
2019-05-27  iommu/dma: Remove the flush_page callback  (Christoph Hellwig, 3 files, -14/+5)
We now have an arch_dma_prep_coherent architecture hook that is used for the generic DMA remap allocator, and we should use the same interface for the dma-iommu code. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  iommu/dma: Cleanup dma-iommu.h  (Christoph Hellwig, 1 file, -4/+2)
No need for a __KERNEL__ guard outside uapi and add a missing comment describing the #else cpp statement. Last but not least include <linux/errno.h> instead of the asm version, which is frowned upon. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27  Linux 5.2-rc2  (tag: v5.2-rc2; Linus Torvalds, 1 file, -2/+2)
2019-05-26  random: fix soft lockup when trying to read from an uninitialized blocking pool  (Theodore Ts'o, 1 file, -3/+13)
Fixes: eb9d1bf079bb ("random: only read from /dev/random after its pool has received 128 bits") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2019-05-26  tracing: Silence GCC 9 array bounds warning  (Miguel Ojeda, 3 files, -10/+20)
Starting with GCC 9, -Warray-bounds detects cases when memset is called starting on a member of a struct but the size to be cleared ends up writing over further members. Such a call happens in the trace code to clear, at once, all members after and including `seq` on struct trace_iterator: In function 'memset', inlined from 'ftrace_dump' at kernel/trace/trace.c:8914:3: ./include/linux/string.h:344:9: warning: '__builtin_memset' offset [8505, 8560] from the object at 'iter' is out of the bounds of referenced subobject 'seq' with type 'struct trace_seq' at offset 4368 [-Warray-bounds] 344 | return __builtin_memset(p, c, size); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~ In order to avoid GCC complaining about it, we compute the address ourselves by adding the offsetof distance instead of referring directly to the member. Since there are two places doing this clear (trace.c and trace_kdb.c), take the chance to move the workaround into a single place in the internal header. Link: http://lkml.kernel.org/r/20190523124535.GA12931@gmail.com Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [ Removed unnecessary parenthesis around "iter" ] Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
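The workaround has roughly this shape (a sketch of the idea, not a verbatim copy of the helper added to the tracing header):

    static inline void trace_iterator_reset_sketch(struct trace_iterator *iter)
    {
        const size_t offset = offsetof(struct trace_iterator, seq);

        /* clear everything from 'seq' to the end of the iterator without
         * naming 'seq' as the memset destination, which is what trips
         * GCC 9's -Warray-bounds */
        memset((char *)iter + offset, 0, sizeof(*iter) - offset);
    }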
2019-05-25  ext4: fix dcache lookup of !casefolded directories  (Gabriel Krisman Bertazi, 1 file, -1/+1)
Found by visual inspection; this wasn't caught by my xfstest, since its effect is that positive dentries in the cache are ignored and the fallback just goes to the disk. It was introduced in the last iteration of the case-insensitive patch. d_compare should return 0 when the entries match, so make sure we are correctly comparing the entire string if the encoding feature is set and we are on a case-INsensitive directory. Fixes: b886ee3e778e ("ext4: Support case-insensitive file name lookups") Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
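For reference, the d_compare convention the fix restores, shown as a generic sketch (not the ext4 diff itself):

    static int example_d_compare(const struct dentry *dentry, unsigned int len,
                                 const char *str, const struct qstr *name)
    {
        if (len != name->len)
            return 1;                            /* different lengths never match */
        return memcmp(str, name->name, len);     /* 0 means "match", as required */
    }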
2019-05-24  locking/lock_events: Use this_cpu_add() when necessary  (Waiman Long, 1 file, -2/+40)
The kernel test robot has reported that the use of __this_cpu_add() causes bug messages like: BUG: using __this_cpu_add() in preemptible [00000000] code: ... Given the imprecise nature of the count and the possibility of resetting the count and doing the measurement again, it is not really a big problem to use the unprotected __this_cpu_*() functions. To make the preemption checking code happy, the this_cpu_*() functions will be used if CONFIG_DEBUG_PREEMPT is defined. The imprecise nature of the locking counts is also documented, with the suggestion that we should run the measurement a few times with the counts reset in between to get a better picture of what is going on under the hood. Fixes: a8654596f0371 ("locking/rwsem: Enable lock event counting") Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
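A sketch of the resulting pattern (the macro name is illustrative):

    #ifdef CONFIG_DEBUG_PREEMPT
    /* keep the preemption checker quiet when it is compiled in */
    #define lockevent_percpu_inc(x)     this_cpu_inc(x)
    #else
    /* the cheap, unprotected increment is fine for these rough counts */
    #define lockevent_percpu_inc(x)     __this_cpu_inc(x)
    #endif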
2019-05-24  KVM: x86: fix return value for reserved EFER  (Paolo Bonzini, 1 file, -1/+1)
Commit 11988499e62b ("KVM: x86: Skip EFER vs. guest CPUID checks for host-initiated writes", 2019-04-02) introduced a "return false" in a function returning int, and anyway set_efer has a "nonzero on error" convention, so it should be returning 1. Reported-by: Pavel Machek <pavel@denx.de> Fixes: 11988499e62b ("KVM: x86: Skip EFER vs. guest CPUID checks for host-initiated writes") Cc: Sean Christopherson <sean.j.christopherson@intel.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  tools/kvm_stat: fix fields filter for child events  (Stefan Raspl, 2 files, -4/+14)
The fields filter would not work with child fields, as the respective parents would not be included. No parents displayed == no children displayed. To reproduce, run on s390 (would work on other platforms, too, but would require a different filter name): - Run 'kvm_stat -d' - Press 'f' - Enter 'instruct' Notice that events like instruction_diag_44 or instruction_diag_500 are not displayed - the output remains empty. With this patch, we will filter by matching events and their parents. However, consider the following example where we filter by instruction_diag_44: kvm statistics - summary regex filter: instruction_diag_44 Event Total %Total CurAvg/s exit_instruction 276 100.0 12 instruction_diag_44 256 92.8 11 Total 276 12 Note that the parent ('exit_instruction') displays the total events, but the children listed do not match its total (256 instead of 276). This is intended (since we're filtering all but one child), but might be confusing at first sight. Signed-off-by: Stefan Raspl <raspl@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  KVM: selftests: Wrap vcpu_nested_state_get/set functions with x86 guard  (Thomas Huth, 2 files, -0/+4)
struct kvm_nested_state is only available on x86 so far. To be able to compile the code on other architectures as well, we need to wrap the related code with #ifdefs. Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  kvm: selftests: aarch64: compile with warnings on  (Andrew Jones, 1 file, -4/+5)
aarch64 fixups needed to compile with warnings as errors. Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  kvm: selftests: aarch64: fix default vm mode  (Andrew Jones, 1 file, -1/+1)
VM_MODE_P52V48_4K is not a valid mode for AArch64. Replace its use in vm_create_default() with a mode that works and represents a good AArch64 default. (We didn't ever see a problem with this because we don't have any unit tests using vm_create_default(), but it's good to get it fixed in advance.) Reported-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  kvm: selftests: aarch64: dirty_log_test: fix unaligned memslot size  (Andrew Jones, 1 file, -1/+1)
The memory slot size must be aligned to the host's page size. When testing a guest with a 4k page size on a host with a 64k page size, 3 guest pages are not host page size aligned. Since we just need a nearly arbitrary number of extra pages to ensure the memslot is not aligned to a 64 host-page boundary for this test, we can use 16, as that's 64k aligned, but not 64 * 64k aligned. Fixes: 76d58e0f07ec ("KVM: fix KVM_CLEAR_DIRTY_LOG for memory slots of unaligned size", 2019-04-17) Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  KVM: s390: fix memory slot handling for KVM_SET_USER_MEMORY_REGION  (Christian Borntraeger, 1 file, -14/+21)
kselftests exposed a problem in the s390 handling for memory slots. Right now we only do proper memory slot handling for creation of new memory slots. Neither MOVE nor DELETION is handled properly. Let us implement those. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  KVM: x86/pmu: do not mask the value that is written to fixed PMUs  (Paolo Bonzini, 1 file, -5/+8)
According to the SDM, for MSR_IA32_PERFCTR0/1 "the lower-order 32 bits of each MSR may be written with any value, and the high-order 8 bits are sign-extended according to the value of bit 31", but the fixed counters in real hardware are limited to the width of the fixed counters ("bits beyond the width of the fixed-function counter are reserved and must be written as zeros"). Fix KVM to do the same. Reported-by: Nadav Amit <nadav.amit@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
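A sketch of the write-side masking described above (assuming the existing pmc_bitmask() helper, which reflects the counter's width; the wrapper is illustrative):

    static void pmc_write_counter_sketch(struct kvm_pmc *pmc, u64 data)
    {
        /* keep only the bits the (fixed) counter can actually hold */
        pmc->counter = data & pmc_bitmask(pmc);
    }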
2019-05-24  KVM: x86/pmu: mask the result of rdpmc according to the width of the counters  (Paolo Bonzini, 4 files, -13/+15)
This patch simplifies the changes in the next one by enforcing the masking of the counters for RDPMC and RDMSR. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  x86/kvm/pmu: Set AMD's virt PMU version to 1  (Borislav Petkov, 1 file, -1/+1)
After commit: 672ff6cff80c ("KVM: x86: Raise #GP when guest vCPU do not support PMU") my AMD guests started #GPing like this: general protection fault: 0000 [#1] PREEMPT SMP CPU: 1 PID: 4355 Comm: bash Not tainted 5.1.0-rc6+ #3 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014 RIP: 0010:x86_perf_event_update+0x3b/0xa0 with Code: pointing to RDPMC. It is RDPMC because the guest has the hardware watchdog CONFIG_HARDLOCKUP_DETECTOR_PERF enabled which uses perf. Instrumenting kvm_pmu_rdpmc() some, showed that it fails due to: if (!pmu->version) return 1; which the above commit added. Since AMD's PMU leaves the version at 0, that causes the #GP injection into the guest. Set pmu->version arbitrarily to 1 and move it above the non-applicable struct kvm_pmu members. Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com> Cc: kvm@vger.kernel.org Cc: Liran Alon <liran.alon@oracle.com> Cc: Mihai Carabas <mihai.carabas@oracle.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: x86@kernel.org Cc: stable@vger.kernel.org Fixes: 672ff6cff80c ("KVM: x86: Raise #GP when guest vCPU do not support PMU") Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  KVM: x86: do not spam dmesg with VMCS/VMCB dumps  (Paolo Bonzini, 2 files, -8/+27)
Userspace can easily set up invalid processor state in such a way that dmesg will be filled with VMCS or VMCB dumps. Disable this by default using a module parameter. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  kvm: Check irqchip mode before assigning irqfd  (Peter Xu, 3 files, -0/+17)
When assigning a kvm irqfd we didn't check the irqchip mode, but we allow KVM_IRQFD to succeed with all the irqchip modes. However it does not make much sense to create an irqfd even without the kernel chips. Let's provide an arch-dependent helper to check whether a specific irqfd is allowed by the arch. At least for x86, it should make sense to check: - when irqchip mode is NONE, all irqfds should be disallowed, and, - when irqchip mode is SPLIT, irqfds with a resamplefd should be disallowed. In either case, previously we would silently ignore the irq or the irq ack event if the irqchip mode is incorrect. However that can cause mysterious guest behaviors and it can be hard to triage. Let's fail KVM_IRQFD even earlier to detect these incorrect configurations. CC: Paolo Bonzini <pbonzini@redhat.com> CC: Radim Krčmář <rkrcmar@redhat.com> CC: Alex Williamson <alex.williamson@redhat.com> CC: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
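For x86 the helper described above boils down to roughly this (a sketch; the in-tree function may differ in detail):

    bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)
    {
        bool resample = args->flags & KVM_IRQFD_FLAG_RESAMPLE;

        /* NONE: no irqfds at all; SPLIT: no resampling irqfds */
        return irqchip_in_kernel(kvm) && !(irqchip_split(kvm) && resample);
    }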
2019-05-24  kvm: svm/avic: fix off-by-one in checking host APIC ID  (Suravee Suthikulpanit, 1 file, -1/+5)
Current logic does not allow VCPU to be loaded onto CPU with APIC ID 255. This should be allowed since the host physical APIC ID field in the AVIC Physical APIC table entry is an 8-bit value, and APIC ID 255 is valid in system with x2APIC enabled. Instead, do not allow VCPU load if the host APIC ID cannot be represented by an 8-bit value. Also, use the more appropriate AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK instead of AVIC_MAX_PHYSICAL_ID_COUNT. Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
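A sketch of the corrected check (the wrapper is illustrative; the mask is the one named in the message):

    static void avic_check_host_apicid_sketch(int cpu)
    {
        u64 h_physical_id = kvm_cpu_get_apicid(cpu);

        /* the AVIC physical APIC ID table entry has an 8-bit host APIC ID
         * field, so anything that does not fit in the mask cannot be
         * represented, regardless of the table size */
        if (WARN_ON(h_physical_id & ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
            return;
    }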
2019-05-24  KVM: selftests: do not blindly clobber registers in guest asm  (Paolo Bonzini, 1 file, -24/+30)
The guest_code of sync_regs_test is assuming that the compiler will not touch %r11 outside the asm that increments it, which is a bit brittle. Instead, we can increment a variable and use a dummy asm to ensure the increment is not optimized away. However, we also need to use a callee-save register or the compiler will insert a save/restore around the vmexit, breaking the whole idea behind the test. (Yes, "if it ain't broken...", but I would like the test to be clean before it is copied into the upcoming s390 selftests). Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
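The guest code ends up looking roughly like this (a sketch along the lines described above, using the selftests' GUEST_SYNC() helper; details may differ from the actual test):

    static void guest_code(void)
    {
        /* a callee-saved register, so the compiler does not insert a
         * save/restore around the vmexit triggered by GUEST_SYNC() */
        register uint32_t stage asm("rbx");

        for (;;) {
            GUEST_SYNC(0);
            stage++;
            /* dummy asm so the increment cannot be optimized away */
            asm volatile("" : : "r" (stage));
        }
    }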
2019-05-24  KVM: selftests: Remove duplicated TEST_ASSERT in hyperv_cpuid.c  (Thomas Huth, 1 file, -3/+0)
The check for entry->index == 0 is done twice. One time should be sufficient. Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  KVM: LAPIC: Expose per-vCPU timer_advance_ns to userspace  (Wanpeng Li, 1 file, -0/+18)
Expose per-vCPU timer_advance_ns to userspace, so it is able to query the auto-adjusted value. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Sean Christopherson <sean.j.christopherson@intel.com> Cc: Liran Alon <liran.alon@oracle.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  KVM: LAPIC: Fix lapic_timer_advance_ns parameter overflow  (Wanpeng Li, 1 file, -1/+1)
After commit c3941d9e0 ("KVM: lapic: Allow user to disable adaptive tuning of timer advancement"), '-1' enables adaptive tuning starting from a default advancement of 1000ns. However, we should expose an int instead of a uint module parameter that overflows. Before patch: /sys/module/kvm/parameters/lapic_timer_advance_ns:4294967295 After patch: /sys/module/kvm/parameters/lapic_timer_advance_ns:-1 Fixes: c3941d9e0 ("KVM: lapic: Allow user to disable adaptive tuning of timer advancement") Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Sean Christopherson <sean.j.christopherson@intel.com> Cc: Liran Alon <liran.alon@oracle.com> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
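The fix amounts to declaring the parameter as a signed int, roughly (a sketch; the surrounding declaration details may differ):

    /* before the fix the parameter type was uint, so -1 read back as 4294967295 */
    static int lapic_timer_advance_ns = -1;
    module_param(lapic_timer_advance_ns, int, S_IRUGO | S_IWUSR);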
2019-05-24  kvm: vmx: Fix -Wmissing-prototypes warnings  (Yi Wang, 1 file, -0/+1)
We get a warning when building the kernel with W=1: arch/x86/kvm/vmx/vmx.c:6365:6: warning: no previous prototype for ‘vmx_update_host_rsp’ [-Wmissing-prototypes] void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp) Add the missing declaration to fix this. Signed-off-by: Yi Wang <wang.yi59@zte.com.cn> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  KVM: nVMX: Fix using __this_cpu_read() in preemptible context  (Wanpeng Li, 1 file, -2/+2)
BUG: using __this_cpu_read() in preemptible [00000000] code: qemu-system-x86/4590 caller is nested_vmx_enter_non_root_mode+0xebd/0x1790 [kvm_intel] CPU: 4 PID: 4590 Comm: qemu-system-x86 Tainted: G OE 5.1.0-rc4+ #1 Call Trace: dump_stack+0x67/0x95 __this_cpu_preempt_check+0xd2/0xe0 nested_vmx_enter_non_root_mode+0xebd/0x1790 [kvm_intel] nested_vmx_run+0xda/0x2b0 [kvm_intel] handle_vmlaunch+0x13/0x20 [kvm_intel] vmx_handle_exit+0xbd/0x660 [kvm_intel] kvm_arch_vcpu_ioctl_run+0xa2c/0x1e50 [kvm] kvm_vcpu_ioctl+0x3ad/0x6d0 [kvm] do_vfs_ioctl+0xa5/0x6e0 ksys_ioctl+0x6d/0x80 __x64_sys_ioctl+0x1a/0x20 do_syscall_64+0x6f/0x6c0 entry_SYSCALL_64_after_hwframe+0x49/0xbe Accessing a per-cpu variable requires preemption to be disabled; this patch extends the preemption-disabled region to cover the __this_cpu_read(). Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Fixes: 52017608da33 ("KVM: nVMX: add option to perform early consistency checks via H/W") Cc: stable@vger.kernel.org Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
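The fix follows the usual pattern for per-cpu accesses, sketched here with an illustrative per-cpu variable (not the actual nVMX code):

    static DEFINE_PER_CPU(u64, example_percpu_var);   /* illustrative */

    static u64 read_percpu_value_sketch(void)
    {
        u64 val;

        preempt_disable();
        /* __this_cpu_read() must not be used from preemptible context */
        val = __this_cpu_read(example_percpu_var);
        preempt_enable();

        return val;
    }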
2019-05-24  kvm: fix compilation on s390  (Paolo Bonzini, 1 file, -0/+2)
s390 does not have memremap, even though in this particular case it would be useful. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  kvm: x86: Include CPUID leaf 0x8000001e in kvm's supported CPUID  (Jim Mattson, 1 file, -0/+1)
KVM now supports extended CPUID functions through 0x8000001f. CPUID leaf 0x8000001e is AMD's Processor Topology Information leaf. This contains similar information to CPUID leaf 0xb (Intel's Extended Topology Enumeration leaf), and should be included in the output of KVM_GET_SUPPORTED_CPUID, even though userspace is likely to override some of this information based upon the configuration of the particular VM. Cc: Brijesh Singh <brijesh.singh@amd.com> Cc: Borislav Petkov <bp@suse.de> Fixes: 8765d75329a38 ("KVM: X86: Extend CPUID range to include new leaf") Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Marc Orr <marcorr@google.com> Reviewed-by: Borislav Petkov <bp@suse.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-05-24  kvm: x86: Include multiple indices with CPUID leaf 0x8000001d  (Jim Mattson, 1 file, -4/+3)
Per the APM, "CPUID Fn8000_001D_E[D,C,B,A]X reports cache topology information for the cache enumerated by the value passed to the instruction in ECX, referred to as Cache n in the following description. To gather information for all cache levels, software must repeatedly execute CPUID with 8000_001Dh in EAX and ECX set to increasing values beginning with 0 until a value of 00h is returned in the field CacheType (EAX[4:0]) indicating no more cache descriptions are available for this processor." The termination condition is the same as leaf 4, so we can reuse that code block for leaf 0x8000001d. Fixes: 8765d75329a38 ("KVM: X86: Extend CPUID range to include new leaf") Cc: Brijesh Singh <brijesh.singh@amd.com> Cc: Borislav Petkov <bp@suse.de> Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Marc Orr <marcorr@google.com> Reviewed-by: Borislav Petkov <bp@suse.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
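The enumeration loop the APM describes looks roughly like this (an illustrative host-side sketch mirroring the leaf 4 termination condition, not the KVM code itself):

    static void enumerate_amd_cache_topology_sketch(void)
    {
        unsigned int eax, ebx, ecx, edx, i;

        for (i = 0; ; i++) {
            cpuid_count(0x8000001d, i, &eax, &ebx, &ecx, &edx);
            if ((eax & 0x1f) == 0)     /* CacheType (EAX[4:0]) == 00h: no more caches */
                break;
            /* ... record/report this cache level ... */
        }
    }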