Date | Commit message | Author | Files | Lines
2019-09-24 | KVM: x86: Exit to userspace on emulation skip failure | Sean Christopherson | 2 | -4/+9
Kill a few birds with one stone by forcing an exit to userspace on skip emulation failure. This removes a reference to EMULATE_FAIL, fixes a bug in handle_ept_misconfig() where it would exit to userspace without setting run->exit_reason, and fixes a theoretical bug in SVM's task_switch_interception() where it would overwrite run->exit_reason on a return of EMULATE_USER_EXIT. Note, this technically doesn't fully fix task_switch_interception() as it now incorrectly handles EMULATE_FAIL, but in practice there is no bug as EMULATE_FAIL will never be returned for EMULTYPE_SKIP. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: x86: Move #UD injection for failed emulation into emulation code | Sean Christopherson | 1 | -9/+5
Immediately inject a #UD and return EMULATE_DONE if emulation fails when handling an intercepted #UD. This helps pave the way for removing EMULATE_FAIL altogether. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: x86: Add explicit flag for forced emulation on #UD | Sean Christopherson | 2 | -2/+4
Add an explicit emulation type for forced #UD emulation and use it to detect that KVM should unconditionally inject a #UD instead of falling into its standard emulation failure handling. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: x86: Move #GP injection for VMware into x86_emulate_instruction() | Sean Christopherson | 4 | -23/+14
Immediately inject a #GP when VMware emulation fails and return EMULATE_DONE instead of propagating EMULATE_FAIL up the stack. This helps pave the way for removing EMULATE_FAIL altogether. Rename EMULTYPE_VMWARE to EMULTYPE_VMWARE_GP to document that the x86 emulator is called to handle VMware #GP interception, e.g. why a #GP is injected on emulation failure for EMULTYPE_VMWARE_GP. Drop EMULTYPE_NO_UD_ON_FAIL as a standalone type. The "no #UD on fail" is used only in the VMWare case and is obsoleted by having the emulator itself reinject #GP. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: x86: Don't attempt VMWare emulation on #GP with non-zero error code | Sean Christopherson | 2 | -2/+20
The VMware backdoor hooks #GP faults on IN{S}, OUT{S}, and RDPMC, none of which generate a non-zero error code for their #GP. Re-injecting #GP instead of attempting emulation on a non-zero error code will allow a future patch to move #GP injection (for emulation failure) into kvm_emulate_instruction() without having to plumb in the error code. Reviewed-and-tested-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
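A rough sketch of the check this describes, using the flag and helper names from the neighbouring entries (not the literal diff):
    /* The VMware backdoor insns (IN{S}, OUT{S}, RDPMC) always raise #GP
     * with error code 0, so a non-zero error code means this is not a
     * backdoor access: re-inject the #GP instead of emulating.
     */
    if (enable_vmware_backdoor && !error_code)
            return kvm_emulate_instruction(vcpu, EMULTYPE_VMWARE_GP);

    kvm_queue_exception_e(vcpu, GP_VECTOR, error_code);
    return 1;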
2019-09-24 | KVM: x86: Refactor kvm_vcpu_do_singlestep() to remove out param | Sean Christopherson | 1 | -6/+6
Return the single-step emulation result directly instead of via an out param. Presumably at some point in the past kvm_vcpu_do_singlestep() could be called with *r==EMULATE_USER_EXIT, but that is no longer the case, i.e. all callers are happy to overwrite their own return variable. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: x86: Clean up handle_emulation_failure() | Sean Christopherson | 1 | -6/+4
When handling emulation failure, return the emulation result directly instead of capturing it in a local variable. Future patches will move additional cases into handle_emulation_failure(); clean up the cruft beforehand so there isn't an ugly mix of setting a local variable and returning directly. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: x86: Relocate MMIO exit stats counting | Sean Christopherson | 3 | -3/+2
Move the stat.mmio_exits update into x86_emulate_instruction(). This is both a bug fix, e.g. the current update flows will incorrectly increment mmio_exits on emulation failure, and a preparatory change to set the stage for eliminating EMULATE_DONE and company. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: nVMX: Check Host Address Space Size on vmentry of nested guests | Krish Sadhukhan | 1 | -0/+28
According to section "Checks Related to Address-Space Size" in Intel SDM vol 3C, the following checks are performed on vmentry of nested guests: If the logical processor is outside IA-32e mode (if IA32_EFER.LMA = 0) at the time of VM entry, the following must hold: - The "IA-32e mode guest" VM-entry control is 0. - The "host address-space size" VM-exit control is 0. If the logical processor is in IA-32e mode (if IA32_EFER.LMA = 1) at the time of VM entry, the "host address-space size" VM-exit control must be 1. If the "host address-space size" VM-exit control is 0, the following must hold: - The "IA-32e mode guest" VM-entry control is 0. - Bit 17 of the CR4 field (corresponding to CR4.PCIDE) is 0. - Bits 63:32 in the RIP field are 0. If the "host address-space size" VM-exit control is 1, the following must hold: - Bit 5 of the CR4 field (corresponding to CR4.PAE) is 1. - The RIP field contains a canonical address. On processors that do not support Intel 64 architecture, checks are performed to ensure that the "IA-32e mode guest" VM-entry control and the "host address-space size" VM-exit control are both 0. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: selftests: hyperv_cpuid: add check for NoNonArchitecturalCoreSharing bit | Vitaly Kuznetsov | 1 | -0/+27
The bit is supposed to be '1' when SMT is not supported or forcefully disabled and '0' otherwise. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: x86: hyper-v: set NoNonArchitecturalCoreSharing CPUID bit when SMT is impossible | Vitaly Kuznetsov | 2 | -1/+10
Hyper-V 2019 doesn't expose the MD_CLEAR CPUID bit to guests when it cannot guarantee that two virtual processors won't end up running on sibling SMT threads without knowing about it. This is done as an optimization: in that case there is nothing the guest can do to protect itself against MDS, and issuing additional flush requests is just pointless. On bare metal the topology is known; however, when Hyper-V is running nested (e.g. on top of KVM) it needs an additional piece of information: a confirmation that the exposed topology (wrt vCPU placement on different SMT threads) is trustworthy. NoNonArchitecturalCoreSharing (CPUID 0x40000004 EAX bit 18) is described in the TLFS as follows: "Indicates that a virtual processor will never share a physical core with another virtual processor, except for virtual processors that are reported as sibling SMT threads." From KVM we can give such a guarantee in two cases:
 - SMT is unsupported or forcefully disabled (just 'disabled' doesn't work as it can become re-enabled during the lifetime of the guest).
 - vCPUs are properly pinned so the scheduler won't put them on sibling SMT threads (when they're not reported as such).
This patch reports the NoNonArchitecturalCoreSharing bit to userspace in the first case. The second case is outside of KVM's domain of responsibility (as vCPU pinning is actually done by whoever manages KVM's userspace - e.g. libvirt pinning QEMU threads). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
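A minimal sketch of how the first case can be reported through the Hyper-V CPUID leaf; the bit and function names below are assumptions based on the TLFS description above, not the literal patch:
    /* CPUID 0x40000004, EAX bit 18: only set when vCPUs can never end up
     * on sibling SMT threads unknowingly, i.e. when SMT is impossible.
     */
    static void hv_set_core_sharing_bit(struct kvm_cpuid_entry2 *ent)
    {
            if (!cpu_smt_possible())
                    ent->eax |= HV_X64_NO_NONARCH_CORESHARING; /* assumed name */
    }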
2019-09-24 | cpu/SMT: create and export cpu_smt_possible() | Vitaly Kuznetsov | 2 | -2/+11
KVM needs to know if SMT is theoretically possible, i.e. if it is supported and not forcefully disabled ('nosmt=force'). Create and export cpu_smt_possible() to answer this question. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
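A minimal sketch of such a helper on top of the existing cpu_smt_control state (a sketch of the idea, not necessarily the exact patch):
    bool cpu_smt_possible(void)
    {
            /* "Possible" unless the hardware lacks SMT or it was disabled
             * with nosmt=force; a plain runtime disable can be re-enabled
             * later, so it still counts as possible.
             */
            return cpu_smt_control != CPU_SMT_FORCE_DISABLED &&
                   cpu_smt_control != CPU_SMT_NOT_SUPPORTED;
    }
    EXPORT_SYMBOL_GPL(cpu_smt_possible);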
2019-09-24 | KVM: hyperv: Fix Direct Synthetic timers assert an interrupt w/o lapic_in_kernel | Wanpeng Li | 1 | -2/+10
Reported by syzkaller:
  kasan: GPF could be caused by NULL-ptr deref or user memory access
  general protection fault: 0000 [#1] PREEMPT SMP KASAN
  RIP: 0010:__apic_accept_irq+0x46/0x740 arch/x86/kvm/lapic.c:1029
  Call Trace:
   kvm_apic_set_irq+0xb4/0x140 arch/x86/kvm/lapic.c:558
   stimer_notify_direct arch/x86/kvm/hyperv.c:648 [inline]
   stimer_expiration arch/x86/kvm/hyperv.c:659 [inline]
   kvm_hv_process_stimers+0x594/0x1650 arch/x86/kvm/hyperv.c:686
   vcpu_enter_guest+0x2b2a/0x54b0 arch/x86/kvm/x86.c:7896
   vcpu_run+0x393/0xd40 arch/x86/kvm/x86.c:8152
   kvm_arch_vcpu_ioctl_run+0x636/0x900 arch/x86/kvm/x86.c:8360
   kvm_vcpu_ioctl+0x6cf/0xaf0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2765
The testcase programs HV_X64_MSR_STIMERn_CONFIG/HV_X64_MSR_STIMERn_COUNT while there is no lapic in the kernel, and the counter values are small enough that kvm_hv_process_stimers() tries to inject the already-expired timer interrupt into the guest through the in-kernel lapic, which doesn't exist and triggers the NULL dereference. This patch fixes it by not advertising direct-mode synthetic timers and by discarding the injection when the lapic is not in the kernel. syzkaller source: https://syzkaller.appspot.com/x/repro.c?x=1752fe0a600000 Reported-by: syzbot+dff25ee91f0c7d5c1695@syzkaller.appspotmail.com Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: x86: Manually flush collapsible SPTEs only when toggling flags | Sean Christopherson | 1 | -1/+6
Zapping collapsible sptes, a.k.a. 4k sptes that can be promoted into a large page, is only necessary when changing only the dirty logging flag of a memory region. If the memslot is also being moved, then all sptes for the memslot are zapped when it is invalidated. When a memslot is being created, it is impossible for there to be existing dirty mappings, e.g. KVM can have MMIO sptes, but not present, and thus dirty, sptes. Note, the comment and logic are shamelessly borrowed from MIPS's version of kvm_arch_commit_memory_region(). Fixes: 3ea3b7fa9af06 ("kvm: mmu: lazy collapse small sptes into large sptes") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
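Roughly, the resulting condition in kvm_arch_commit_memory_region() looks like this (reconstructed from the description above, not copied from the diff):
    /* Zap collapsible sptes only when dirty logging is being turned off on
     * an existing slot; creates and moves are already handled elsewhere.
     */
    if (change == KVM_MR_FLAGS_ONLY &&
        (old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
        !(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
            kvm_mmu_zap_collapsible_sptes(kvm, new);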
2019-09-24 | KVM: selftests: Remove duplicate guest mode handling | Peter Xu | 3 | -47/+26
Remove the duplicated code in run_test() of dirty_log_test: after some reordering of functions we can now directly use the outcome of vm_create(). Meanwhile, with the new VM_MODE_PXXV48_4K, we can safely revert b442324b58 too, where we stuck the x86_64 PA width to 39 bits for dirty_log_test. Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: selftests: Introduce VM_MODE_PXXV48_4K | Peter Xu | 6 | -14/+67
The naming VM_MODE_P52V48_4K is explicit but unclear when used on x86_64 machines, because x86_64 machines have various physical address widths rather than one static value. Some examples:
 - Intel Xeon E3-1220: 36 bits
 - Intel Core i7-8650: 39 bits
 - AMD EPYC 7251: 48 bits
All of them use a 48-bit linear address width but totally different physical address widths (and most older machines have less than 52 bits). Create a new guest mode called VM_MODE_PXXV48_4K for the current x86_64 tests and make it the default, replacing the old VM_MODE_P52V48_4K name, because it shows more clearly that the PA width is not really a constant. Meanwhile, stop assuming that all x86 machines have a 52-bit PA width and instead fetch the real vm->pa_bits from CPUID 0x80000008 at runtime. This is currently used exclusively by x86_64 and no other arch. As a slight touch-up, move the DEBUG macro from dirty_log_test.c to kvm_util.h so the lib can use it too. Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
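A sketch of how the widths can be fetched at runtime in the selftest library; the wrapper below is illustrative, with kvm_get_supported_cpuid_entry() being an existing selftest helper:
    static void vm_get_address_widths(uint32_t *pa_bits, uint32_t *va_bits)
    {
            /* CPUID 0x80000008: EAX[7:0] = physical address bits,
             * EAX[15:8] = linear (virtual) address bits.
             */
            struct kvm_cpuid_entry2 *entry =
                    kvm_get_supported_cpuid_entry(0x80000008);

            *pa_bits = entry->eax & 0xff;
            *va_bits = (entry->eax >> 8) & 0xff;
    }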
2019-09-24 | KVM: selftests: Create VM earlier for dirty log test | Peter Xu | 1 | -3/+16
Since we've just removed the dependency on the vm type in the previous patch, we can now create the vm much earlier. Note that to move it earlier we use an approximation of the number of extra pages, but it should be fine. This prepares for the follow-up patches to finally remove the duplication of guest mode parsing. Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: selftests: Move vm type into _vm_create() internally | Peter Xu | 3 | -20/+17
Rather than passing the vm type from the top level all the way to the end of vm creation, simply keep it as an internal field of the kvm_vm struct and decide the type in _vm_create(). Several reasons for doing this:
 - The vm type is only decided by physical address width and is currently only used on aarch64, so we've got enough information as long as we're passing vm_guest_mode into _vm_create().
 - This removes a loop dependency between vm->type and the creation of vms, which is why we currently sometimes need to parse vm_guest_mode twice, once in run_test() and then again in _vm_create(). The follow-up patches will clean that up as well so we can have a single place to decide guest machine types and so on.
Note that this patch slightly changes the behavior of the aarch64 tests: previously most vm_create() callers passed type==0 directly into _vm_create(), but now the type depends on vm_guest_mode. It shouldn't affect any user, though, because all vm_create() users on aarch64 use the VM_MODE_DEFAULT guest mode (which is VM_MODE_P40V48_4K), so in the end the type will still be zero. Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: x86: announce KVM_CAP_HYPERV_ENLIGHTENED_VMCS support only when it is available | Vitaly Kuznetsov | 1 | -2/+4
It was discovered that after commit 65efa61dc0d5 ("selftests: kvm: provide common function to enable eVMCS") the hyperv_cpuid selftest is failing on AMD. The reason is that the commit changed _vcpu_ioctl() to vcpu_ioctl() in the test, and the latter can't fail. Instead of fixing the test it seems to make more sense to not announce KVM_CAP_HYPERV_ENLIGHTENED_VMCS support if it is definitely missing (on svm and in case kvm_intel.nested=0). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM: x86: svm: remove unneeded nested_enable_evmcs() hook | Vitaly Kuznetsov | 1 | -8/+1
Since commit 5158917c7b019 ("KVM: x86: nVMX: Allow nested_enable_evmcs to be NULL") the code in x86.c is prepared to see nested_enable_evmcs being NULL and in VMX case it actually is when nesting is disabled. Remove the unneeded stub from SVM code. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM/Hyper-V/VMX: Add direct tlb flush support | Vitaly Kuznetsov | 4 | -0/+47
Hyper-V provides a direct TLB flush function which helps the L1 hypervisor handle Hyper-V TLB flush requests from L2 guests. Add support for this function to VMX. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | KVM/Hyper-V: Add new KVM capability KVM_CAP_HYPERV_DIRECT_TLBFLUSH | Tianyu Lan | 4 | -0/+23
The Hyper-V direct TLB flush function should only be enabled for guests that use Hyper-V hypercalls exclusively. A userspace hypervisor (e.g. QEMU) can disable the KVM identification in CPUID and expose only the Hyper-V identification to guarantee this precondition. Add a new KVM capability, KVM_CAP_HYPERV_DIRECT_TLBFLUSH, for userspace to enable the Hyper-V direct TLB flush function; it is disabled by default in KVM. Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
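A sketch of how userspace could opt in to the capability, assuming it is enabled per vCPU with KVM_ENABLE_CAP (the file-descriptor handling is hypothetical):
    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int enable_hv_direct_tlbflush(int vcpu_fd)
    {
            struct kvm_enable_cap cap = {
                    .cap = KVM_CAP_HYPERV_DIRECT_TLBFLUSH,
            };

            /* Only makes sense when the guest sees a Hyper-V identification
             * and will not use KVM hypercalls (the precondition above).
             */
            return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);
    }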
2019-09-24 | x86/Hyper-V: Fix definition of struct hv_vp_assist_page | Tianyu Lan | 1 | -5/+15
The struct hv_vp_assist_page was defined incorrectly: "vtl_control" should be u64[3], "nested_enlightenments_control" should be a u64, and there are 7 reserved bytes following "enlighten_vmentry". Fix the definition. Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-24 | kvm: x86: Add Intel PMU MSRs to msrs_to_save[] | Jim Mattson | 1 | -0/+41
These MSRs should be enumerated by KVM_GET_MSR_INDEX_LIST, so that userspace knows that these MSRs may be part of the vCPU state. Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Eric Hankland <ehankland@google.com> Reviewed-by: Peter Shier <pshier@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-22 | firmware: bcm47xx_nvram: _really_ correct size_t printf format | Linus Torvalds | 1 | -1/+1
Commit feb4eb060c3a ("firmware: bcm47xx_nvram: Correct size_t printf format") was wrong, and changed a printout of 'header.len' - which is an u32 type - to use '%zu'. It apparently did pattern matching on the other case, where it printed out 'nvram_len', which is indeed of type 'size_t'. Rather than undoing the change, this just makes it use the variable that the change seemed to expect to be used. Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Philippe Mathieu-Daudé <f4bug@amsat.org> Cc: Paul Burton <paul.burton@mips.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-22 | modules: make MODULE_IMPORT_NS() work even when modular builds are disabled | Linus Torvalds | 1 | -2/+2
It's an unusual configuration, and was apparently never tested, and not caught in linux-next because of a combination of travels and it making it into the tree too late. The fix is to simply move the #define to outside the CONFIG_MODULE section, since MODULE_INFO() will do the right thing. Cc: Martijn Coenen <maco@android.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Matthias Maennich <maennich@google.com> Cc: Jessica Yu <jeyu@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
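For reference, the macro boils down to a module-info tag, roughly:
    /* MODULE_INFO() degrades gracefully when modules are disabled, so the
     * define just has to live outside the CONFIG_MODULES-only block.
     */
    #define MODULE_IMPORT_NS(ns)    MODULE_INFO(import_ns, #ns)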
2019-09-21 | PCI: Add pci_irq_vector() and other stubs when !CONFIG_PCI | Herbert Xu | 1 | -13/+26
Add stub functions pci_alloc_irq_vectors_affinity() and pci_irq_vector() when CONFIG_PCI is off so drivers can use them without resorting to ifdefs. Also move the PCI_IRQ_* macros outside of the ifdefs so they are always available. Fixes: 625f269a5a7a ("crypto: inside-secure - add support for PCI based FPGA development board") Link: https://lore.kernel.org/r/20190904122600.GA28660@gondor.apana.org.au Reported-by: kbuild test robot <lkp@intel.com> Reported-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
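The usual shape of such stubs, sketched (return value assumed; the real header adds several of these):
    #ifdef CONFIG_PCI
    int pci_irq_vector(struct pci_dev *dev, unsigned int nr);
    #else
    static inline int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
    {
            return -EINVAL;   /* no PCI support compiled in */
    }
    #endif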
2019-09-20 | MIPS: Detect bad _PFN_SHIFT values | Paul Burton | 1 | -0/+6
2 recent commits have fixed issues where _PFN_SHIFT grew too large due to the introduction of too many pgprot bits in our PTEs for some MIPS32 kernel configurations. Tracking down such issues can be tricky, so add a BUILD_BUG_ON() to help. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
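The kind of compile-time check this adds, sketched with a made-up bound (the real limit depends on the EntryLo/PTE layout):
    /* Turn a bad PTE layout into a build failure instead of silently
     * corrupted physical addresses at runtime.  PFN_PTE_SHIFT_LIMIT is a
     * hypothetical bound used only for illustration.
     */
    BUILD_BUG_ON(_PFN_SHIFT > PFN_PTE_SHIFT_LIMIT);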
2019-09-20 | MIPS: Disable pte_special() for MIPS32 with RiXi | Paul Burton | 2 | -2/+14
Commit 61cbfff4b1a7 ("MIPS: pte_special()/pte_mkspecial() support") added a _PAGE_SPECIAL bit to the pgprot bits of our PTEs. Unfortunately for MIPS32 configurations with RiXi support this pushed the number of pgprot bits to 13. Since the PFN field in EntryLo begins at bit 12 this results in us shifting the most significant bit of the physical address beyond the end of the PTE, leading any mapped access to a physical address above 2GB to incorrectly access an address 2GB lower than intended. For now, disable the pte_special() support for MIPS32 configurations that support RiXi. Fixes: 61cbfff4b1a7 ("MIPS: pte_special()/pte_mkspecial() support") Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Dmitry Korotin <dkorotin@wavecomp.com> Cc: linux-mips@vger.kernel.org
2019-09-20 | arm64: tegra: Add PCIe slot supply information in p2972-0000 platform | Vidya Sagar | 2 | -1/+27
Add regulator information for the 3.3V and 12V supplies of the x16 PCIe slot on the p2972-0000 platform, which is owned by the C5 controller, and also enable the C5 controller. Signed-off-by: Vidya Sagar <vidyas@nvidia.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Andrew Murray <andrew.murray@arm.com>
2019-09-20 | arm64: tegra: Add configuration for PCIe C5 sideband signals | Vidya Sagar | 1 | -1/+37
Add support to configure PCIe C5's sideband signals PERST# and CLKREQ# as an output and a bi-directional signal, respectively; unlike those of the other PCIe controllers, these sideband signals are not configured by default. Signed-off-by: Vidya Sagar <vidyas@nvidia.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Andrew Murray <andrew.murray@arm.com>
2019-09-20 | PCI: tegra: Add support to enable slot regulators | Vidya Sagar | 1 | -0/+83
Add support to get regulator information for the 3.3V and 12V supplies of a PCIe slot from the respective controller's device-tree node and enable those supplies. This is required on platforms like p2972-0000, where the supplies to the x16 slot owned by the C5 controller need to be enabled before attempting to enumerate the devices. Signed-off-by: Vidya Sagar <vidyas@nvidia.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Andrew Murray <andrew.murray@arm.com> Acked-by: Thierry Reding <treding@nvidia.com>
2019-09-20 | PCI: tegra: Add support to configure sideband pins | Vidya Sagar | 1 | -2/+10
Add support to configure sideband signal pins when the information is present in the respective controller device-tree node. Signed-off-by: Vidya Sagar <vidyas@nvidia.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> [bhelgaas: fold in YueHaibing's fix for build error without CONFIG_PINCTRL; https://lore.kernel.org/r/20190920014807.38288-1-yuehaibing@huawei.com] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Andrew Murray <andrew.murray@arm.com> Acked-by: Thierry Reding <treding@nvidia.com>
2019-09-20 | lz4: do not export static symbol | Linus Torvalds | 1 | -1/+0
Kbuild now complains (rightly) about it. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-20 | crypto: hisilicon - avoid unused function warning | Arnd Bergmann | 1 | -5/+2
The only caller of hisi_zip_vf_q_assign() is hidden in an #ifdef, so the function causes a warning when CONFIG_PCI_IOV is disabled: drivers/crypto/hisilicon/zip/zip_main.c:740:12: error: unused function 'hisi_zip_vf_q_assign' [-Werror,-Wunused-function] Replace the #ifdef with an IS_ENABLED() check that leads to the function being dropped based on the configuration. Fixes: 79e09f30eeba ("crypto: hisilicon - add SRIOV support for ZIP") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
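The pattern being applied, sketched (call-site arguments assumed):
    /* Let the compiler always see the call, then discard it when
     * CONFIG_PCI_IOV is off, instead of hiding the caller in an #ifdef
     * and leaving the function "defined but unused".
     */
    if (IS_ENABLED(CONFIG_PCI_IOV))
            ret = hisi_zip_vf_q_assign(hisi_zip, vfs_num);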
2019-09-20 | hwrng: core - don't wait on add_early_randomness() | Laurent Vivier | 1 | -1/+1
add_early_randomness() is called by hwrng_register() when the hardware is added. If this hardware and its module are present at boot, and if there is no data available, the boot hangs until data are available and cannot be interrupted. For instance, in the case of virtio-rng, in some cases the host may not be able to provide enough entropy for all the guests. There are two easy ways to reproduce the problem, but they rely on misconfiguration of the hypervisor or the egd daemon:
 - the virtio-rng device is configured to connect to the egd daemon of the host, but the daemon is not connected when the virtio-rng driver asks for data,
 - the virtio-rng device is configured to connect to the egd daemon of the host, but the egd daemon doesn't provide data.
The guest kernel will hang at boot until the virtio-rng driver provides enough data. To avoid that, call rng_get_data() in non-blocking mode (wait=0) from add_early_randomness(). Signed-off-by: Laurent Vivier <lvivier@redhat.com> Fixes: d9e797261933 ("hwrng: add randomness to system from rng...") Cc: <stable@vger.kernel.org> Reviewed-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
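The change amounts to passing wait=0 to rng_get_data(); a sketch with the surrounding locking and buffer setup omitted, buffer and size names assumed from the hwrng core:
    /* wait == 0: return whatever data is available right now (possibly
     * none) instead of blocking the boot until the backend has entropy.
     */
    bytes_read = rng_get_data(rng, rng_buffer, size, 0);
    if (bytes_read > 0)
            add_device_randomness(rng_buffer, bytes_read);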
2019-09-20 | crypto: hisilicon - Fix return value check in hisi_zip_acompress() | Yunfeng Ye | 1 | -2/+2
The return value of add_comp_head() is int, but @head_size is size_t, which is an unsigned type, so the error check can never trigger:
    size_t head_size;
    ...
    if (head_size < 0)      /* it will never work */
            return -ENOMEM;
Modify the type of @head_size to int, then convert it to size_t when it is passed to hisi_zip_create_req() as a parameter. Fixes: 62c455ca853e ("crypto: hisilicon - add HiSilicon ZIP accelerator support") Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com> Acked-by: Zhou Wang <wangzhou1@hisilicon.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
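The general shape of the fix (argument lists assumed, not the literal driver code):
    int head_size;                 /* signed, so the error check can work */

    head_size = add_comp_head(req->dst, req_type);
    if (head_size < 0)
            return -ENOMEM;

    return hisi_zip_create_req(req, qp_ctx, (size_t)head_size, req_type);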
2019-09-20 | crypto: hisilicon - Matching the dma address for dma_pool_free() | Yunfeng Ye | 1 | -25/+19
When dma_pool_zalloc() fails in sec_alloc_and_fill_hw_sgl(), dma_pool_free() is invoked, but the parameters passed, sgl_current and sgl_current->next_sgl, do not match. Use sec_free_hw_sgl() instead of the original free routine. Fixes: 915e4e8413da ("crypto: hisilicon - SEC security accelerator driver") Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-20 | crypto: hisilicon - Fix double free in sec_free_hw_sgl() | Yunfeng Ye | 1 | -6/+7
There are two problems in sec_free_hw_sgl():
 - First, when sgl_current->next is valid, @hw_sgl is freed in the first loop, but it is freed again after the loop.
 - Second, sgl_current and sgl_current->next_sgl do not match when dma_pool_free() is invoked: the third parameter should be the dma address of sgl_current, but sgl_current->next_sgl is the dma address of the next chain, so using sgl_current->next_sgl is wrong.
Fix this by deleting the last dma_pool_free() in sec_free_hw_sgl(), modifying the condition of the while loop, and matching the address for dma_pool_free(). Fixes: 915e4e8413da ("crypto: hisilicon - SEC security accelerator driver") Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-20 | crypto: inside-secure - Fix unused variable warning when CONFIG_PCI=n | Pascal van Leeuwen | 1 | -11/+29
This patch fixes an unused variable warning from the compiler when the driver is being compiled without PCI support in the kernel. Fixes: 625f269a5a7a ("crypto: inside-secure - add support for...") Signed-off-by: Pascal van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-20 | crypto: talitos - fix missing break in switch statement | Gustavo A. R. Silva | 1 | -0/+1
Add missing break statement in order to prevent the code from falling through to case CRYPTO_ALG_TYPE_AHASH. Fixes: aeb4c132f33d ("crypto: talitos - Convert to new AEAD interface") Cc: stable@vger.kernel.org Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
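A generic sketch of the pattern (not the talitos code itself):
    switch (alg_type) {
    case CRYPTO_ALG_TYPE_AEAD:
            /* AEAD-specific handling here */
            break;          /* the missing break: without it, execution
                             * falls through into the AHASH case */
    case CRYPTO_ALG_TYPE_AHASH:
            /* AHASH-specific handling here */
            break;
    }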
2019-09-20 | clk: Drop !clk checks in debugfs dumping | Stephen Boyd | 1 | -12/+0
These recursive functions have checks for !clk being passed in, but the callers are always looping through lists and therefore the pointers can't be NULL. Drop the checks to simplify the code. Signed-off-by: Stephen Boyd <sboyd@kernel.org> Link: https://lkml.kernel.org/r/20190826234729.145593-1-sboyd@kernel.org
2019-09-19 | Hexagon: change maintainer to Brian Cain | Brian Cain | 1 | -2/+1
Signed-off-by: Brian Cain <bcain@codeaurora.org> Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
2019-09-19 | selftests/ftrace: Update kprobe event error testcase | Masami Hiramatsu | 1 | -0/+1
Update kprobe event error testcase to test if it correctly finds the exact same probe event. Link: http://lkml.kernel.org/r/156879695513.31056.1580235733738840126.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-09-19 | tracing/probe: Reject exactly same probe event | Masami Hiramatsu | 3 | -17/+90
Reject probe events that are exactly the same as an existing probe. Multiprobe allows users to define multiple probes on the same event. If a user appends a probe with exactly the same definition (same probe address and same arguments) to an existing event, the event will record the same probe information twice. That can confuse users, so reject it. Link: http://lkml.kernel.org/r/156879694602.31056.5533024778165036763.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-09-19 | tracing/probe: Fix to allow user to enable events on unloaded modules | Masami Hiramatsu | 1 | -12/+5
Allow users to enable probe events on unloaded modules. This operation was allowed before commit 60d53e2c3b75 ("tracing/probe: Split trace_event related data from trace_probe"), because users who need to probe module init functions have to enable those probe events before loading the module. Link: http://lkml.kernel.org/r/156879693733.31056.9331322616994665167.stgit@devnote2 Cc: stable@vger.kernel.org Fixes: 60d53e2c3b75 ("tracing/probe: Split trace_event related data from trace_probe") Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-09-19 | of: restore old handling of cells_name=NULL in of_*_phandle_with_args() | Uwe Kleine-König | 1 | -2/+33
Before commit e42ee61017f5 ("of: Let of_for_each_phandle fallback to non-negative cell_count") the iterator functions calling of_for_each_phandle assumed a cell count of 0 if cells_name was NULL. This corner case was missed when implementing the fallback logic in e42ee61017f5 and resulted in an endless loop. Restore the old behaviour of of_count_phandle_with_args() and of_parse_phandle_with_args() and add a check to of_phandle_iterator_init() to prevent a similar failure as a safety precaution. of_parse_phandle_with_args_map() doesn't need a similar fix as cells_name isn't NULL there. Affected drivers are:
 - drivers/base/power/domain.c
 - drivers/base/power/domain.c
 - drivers/clk/ti/clk-dra7-atl.c
 - drivers/hwmon/ibmpowernv.c
 - drivers/i2c/muxes/i2c-demux-pinctrl.c
 - drivers/iommu/mtk_iommu.c
 - drivers/net/ethernet/freescale/fman/mac.c
 - drivers/opp/of.c
 - drivers/perf/arm_dsu_pmu.c
 - drivers/regulator/of_regulator.c
 - drivers/remoteproc/imx_rproc.c
 - drivers/soc/rockchip/pm_domains.c
 - sound/soc/fsl/imx-audmix.c
 - sound/soc/fsl/imx-audmix.c
 - sound/soc/meson/axg-card.c
 - sound/soc/samsung/tm2_wm5110.c
 - sound/soc/samsung/tm2_wm5110.c
Thanks to Geert Uytterhoeven for reporting the issue, Peter Rosin for helping pinpoint the actual problem and the testers for confirming this fix. Fixes: e42ee61017f5 ("of: Let of_for_each_phandle fallback to non-negative cell_count") Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Rob Herring <robh@kernel.org>
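The restored special case boils down to counting bare phandle cells; a sketch of the idea (helper name hypothetical), not the exact diff:
    static int count_bare_phandles(const struct device_node *np,
                                   const char *list_name)
    {
            /* With no cells_name, each entry is a single phandle cell, so
             * the count is just the property length over the cell size.
             */
            int size;
            const __be32 *list = of_get_property(np, list_name, &size);

            if (!list)
                    return -ENOENT;

            return size / sizeof(*list);
    }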
2019-09-19 | powerpc/mm/mce: Keep irqs disabled during lockless page table walk | Aneesh Kumar K.V | 1 | -8/+12
__find_linux_mm_pte() returns a page table entry pointer after walking the page table without holding locks. To make it safe against a THP split and/or collapse, we disable interrupts around the lockless page table walk. However we need to keep interrupts disabled as long as we use the page table entry pointer that is returned. Fix addr_to_pfn() to do that. Fixes: ba41e1e1ccb9 ("powerpc/mce: Hookup derror (load/store) UE errors") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> [mpe: Rearrange code slightly and tweak change log wording] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190918145328.28602-1-aneesh.kumar@linux.ibm.com
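The shape of the fix in addr_to_pfn(), sketched (the mm lookup and error handling are omitted):
    unsigned long flags, pfn = ULONG_MAX;
    unsigned int shift;
    pte_t *ptep;

    /* Keep IRQs off for as long as the PTE pointer is used, not just for
     * the walk itself, so a THP split/collapse can't free the tables.
     */
    local_irq_save(flags);
    ptep = __find_linux_pte(mm->pgd, addr, NULL, &shift);
    if (ptep && pte_present(*ptep))
            pfn = pte_pfn(*ptep);
    local_irq_restore(flags);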
2019-09-18 | HID: core: fix dmesg flooding if report field larger than 32bit | Joshua Clayton | 1 | -2/+2
Only warn once of oversize hid report value field On HP spectre x360 convertible the message: hid-sensor-hub 001F:8087:0AC2.0002: hid_field_extract() called with n (192) > 32! (kworker/1:2) is continually printed many times per second, crowding out all else. Protect dmesg by printing the warning only one time. The size of the hid report field data structure should probably be increased. The data structure is treated as a u32 in Linux, but an unlimited number of bits in the USB hid spec, so there is some rearchitecture needed now that devices are sending more than 32 bits. Signed-off-by: Joshua Clayton <stillcompiling@gmail.com> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
2019-09-18 | HID: core: Add printk_once variants to hid_warn() etc | Joshua Clayton | 1 | -0/+11
hid_warn_once() is needed. Add the others as part of the block. Signed-off-by: Joshua Clayton <stillcompiling@gmail.com> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
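A sketch of what such a block can look like on top of the existing dev_*_once() helpers (macro bodies assumed):
    #define hid_err_once(hid, fmt, ...) \
            dev_err_once(&(hid)->dev, fmt, ##__VA_ARGS__)
    #define hid_notice_once(hid, fmt, ...) \
            dev_notice_once(&(hid)->dev, fmt, ##__VA_ARGS__)
    #define hid_warn_once(hid, fmt, ...) \
            dev_warn_once(&(hid)->dev, fmt, ##__VA_ARGS__)
    #define hid_info_once(hid, fmt, ...) \
            dev_info_once(&(hid)->dev, fmt, ##__VA_ARGS__)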