* iommu/iova: Use raw_cpu_ptr() instead of get_cpu_ptr() for ->fq
  Sebastian Andrzej Siewior, 2017-11-06 (1 file, -3/+1)

    get_cpu_ptr() disables preemption and returns the ->fq object of the
    current CPU. raw_cpu_ptr() does the same except that it does not
    disable preemption, which means the scheduler can move the task to
    another CPU after it has obtained the per-CPU object. Here that is
    harmless because the data structure itself is protected with a
    spin_lock. On mainline this change shouldn't matter; on RT it does,
    because the sleeping lock can't be taken with preemption disabled.

    Cc: Joerg Roedel <joro@8bytes.org>
    Cc: iommu@lists.linux-foundation.org
    Reported-by: vinadhy@gmail.com
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>

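    A minimal sketch of the pattern being changed, assuming the per-CPU
    flush-queue layout from the iova code; the point is that fq->lock,
    not disabled preemption, is what serializes access:

        /* Before: preemption stays off until put_cpu_ptr(), which on
         * RT forbids taking the (sleeping) fq->lock in between. */
        fq = get_cpu_ptr(iovad->fq);
        /* ... use fq ... */
        put_cpu_ptr(iovad->fq);

        /* After: preemption stays enabled; a migration after this
         * point is harmless because every access to the queue happens
         * under fq->lock. */
        fq = raw_cpu_ptr(iovad->fq);
        spin_lock_irqsave(&fq->lock, flags);
        /* ... queue the entry ... */
        spin_unlock_irqrestore(&fq->lock, flags);
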
* Merge branches 'iommu/fixes', 'arm/omap', 'arm/exynos', 'x86/amd', 'x86/vt-d' and 'core' into next
  Joerg Roedel, 2017-10-13 (20 files, -321/+518)

* iommu/iova: Make rcache flush optional on IOVA allocation failure
  Tomasz Nowicki, 2017-10-12 (5 files, -13/+19)

    Since IOVA allocation failure is not an unusual case, we need to
    flush the CPUs' rcaches in the hope that we will succeed in the next
    round. However, it is useful to let the caller decide whether the
    rcache flush step is needed, for two reasons:

    - Scalability: on a large system with ~100 CPUs, iterating over and
      flushing the rcache of each CPU becomes a serious bottleneck, so
      we may want to defer it.
    - free_cpu_cached_iovas() does not care about the max PFN we are
      interested in. Thus we may flush our rcaches and still get no new
      IOVA, as in the commonly used scenario:

        if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
            iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);

        if (!iova)
            iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);

      1. The first alloc_iova_fast() call is limited to DMA_BIT_MASK(32)
         to get PCI devices a SAC address.
      2. alloc_iova() fails due to a full 32-bit space.
      3. The rcaches contain PFNs outside the 32-bit space, so
         free_cpu_cached_iovas() throws entries away for nothing, and
         alloc_iova() fails again.
      4. The next alloc_iova_fast() call cannot take advantage of the
         rcache, since we have just defeated the caches. In this case we
         pick the slowest option to proceed.

    This patch reworks the flushed_rcache local flag into an additional
    function argument that controls the rcache flush step. It also
    updates all users to do the flush only as a last resort.

    Signed-off-by: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Tested-by: Nate Watterson <nwatters@codeaurora.org>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

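    A sketch of the reworked call pattern, with the former local flag
    exposed as a trailing bool argument (the exact parameter position is
    assumed; the commit only says it becomes a function argument):

        /* Speculative 32-bit attempt: don't trash the rcaches yet... */
        if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
            iova = alloc_iova_fast(iovad, iova_len,
                                   DMA_BIT_MASK(32) >> shift, false);

        /* ...only the last-chance attempt pays for a full rcache flush. */
        if (!iova)
            iova = alloc_iova_fast(iovad, iova_len,
                                   dma_limit >> shift, true);
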
* iommu/io-pgtable-arm-v7s: Convert to IOMMU API TLB sync
  Robin Murphy, 2017-10-02 (3 files, -6/+23)

    Now that the core API issues its own post-unmap TLB sync call, push
    that operation out from the io-pgtable-arm-v7s internals into the
    users. For now, we leave the invalidation implicit in the unmap
    operation, since none of the current users would benefit much from
    any change to that.

    Note that the conversion of msm_iommu is implicit, since that
    apparently has no specific TLB sync operation anyway.

    CC: Yong Wu <yong.wu@mediatek.com>
    CC: Rob Clark <robdclark@gmail.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/io-pgtable-arm: Convert to IOMMU API TLB sync
  Robin Murphy, 2017-10-02 (4 files, -11/+36)

    Now that the core API issues its own post-unmap TLB sync call, push
    that operation out from the io-pgtable-arm internals into the users.
    For now, we leave the invalidation implicit in the unmap operation,
    since none of the current users would benefit much from any change
    to that.

    CC: Magnus Damm <damm+renesas@opensource.se>
    CC: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

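    In outline, "pushing the sync out" means an io-pgtable user wires up
    the IOMMU API's iotlb_sync callback; this sketch uses a hypothetical
    driver (my_* names and the hardware-sync helper are assumed), not
    the actual converted drivers:

        /* unmap just performs/queues the invalidation, no sync... */
        static size_t my_unmap(struct iommu_domain *domain,
                               unsigned long iova, size_t size)
        {
            struct my_domain *md = to_my_domain(domain);

            return md->pgtbl_ops->unmap(md->pgtbl_ops, iova, size);
        }

        /* ...the core API now calls this once after a batch of unmaps */
        static void my_iotlb_sync(struct iommu_domain *domain)
        {
            struct my_domain *md = to_my_domain(domain);

            my_hw_tlb_sync(md);    /* hypothetical hardware sync */
        }

        static const struct iommu_ops my_iommu_ops = {
            .unmap      = my_unmap,
            .iotlb_sync = my_iotlb_sync,
            /* other callbacks elided */
        };
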
* iommu/iova: Don't try to copy anchor nodes
  Robin Murphy, 2017-10-02 (1 file, -0/+3)

    Anchor nodes are not reserved IOVAs in the way that
    copy_reserved_iova() cares about - while the failure from
    reserve_iova() is benign, since the target domain will already have
    its own anchor, we still don't want to be triggering spurious
    warnings.

    Reported-by: kernel test robot <fengguang.wu@intel.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Fixes: bb68b2fbfbd6 ('iommu/iova: Add rbtree anchor node')
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/iova: Try harder to allocate from rcache magazine
  Robin Murphy, 2017-09-28 (1 file, -3/+12)

    When devices with different DMA masks are using the same domain, or
    for PCI devices where we usually try a speculative 32-bit allocation
    first, there is a fair possibility that the top PFN of the rcache
    stack at any given time may be unsuitable for the lower limit,
    prompting a fallback to allocating anew from the rbtree.
    Consequently, we may end up artificially increasing pressure on the
    32-bit IOVA space as unused IOVAs accumulate lower down in the
    rcache stacks, while callers with 32-bit masks also impose
    unnecessary rbtree overhead.

    In such cases, let's try a bit harder to satisfy the allocation
    locally first - scanning the whole stack should still be relatively
    inexpensive.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

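    A sketch of the "scan the whole stack" idea, with field names
    assumed from the iova rcache code (a magazine is a small array of
    cached PFNs; we pop any suitable entry and plug the hole):

        static unsigned long iova_magazine_pop(struct iova_magazine *mag,
                                               unsigned long limit_pfn)
        {
            int i;
            unsigned long pfn;

            /* walk down from the top instead of only testing mag->pfns[size-1] */
            for (i = mag->size - 1; i >= 0; i--) {
                if (mag->pfns[i] <= limit_pfn) {
                    pfn = mag->pfns[i];
                    /* swap the last entry into the hole to pop it */
                    mag->pfns[i] = mag->pfns[--mag->size];
                    return pfn;
                }
            }
            return 0;    /* nothing suitable: fall back to the rbtree */
        }
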
* iommu/iova: Make rcache limit_pfn handling more robust
  Robin Murphy, 2017-09-28 (1 file, -3/+3)

    When popping a pfn from an rcache, we are currently checking it
    directly against limit_pfn for viability. Since this represents
    iova->pfn_lo, it is technically possible for the corresponding
    iova->pfn_hi to be greater than limit_pfn. Although we generally get
    away with it in practice, since limit_pfn is typically a
    power-of-two boundary and the IOVAs are size-aligned, it's pretty
    trivial to make the iova_rcache_get() path take the allocation size
    into account for complete safety.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

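    A sketch of the size-aware check: shrinking the limit by the
    allocation size means a pfn_lo that passes it guarantees pfn_hi
    stays below the real limit (helper and constant names taken from
    the iova code):

        static unsigned long iova_rcache_get(struct iova_domain *iovad,
                                             unsigned long size,
                                             unsigned long limit_pfn)
        {
            unsigned int log_size = order_base_2(size);

            if (log_size >= IOVA_RANGE_CACHE_MAX_SIZE)
                return 0;

            /* compare cached pfn_lo against a limit shrunk by size */
            return __iova_rcache_get(&iovad->rcaches[log_size],
                                     limit_pfn - size);
        }
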
* iommu/iova: Simplify domain destruction
  Robin Murphy, 2017-09-28 (1 file, -39/+9)

    All put_iova_domain() should have to worry about is freeing memory -
    by that point the domain must no longer be live, so the act of
    cleaning up doesn't need to be concurrency-safe or maintain the
    rbtree in a self-consistent state. There's no need to waste time
    with locking or emptying the rcache magazines, and we can just use
    the postorder traversal helper to clear out the remaining rbtree
    entries in-place.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

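    A sketch of the simplified teardown using the kernel's postorder
    rbtree helper named by the commit (rcache freeing elided):

        #include <linux/rbtree.h>

        void put_iova_domain(struct iova_domain *iovad)
        {
            struct iova *iova, *tmp;

            /* the domain is dead: free nodes in place, with no locks
             * and no rebalancing of the tree as entries disappear */
            rbtree_postorder_for_each_entry_safe(iova, tmp,
                                                 &iovad->rbroot, node)
                free_iova_mem(iova);
        }
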
* iommu/iova: Simplify cached node logic
  Robin Murphy, 2017-09-27 (1 file, -34/+17)

    The logic of __get_cached_rbnode() is a little obtuse, but then
    __get_prev_node_of_cached_rbnode_or_last_node_and_update_limit_pfn()
    wouldn't exactly roll off the tongue...

    Now that we have the invariant that there is always a valid node to
    start searching downwards from, everything gets a bit easier to
    follow if we simplify that function to do what it says on the tin
    and return the cached node (or anchor node as appropriate) directly.
    In turn, we can then deduplicate the rb_prev() and limit_pfn logic
    into the main loop itself, further reduce the amount of code under
    the lock, and generally make the inner workings a bit less subtle.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/iova: Add rbtree anchor node
  Robin Murphy, 2017-09-27 (2 files, -2/+14)

    Add a permanent dummy IOVA reservation to the rbtree, such that we
    can always access the top of the address space instantly. The
    immediate benefit is that we remove the overhead of the rb_last()
    traversal when not using the cached node, but it also paves the way
    for further simplifications.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

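    A sketch of the anchor idea (field names assumed from the iova
    code): a zero-size reservation pinned at the very top PFN, inserted
    once at domain init so a downward search always has a starting node:

        #define IOVA_ANCHOR    ~0UL

        /* in init_iova_domain(): */
        iovad->anchor.pfn_lo = iovad->anchor.pfn_hi = IOVA_ANCHOR;
        rb_link_node(&iovad->anchor.node, NULL, &iovad->rbroot.rb_node);
        rb_insert_color(&iovad->anchor.node, &iovad->rbroot);
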
* iommu/iova: Make dma_32bit_pfn implicit
  Zhen Lei, 2017-09-27 (8 files, -41/+13)

    Now that the cached node optimisation can apply to all allocations,
    the couple of users which were playing tricks with dma_32bit_pfn in
    order to benefit from it can stop doing so. Conversely, there is
    also no need for all the other users to explicitly calculate a
    'real' 32-bit PFN, when init_iova_domain() can happily do that
    itself from the page granularity.

    CC: Thierry Reding <thierry.reding@gmail.com>
    CC: Jonathan Hunter <jonathanh@nvidia.com>
    CC: David Airlie <airlied@linux.ie>
    CC: Sudeep Dutt <sudeep.dutt@intel.com>
    CC: Ashutosh Dixit <ashutosh.dixit@intel.com>
    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Zhen Lei <thunder.leizhen@huawei.com>
    Tested-by: Nate Watterson <nwatters@codeaurora.org>
    [rm: use iova_shift(), rewrote commit message]
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

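    A sketch of what callers gain (the old fourth-argument expression is
    illustrative of the "real 32-bit PFN" callers used to compute):

        /* before: every caller derived its own 32-bit PFN boundary */
        init_iova_domain(iovad, 1UL << order, base_pfn,
                         DMA_BIT_MASK(32) >> order);

        /* after: the boundary is derived internally from the granule */
        init_iova_domain(iovad, 1UL << order, base_pfn);
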
* iommu/iova: Extend rbtree node caching
  Robin Murphy, 2017-09-27 (2 files, -33/+30)

    The cached node mechanism provides a significant performance benefit
    for allocations using a 32-bit DMA mask, but in the case of non-PCI
    devices, or where the 32-bit space is full, the loss of this benefit
    can be significant - on large systems there can be many thousands of
    entries in the tree, such that walking all the way down to find free
    space every time becomes increasingly awful. Maintain a similar
    cached node for the whole IOVA space as a superset of the 32-bit
    space, so that performance can remain much more consistent.

    Inspired by work by Zhen Lei <thunder.leizhen@huawei.com>.

    Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Zhen Lei <thunder.leizhen@huawei.com>
    Tested-by: Nate Watterson <nwatters@codeaurora.org>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/iova: Optimise the padding calculation
  Zhen Lei, 2017-09-27 (1 file, -27/+15)

    The mask for calculating the padding size doesn't change, so there's
    no need to recalculate it every loop iteration. Furthermore, once
    we've done that, it becomes clear that we don't actually need to
    calculate a padding size at all: by flipping the arithmetic around,
    we can just combine the upper limit, size, and mask directly to
    check against the lower limit.

    For an arm64 build, this alone knocks 20% off the object code size
    of the entire alloc_iova() function!

    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Zhen Lei <thunder.leizhen@huawei.com>
    Tested-by: Nate Watterson <nwatters@codeaurora.org>
    [rm: simplified more of the arithmetic, rewrote commit message]
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

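    A sketch of the flipped arithmetic (variable names assumed from the
    iova allocator): the mask is hoisted out of the loop and the
    explicit pad_size term disappears, because the candidate start is
    simply the highest aligned PFN that still fits below the limit:

        unsigned long align_mask = ~0UL;

        if (size_aligned)
            align_mask <<= fls_long(size - 1);   /* computed once */

        /* walk the tree downwards until the candidate no longer
         * overlaps its lower neighbour */
        do {
            limit_pfn = min(limit_pfn, curr_iova->pfn_lo);
            new_pfn = (limit_pfn - size) & align_mask;
            curr = rb_prev(curr);
            curr_iova = rb_entry(curr, struct iova, node);
        } while (curr && new_pfn <= curr_iova->pfn_hi);
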
* iommu/iova: Optimise rbtree searching
  Zhen Lei, 2017-09-27 (1 file, -6/+3)

    Checking the IOVA bounds separately before deciding which direction
    to continue the search (if necessary) results in redundantly
    comparing both pfns twice each. GCC can already determine that the
    final comparison op is redundant and optimise it down to 3 in total,
    but we can go one further with a little tweak of the ordering (which
    makes the intent of the code that much cleaner as a bonus).

    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Zhen Lei <thunder.leizhen@huawei.com>
    Tested-by: Nate Watterson <nwatters@codeaurora.org>
    [rm: rewrote commit message to clarify]
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

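    A sketch of the reordered lookup: one ordered chain of comparisons,
    so each pfn is compared at most once per iteration and the "found"
    case falls out of the final else:

        static struct iova *find_iova_node(struct iova_domain *iovad,
                                           unsigned long pfn)
        {
            struct rb_node *node = iovad->rbroot.rb_node;

            while (node) {
                struct iova *iova = rb_entry(node, struct iova, node);

                if (pfn < iova->pfn_lo)
                    node = node->rb_left;
                else if (pfn > iova->pfn_hi)
                    node = node->rb_right;
                else
                    return iova;    /* pfn falls within iova's range */
            }
            return NULL;
        }
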
* iommu/vt-d: Delete unnecessary check in domain_context_mapping_one()
  Christos Gkekas, 2017-10-10 (1 file, -1/+1)

    The variable did_old is unsigned, so checking whether it is greater
    than or equal to zero is not necessary.

    Signed-off-by: Christos Gkekas <chris.gekas@gmail.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/vt-d: Don't register bus-notifier under dmar_global_lock
  Joerg Roedel, 2017-10-06 (3 files, -2/+16)

    The notifier function will take the dmar_global_lock too, so lockdep
    complains about inverse locking order when the notifier is
    registered under the dmar_global_lock.

    Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
    Fixes: 59ce0515cdaf ('iommu/vt-d: Update DRHD/RMRR/ATSR device scope caches when PCI hotplug happens')
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/amd: Enforce alignment for MSI IRQs
  Joerg Roedel, 2017-10-10 (1 file, -1/+3)

    Make use of the new alignment capability of alloc_irq_index() to
    enforce IRQ index alignment for MSI.

    Reported-by: Thomas Gleixner <tglx@linutronix.de>
    Fixes: 2b324506341cb ('iommu/amd: Add routines to manage irq remapping tables')
    Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/amd: Add align parameter to alloc_irq_index()
  Joerg Roedel, 2017-10-10 (1 file, -8/+14)

    For multi-MSI IRQ ranges the IRQ index needs to be aligned to the
    power-of-two of the requested IRQ count. Extend the
    alloc_irq_index() function to allow such an allocation.

    Reported-by: Thomas Gleixner <tglx@linutronix.de>
    Fixes: 2b324506341cb ('iommu/amd: Add routines to manage irq remapping tables')
    Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

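    A sketch of the alignment rule for the two commits above: an
    N-vector MSI block must start at an index aligned to
    roundup_pow_of_two(N), so the free-run scan restarts at the next
    aligned slot whenever a used entry breaks the run (entry_is_free()
    and the table bound are assumed helpers):

        #include <linux/log2.h>

        static int alloc_irq_index(u16 devid, int count, bool align)
        {
            int index, c, alignment = 1;

            if (align)
                alignment = roundup_pow_of_two(count);

            for (index = 0, c = 0; index < MAX_IRQS_PER_TABLE;) {
                if (!entry_is_free(devid, index)) {
                    c = 0;
                    index = ALIGN(index + 1, alignment);
                    continue;
                }
                if (++c == count)
                    return index - count + 1;   /* aligned start */
                index++;
            }
            return -ENOSPC;
        }
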
* iommu/exynos: Rework runtime PM links management
  Marek Szyprowski, 2017-09-19 (1 file, -7/+16)

    The add_device callback is a bit more suitable for establishing
    runtime PM links than the xlate callback. This change also makes it
    possible to implement proper cleanup in the remove_device callback.

    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/omap: Add support to program multiple iommus
  Suman Anna, 2017-09-19 (2 files, -103/+285)

    A client user instantiates and attaches to an iommu_domain to
    program the OMAP IOMMU associated with the domain. The iommus
    programmed by a client user are bound with the iommu_domain through
    the user's device archdata. The OMAP IOMMU driver currently supports
    only one IOMMU per IOMMU domain per user.

    The OMAP IOMMU driver has been enhanced to allow multiple IOMMUs to
    be programmed by a single client user. This support is being added
    mainly to handle the DSP subsystems on the DRA7xx SoCs, which have
    two MMUs within the same subsystem. These MMUs provide translations
    for a processor core port and an internal EDMA port. This support
    allows both the MMUs to be programmed together, but with each one
    retaining its own internal state objects. The internal EDMA block is
    managed by the software running on the DSPs, and this design
    provides on-par functionality with previous generation OMAP DSPs
    where the EDMA and the DSP core shared the same MMU.

    The multiple iommus are expected to be provided through a
    sentinel-terminated array of omap_iommu_arch_data objects through
    the client user's device archdata. The OMAP driver core is enhanced
    to loop through the array of attached iommus and program them for
    all common operations. The sentinel-terminated logic is used so as
    to not change the omap_iommu_arch_data structure.

    NOTE:
    1. The IOMMU group and IOMMU core registration is done only for the
       DSP processor core MMU, even though both MMUs are represented by
       their own platform device and are probed individually. The IOMMU
       device linking uses this registered MMU device. The struct
       iommu_device for the second MMU is not used, even though memory
       for it is allocated.
    2. The OMAP IOMMU debugfs code still continues to operate on
       individual IOMMU objects.

    Signed-off-by: Suman Anna <s-anna@ti.com>
    [t-kristo@ti.com: ported support to 4.13 based kernel]
    Signed-off-by: Tero Kristo <t-kristo@ti.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/omap: Change the attach detection logic
  Suman Anna, 2017-09-19 (1 file, -6/+11)

    The OMAP IOMMU driver allows only a single device (eg: a rproc
    device) to be attached per domain. The current attach detection
    logic relies on a check for an attached iommu for the respective
    client device. Change this logic to use the client device pointer
    instead, in preparation for supporting multiple iommu devices to be
    bound to a single iommu domain, and thereby to a client device.

    Signed-off-by: Suman Anna <s-anna@ti.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/amd: Finish TLB flush in amd_iommu_unmap()
  Joerg Roedel, 2017-10-13 (1 file, -0/+1)

    The function only sends the flush command to the IOMMU(s), but does
    not wait for its completion when it returns. Fix that.

    Fixes: 601367d76bd1 ('x86/amd-iommu: Remove iommu_flush_domain function')
    Cc: stable@vger.kernel.org # >= 2.6.33
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/exynos: Remove initconst attribute to avoid potential kernel oops
  Marek Szyprowski, 2017-10-12 (1 file, -1/+1)

    Exynos SYSMMU registers a standard platform device with the
    sysmmu_of_match table, which means that this table is accessed every
    time a new platform device is registered in the system. This might
    also happen after boot, so the table must not be attributed as
    initconst, to avoid a potential kernel oops caused by access to
    freed memory.

    Fixes: 6b21a5db3642 ("iommu/exynos: Support for device tree")
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

* iommu/amd: Do not disable SWIOTLB if SME is active
  Tom Lendacky, 2017-10-10 (1 file, -4/+6)

    When SME memory encryption is active it will rely on SWIOTLB to
    handle DMA for devices that cannot support the addressing
    requirements of having the encryption mask set in the physical
    address. The IOMMU currently disables SWIOTLB if it is not running
    in passthrough mode. This is not desired, as non-PCI devices
    attempting DMA may fail. Update the code to check if SME is active
    and, if so, not disable SWIOTLB.

    Fixes: 2543a786aa25 ("iommu/amd: Allow the AMD IOMMU to work with memory encryption")
    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

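    The shape of the check described, as a sketch (the exact condition
    in the driver is assumed; sme_me_mask being non-zero is how SME
    activity is commonly tested in this era of the kernel):

        /* keep SWIOTLB around when SME is active, so devices that
         * cannot DMA to encrypted addresses can still bounce through
         * unencrypted buffers */
        swiotlb = (iommu_pass_through || sme_me_mask) ? 1 : 0;
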
* Linux 4.14-rc4 (tag: v4.14-rc4)
  Linus Torvalds, 2017-10-09 (1 file, -1/+1)

* Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
  Linus Torvalds, 2017-10-07 (8 files, -31/+36)

    Pull SCSI fixes from James Bottomley:

    - a couple of serious fixes: use after free and blacklist for WRITE SAME
    - one error leg fix: write_pending failure
    - one user experience problem: do not override max_sectors_kb
    - one minor unused function removal

    * tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
      scsi: ibmvscsis: Fix write_pending failure path
      scsi: libiscsi: Remove iscsi_destroy_session
      scsi: libiscsi: Fix use-after-free race during iscsi_session_teardown
      scsi: sd: Do not override max_sectors_kb sysfs setting
      scsi: sd: Implement blacklist option for WRITE SAME w/ UNMAP

* scsi: ibmvscsis: Fix write_pending failure path
  Bryant G. Ly, 2017-10-03 (1 file, -1/+1)

    For write_pending, if the queue is down or the client failed, return
    -EIO so that LIO can properly process the completed command.
    Previously we returned 0, since LIO could not handle the error
    properly. Now, with commit fa7e25cf13a6 ("target: Fix unknown fabric
    callback queue-full errors"), LIO handles this correctly.

    Signed-off-by: Bryant G. Ly <bgly@us.ibm.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

* scsi: libiscsi: Remove iscsi_destroy_session
  Khazhismel Kumykov, 2017-10-03 (2 files, -17/+0)

    iscsi_session_teardown was the only user of this function, which is
    currently just shorthand for iscsi_remove_session +
    iscsi_free_session.

    Signed-off-by: Khazhismel Kumykov <khazhy@google.com>
    Acked-by: Chris Leech <cleech@redhat.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

* scsi: libiscsi: Fix use-after-free race during iscsi_session_teardown
  Khazhismel Kumykov, 2017-10-03 (1 file, -4/+4)

    Session attributes exposed through sysfs were freed before the
    device was destroyed, resulting in a potential use-after-free. Free
    these attributes after removing the device.

    Signed-off-by: Khazhismel Kumykov <khazhy@google.com>
    Acked-by: Chris Leech <cleech@redhat.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

* scsi: sd: Do not override max_sectors_kb sysfs setting
  Martin K. Petersen, 2017-10-03 (1 file, -5/+14)

    A user may lower the max_sectors_kb setting in sysfs to accommodate
    certain workloads. Previously we would always set the max I/O size
    to either the block layer default or the optional preferred I/O size
    reported by the device.

    Keep the current heuristics for the initial setting of
    max_sectors_kb. For subsequent invocations, only update the current
    queue limit if it exceeds the capabilities of the hardware.

    Cc: <stable@vger.kernel.org>
    Reported-by: Don Brace <don.brace@microsemi.com>
    Reviewed-by: Martin Wilck <mwilck@suse.com>
    Tested-by: Don Brace <don.brace@microsemi.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

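    A sketch of the update rule (condition simplified; the exact set of
    limits compared in the driver is assumed):

        /* never exceed the controller limit */
        rw_max = min(rw_max, queue_max_hw_sectors(q));

        /* apply the heuristic on first scan, or when the current value
         * exceeds what the hardware allows; a lower value the user
         * wrote to /sys/block/sdX/queue/max_sectors_kb survives
         * subsequent revalidation */
        if (sdkp->first_scan ||
            q->limits.max_sectors > q->limits.max_hw_sectors)
            q->limits.max_sectors = rw_max;
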
* scsi: sd: Implement blacklist option for WRITE SAME w/ UNMAP
  Martin K. Petersen, 2017-10-03 (4 files, -4/+17)

    SBC-4 states:

    "A MAXIMUM UNMAP LBA COUNT field set to a non-zero value indicates
    the maximum number of LBAs that may be unmapped by an UNMAP command"

    "A MAXIMUM WRITE SAME LENGTH field set to a non-zero value indicates
    the maximum number of contiguous logical blocks that the device
    server allows to be unmapped or written in a single WRITE SAME
    command."

    Despite the spec being clear on the topic, some devices incorrectly
    expect WRITE SAME commands with the UNMAP bit set to be limited to
    the value reported in MAXIMUM UNMAP LBA COUNT in the Block Limits
    VPD. Implement a blacklist option that can be used to accommodate
    devices with this behavior.

    Cc: <stable@vger.kernel.org>
    Reported-by: Bill Kuzeja <William.Kuzeja@stratus.com>
    Reported-by: Ewan D. Milne <emilne@redhat.com>
    Reviewed-by: Ewan D. Milne <emilne@redhat.com>
    Tested-by: Laurence Oberman <loberman@redhat.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

* Merge branch 'i2c/for-current-4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
  Linus Torvalds, 2017-10-07 (5 files, -10/+14)

    Pull i2c fixes from Wolfram Sang:
    "I2C has three driver fixes for the newly introduced drivers and one
    ID addition for the i801 driver"

    * 'i2c/for-current-4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
      i2c: i2c-stm32f7: make structure stm32f7_setup static const
      i2c: ensure termination of *_device_id tables
      i2c: i801: Add support for Intel Cedar Fork
      i2c: stm32f7: fix setup structure

* i2c: i2c-stm32f7: make structure stm32f7_setup static const
  Colin Ian King, 2017-10-05 (1 file, -1/+1)

    The structure stm32f7_setup is local to the source and does not need
    to be in global scope, so make it static const.

    Cleans up sparse warning:
    symbol 'stm32f7_setup' was not declared. Should it be static?

    Signed-off-by: Colin Ian King <colin.king@canonical.com>
    Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
    Signed-off-by: Wolfram Sang <wsa@the-dreams.de>

* i2c: ensure termination of *_device_id tables
  Thomas Meyer, 2017-10-05 (1 file, -0/+1)

    Make sure (of/i2c/platform)_device_id tables are NULL terminated.
    Found by coccinelle spatch "misc/of_table.cocci"

    Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
    Signed-off-by: Wolfram Sang <wsa@the-dreams.de>

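    The idiom in question, as a minimal example with a hypothetical
    driver name; device core matching iterates the table until it hits
    an all-zero sentinel entry, so a missing sentinel walks off the end
    of the array:

        static const struct i2c_device_id foo_ids[] = {
            { "foo", 0 },
            { }    /* sentinel: matching stops at this all-zero entry */
        };
        MODULE_DEVICE_TABLE(i2c, foo_ids);
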
* i2c: i801: Add support for Intel Cedar Fork
  Jarkko Nikula, 2017-10-05 (3 files, -0/+6)

    Add PCI ID for Intel Cedar Fork PCH.

    Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
    Reviewed-by: Jean Delvare <jdelvare@suse.de>
    Signed-off-by: Wolfram Sang <wsa@the-dreams.de>

* i2c: stm32f7: fix setup structure
  Pierre-Yves MORDRET, 2017-10-05 (1 file, -9/+6)

    The I2C driver setup structure is not properly allocated. Store it
    in the driver data as a structure instead of a pointer.

    Fixes: aeb068c5721485 ("i2c: i2c-stm32f7: add driver")
    Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
    Signed-off-by: Wolfram Sang <wsa@the-dreams.de>

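    A sketch tying the two stm32f7 fixes above together (field and
    macro names assumed from the driver): a file-local const template
    that probe copies by value, rather than a pointer into data that
    may not stay valid:

        /* shared, never written: safe to make static const */
        static const struct stm32f7_i2c_setup stm32f7_setup = {
            .rise_time = STM32F7_I2C_RISE_TIME_DEFAULT,
            .fall_time = STM32F7_I2C_FALL_TIME_DEFAULT,
        };

        /* in probe: copy, don't alias; i2c_dev->setup is now a struct,
         * not a pointer */
        i2c_dev->setup = *setup;
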
* Merge tag 'mmc-v4.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc
  Linus Torvalds, 2017-10-07 (11 files, -162/+81)

    Pull MMC fixes from Ulf Hansson:

    "MMC core:
    - Fix driver strength selection when selecting hs400es
    - Delete bounce buffer handling: This change fixes a problem related
      to how bounce buffers are being allocated. However, instead of
      trying to fix that, let's just remove the mmc bounce buffer code
      altogether, as it has practically no use.

    MMC host:
    - meson-gx: A couple of fixes related to clock/phase/tuning
    - sdhci-xenon: Fix clock resource by adding an optional bus clock"

    * tag 'mmc-v4.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc:
      mmc: sdhci-xenon: Fix clock resource by adding an optional bus clock
      mmc: meson-gx: include tx phase in the tuning process
      mmc: meson-gx: fix rx phase reset
      mmc: meson-gx: make sure the clock is rounded down
      mmc: Delete bounce buffer handling
      mmc: core: add driver strength selection when selecting hs400es

* mmc: sdhci-xenon: Fix clock resource by adding an optional bus clock
  Gregory CLEMENT, 2017-10-04 (3 files, -9/+28)

    On Armada 7K/8K we need to explicitly enable the bus clock. The bus
    clock is optional, because not all the SoCs need it, but for Armada
    7K/8K it is actually mandatory. The binding documentation is updated
    accordingly.

    Without this patch the kernel hangs during boot if the mvpp2.2
    network driver is not present in the kernel, because the clock
    needed by the xenon controller happened to be enabled by the network
    driver.

    Fixes: 3a3748dba881 ("mmc: sdhci-xenon: Add Marvell Xenon SDHC core functionality")
    CC: Stable <stable@vger.kernel.org>
    Tested-by: Zhoujie Wu <zjwu@marvell.com>
    Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
    Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

* mmc: meson-gx: include tx phase in the tuning process
  Jerome Brunet, 2017-10-04 (1 file, -1/+18)

    It has been reported that some platforms (odroid-c2) may require a
    different tx phase setting to operate at high speed (hs200 and
    hs400). To improve the situation, this patch includes the tx phase
    in the tuning process.

    Fixes: d341ca88eead ("mmc: meson-gx: rework tuning function")
    Reported-by: Heiner Kallweit <hkallweit1@gmail.com>
    Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
    Reviewed-by: Kevin Hilman <khilman@baylibre.com>
    Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

* mmc: meson-gx: fix rx phase reset
  Jerome Brunet, 2017-10-04 (1 file, -2/+4)

    Resetting the phase when POWER_ON is set in the set_ios() call means
    that the phase is reset almost every time set_ios() is called, while
    the expected behavior was to reset the phase on a power cycle. This
    had gone unnoticed until now because in every mode (except hs400)
    the tuning is done after the last call to set_ios(), in which case
    the tuning result is used anyway. In HS400, there are a few calls to
    set_ios() after the tuning is done, overwriting the tuning result.

    Resetting the phase on POWER_UP instead of POWER_ON solves the
    problem.

    Fixes: d341ca88eead ("mmc: meson-gx: rework tuning function")
    Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
    Reviewed-by: Kevin Hilman <khilman@baylibre.com>
    Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

* mmc: meson-gx: make sure the clock is rounded down
  Jerome Brunet, 2017-10-04 (1 file, -2/+1)

    Using CLK_DIVIDER_ROUND_CLOSEST is unsafe, as the mmc clock could be
    rounded to a rate higher than the specified rate. Removing this flag
    ensures that, if the rate needs to be rounded, it will be rounded
    down.

    Fixes: 51c5d8447bd7 ("MMC: meson: initial support for GX platforms")
    Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
    Reviewed-by: Kevin Hilman <khilman@baylibre.com>
    Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

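    A sketch of the flag change on the driver's generic clk divider
    (the ONE_BASED flag alongside it is assumed from the divider
    setup):

        /* before: the divider may round UP to the closest achievable
         * rate, yielding a clock faster than the card was asked for */
        div->flags = CLK_DIVIDER_ONE_BASED | CLK_DIVIDER_ROUND_CLOSEST;

        /* after: default divider behaviour always rounds down */
        div->flags = CLK_DIVIDER_ONE_BASED;
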
* mmc: Delete bounce buffer handling
  Linus Walleij, 2017-10-04 (6 files, -132/+12)

    In May, Steven sent a patch deleting the bounce buffer handling and
    the CONFIG_MMC_BLOCK_BOUNCE option. I chose the less invasive path
    of making it a runtime config option, and we merged that
    successfully for kernel v4.12.

    The code is however just standing in the way and taking up space for
    seemingly no gain on any systems in wide use today.

    Pierre says the code was there to improve speed on TI SDHCI
    controllers on certain HP laptops and possibly some Ricoh
    controllers as well. Early SDHCI controllers lacked the
    scatter-gather feature, which made software bounce buffers a
    significant speed boost. We are clearly talking about the list of
    SDHCI PCI-based MMC/SD card readers found in the pci_ids[] list in
    drivers/mmc/host/sdhci-pci-core.c.

    The TI SDHCI derivative is not supported by the upstream kernel.
    This leaves the Ricoh. What we can however notice is that the x86
    defconfigs in the kernel did not enable the CONFIG_MMC_BLOCK_BOUNCE
    option, which means that any such laptop would have to have a
    custom-configured kernel to actually take advantage of this bounce
    buffer speed-up. It simply seems like there was a speed optimization
    for the Ricoh controllers that no one was using. (I have not checked
    the distro defconfigs, but I am pretty sure the situation is the
    same there.)

    Bounce buffers increased performance on the OMAP HSMMC at one point,
    and were part of the original submission in commit a45c6cb81647
    ("[ARM] 5369/1: omap mmc: Add new omap hsmmc controller for 2430 and
    34xx, v3"). This optimization was removed in commit 0ccd76d4c236
    ("omap_hsmmc: Implement scatter-gather emulation"), which found that
    scatter-gather emulation provided even better performance. The same
    was introduced for SDHCI in commit 2134a922c6e7 ("sdhci:
    scatter-gather (ADMA) support").

    I am pretty positively convinced that software scatter-gather
    emulation will do for any host controller what the bounce buffers
    were doing. Essentially, the bounce buffer was a reimplementation of
    software scatter-gather emulation in the MMC subsystem, and it
    should be done away with.

    Cc: Pierre Ossman <pierre@ossman.eu>
    Cc: Juha Yrjola <juha.yrjola@solidboot.com>
    Cc: Steven J. Hill <Steven.Hill@cavium.com>
    Cc: Shawn Lin <shawn.lin@rock-chips.com>
    Cc: Adrian Hunter <adrian.hunter@intel.com>
    Suggested-by: Steven J. Hill <Steven.Hill@cavium.com>
    Suggested-by: Shawn Lin <shawn.lin@rock-chips.com>
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

* mmc: core: add driver strength selection when selecting hs400es
  Chanho Min, 2017-10-02 (1 file, -17/+19)

    Driver strength selection is required but missing when selecting
    hs400es, so add it here.

    Fixes: 81ac2af65793ecf ("mmc: core: implement enhanced strobe support")
    Cc: stable@vger.kernel.org
    Signed-off-by: Hankyung Yu <hankyung.yu@lge.com>
    Signed-off-by: Chanho Min <chanho.min@lge.com>
    Reviewed-by: Adrian Hunter <adrian.hunter@intel.com>
    Reviewed-by: Shawn Lin <shawn.lin@rock-chips.com>
    Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

* Merge tag 'hwmon-for-linus-v4.14-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
  Linus Torvalds, 2017-10-07 (1 file, -8/+11)

    Pull hwmon fix from Guenter Roeck:
    "Fix up error path in xgene driver"

    * tag 'hwmon-for-linus-v4.14-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
      hwmon: (xgene) Fix up error handling path mixup in 'xgene_hwmon_probe()'

* hwmon: (xgene) Fix up error handling path mixup in 'xgene_hwmon_probe()'
  Christophe Jaillet, 2017-10-01 (1 file, -8/+11)

    Commit 2ca492e22cb7 has moved the call to 'kfifo_alloc()' from after
    the main 'if' statement to before it. But it has not updated the
    error handling paths accordingly. Fix all that:

    - if 'kfifo_alloc()' fails we can return directly
    - direct returns after 'kfifo_alloc()' must now go to 'out_mbox_free'
    - 'goto out_mbox_free' must be replaced by 'goto out', otherwise the
      '[pcc_]mbox_free_channel()' call will be missed.

    Fixes: 2ca492e22cb7 ("hwmon: (xgene) Fix crash when alarm occurs before driver probe")
    Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>

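    A sketch of the corrected unwind order (labels, sizes, and the
    channel-request step are assumed; the point is that the unwind
    mirrors the allocation order, with the fifo now first):

        rc = kfifo_alloc(&ctx->async_msg_fifo,
                         sizeof(struct slimpro_resp_msg) * 8, GFP_KERNEL);
        if (rc)
            return rc;              /* nothing else allocated yet */

        /* ... request the (PCC) mailbox channel ... */
        if (IS_ERR(ctx->mbox_chan)) {
            rc = PTR_ERR(ctx->mbox_chan);
            goto out_free_fifo;     /* fifo only, no channel yet */
        }

        /* ... later failures own both resources ... */
        if (rc)
            goto out_mbox_free;

        return 0;

    out_mbox_free:
        mbox_free_channel(ctx->mbox_chan);
    out_free_fifo:
        kfifo_free(&ctx->async_msg_fifo);
        return rc;
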
* Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
  Linus Torvalds, 2017-10-07 (3 files, -5/+23)

    Pull clk fixes from Stephen Boyd:

    - build fix to export the clk_bulk_prepare() symbol
    - suspend fix for Samsung Exynos SoCs where we need to keep clks on
      across suspend
    - two critical clk markings for clks that shouldn't ever turn off on
      Rockchip SoCs
    - a fix for a copy-paste mistake on Rockchip rk3128 causing some
      clks to touch the same bit and trample over one another

    * tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
      clk: samsung: exynos4: Enable VPLL and EPLL clocks for suspend/resume cycle
      clk: Export clk_bulk_prepare()
      clk: rockchip: add sclk_timer5 as critical clock on rk3128
      clk: rockchip: fix up rk3128 pvtm and mipi_24m gate regs error
      clk: rockchip: add pclk_pmu as critical clock on rk3128

* clk: samsung: exynos4: Enable VPLL and EPLL clocks for suspend/resume cycle
  Marek Szyprowski, 2017-10-04 (1 file, -0/+15)

    Commit 6edfa11cb396 ("clk: samsung: Add enable/disable operation for
    PLL36XX clocks") added enable/disable operations to PLL clocks.
    Prior to that, the VPLL and EPLL clocks were always enabled, because
    the enable bit was never touched. Those clocks have to be enabled
    during the suspend/resume cycle, because otherwise the board fails
    to enter sleep mode. This patch enables them unconditionally before
    entering the system suspend state. The system restore function will
    set them back to the previous state, saved in the register cache
    taken before that unconditional enable.

    Fixes: 6edfa11cb396 ("clk: samsung: Add enable/disable operation for PLL36XX clocks")
    CC: stable@vger.kernel.org # v4.13
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Reviewed-by: Chanwoo Choi <cw00.choi@samsung.com>
    Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
    Acked-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
    Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>

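    A sketch of the ordering described (helper names are assumed;
    samsung_clk_save() is the driver's register-cache snapshot):

        static int exynos4_clk_suspend(void)
        {
            /* snapshot registers first, so resume restores the PLLs
             * to whatever state they had before suspend... */
            samsung_clk_save(reg_base, exynos4_save_common,
                             ARRAY_SIZE(exynos4_clk_regs));

            /* ...then force EPLL/VPLL on so the SoC can enter sleep */
            exynos4_clk_enable_pll(EPLL);
            exynos4_clk_enable_pll(VPLL);

            return 0;
        }
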
* Merge tag 'v4.14-rockchip-clkfixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip into clk-fixes
  Stephen Boyd, 2017-09-29 (1 file, -5/+7)

    Pull Rockchip clk driver fixes from Heiko Stuebner:

    Some smallish fixes for the rk3128 clock support, including some
    register errors and some clocks that should be critical for safe
    usage.

    * tag 'v4.14-rockchip-clkfixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip:
      clk: rockchip: add sclk_timer5 as critical clock on rk3128
      clk: rockchip: fix up rk3128 pvtm and mipi_24m gate regs error
      clk: rockchip: add pclk_pmu as critical clock on rk3128

* clk: rockchip: add sclk_timer5 as critical clock on rk3128
  Elaine Zhang, 2017-09-17 (1 file, -0/+1)
    sclk_timer5 feeds the ARM architected timer and so needs to be
    always on, but no DT node handles this clock; mark it as a critical
    clock instead.

    Signed-off-by: Elaine Zhang <zhangqing@rock-chips.com>
    Signed-off-by: Heiko Stuebner <heiko@sntech.de>