path: root/drivers/dma/ioat
Commit message | Author | Age | Files | Lines
* net_dma: simple removalDan Williams2014-09-284-10/+0
| | | | | | | | | | | | | | | | | | | | | | | Per commit "77873803363c net_dma: mark broken" net_dma is no longer used and there is no plan to fix it. This is the mechanical removal of bits in CONFIG_NET_DMA ifdef guards. Reverting the remainder of the net_dma induced changes is deferred to subsequent patches. Marked for stable due to Roman's report of a memory leak in dma_pin_iovec_pages(): https://lkml.org/lkml/2014/9/3/177 Cc: Dave Jiang <dave.jiang@intel.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: David Whipple <whipple@securedatainnovations.ch> Cc: Alexander Duyck <alexander.h.duyck@intel.com> Cc: <stable@vger.kernel.org> Reported-by: Roman Gushchin <klamm@yandex-team.ru> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* ioat: Use time_before_jiffies()Manuel Schölling2014-08-211-1/+2
| | | | | | | | | To be future-proof and for better readability the time comparisons are modified to use time_before_jiffies() instead of plain, error-prone math. Signed-off-by: Manuel Schölling <manuel.schoelling@gmx.de> [djbw: use time_before_jiffies() to make argument order more clear] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
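The wrap-safe comparison helpers in <linux/jiffies.h> are the pattern this change moves toward; a minimal sketch of that pattern (the timeout value and function name are illustrative, not taken from the driver):

```c
#include <linux/jiffies.h>

#define IDLE_TIMEOUT msecs_to_jiffies(2000)	/* illustrative value */

static bool idle_too_long(unsigned long last_seen)
{
	/* Open-coded comparisons such as "jiffies > last_seen + IDLE_TIMEOUT"
	 * break when the jiffies counter wraps; time_after()/time_before()
	 * handle the wrap-around correctly. */
	return time_after(jiffies, last_seen + IDLE_TIMEOUT);
}
```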
* ioat: Use pci_enable_msix_exact() instead of pci_enable_msix()Alexander Gordeev2014-04-101-1/+1
As a result of the deprecation of the MSI-X/MSI enablement functions pci_enable_msix() and pci_enable_msi_block(), all drivers using these two interfaces need to be updated to use the new pci_enable_msi_range() or pci_enable_msi_exact() and pci_enable_msix_range() or pci_enable_msix_exact() interfaces. pci_enable_msix() returns a tri-state value while pci_enable_msix_exact() is a canonical zero/-errno variant; the former is being phased out in favor of the latter. In the case of 'ioat' there (should be) no difference. Cc: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Alexander Gordeev <agordeev@redhat.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
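A sketch of the calling convention the driver moves to, assuming a hypothetical wrapper (pci_enable_msix_exact() either grants exactly the requested count or fails):

```c
#include <linux/pci.h>

static int request_exact_vectors(struct pci_dev *pdev,
				 struct msix_entry *entries, int nvec)
{
	int err;

	/* pci_enable_msix() could return a positive "retry with this many"
	 * count; the _exact variant returns 0 on success or -errno. */
	err = pci_enable_msix_exact(pdev, entries, nvec);
	if (err)
		return err;

	return 0;
}
```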
* drivers: dma: Include appropriate header file in dca.cRashika2014-04-101-0/+1
| | | | | | | | | | | | | | | Includes an appropriate header file dma_v2.h in ioat/dca.c because functions ioat2_dca_init() and ioat3_dca_init() have their function declarations in dma_v2.h. This eliminates the following warning in ioat/dca.c: drivers/dma/ioat/dca.c:410:22: warning: no previous prototype for ‘ioat2_dca_init’ [-Wmissing-prototypes] drivers/dma/ioat/dca.c:624:22: warning: no previous prototype for ‘ioat3_dca_init’ [-Wmissing-prototypes] Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* drivers: dma: Mark functions as static in dma_v3.cRashika2014-04-101-3/+3
| | | | | | | | | | | | | | | | Mark the functions ioat3_prep_xor_val(), ioat3_prep_pq_val() and ioat3_prep_pqxor_val() as static in dma_v3.c because they are not used outside this file. This eliminates the following warnings in dma_v3.c: drivers/dma/ioat/dma_v3.c:741:1: warning: no previous prototype for ‘ioat3_prep_xor_val’ [-Wmissing-prototypes] drivers/dma/ioat/dma_v3.c:1092:1: warning: no previous prototype for ‘ioat3_prep_pq_val’ [-Wmissing-prototypes] drivers/dma/ioat/dma_v3.c:1134:1: warning: no previous prototype for ‘ioat3_prep_pqxor_val’ [-Wmissing-prototypes] Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* ioat/dca: Use dev_is_pci() to check whether it is pci deviceYijing Wang2014-04-101-6/+6
Use the standard PCI macro dev_is_pci() instead of directly comparing against pci_bus_type to check whether a device is a PCI device. Signed-off-by: Yijing Wang <wangyijing@huawei.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
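dev_is_pci() expands to the same bus comparison being removed; a minimal sketch (the wrapper is illustrative):

```c
#include <linux/device.h>
#include <linux/pci.h>

static struct pci_dev *pci_dev_or_null(struct device *dev)
{
	/* Open-coded form being replaced: dev->bus == &pci_bus_type.
	 * dev_is_pci() is defined as exactly that comparison. */
	if (dev_is_pci(dev))
		return to_pci_dev(dev);

	return NULL;
}
```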
* ioat: fix tasklet tear downDan Williams2014-02-254-13/+54
Since commit 77873803363c "net_dma: mark broken" we no longer pin dma engines active for the network-receive-offload use case. As a result the ->free_chan_resources() that occurs after the driver self test no longer has a NET_DMA induced ->alloc_chan_resources() to back it up. A late firing irq can lead to ksoftirqd spinning indefinitely due to the tasklet_disable() performed by ->free_chan_resources(). Only ->alloc_chan_resources() can clear this condition in affected kernels.

This problem has been present since commit 3e037454bcfa "I/OAT: Add support for MSI and MSI-X" in 2.6.24, but is now exposed. Given the NET_DMA use case is deprecated we can revisit moving the driver to use threaded irqs. For now, just tear down the irq and tasklet properly by:

1/ Disable the irq from triggering the tasklet
2/ Disable the irq from re-arming
3/ Flush inflight interrupts
4/ Flush the timer
5/ Flush inflight tasklets

References:
https://lkml.org/lkml/2014/1/27/282
https://lkml.org/lkml/2014/2/19/672

Cc: Ingo Molnar <mingo@elte.hu> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: <stable@vger.kernel.org> Reported-by: Mike Galbraith <bitbucket@online.de> Reported-by: Stanislav Fomichev <stfomichev@yandex-team.ru> Tested-by: Mike Galbraith <bitbucket@online.de> Tested-by: Stanislav Fomichev <stfomichev@yandex-team.ru> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
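The ordering of those five steps is the point: stop the irq from scheduling new work before flushing what is already in flight. A rough sketch of that shape, with the device-specific register writes reduced to a comment (struct and function names are illustrative, not the ioat ones):

```c
#include <linux/interrupt.h>
#include <linux/timer.h>

struct chan_ctx {			/* illustrative stand-in for the channel */
	int irq;
	struct timer_list timer;
	struct tasklet_struct cleanup_task;
};

static void chan_quiesce(struct chan_ctx *c)
{
	/* 1/ and 2/: device-specific MMIO writes stop the irq from
	 * triggering the tasklet and from re-arming (omitted here). */
	synchronize_irq(c->irq);	/* 3/ flush inflight interrupts */
	del_timer_sync(&c->timer);	/* 4/ flush the timer */
	tasklet_kill(&c->cleanup_task);	/* 5/ flush inflight tasklets */
}
```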
* drivers/dma/ioat/dma.c: check DMA mapping error in ioat_dma_self_test()Jiang Liu2014-01-021-1/+10
Check DMA mapping return values in ioat_dma_self_test() to get rid of the following warning message.

------------[ cut here ]------------
WARNING: CPU: 0 PID: 1203 at lib/dma-debug.c:937 check_unmap+0x4c0/0x9a0()
ioatdma 0000:00:04.0: DMA-API: device driver failed to check map error [device address=0x000000085191b000] [size=2000 bytes] [mapped as single]
Modules linked in: ioatdma(+) mac_hid wmi acpi_pad lp parport hid_generic usbhid hid ixgbe isci dca libsas ahci ptp libahci scsi_transport_sas megaraid_sas pps_core mdio
CPU: 0 PID: 1203 Comm: systemd-udevd Not tainted 3.13.0-rc4+ #8
Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRIVTIIN1.86B.0044.L09.1311181644 11/18/2013
Call Trace:
  dump_stack+0x4d/0x66
  warn_slowpath_common+0x7d/0xa0
  warn_slowpath_fmt+0x4c/0x50
  check_unmap+0x4c0/0x9a0
  debug_dma_unmap_page+0x81/0x90
  ioat_dma_self_test+0x3d2/0x680 [ioatdma]
  ioat3_dma_self_test+0x12/0x30 [ioatdma]
  ioat_probe+0xf4/0x110 [ioatdma]
  ioat3_dma_probe+0x268/0x410 [ioatdma]
  ioat_pci_probe+0x122/0x1b0 [ioatdma]
  local_pci_probe+0x45/0xa0
  pci_device_probe+0xd9/0x130
  driver_probe_device+0x171/0x490
  __driver_attach+0x93/0xa0
  bus_for_each_dev+0x6b/0xb0
  driver_attach+0x1e/0x20
  bus_add_driver+0x1f8/0x2b0
  driver_register+0x81/0x110
  __pci_register_driver+0x60/0x70
  ioat_init_module+0x89/0x1000 [ioatdma]
  do_one_initcall+0xe2/0x250
  load_module+0x2313/0x2a00
  SyS_init_module+0xd9/0x130
  system_call_fastpath+0x1a/0x1f
---[ end trace 990c591681d27c31 ]---
Mapped at:
  debug_dma_map_page+0xbe/0x180
  ioat_dma_self_test+0x1ab/0x680 [ioatdma]
  ioat3_dma_self_test+0x12/0x30 [ioatdma]
  ioat_probe+0xf4/0x110 [ioatdma]
  ioat3_dma_probe+0x268/0x410 [ioatdma]

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Cc: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
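The fix is the standard pattern: every dma_map_*() in the self test gets a dma_mapping_error() check before the address is used. A minimal sketch (helper name is illustrative):

```c
#include <linux/dma-mapping.h>

static int map_test_buf(struct device *dev, void *buf, size_t len,
			dma_addr_t *dma)
{
	*dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	/* Skipping this check is exactly what DMA-debug warns about above. */
	if (dma_mapping_error(dev, *dma))
		return -ENOMEM;

	return 0;
}
```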
* Merge commit 'dmaengine-3.13-v2' of ↵Vinod Koul2013-11-166-336/+65
git://git.kernel.org/pub/scm/linux/kernel/git/djbw/dmaengine

Pull dmaengine changes from Dan:

1/ Bartlomiej and Dan finalized a rework of the dma address unmap implementation.

2/ In the course of testing 1/ a collection of enhancements to dmatest fell out. Notably basic performance statistics, and fixed / enhanced test control through new module parameters 'run', 'wait', 'noverify', and 'verbose'. Thanks to Andriy and Linus for their review.

3/ Testing the raid related corner cases of 1/ triggered bugs in the recently added 16-source operation support in the ioatdma driver.

4/ Some minor fixes / cleanups to mv_xor and ioatdma.

Conflicts: drivers/dma/dmatest.c

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioat: fix ioat3_irq_reinitDan Williams2013-11-142-26/+15
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The implementation of ioat3_irq_reinit has two bugs: 1/ The mode is incorrectly set to MSIX for the MSI case 2/ The 'dev_id' parameter to free_irq is the ioatdma_device not the channel in the msi and intx case Include a small cleanup to clarify that ioat3_irq_reinit is only for bwd hardware Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
| * ioat: kill msix_single_vector supportDan Williams2013-11-143-32/+3
Once we have determined that we will not have all of our desired msix vectors, there is no point in attempting a single-vector msix allocation: the driver will already need to read registers to determine the source of the interrupt, so the fact that it is msix is moot. Fall back directly to msi. Reported-by: Alexander Gordeev <agordeev@redhat.com> Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
| * ioatdma: clean up sed pool kmem_cacheDan Williams2013-11-144-42/+22
Use a single cache for all sed allocations; there is no need to make it per channel. This also avoids the slub_debug warnings for multiple caches with the same name. Switch to dmam_pool_create() to fix leaking the dma pools on initialization failure, which also lets us kill ioat3_dma_remove(). Cc: Dave Jiang <dave.jiang@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
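dmam_pool_create() is the device-managed variant of dma_pool_create(): the pool is torn down automatically when the device is released, so probe-failure and remove paths need no explicit cleanup. A hedged sketch (pool name and sizes are illustrative):

```c
#include <linux/dmapool.h>

static struct dma_pool *sed_pool_setup(struct device *dev, size_t sed_size)
{
	/* Managed allocation: released by devres on driver detach or on
	 * probe failure, so no matching destroy call is required. */
	return dmam_pool_create("ioat_sed", dev, sed_size, 64, 0);
}
```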
| * ioatdma: fix selection of 16 vs 8 source pathDan Williams2013-11-141-15/+15
When performing continuations there are implied sources that need to be added to the source count. Quoting dma_set_maxpq:

/* dma_maxpq - reduce maxpq in the face of continued operations
 * @dma - dma device with PQ capability
 * @flags - to check if DMA_PREP_CONTINUE and DMA_PREP_PQ_DISABLE_P are set
 *
 * When an engine does not support native continuation we need 3 extra
 * source slots to reuse P and Q with the following coefficients:
 * 1/ {00} * P : remove P from Q', but use it as a source for P'
 * 2/ {01} * Q : use Q to continue Q' calculation
 * 3/ {00} * Q : subtract Q from P' to cancel (2)
 *
 * In the case where P is disabled we only need 1 extra source:
 * 1/ {01} * Q : use Q to continue Q' calculation
 */

...fix the selection of the 16 source path to take these implied sources into account. Note this also kills the BUG_ON(src_cnt < 9) check in __ioat3_prep_pq16_lock(). Besides not accounting for implied sources the check is redundant given we already made the path selection. Cc: <stable@vger.kernel.org> Cc: Dave Jiang <dave.jiang@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
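Put differently, a continuation adds three implied sources (or one when P is disabled) before the 8-vs-16 source decision is made. A hedged sketch of that selection using the dmaengine flag helpers (this is the shape of the logic, not the literal driver code):

```c
#include <linux/dmaengine.h>

static bool use_16_source_path(unsigned int src_cnt, enum dma_ctrl_flags flags)
{
	/* Account for the implied sources described by dma_maxpq(). */
	if (dmaf_continue(flags))
		src_cnt += dmaf_p_disabled_continue(flags) ? 1 : 3;

	return src_cnt > 8;
}
```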
| * ioatdma: fix sed pool selectionDan Williams2013-11-141-15/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | The array to lookup the sed pool based on the number of sources (pq16_idx_to_sedi) is 16 entries and expects a max source index. However, we pass the total source count which runs off the end of the array when src_cnt == 16. The minimal fix is to just pass src_cnt-1, but given we know the source count is > 8 we can just calculate the sed pool by (src_cnt - 2) >> 3. Cc: Dave Jiang <dave.jiang@intel.com> Cc: <stable@vger.kernel.org> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
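A small worked check of that expression over the valid 16-source range (function name is illustrative):

```c
/* (src_cnt - 2) >> 3 maps the valid range 9..16 onto pool index 0 or 1:
 *   src_cnt = 9  -> 7  >> 3 = 0
 *   src_cnt = 16 -> 14 >> 3 = 1
 * so no table lookup (and no out-of-bounds index) is needed at all. */
static unsigned int pq16_sed_pool_idx(unsigned int src_cnt)
{
	return (src_cnt - 2) >> 3;
}
```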
| * ioatdma: Fix bug in selftest after removal of DMA_MEMSET.Dave Jiang2013-11-141-0/+2
Commit 48a9db4 (3.11) removed the memset op in the xor selftest for ioatdma. The issue is that the removed DMA memset was never replaced with a CPU memset, so the memory being operated on is expected to be zeroed but is not. This causes the xor selftest to fail. Cc: <stable@vger.kernel.org> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
| * dmaengine: remove DMA unmap flagsBartlomiej Zolnierkiewicz2013-11-142-11/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Remove no longer needed DMA unmap flags: - DMA_COMPL_SKIP_SRC_UNMAP - DMA_COMPL_SKIP_DEST_UNMAP - DMA_COMPL_SRC_UNMAP_SINGLE - DMA_COMPL_DEST_UNMAP_SINGLE Cc: Vinod Koul <vinod.koul@intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Jon Mason <jon.mason@intel.com> Acked-by: Mark Brown <broonie@linaro.org> [djbw: clean up straggling skip unmap flags in ntb] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
| * dmaengine: remove DMA unmap from driversBartlomiej Zolnierkiewicz2013-11-144-195/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | Remove support for DMA unmapping from drivers as it is no longer needed (DMA core code is now handling it). Cc: Vinod Koul <vinod.koul@intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> [djbw: fix up chan2parent() unused warning in drivers/dma/dw/core.c] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
| * dmaengine: prepare for generic 'unmap' dataDan Williams2013-11-143-0/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add a hook for a common dma unmap implementation to enable removal of the per driver custom unmap code. (A reworked version of Bartlomiej Zolnierkiewicz's patches to remove the custom callbacks and the size increase of dma_async_tx_descriptor for drivers that don't care about raid). Cc: Vinod Koul <vinod.koul@intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Dave Jiang <dave.jiang@intel.com> [bzolnier: prepare pl330 driver for adding missing unmap while at it] Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* | dmaengine: ioat: use DMA_COMPLETE for dma completion statusVinod Koul2013-10-252-6/+6
|/ | | | | | Acked-by: Dan Williams <dan.j.williams@intel.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* ioatdma: silence GCC warningsPaul Bolle2013-08-231-1/+1
| | | | | | | | | | | | | | | | Building dma_v3.o triggers a GCC warning: drivers/dma/ioat/dma_v3.c: In function ‘__ioat3_prep_pq16_lock’: drivers/dma/ioat/dma_v3.c:264:11: warning: array subscript is below array bounds [-Warray-bounds] drivers/dma/ioat/dma_v3.c:264:11: warning: array subscript is below array bounds [-Warray-bounds] This warning is caused by pq16_set_src(). It uses "int idx" as an index to an eight element array. Changing "idx" to "unsigned" silences this warning. Apparently GCC can then determine that "idx" will never be negative. Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
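A hedged sketch of the signature change that silences the warning (array length and names are illustrative of the report, not copied from the driver):

```c
#include <linux/types.h>

/* With "int idx" GCC cannot rule out idx < 0 and warns about a
 * below-bounds subscript; an unsigned index removes that path. */
static void pq16_set_src_sketch(u64 descs[8], dma_addr_t addr, unsigned int idx)
{
	descs[idx] = addr;
}
```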
* ioatdma: disable RAID on non-Atom platforms and reenable unaligned copiesBrice Goglin2013-08-231-23/+1
Disable RAID on non-Atom platforms and remove related fixups such as the 64-byte alignment restriction on legacy DMA operations (introduced in commit f26df1a1 as a workaround for silicon errata). Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr> Acked-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Jon Mason <jon.mason@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
* drivers/dma: remove unused support for MEMSET operationsBartlomiej Zolnierkiewicz2013-07-044-142/+3
There have never been any real users of MEMSET operations since they were introduced in January 2007 by commit 7405f74badf4 ("dmaengine: refactor dmaengine around dma_async_tx_descriptor"). Therefore remove support for them for now; it can always be brought back when needed. [sebastian.hesselbarth@gmail.com: fix drivers/dma/mv_xor] Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Cc: Vinod Koul <vinod.koul@intel.com> Acked-by: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Olof Johansson <olof@lixom.net> Cc: Kevin Hilman <khilman@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dmaLinus Torvalds2013-05-097-137/+950
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pull slave-dmaengine updates from Vinod Koul: "This time we have dmatest improvements from Andy along with dw_dmac fixes. He has also done support for acpi for dmanegine. Also we have bunch of fixes going in DT support for dmanegine for various folks. Then Haswell and other ioat changes from Dave and SUDMAC support from Shimoda." * 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (53 commits) dma: tegra: implement suspend/resume callbacks dma:of: Use a mutex to protect the of_dma_list dma: of: Fix of_node reference leak dmaengine: sirf: move driver init from module_init to subsys_initcall sudmac: add support for SUDMAC dma: sh: add Kconfig at_hdmac: move to generic DMA binding ioatdma: ioat3_alloc_sed can be static ioatdma: Adding write back descriptor error status support for ioatdma 3.3 ioatdma: S1200 platforms ioatdma channel 2 and 3 falsely advertise RAID cap ioatdma: Adding support for 16 src PQ ops and super extended descriptors ioatdma: Removing hw bug workaround for CB3.x .2 and earlier dw_dmac: add ACPI support dmaengine: call acpi_dma_request_slave_channel as well dma: acpi-dma: introduce ACPI DMA helpers dma: of: Remove unnecessary list_empty check DMA: OF: Check properties value before running be32_to_cpup() on it DMA: of: Constant names ioatdma: skip silicon bug workaround for pq_align for cb3.3 ioatdma: Removing PQ val disable for cb3.3 ...
| * ioatdma: ioat3_alloc_sed can be staticFengguang Wu2013-04-161-2/+2
| | | | | | | | | | | | | | Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: Adding write back descriptor error status support for ioatdma 3.3Dave Jiang2013-04-154-25/+105
v3.3 provides support for write back descriptor error status. This allows reporting of errors in a descriptor field. In supporting this, certain errors such as P/Q validation errors no longer halt the channel. The DMA engine can continue to execute until the end of the chain and allow software to report the "errors" up the stack. We also mask those error interrupts and handle them once the "chain" has completed. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: S1200 platforms ioatdma channel 2 and 3 falsely advertise RAID capDave Jiang2013-04-151-0/+15
This workaround checks for channels 2 & 3 and removes the RAID cap. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: Adding support for 16 src PQ ops and super extended descriptorsDave Jiang2013-04-156-22/+438
v3.3 introduced 16-source PQ operations, along with super extended descriptors (SEDs) to support them. This patch adds support for the 16-source ops and in turn adds the super extended descriptors for those ops. Five SED pools are created depending on the descriptor sizes. An SED is a descriptor of 64 bytes or larger and must be physically contiguous. A kmem cache is created for allocating the software descriptor that manages the hardware descriptor. The super extended descriptor takes the place of the extended descriptor for certain operations and is "attached" to the op descriptor during operation. This is a new feature for ioatdma v3.3. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: Removing hw bug workaround for CB3.x .2 and earlierDave Jiang2013-04-151-11/+20
CB3.2 and earlier hardware has silicon bugs whose workarounds are no longer needed on the new hardware. We no longer have to use a NULL op to signal an interrupt for RAID ops. This code makes sure the legacy workarounds only happen on legacy hardware. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: skip silicon bug workaround for pq_align for cb3.3Dave Jiang2013-04-151-2/+10
| | | | | | | | | | | | | | | | The alignment workaround is only necessary for cb3.2 or earlier platforms. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: Removing PQ val disable for cb3.3Dave Jiang2013-04-153-12/+125
| | | | | | | | | | | | | | | | | | The PQ Val ops work on the newer hardware so we should actually provide support for it and remove the disabling bits. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: skip legacy reset bits since v3.3 platform doesn't need itDave Jiang2013-04-151-13/+21
Make it so only 3.2 and earlier platforms need the PCI config register clearings, since this implementation does not have the registers. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: channel reset scheme fixup on Intel Atom S1200 platformsDave Jiang2013-04-153-83/+171
| | | | | | | | | | | | | | | | | | | | | | | | The Intel Atom S1200 family ioatdma changed the channel reset behavior. It does a reset similar to PCI FLR by resetting all the MSIX registers. We have to re-init msix interrupts because of this. This workaround is only specific to this platform and is not expected to carry over to the later generations. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: Add 64bit chansts register read for ioat v3.3.Dave Jiang2013-04-151-1/+21
| | | | | | | | | | | | | | | | | | The channel status register for v3.3 is now 64bit. Use readq if available on v3.3 platforms. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: Adding PCI IDs for Intel Atom S1200 product family ioatdma devicesDave Jiang2013-04-152-0/+12
| | | | | | | | | | | | | | | | | | | | These should be good for the IOAT DMA devices on the Intel Atom S1269, S1279, and S1289 platforms. We are also adding IOAT v3.3 definition for the new DMA engine. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: Adding Haswell devid for ioatdmaDave Jiang2013-04-153-6/+55
Add Haswell PCI device IDs for ioatdma and simplify the detection of certain Xeon CPUs that have alignment bugs, so that future modifications can be made in a single place. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: allow all channels to have irq coalescing supportDave Jiang2013-04-151-9/+3
In the existing code only the RAID channels are allowed to have irq coalescing support; fix that. The ioat3 cleanup code can handle memcpy ops anyway. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: make debug output more readableDave Jiang2013-04-152-2/+3
Print the OP field as hex instead of a decimal integer to make it more readable, and also dump out the NEXT field. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* | ioat/dca: Update DCA BIOS workarounds to use TAINT_FIRMWARE_WORKAROUNDAlexander Duyck2013-03-221-3/+8
|/ | | | | | | | | | | | | | | This patch is meant to be a follow-up for a patch originally submitted under the title "ioat: Do not enable DCA if tag map is invalid". It was brought to my attention that the preferred approach for BIOS workarounds is to set the taint flag for TAINT_FIRMWARE_WORKAROUND for systems that require BIOS workarounds. This change makes it so that the DCA workarounds for broken BIOSes will now use WARN_TAINT_ONCE(1, TAINT_FIRMWARE_WORKAROUND, ...) instead of just printing a message via dev_err. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
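A hedged sketch of the reporting change (the message text and helper are illustrative):

```c
#include <linux/kernel.h>
#include <linux/pci.h>

static void report_bad_tag_map(struct pci_dev *pdev)
{
	/* Taints the kernel with TAINT_FIRMWARE_WORKAROUND and warns once,
	 * rather than only logging through dev_err(). */
	WARN_TAINT_ONCE(1, TAINT_FIRMWARE_WORKAROUND,
			"%s: broken BIOS DCA tag map, disabling DCA\n",
			dev_name(&pdev->dev));
}
```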
* Merge branch 'next' of git://git.infradead.org/users/vkoul/slave-dmaLinus Torvalds2013-02-266-136/+227
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pull slave-dmaengine updates from Vinod Koul: "This is fairly big pull by my standards as I had missed last merge window. So we have the support for device tree for slave-dmaengine, large updates to dw_dmac driver from Andy for reusing on different architectures. Along with this we have fixes on bunch of the drivers" Fix up trivial conflicts, usually due to #include line movement next to each other. * 'next' of git://git.infradead.org/users/vkoul/slave-dma: (111 commits) Revert "ARM: SPEAr13xx: Pass DW DMAC platform data from DT" ARM: dts: pl330: Add #dma-cells for generic dma binding support DMA: PL330: Register the DMA controller with the generic DMA helpers DMA: PL330: Add xlate function DMA: PL330: Add new pl330 filter for DT case. dma: tegra20-apb-dma: remove unnecessary assignment edma: do not waste memory for dma_mask dma: coh901318: set residue only if dma is in progress dma: coh901318: avoid unbalanced locking dmaengine.h: remove redundant else keyword dma: of-dma: protect list write operation by spin_lock dmaengine: ste_dma40: do not remove descriptors for cyclic transfers dma: of-dma.c: fix memory leakage dw_dmac: apply default dma_mask if needed dmaengine: ioat - fix spare sparse complain dmaengine: move drivers/of/dma.c -> drivers/dma/of-dma.c ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING dw_dmac: add support for Lynxpoint DMA controllers dw_dmac: return proper residue value dw_dmac: fill individual length of descriptor ...
| * dmaengine: ioat - fix sparse complaintFengguang Wu2013-02-131-1/+1
| | | | | | | | | | | | | | | | | | >> drivers/dma/ioat/dma_v3.c:371:6: sparse: symbol 'ioat3_timer_event' was not declared. Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDINGDave Jiang2013-02-123-97/+128
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | There is a race that can hit during __cleanup() when the ioat->head pointer is incremented during descriptor submission. The __cleanup() can clear the PENDING flag when it does not see any active descriptors. This causes new submitted descriptors to be ignored because the COMPLETION_PENDING flag is cleared. This was introduced when code was adapted from ioatdma v1 to ioatdma v2. For v2 and v3, IOAT_COMPLETION_PENDING flag will be abandoned and a new flag IOAT_CHAN_ACTIVE will be utilized. This flag will also be protected under the prep_lock when being modified in order to avoid the race. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
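A hedged sketch of the locking rule the fix establishes; the structure and flag below stand in for the driver's prep_lock, head pointer, and IOAT_CHAN_ACTIVE bit:

```c
#include <linux/spinlock.h>
#include <linux/bitops.h>
#include <linux/types.h>

struct chan_sketch {			/* illustrative stand-in */
	spinlock_t prep_lock;
	unsigned long state;
	u16 head;
};

#define CHAN_ACTIVE_BIT 0		/* stands in for IOAT_CHAN_ACTIVE */

static void publish_descs(struct chan_sketch *c, u16 num_descs)
{
	/* Setting the active flag under the same lock that advances ->head
	 * closes the window in which cleanup could clear the flag and cause
	 * newly submitted descriptors to be ignored. */
	spin_lock_bh(&c->prep_lock);
	set_bit(CHAN_ACTIVE_BIT, &c->state);
	c->head += num_descs;
	spin_unlock_bh(&c->prep_lock);
}
```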
| * ioat: remove chanerr mask setting for IOAT v3.xDave Jiang2013-01-081-6/+1
The existing code sets a value in the PCI_CHANERRMSK_INT register as a workaround for a pre-silicon bug on the Intel 5520 IO hub that was fixed by the time the hardware was released. There is no need for this code. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
| * ioat: Add alignment workaround for IVB platformsDave Jiang2013-01-083-12/+32
The PCI IDs for IvyBridge IOAT DMA need to go into a header file since dma_v3.c looks them up for certain hardware workarounds. IVB also needs to be added to the IOAT 3.2 alignment workaround since the issue wasn't fixed in IVB. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
| * ioat3: add missing DMA unmap to ioat_xor_val_self_test()Bartlomiej Zolnierkiewicz2013-01-081-17/+59
| | | | | | | | | | | | | | | | | | | | | | Make ioat_xor_val_self_test() do DMA unmapping itself and fix handling of failure cases. Cc: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
| * ioat: add missing DMA unmap to ioat_dma_self_test()Bartlomiej Zolnierkiewicz2013-01-081-4/+7
| | | | | | | | | | | | | | | | | | | | | | Make ioat_dma_self_test() do DMA unmapping itself and fix handling of failure cases. Cc: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
* | Merge branch 'fixes' of git://git.infradead.org/users/vkoul/slave-dmaLinus Torvalds2013-01-241-1/+1
|\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pull slave-dmaengine fixes from Vinod Koul: "A few fixes on slave dmanengine. There are trivial fixes in imx-dma, tegra-dma & ioat driver" * 'fixes' of git://git.infradead.org/users/vkoul/slave-dma: dma: tegra: implement flags parameters for cyclic transfer dmaengine: imx-dma: Disable use of hw_chain to fix sg_dma transfers. ioat: Fix DMA memory sync direction correct flag
| * | ioat: Fix DMA memory sync direction correct flagShuah Khan2013-01-061-1/+1
ioat does DMA memory sync with DMA_TO_DEVICE direction on a buffer allocated for DMA_FROM_DEVICE dma, resulting in the following warning from dma debug. Fixed the dma_sync_single_for_device() call to use the correct direction.

[ 226.288947] WARNING: at lib/dma-debug.c:990 check_sync+0x132/0x550()
[ 226.288948] Hardware name: ProLiant DL380p Gen8
[ 226.288951] ioatdma 0000:00:04.0: DMA-API: device driver syncs DMA memory with different direction [device address=0x00000000ffff7000] [size=4096 bytes] [mapped with DMA_FROM_DEVICE] [synced with DMA_TO_DEVICE]
[ 226.288953] Modules linked in: iTCO_wdt(+) sb_edac(+) ioatdma(+) microcode serio_raw pcspkr edac_core hpwdt(+) iTCO_vendor_support hpilo(+) dca acpi_power_meter ata_generic pata_acpi sd_mod crc_t10dif ata_piix libata hpsa tg3 netxen_nic(+) sunrpc dm_mirror dm_region_hash dm_log dm_mod
[ 226.288967] Pid: 1055, comm: work_for_cpu Tainted: G W 3.3.0-0.20.el7.x86_64 #1
[ 226.288968] Call Trace:
[ 226.288974] [<ffffffff810644cf>] warn_slowpath_common+0x7f/0xc0
[ 226.288977] [<ffffffff810645c6>] warn_slowpath_fmt+0x46/0x50
[ 226.288980] [<ffffffff81345502>] check_sync+0x132/0x550
[ 226.288983] [<ffffffff81345c9f>] debug_dma_sync_single_for_device+0x3f/0x50
[ 226.288988] [<ffffffff81661002>] ? wait_for_common+0x72/0x180
[ 226.288995] [<ffffffffa019590f>] ioat_xor_val_self_test+0x3e5/0x832 [ioatdma]
[ 226.288999] [<ffffffff811a5739>] ? kfree+0x259/0x270
[ 226.289004] [<ffffffffa0195d77>] ioat3_dma_self_test+0x1b/0x20 [ioatdma]
[ 226.289008] [<ffffffffa01952c3>] ioat_probe+0x2f8/0x348 [ioatdma]
[ 226.289011] [<ffffffffa0195f51>] ioat3_dma_probe+0x1d5/0x2aa [ioatdma]
[ 226.289016] [<ffffffffa0194d12>] ioat_pci_probe+0x139/0x17c [ioatdma]
[ 226.289020] [<ffffffff81354b8c>] local_pci_probe+0x5c/0xd0
[ 226.289023] [<ffffffff81083e50>] ? destroy_work_on_stack+0x20/0x20
[ 226.289025] [<ffffffff81083e68>] do_work_for_cpu+0x18/0x30
[ 226.289029] [<ffffffff8108d997>] kthread+0xb7/0xc0
[ 226.289033] [<ffffffff8166cef4>] kernel_thread_helper+0x4/0x10
[ 226.289036] [<ffffffff81662d20>] ? _raw_spin_unlock_irq+0x30/0x50
[ 226.289038] [<ffffffff81663234>] ? retint_restore_args+0x13/0x13
[ 226.289041] [<ffffffff8108d8e0>] ? kthread_worker_fn+0x1a0/0x1a0
[ 226.289044] [<ffffffff8166cef0>] ? gs_change+0x13/0x13
[ 226.289045] ---[ end trace e1618afc7a606089 ]---
[ 226.289047] Mapped at:
[ 226.289048] [<ffffffff81345307>] debug_dma_map_page+0x87/0x150
[ 226.289050] [<ffffffffa019653c>] dma_map_page.constprop.18+0x70/0xb34 [ioatdma]
[ 226.289054] [<ffffffffa0195702>] ioat_xor_val_self_test+0x1d8/0x832 [ioatdma]
[ 226.289058] [<ffffffffa0195d77>] ioat3_dma_self_test+0x1b/0x20 [ioatdma]
[ 226.289061] [<ffffffffa01952c3>] ioat_probe+0x2f8/0x348 [ioatdma]

Signed-off-by: Shuah Khan <shuah.khan@hp.com> CC: <stable@vger.kernel.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
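The fix itself is one direction argument; a minimal sketch (buffer and length names are illustrative): the sync direction must match the direction the buffer was mapped with.

```c
#include <linux/dma-mapping.h>

static void sync_dest_for_dma(struct device *dev, dma_addr_t dest_dma,
			      size_t len)
{
	/* dest_dma was mapped with DMA_FROM_DEVICE, so the pre-DMA sync must
	 * use the same direction, not DMA_TO_DEVICE. */
	dma_sync_single_for_device(dev, dest_dma, len, DMA_FROM_DEVICE);
}
```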
* / Drivers: dma: remove __dev* attributes.Greg Kroah-Hartman2013-01-047-33/+28
|/ | | | | | | | | | | | | | | | | | | | | | | | | | CONFIG_HOTPLUG is going away as an option. As a result, the __dev* markings need to be removed. This change removes the use of __devinit, __devexit_p, __devinitconst, and __devexit from these drivers. Based on patches originally written by Bill Pemberton, but redone by me in order to handle some of the coding style issues better, by hand. Cc: Bill Pemberton <wfp5p@virginia.edu> Cc: Viresh Kumar <viresh.linux@gmail.com> Cc: Dan Williams <djbw@fb.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Barry Song <baohua.song@csr.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Jassi Brar <jassisinghbrar@gmail.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Bill Pemberton <wfp5p@virginia.edu> Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-nextLinus Torvalds2012-12-131-0/+23
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pull networking changes from David Miller: 1) Allow to dump, monitor, and change the bridge multicast database using netlink. From Cong Wang. 2) RFC 5961 TCP blind data injection attack mitigation, from Eric Dumazet. 3) Networking user namespace support from Eric W. Biederman. 4) tuntap/virtio-net multiqueue support by Jason Wang. 5) Support for checksum offload of encapsulated packets (basically, tunneled traffic can still be checksummed by HW). From Joseph Gasparakis. 6) Allow BPF filter access to VLAN tags, from Eric Dumazet and Daniel Borkmann. 7) Bridge port parameters over netlink and BPDU blocking support from Stephen Hemminger. 8) Improve data access patterns during inet socket demux by rearranging socket layout, from Eric Dumazet. 9) TIPC protocol updates and cleanups from Ying Xue, Paul Gortmaker, and Jon Maloy. 10) Update TCP socket hash sizing to be more in line with current day realities. The existing heurstics were choosen a decade ago. From Eric Dumazet. 11) Fix races, queue bloat, and excessive wakeups in ATM and associated drivers, from Krzysztof Mazur and David Woodhouse. 12) Support DOVE (Distributed Overlay Virtual Ethernet) extensions in VXLAN driver, from David Stevens. 13) Add "oops_only" mode to netconsole, from Amerigo Wang. 14) Support set and query of VEB/VEPA bridge mode via PF_BRIDGE, also allow DCB netlink to work on namespaces other than the initial namespace. From John Fastabend. 15) Support PTP in the Tigon3 driver, from Matt Carlson. 16) tun/vhost zero copy fixes and improvements, plus turn it on by default, from Michael S. Tsirkin. 17) Support per-association statistics in SCTP, from Michele Baldessari. And many, many, driver updates, cleanups, and improvements. Too numerous to mention individually. * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1722 commits) net/mlx4_en: Add support for destination MAC in steering rules net/mlx4_en: Use generic etherdevice.h functions. net: ethtool: Add destination MAC address to flow steering API bridge: add support of adding and deleting mdb entries bridge: notify mdb changes via netlink ndisc: Unexport ndisc_{build,send}_skb(). uapi: add missing netconf.h to export list pkt_sched: avoid requeues if possible solos-pci: fix double-free of TX skb in DMA mode bnx2: Fix accidental reversions. bna: Driver Version Updated to 3.1.2.1 bna: Firmware update bna: Add RX State bna: Rx Page Based Allocation bna: TX Intr Coalescing Fix bna: Tx and Rx Optimizations bna: Code Cleanup and Enhancements ath9k: check pdata variable before dereferencing it ath5k: RX timestamp is reported at end of frame ath9k_htc: RX timestamp is reported at end of frame ...
| * ioat: Do not enable DCA if tag map is invalidAlexander Duyck2012-11-151-0/+23
I have encountered several systems that have an invalid tag map. This invalid tag map results in only two tags being generated: 0x1F, which is usually applied to the first core in a Hyper-threaded pair, and 0x00, which is applied to the second core in a Hyper-threaded pair. The net result is that DCA causes significant cache thrash because the 0x1F tag will send traffic to the second socket, while the 0x00 tag will not DCA tag the frame at all, resulting in no benefit. For now the best solution from the driver's perspective is to just disable DCA if the tag map is invalid. The correct solution would be to have the BIOS on affected systems updated so that the correct tags are generated for a given APIC ID. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Stephen Ko <stephen.s.ko@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
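A hypothetical sketch of what "disable DCA if the tag map is invalid" can look like; the test below (rejecting a map that only ever yields 0x00 and 0x1F) illustrates the symptom described above and is not the driver's actual check:

```c
#include <linux/types.h>

static bool tag_map_is_invalid(const u8 *tag_map, int len)
{
	int i;

	/* A broken BIOS map only produces the two values described above;
	 * anything else suggests a sane per-APIC-ID mapping. */
	for (i = 0; i < len; i++)
		if (tag_map[i] != 0x00 && tag_map[i] != 0x1f)
			return false;

	return true;
}
```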