path: root/drivers/dma/idxd/dma.c
* dmaengine: idxd: add wq driver name support for accel-config user tool (Dave Jiang, 2023-10-04, 1 file, -0/+6)
  With the possibility of multiple wq drivers that can be bound to the wq, the user config tool accel-config needs a way to know which wq driver to bind to the wq. Introduce per wq driver_name sysfs attribute where the user can indicate the driver to be bound to the wq. This allows accel-config to just bind to the driver using wq->driver_name.
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
  Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
  Acked-by: Vinod Koul <vkoul@kernel.org>
  Link: https://lore.kernel.org/r/20230908201045.4115614-1-fenghua.yu@intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
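  A per-wq sysfs attribute of this kind is typically wired up with a show/store pair. The following is only a rough sketch under assumed names (the idxd_wq fields, IDXD_WQ_DISABLED state check, and the confdev_to_wq() helper are assumptions, not the exact patch):

      /* Sketch of a per-wq driver_name attribute; names are assumptions. */
      static ssize_t wq_driver_name_show(struct device *dev,
                                         struct device_attribute *attr, char *buf)
      {
              struct idxd_wq *wq = confdev_to_wq(dev);        /* assumed helper */

              return sysfs_emit(buf, "%s\n", wq->driver_name);
      }

      static ssize_t wq_driver_name_store(struct device *dev,
                                          struct device_attribute *attr,
                                          const char *buf, size_t count)
      {
              struct idxd_wq *wq = confdev_to_wq(dev);

              if (wq->state != IDXD_WQ_DISABLED)              /* assumed state check */
                      return -EPERM;

              strscpy(wq->driver_name, buf, sizeof(wq->driver_name));
              strreplace(wq->driver_name, '\n', '\0');        /* drop trailing newline */
              return count;
      }
      static DEVICE_ATTR_RW(wq_driver_name);

  accel-config can then read the attribute back and bind the wq to the named driver instead of guessing among the registered wq drivers.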
* dmaengine/idxd: Re-enable kernel workqueue under DMA API (Jacob Pan, 2023-08-09, 1 file, -2/+3)
  Kernel workqueues were disabled due to flawed use of kernel VA and SVA API. Now that we have the support for attaching PASID to the device's default domain and the ability to reserve global PASIDs from SVA APIs, we can re-enable the kernel work queues and use them under DMA API.
  We also use non-privileged access for in-kernel DMA to be consistent with the IOMMU settings. Consequently, interrupt for user privilege is enabled for work completion IRQs.
  Link: https://lore.kernel.org/linux-iommu/20210511194726.GP1002214@nvidia.com/
  Tested-by: Tony Zhu <tony.zhu@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
  Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
  Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
  Acked-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
  Link: https://lore.kernel.org/r/20230802212427.1497170-9-jacob.jun.pan@linux.intel.com
  Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
  Signed-off-by: Joerg Roedel <jroedel@suse.de>
* dmaengine: idxd: Remove the unused function set_completion_address() (Jiapeng Chong, 2022-12-28, 1 file, -6/+0)
  The function set_completion_address is defined in the dma.c file, but not called elsewhere, so remove this unused function.
  drivers/dma/idxd/dma.c:66:20: warning: unused function 'set_completion_address'.
  Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3416
  Reported-by: Abaci Robot <abaci@linux.alibaba.com>
  Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
  Acked-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20221212033514.5831-1-jiapeng.chong@linux.alibaba.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: add missing callback function to support DMA_INTERRUPT (Dave Jiang, 2022-05-16, 1 file, -0/+22)
  When setting the DMA_INTERRUPT capability, a callback function dma->device_prep_dma_interrupt() is needed to support this capability. Without setting the callback, dma_async_device_register() will fail the dma capability check.
  Fixes: 4e5a4eb20393 ("dmaengine: idxd: set DMA_INTERRUPT cap bit")
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/165101232637.3951447.15765792791591763119.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
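  The pattern being described, in sketch form (the no-op descriptor helper is hypothetical, but the callback signature and capability flag are the standard dmaengine ones):

      /* Advertising DMA_INTERRUPT requires a matching prep callback,
       * otherwise dma_async_device_register() rejects the device. */
      static struct dma_async_tx_descriptor *
      idxd_dma_prep_interrupt(struct dma_chan *c, unsigned long flags)
      {
              /* Build a no-op descriptor whose only job is to raise a
               * completion interrupt (helper name is made up). */
              return idxd_dma_prep_noop(c, flags);
      }

      static void idxd_setup_interrupt_cap(struct dma_device *dma)
      {
              dma_cap_set(DMA_INTERRUPT, dma->cap_mask);
              dma->device_prep_dma_interrupt = idxd_dma_prep_interrupt;
      }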
* dmaengine: idxd: make idxd_register/unregister_dma_channel() static (Dave Jiang, 2022-05-16, 1 file, -2/+2)
  Since idxd_register/unregister_dma_channel() are only called locally, make them static.
  Reported-by: kernel test robot <lkp@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/165187583222.3287435.12882651040433040246.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: refactor wq driver enable/disable operations (Dave Jiang, 2022-04-22, 1 file, -35/+3)
  Move the core driver operations from wq driver to the drv_enable_wq() and drv_disable_wq() functions. The move should reduce the wq driver's knowledge of the core driver operations and prevent code confusion for future wq drivers.
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/165047301643.3841827.11222723219862233060.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: move wq irq enabling to after device enable (Dave Jiang, 2022-04-20, 1 file, -9/+9)
  Defer the calling of request_irq() and other related irq setup code until after the WQ is successfully enabled. This reduces the amount of setup/teardown if the wq is not configured correctly and cannot be enabled.
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/164642777730.179702.1880317757087484299.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: set DMA_INTERRUPT cap bit (Dave Jiang, 2022-04-20, 1 file, -0/+1)
  Even though the idxd driver has always supported interrupts, it never actually set the DMA_INTERRUPT cap bit. Rectify this mistake so the interrupt capability is advertised.
  Reported-by: Ben Walker <benjamin.walker@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/164971497859.2201379.17925303210723708961.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: change MSIX allocation based on per wq activation (Dave Jiang, 2022-01-05, 1 file, -0/+12)
  Change the driver so that the WQ interrupt is requested only when the wq is being enabled. This new scheme sets things up so that request_threaded_irq() is only called when a kernel wq type is being enabled. This also sets up for future interrupt requests where a different interrupt handler, such as a wq occupancy interrupt, can be set up instead of the wq completion interrupt.
  Not calling request_irq() until the WQ actually needs an irq also prevents wasting of CPU irq vectors on x86 systems, which is a limited resource.
  idxd_flush_pending_descs() is moved to device.c since descriptor flushing is now part of wq disable rather than shutdown().
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/163942149487.2412839.6691222855803875848.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
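  Roughly, the irq request moves into the wq enable path and only happens for kernel wq types; a sketch with assumed handler and field names:

      /* Request the completion irq only when a kernel wq is enabled, so an
       * unconfigured wq never consumes one of the limited CPU irq vectors. */
      static int idxd_wq_request_irq(struct idxd_wq *wq)
      {
              struct idxd_irq_entry *ie = wq->ie;     /* assumed per-wq irq entry */

              /* No quick handler: all work happens in the threaded handler.
               * MSI-X vectors are one-shot safe, so IRQF_ONESHOT is not
               * required here. Handler name is an assumption. */
              return request_threaded_irq(ie->vector, NULL, idxd_wq_thread,
                                          0, "idxd-wq", ie);
      }

  The disable path frees the vector again with free_irq(), which is what makes the per-wq allocation scheme possible.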
* dmaengine: idxd: handle invalid interrupt handle descriptors (Dave Jiang, 2021-11-22, 1 file, -4/+10)
  Handle a descriptor that has been marked with an invalid interrupt handle error in its status. Create a work item that will resubmit the descriptor. This typically happens when the driver has handled the revoke interrupt handle event and has a new interrupt handle.
  Reviewed-by: Kevin Tian <kevin.tian@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/163528419601.3925689.4166517602890523193.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: create locked version of idxd_quiesce() call (Dave Jiang, 2021-11-22, 1 file, -2/+2)
  Add a locked version of idxd_quiesce() call so that the quiesce can be called with a lock in situations where the lock is not held by the caller. In the driver probe/remove path, the lock is already held, so the raw version can be called w/o locking.
  Reviewed-by: Kevin Tian <kevin.tian@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/163528418980.3925689.5841907054957931211.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: rework descriptor free path on failure (Dave Jiang, 2021-11-22, 1 file, -2/+8)
  Refactor the completion function to allow skipping of descriptor freeing on the submission failure path. This completely removes descriptor freeing from the submit failure path and leaves the responsibility to the caller.
  Reviewed-by: Kevin Tian <kevin.tian@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/163528416222.3925689.12859769271667814762.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: fix resource leak on dmaengine driver disable (Dave Jiang, 2021-10-28, 1 file, -2/+1)
  The wq resources need to be released before the kernel type is reset by __drv_disable_wq(). With dma channels unregistered and wq quiesced, all the wq resources for dmaengine can be freed. There is no need to wait until wq is disabled. With the wq->type being reset to "unknown", the driver is skipping the freeing of the resources.
  Fixes: 0cda4f6986a3 ("dmaengine: idxd: create dmaengine driver for wq 'device'")
  Reported-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
  Tested-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/163517405099.3484556.12521975053711345244.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: move out percpu_ref_exit() to ensure it's outside submission (Dave Jiang, 2021-10-01, 1 file, -0/+2)
  percpu_ref_tryget_live() is safe to call as long as ref is between init and exit according to the function comment. Move percpu_ref_exit() so it is called after the dma channel is no longer valid to ensure this holds true.
  Fixes: 93a40a6d7428 ("dmaengine: idxd: add percpu_ref to descriptor submission path")
  Suggested-by: Kevin Tian <kevin.tian@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/163294293832.914350.10326422026738506152.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
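  The ordering constraint can be sketched as follows; the quiesce/unregister helper names and the wq_active field are assumptions drawn from the surrounding entries, while the percpu_ref rule itself is the documented one:

      /* percpu_ref_tryget_live() is only safe between percpu_ref_init() and
       * percpu_ref_exit(), so exit the ref only after the dma channel can no
       * longer reach the submission path. */
      static void idxd_dmaengine_teardown(struct idxd_wq *wq)    /* simplified sketch */
      {
              idxd_wq_quiesce(wq);              /* kill the ref and drain in-flight submitters */
              idxd_unregister_dma_channel(wq);  /* channel can no longer reach the submit path */
              percpu_ref_exit(&wq->wq_active);  /* exit only after the channel is gone */
      }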
* dmaengine: idxd: fix setting up priv mode for dwq (Dave Jiang, 2021-08-29, 1 file, -1/+5)
  The DSA spec says the WQ priv bit is 0 if the Privileged Mode Enable field of the PCI Express PASID capability is 0 and pasid is enabled. Make sure that the WQCFG priv field is set correctly according to usage type. Reject the config if a kernel WQ type is being set up and that support is missing. Also add the correct priv setup for a descriptor.
  Fixes: 484f910e93b4 ("dmaengine: idxd: fix wq config registers offset programming")
  Cc: Ramesh Thomas <ramesh.thomas@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/162939084657.903168.14160019185148244596.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: make submit failure path consistent on desc freeing (Dave Jiang, 2021-08-25, 1 file, -3/+1)
  The submission path for the dmaengine API does not do descriptor freeing on failure. Also, with the abort mechanism, the freeing of the descriptor happens when the abort callback is completed. Therefore free the descriptor on all error paths for the submission call to make things consistent. Also remove the double free that would happen on abort in the idxd_dma_tx_submit() call.
  Fixes: 6b4b87f2c31a ("dmaengine: idxd: fix submission race window")
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/162827146072.3459011.10255348500504659810.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: add software command status (Dave Jiang, 2021-07-28, 1 file, -0/+4)
  Enabling the device and wq returns a standard errno and that does not provide enough detail to indicate what exactly failed. The hardware command status is only 8 bits. Expand the command status to 32 bits and use the upper 16 bits to define software errors to provide more details on the exact failure. Bit 31 will be used to indicate the error is software set, as the driver is using some of the spec-defined hardware errors as well.
  Cc: Ramesh Thomas <ramesh.thomas@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/162681373579.1968485.5891788397526827892.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
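  Conceptually the widened status packs into a single u32 like this; the macro and helper names below are invented for illustration, the real definitions live in the idxd headers:

      #define CMDSTS_HW_ERR_MASK     GENMASK(7, 0)   /* spec-defined hardware status  */
      #define CMDSTS_SW_ERR_SHIFT    16              /* driver-defined error codes    */
      #define CMDSTS_SW_ERR_BIT      BIT(31)         /* set when the error is software */

      /* Hypothetical helper: build a software-set command status value. */
      static inline u32 idxd_sw_cmdsts(u16 sw_err)
      {
              return CMDSTS_SW_ERR_BIT | ((u32)sw_err << CMDSTS_SW_ERR_SHIFT);
      }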
* dmaengine: idxd: move dsa_drv support to compatible mode (Dave Jiang, 2021-07-21, 1 file, -0/+1)
  The original architecture of /sys/bus/dsa invented a scheme whereby a single entry in the list of bus drivers, /sys/bus/drivers/dsa, handled all device types and internally routed them to different drivers. Those internal drivers were invisible to userspace.
  With the idxd driver transitioned to a proper bus device-driver model, the legacy behavior needs to be preserved due to it being exposed to user space via sysfs. Create a compat driver to provide the legacy behavior for /sys/bus/dsa/drivers/dsa. This should satisfy the user tool accel-config v3.2 or earlier where this behavior is expected. If the distro has a newer accel-config then the legacy mode does not need to be enabled.
  When the compat driver binds the device (i.e. dsa0) to the dsa driver, it will be bound to the new idxd_drv. The wq device (i.e. wq0.0) will be bound to either the dmaengine_drv or the user_drv. The dsa_drv becomes a routing mechanism for the new drivers. It will not support additional external drivers that are implemented later.
  Reviewed-by: Dan Williams <dan.j.williams@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/162637468705.744545.4399080971745974435.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: create dmaengine driver for wq 'device' (Dave Jiang, 2021-07-21, 1 file, -0/+77)
  The original architecture of /sys/bus/dsa invented a scheme whereby a single entry in the list of bus drivers, /sys/bus/drivers/dsa, handled all device types and internally routed them to different drivers. Those internal drivers were invisible to userspace.
  Now, as /sys/bus/dsa wants to grow support for alternate drivers for a given device, for example vfio-mdev instead of kernel-internal-dmaengine, a proper bus device-driver model is needed. The first step in that process is separating the existing omnibus/implicit "dsa" driver into proper individual drivers registered on /sys/bus/dsa.
  Establish the idxd_dmaengine_drv driver that controls the enabling and disabling of the wq and also registers and unregisters the dma channel. idxd_wq_alloc_resources() and idxd_wq_free_resources() also get moved to the dmaengine driver. The resources (dma descriptor allocation and setup) are only used by the dmaengine driver and should only happen when it loads.
  The char dev driver (cdev) related bits are left in the __drv_enable_wq() and __drv_disable_wq() calls to be moved when we split out the char dev driver, just like how the dmaengine driver is split out.
  WQ autoload support is not expected currently. With the amount of configuration needed for the device, the wq is always expected to be enabled by a tool (or via sysfs) rather than auto enabled at driver load.
  Reviewed-by: Dan Williams <dan.j.williams@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/162637467033.744545.12330636655625405394.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: add 'struct idxd_dev' as wrapper for conf_dev (Dave Jiang, 2021-07-21, 1 file, -2/+2)
  Add a 'struct idxd_dev' that wraps the 'struct device' for idxd conf_dev that registers with the dsa bus. This is introduced in order to deal with multiple different types of 'devices' that are registered on the dsa_bus when the compat driver needs to route them to the correct driver to attach. The bind() call now can determine the type of device and then do the appropriate driver matching.
  Reviewed-by: Dan Williams <dan.j.williams@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/162637460065.744545.584492831446090984.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
* dmaengine: idxd: fix dma device lifetime (Dave Jiang, 2021-04-20, 1 file, -13/+64)
  The devm managed lifetime is incompatible with 'struct device' objects that reside in idxd context. This is part of a series that cleans up the idxd driver 'struct device' lifetime. Remove the embedding of dma_device and dma_chan in idxd since it's not the only interface that idxd will use. The freeing of the dma_device will be managed by the ->release() function.
  Reported-by: Jason Gunthorpe <jgg@nvidia.com>
  Fixes: bfe1d56091c1 ("dmaengine: idxd: Init and probe for Intel data accelerators")
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Reviewed-by: Dan Williams <dan.j.williams@intel.com>
  Link: https://lore.kernel.org/r/161852983001.2203940.14817017492384561719.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
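  The shape of the fix is the standard dma_device release pattern: embed the dma_device in a separately allocated container and free it from a release callback instead of relying on devm. A sketch, with the container layout assumed:

      struct idxd_dma_dev {                  /* assumed container layout */
              struct idxd_device *idxd;
              struct dma_device dma;
      };

      /* Called by the dmaengine core once the last reference is dropped. */
      static void idxd_dma_release(struct dma_device *device)
      {
              struct idxd_dma_dev *idxd_dma =
                      container_of(device, struct idxd_dma_dev, dma);

              kfree(idxd_dma);
      }

      /* at setup time, before dma_async_device_register(): */
      idxd_dma->dma.device_release = idxd_dma_release;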
*   Merge tag 'dmaengine-5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine (Linus Torvalds, 2021-02-24, 1 file, -0/+1)
  Pull dmaengine updates from Vinod Koul:
  "We have couple of drivers removed, a new driver and bunch of new device support and few updates to drivers for this round.
  New drivers/devices:
   - Intel LGM SoC DMA driver
   - Actions Semi S500 DMA controller
   - Renesas r8a779a0 dma controller
   - Ingenic JZ4760(B) dma controller
   - Intel KeemBay AxiDMA controller
  Removed:
   - Coh901318 dma driver
   - Zte zx dma driver
   - Sirfsoc dma driver
  Updates:
   - mmp_pdma, mmp_tdma gained module support
   - imx-sdma become modern and dropped platform data support
   - dw-axi driver gained slave and cyclic dma support"
  * tag 'dmaengine-5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (58 commits)
    dmaengine: dw-axi-dmac: remove redundant null check on desc
    dmaengine: xilinx_dma: Alloc tx descriptors GFP_NOWAIT
    dmaengine: dw-axi-dmac: Virtually split the linked-list
    dmaengine: dw-axi-dmac: Set constraint to the Max segment size
    dmaengine: dw-axi-dmac: Add Intel KeemBay AxiDMA BYTE and HALFWORD registers
    dmaengine: dw-axi-dmac: Add Intel KeemBay AxiDMA handshake
    dmaengine: dw-axi-dmac: Add Intel KeemBay AxiDMA support
    dmaengine: drivers: Kconfig: add HAS_IOMEM dependency to DW_AXI_DMAC
    dmaengine: dw-axi-dmac: Add Intel KeemBay DMA register fields
    dt-binding: dma: dw-axi-dmac: Add support for Intel KeemBay AxiDMA
    dmaengine: dw-axi-dmac: Support burst residue granularity
    dmaengine: dw-axi-dmac: Support of_dma_controller_register()
    dmaegine: dw-axi-dmac: Support device_prep_dma_cyclic()
    dmaengine: dw-axi-dmac: Support device_prep_slave_sg
    dmaengine: dw-axi-dmac: Add device_config operation
    dmaengine: dw-axi-dmac: Add device_synchronize() callback
    dmaengine: dw-axi-dmac: move dma_pool_create() to alloc_chan_resources()
    dmaengine: dw-axi-dmac: simplify descriptor management
    dt-bindings: dma: Add YAML schemas for dw-axi-dmac
    dmaengine: ti: k3-psil: optimize struct psil_endpoint_config for size
    ...
| * dmaengine: idxd: set DMA channel to be private (Dave Jiang, 2021-01-17, 1 file, -0/+1)
  Add DMA_PRIVATE attribute flag to idxd DMA channels. The dedicated WQs are expected to be used by a single client and not shared. While doing NTB testing this mistake was discovered, which prevented ntb_transport from requesting DSA wqs as DMA channels via dma_request_channel().
  Reported-by: Srinijia Kambham <srinija.kambham@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Tested-by: Srinijia Kambham <srinija.kambham@intel.com>
  Fixes: 8f47d1a5e545 ("dmaengine: idxd: connect idxd to dmaengine subsystem")
  Link: https://lore.kernel.org/r/161074758743.2184057.3388557138816350980.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
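  The change itself is a single capability flag on the dma_device, which marks the dedicated-WQ channels for exclusive use; a sketch of the provider side plus how a consumer such as ntb_transport would then grab a channel:

      /* provider side (idxd), at dma device setup */
      dma_cap_set(DMA_PRIVATE, dma->cap_mask);

      /* consumer side, requesting any memcpy-capable channel */
      dma_cap_mask_t mask;
      struct dma_chan *chan;

      dma_cap_zero(mask);
      dma_cap_set(DMA_MEMCPY, mask);
      chan = dma_request_channel(mask, NULL, NULL);   /* NULL filter: take any match */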
* | dmaengine: move channel device_node deletion to driver (Dave Jiang, 2021-01-19, 1 file, -1/+4)
  Channel device_node deletion is managed by the device driver rather than the dmaengine core. The deletion was accidentally introduced when making channel unregister dynamic. It causes xilinx_dma module to crash on unload as reported by Radhey. Remove chan->device_node delete in dmaengine and also fix up idxd driver.
  [ 42.142705] Internal error: Oops: 96000044 [#1] SMP
  [ 42.147566] Modules linked in: xilinx_dma(-) clk_xlnx_clock_wizard uio_pdrv_genirq
  [ 42.155139] CPU: 1 PID: 2075 Comm: rmmod Not tainted 5.10.1-00026-g3a2e6dd7a05-dirty #192
  [ 42.163302] Hardware name: Enclustra XU5 SOM (DT)
  [ 42.167992] pstate: 40000005 (nZcv daif -PAN -UAO -TCO BTYPE=--)
  [ 42.173996] pc : xilinx_dma_chan_remove+0x74/0xa0 [xilinx_dma]
  [ 42.179815] lr : xilinx_dma_chan_remove+0x70/0xa0 [xilinx_dma]
  [ 42.185636] sp : ffffffc01112bca0
  [ 42.188935] x29: ffffffc01112bca0 x28: ffffff80402ea640
  xilinx_dma_chan_remove+0x74/0xa0:
  __list_del at ./include/linux/list.h:112
  (inlined by) __list_del_entry at ./include/linux/list.h:135
  (inlined by) list_del at ./include/linux/list.h:146
  (inlined by) xilinx_dma_chan_remove at drivers/dma/xilinx/xilinx_dma.c:2546
  Fixes: e81274cd6b52 ("dmaengine: add support to dynamic register/unregister of channels")
  Reported-by: Radhey Shyam Pandey <radheys@xilinx.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Tested-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
  Link: https://lore.kernel.org/r/161099092469.2495902.5064826526660062342.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Cc: stable@vger.kernel.org # 5.9+
* dmaengine: idxd: Add shared workqueue support (Dave Jiang, 2020-10-30, 1 file, -9/+0)
  Add shared workqueue support that includes the support of Shared Virtual Memory (SVM) or, in similar terms, On Demand Paging (ODP). The shared workqueue uses the enqcmds instruction in the kernel and will respond with a retry if the workqueue is full. Shared workqueues only work when there is PASID support from the IOMMU.
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Reviewed-by: Tony Luck <tony.luck@intel.com>
  Reviewed-by: Dan Williams <dan.j.williams@intel.com>
  Link: https://lore.kernel.org/r/160382007499.3911367.26043087963708134.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
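  Submission to a shared WQ uses ENQCMDS, which reports non-acceptance instead of blocking, so the kernel submitter retries; a rough sketch with the portal and descriptor handling elided:

      int rc;

      do {
              /* enqcmds() returns 0 when the descriptor was accepted and
               * -EAGAIN when the shared WQ was full and it was not taken. */
              rc = enqcmds(portal, desc);
              if (rc < 0)
                      cpu_relax();
      } while (rc < 0);

  Whether to spin like this, back off, or propagate -EAGAIN to the caller is a policy choice for the submitter; the sketch only shows the retry-on-full behaviour the entry describes.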
* dmaengine: cookie bypass for out of order completion (Dave Jiang, 2020-06-17, 1 file, -1/+2)
  The cookie tracking in dmaengine expects all submissions to be completed in order. Some DMA devices like Intel DSA can complete submissions out of order, especially if configured with a work queue sharing multiple DMA engines. Add a status DMA_OUT_OF_ORDER that tx_status can return for those DMA devices. The user should use callbacks to track the completion rather than the DMA cookie. This would address the issue of dmatest complaining that descriptors are "busy" when the cookie count goes backwards due to out of order completion.
  Add the DMA_COMPLETION_NO_ORDER DMA capability to allow the driver to flag the device's ability to complete operations out of order.
  Reported-by: Swathi Kovvuri <swathi.kovvuri@intel.com>
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Tested-by: Swathi Kovvuri <swathi.kovvuri@intel.com>
  Link: https://lore.kernel.org/r/158939557151.20335.12404113976045569870.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
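  For a device like this, the tx_status hook stops pretending to know cookie order and the capability is advertised at registration; a minimal sketch:

      static enum dma_status idxd_dma_tx_status(struct dma_chan *dma_chan,
                                                dma_cookie_t cookie,
                                                struct dma_tx_state *txstate)
      {
              /* Completions may arrive out of order, so cookie-based progress
               * is meaningless; callers must use the completion callback. */
              return DMA_OUT_OF_ORDER;
      }

      /* at dma device setup */
      dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask);
      dma->device_tx_status = idxd_dma_tx_status;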
* dmaengine: idxd: connect idxd to dmaengine subsystem (Dave Jiang, 2020-01-24, 1 file, -0/+217)
  Add plumbing for the dmaengine subsystem connection. The driver registers a DMA device per DSA device. The channels are dynamically registered when a workqueue is configured to be the "kernel:dmaengine" type.
  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/157965026376.73301.13867988830650740445.stgit@djiang5-desk3.ch.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
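  At a high level the plumbing looks like the following sketch: one dma_device is registered per idxd/DSA device, and a channel is registered dynamically when a kernel dmaengine-type wq is enabled. The helper and field names here are assumptions, not the exact driver code; the dmaengine calls themselves (dma_async_device_register() and the dynamic channel registration helper added alongside this work) are the standard ones:

      static int idxd_register_dma_device(struct idxd_device *idxd)
      {
              struct dma_device *dma = &idxd->idxd_dma->dma;       /* assumed layout */

              INIT_LIST_HEAD(&dma->channels);
              dma->dev = &idxd->pdev->dev;

              dma_cap_set(DMA_MEMCPY, dma->cap_mask);
              dma->device_prep_dma_memcpy = idxd_dma_prep_memcpy;             /* assumed */
              dma->device_alloc_chan_resources = idxd_dma_alloc_chan_resources;
              dma->device_free_chan_resources = idxd_dma_free_chan_resources;
              dma->device_tx_status = idxd_dma_tx_status;
              dma->device_issue_pending = idxd_dma_issue_pending;

              return dma_async_device_register(dma);
      }

      /* later, when a kernel dmaengine-type wq is enabled: */
      static int idxd_register_dma_channel(struct idxd_wq *wq)
      {
              struct dma_chan *chan = &wq->idxd_chan->chan;        /* assumed layout */
              struct dma_device *dma = &wq->idxd->idxd_dma->dma;

              chan->device = dma;
              list_add_tail(&chan->device_node, &dma->channels);
              return dma_async_device_channel_register(dma, chan);
      }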