path: root/drivers/dma
Commit message  (Author, Date, Files changed, Lines removed/added)
* remove lots of IS_ERR_VALUE abuses  (Arnd Bergmann, 2016-05-28, 1 file, -8/+8)

    Most users of IS_ERR_VALUE() in the kernel are wrong, as they pass an
    'int' into a function that takes an 'unsigned long' argument. This
    happens to work because the type is sign-extended on 64-bit
    architectures before it gets converted into an unsigned type.

    However, anything that passes an 'unsigned short' or 'unsigned int'
    argument into IS_ERR_VALUE() is guaranteed to be broken, as are 8-bit
    integers and types that are wider than 'unsigned long'.

    Andrzej Hajda has already fixed a lot of the worst abusers that were
    causing actual bugs, but it would be nice to prevent any users that are
    not passing 'unsigned long' arguments.

    This patch changes all users of IS_ERR_VALUE() that I could find on
    32-bit ARM randconfig builds and x86 allmodconfig. For the moment, this
    doesn't change the definition of IS_ERR_VALUE() because there are
    probably still architecture specific users elsewhere.

    Almost all the warnings I got are for files that are better off using
    'if (err)' or 'if (err < 0)'. The only legitimate user I could find
    that we get a warning for is the (32-bit only) freescale fman driver,
    so I did not remove the IS_ERR_VALUE() there but changed the type to
    'unsigned long'. For 9pfs, I just worked around one user whose calling
    conventions are so obscure that I did not dare change the behavior.

    I was using this definition for testing:

        #define IS_ERR_VALUE(x) ((unsigned long*)NULL == (typeof (x)*)NULL && \
                unlikely((unsigned long long)(x) >= (unsigned long long)(typeof(x))-MAX_ERRNO))

    which ends up making all 16-bit or wider types work correctly with the
    most plausible interpretation of what IS_ERR_VALUE() was supposed to
    return according to its users, but also causes a compile-time warning
    for any users that do not pass an 'unsigned long' argument.

    I suggested this approach earlier this year, but back then we ended up
    deciding to just fix the users that are obviously broken. After the
    initial warning that caused me to get involved in the discussion
    (fs/gfs2/dir.c) showed up again in the mainline kernel, Linus asked me
    to send the whole thing again.

    [ Updated the 9p parts as per Al Viro - Linus ]

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Cc: Andrzej Hajda <a.hajda@samsung.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Link: https://lkml.org/lkml/2016/1/7/363
    Link: https://lkml.org/lkml/2016/5/27/486
    Acked-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> # For nvmem part
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
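    For illustration only (not code from any file this patch touches): a
    minimal sketch of the broken pattern and of the plain error check the
    commit message recommends instead.

        #include <linux/err.h>
        #include <linux/errno.h>

        /* Broken: the error value is truncated on assignment, so the
         * comparison against (unsigned long)-MAX_ERRNO inside
         * IS_ERR_VALUE() can never match and real errors slip through. */
        static int broken_check(int err)
        {
                unsigned short rc = err;        /* e.g. -EINVAL becomes 0xffea */

                if (IS_ERR_VALUE(rc))           /* never true */
                        return -EIO;
                return 0;
        }

        /* Preferred for plain error codes: */
        static int better_check(int err)
        {
                if (err < 0)
                        return err;
                return 0;
        }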
* Merge tag 'dmaengine-4.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma  (Linus Torvalds, 2016-05-19, 32 files, -1073/+4730)

    Pull dmaengine updates from Vinod Koul:
     "This time round the update brings in following changes:

      - new tegra driver for ADMA device

      - support for Xilinx AXI Direct Memory Access Engine and Xilinx AXI
        Central Direct Memory Access Engine and few updates to this driver

      - new cyclic capability to sun6i and few updates

      - slave-sg support in bcm2835

      - updates to many drivers like designware, hsu, mv_xor, pxa, edma,
        qcom_hidma & bam"

    * tag 'dmaengine-4.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (84 commits)
      dmaengine: ioatdma: disable relaxed ordering for ioatdma
      dmaengine: of_dma: approximate an average distribution
      dmaengine: core: Use IS_ENABLED() instead of checking for built-in or module
      dmaengine: edma: Re-evaluate errors when ccerr is triggered w/o error event
      dmaengine: qcom_hidma: add support for object hierarchy
      dmaengine: qcom_hidma: add debugfs hooks
      dmaengine: qcom_hidma: implement lower level hardware interface
      dmaengine: vdma: Add clock support
      Documentation: DT: vdma: Add clock support for dmas
      dmaengine: vdma: Add config structure to differentiate dmas
      MAINTAINERS: Update Tegra DMA maintainers
      dmaengine: tegra-adma: Add support for Tegra210 ADMA
      Documentation: DT: Add binding documentation for NVIDIA ADMA
      dmaengine: vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
      Documentation: DT: vdma: update binding doc for AXI CDMA
      dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine
      Documentation: DT: vdma: update binding doc for AXI DMA
      dmaengine: vdma: Rename xilinx_vdma_ prefix to xilinx_dma
      dmaengine: slave means at least one of DMA_SLAVE, DMA_CYCLIC
      dmaengine: mv_xor: Allow selecting mv_xor for mvebu only compatible SoC
      ...
| * Merge branch 'topic/xilinx' into for-linus  (Vinod Koul, 2016-05-17, 2 files, -368/+1297)
| |\
| | * dmaengine: vdma: Add clock support  (Kedareswara rao Appana, 2016-05-13, 1 file, -2/+224)

    Added basic clock support for the AXI DMAs. The clocks are requested at
    probe and released at remove.

    Reviewed-by: Shubhrajyoti Datta <shubhraj@xilinx.com>
    Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * dmaengine: vdma: Add config structure to differentiate dmas  (Kedareswara rao Appana, 2016-05-13, 1 file, -32/+51)

    This patch adds a config structure in the driver to differentiate the
    AXI DMAs and to add more features (clock support etc.) to these DMAs.

    Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * dmaengine: vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine  (Kedareswara rao Appana, 2016-05-12, 1 file, -2/+234)

    This patch adds support for the AXI Central Direct Memory Access
    (AXI CDMA) core to the existing vdma driver. AXI CDMA is a soft Xilinx
    IP core that provides high-bandwidth Direct Memory Access (DMA) between
    a memory-mapped source address and a memory-mapped destination address.

    Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine  (Kedareswara rao Appana, 2016-05-12, 1 file, -42/+432)

    This patch adds support for the AXI Direct Memory Access (AXI DMA) core
    to the existing vdma driver. The AXI DMA core is a soft Xilinx IP core
    that provides high-bandwidth direct memory access between memory and
    AXI4-Stream type target peripherals.

    Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * dmaengine: vdma: Rename xilinx_vdma_ prefix to xilinx_dma  (Kedareswara rao Appana, 2016-05-12, 1 file, -316/+316)

    This patch renames the xilinx_vdma_ prefix to xilinx_dma for the APIs
    and masks that will be shared between the three DMA IP cores.

    Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * dmaengine: vdma: Use dma_pool_zalloc  (Julia Lawall, 2016-05-03, 1 file, -2/+1)

    dma_pool_zalloc combines dma_pool_alloc and memset 0. The semantic
    patch that makes this transformation is as follows
    (http://coccinelle.lip6.fr/):

        // <smpl>
        @@
        expression d,e;
        statement S;
        @@

        d =
        -       dma_pool_alloc
        +       dma_pool_zalloc
                (...);
        if (!d) S
        -       memset(d, 0, sizeof(*d));
        // </smpl>

    Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
    Acked-by: Sören Brinkmann <soren.brinkmann@xilinx.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * dmaengine: vdma: Fix checkpatch.pl warnings  (Kedareswara rao Appana, 2016-04-06, 1 file, -4/+1)

    This patch fixes the below checkpatch.pl warnings.

        WARNING: void function return statements are not generally useful
        +       return;
        +}

        WARNING: void function return statements are not generally useful
        +       return;
        +}

        WARNING: Missing a blank line after declarations
        +       u32 errors = status & XILINX_VDMA_DMASR_ALL_ERR_MASK;
        +       vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,

    Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    Acked-by: Moritz Fischer <moritz.fischer@ettus.com>
    Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * dmaengine: vdma: Fix race condition in Non-SG mode  (Kedareswara rao Appana, 2016-04-06, 1 file, -6/+19)

    When the VDMA is configured in non-SG mode, users can queue more
    descriptors than the number of frames configured in hardware. The
    current driver only allows queueing up to the hardware-configured
    count, which is wrong for the non-SG configuration. This patch fixes
    the issue.

    Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * dmaengine: vdma: Add 64 bit addressing support to the driver  (Kedareswara rao Appana, 2016-04-06, 2 files, -9/+66)

    This VDMA is a soft IP, which can be programmed to support 32-bit
    addressing or greater than 32-bit addressing.

    When the VDMA IP is configured for a 32-bit address space, the buffer
    address is specified by a single register (0x5C for the MM2S channel
    and 0xAC for the S2MM channel).

    When the VDMA core is configured for an address space greater than
    32 bits, each buffer address is specified by a combination of two
    registers: the first register specifies the LSB 32 bits of the address,
    while the next register specifies the MSB 32 bits. For example, 0x5C
    will specify the LSB 32 bits while 0x60 will specify the MSB 32 bits of
    the first start address, so we need to program two registers at a time.

    This patch adds the 64 bit addressing support to the vdma driver.

    Signed-off-by: Anurag Kumar Vulisha <anuragku@xilinx.com>
    Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
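    A minimal sketch of the two-register programming described above (the
    register offsets follow the commit text; the macro and helper names are
    placeholders, not the driver's actual identifiers):

        #include <linux/io.h>
        #include <linux/kernel.h>
        #include <linux/types.h>

        #define VDMA_REG_START_ADDR_LSB   0x5c  /* LSB 32 bits of the first start address */
        #define VDMA_REG_START_ADDR_MSB   0x60  /* MSB 32 bits of the first start address */

        static void vdma_write_start_addr64(void __iomem *regs, dma_addr_t addr)
        {
                /* Both halves must be written for a >32-bit address space. */
                writel(lower_32_bits(addr), regs + VDMA_REG_START_ADDR_LSB);
                writel(upper_32_bits(addr), regs + VDMA_REG_START_ADDR_MSB);
        }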
| * | Merge branch 'topic/tegra' into for-linus  (Vinod Koul, 2016-05-17, 4 files, -2/+869)
| |\ \
| | * | dmaengine: tegra-adma: Add support for Tegra210 ADMA  (Jon Hunter, 2016-05-13, 3 files, -0/+855)

    Add support for the Tegra210 Audio DMA controller that is used for
    transferring data between system memory and the Audio sub-system. The
    driver only supports cyclic transfers because this is being solely used
    for audio.

    This driver is based upon the work by Dara Ramesh <dramesh@nvidia.com>.

    Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: tegra-apb: proper default init of channel slave_id  (Shardar Shariff Md, 2016-05-02, 1 file, -2/+14)

    Initialize the default channel slave_id (req_sel) to an invalid id
    (i.e. the maximum supported slave id + 1) to avoid overwriting the
    slave_id with client data during tegra_dma_slave_config() if the
    slave_id was not initialized through DT.

    Signed-off-by: Shardar Shariff Md <smohammed@nvidia.com>
    Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
    Tested-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/sun6i' into for-linus  (Vinod Koul, 2016-05-17, 1 file, -66/+188)
| |\ \
| | * | dmaengine: sun6i: Add cyclic capability  (Jean-Francois Moine, 2016-05-02, 1 file, -7/+122)

    DMA cyclic transfers are required by audio streaming.

    Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
    Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: sun6i: Remove useless check  (Jean-Francois Moine, 2016-05-02, 1 file, -5/+0)

    The transfer direction is now checked in set_config. There is no need
    to check it twice.

    Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: sun6i: Set default maxburst size and bus width  (Jean-Francois Moine, 2016-05-02, 1 file, -4/+24)

    Some DMA clients, such as audio, don't set the maxburst size and bus
    width on the memory side when starting DMA transfers. This patch
    prevents such transfers from being aborted by providing system default
    values for the missing parameters.

    Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: sun6i: Simplify lli setting  (Jean-Francois Moine, 2016-04-26, 1 file, -55/+47)

    Checking the DMA config before setting the lli list avoids doing the
    tests inside the setting loop.

    Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: sun6i: Fix impossible settings of burst and bus width  (Jean-Francois Moine, 2016-04-26, 1 file, -5/+5)

    In commit 1f9cd915b64bb95f ("dmaengine: sun6i: Fix memcpy operation"),
    the signed values returned by convert_burst() and convert_buswidth()
    were stored in an unsigned value. These values were then considered as
    errors when non-null. As a result, DMA transfers were rejected when the
    burst or bus width had values different from 1, such as 8 for the burst
    or 4 for the bus width.

    Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
    Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
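    A simplified illustration of the bug class fixed above (the conversion
    encodings shown are only indicative, not the driver's exact table):

        #include <linux/errno.h>
        #include <linux/types.h>

        /* Returns a small non-negative hardware encoding, or a negative errno. */
        static s8 convert_burst(u32 maxburst)
        {
                switch (maxburst) {
                case 1:
                        return 0;
                case 8:
                        return 2;
                default:
                        return -EINVAL;
                }
        }

        /* Broken: storing the result in an unsigned variable and testing
         * "if (burst)" rejects every valid non-zero encoding (e.g. burst 8).
         * Fixed: keep the signed type and test "if (burst < 0)" so only real
         * errors are rejected. */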
| | * | dmaengine: sun6i: Fix the access of the IRQ register  (Jean-Francois Moine, 2016-04-26, 1 file, -2/+2)

    The IRQ register number was computed but never used; the register that
    was accessed was instead the one indexed by the channel index. As a
    result, only the first DMA channel was working.

    Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
    Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/qcom' into for-linus  (Vinod Koul, 2016-05-17, 7 files, -43/+1291)
| |\ \
| | * | dmaengine: qcom_hidma: add support for object hierarchy  (Sinan Kaya, 2016-05-14, 2 files, -5/+147)

    In order to create a relationship model between the channels and the
    management object, we are adding support for object hierarchy to the
    drivers. This patch simplifies the userspace application development.
    We will not have to traverse different firmware paths based on device
    tree or ACPI based kernels. No matter what flavor of kernel is used,
    objects will be represented as platform devices.

    The new layout is as follows:

        hidmam_10: hidma-mgmt@0x5A000000 {
                compatible = "qcom,hidma-mgmt-1.0";
                ...
                hidma_10: hidma@0x5a010000 {
                        compatible = "qcom,hidma-1.0";
                        ...
                }
        }

    The hidma_mgmt_init detects each instance of the hidma-mgmt-1.0 objects
    in device tree and calls into the channel driver to create platform
    devices for each child of the management object.

    Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: qcom_hidma: add debugfs hooks  (Sinan Kaya, 2016-05-14, 4 files, -2/+224)

    Add debugfs hooks for debugging the execution behavior of the DMA
    channel. The debugfs hooks get initialized by the probe function and
    uninitialized by the remove function.

    A stats file is created in debugfs. The stats file will show the
    information about each HIDMA channel as well as each asynchronous job
    queued and completed at a given time.

    Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: qcom_hidma: implement lower level hardware interface  (Sinan Kaya, 2016-05-14, 4 files, -25/+895)

    This patch implements the hardware hooks for the HIDMA channel driver.

    The main functions of interest are:

    - hidma_ll_init
    - hidma_ll_request
    - hidma_ll_queue_request
    - hidma_ll_hw_start

    The OS layer calls the hidma_ll_init function during probe to set up
    the hardware. At this moment, the number of supported descriptors is
    also given. On each request, a descriptor is allocated from the free
    pool and filled in with the transfer parameters. Multiple requests can
    be queued into the hardware via the OS interface. When the client is
    ready for the requests to be executed, the start method is called.
    Completions are delivered via callbacks from a tasklet.

    Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: qcom: bam_dma: rename BAM_MAX_DATA_SIZE define  (Stanimir Varbanov, 2016-04-19, 1 file, -8/+8)

    The define did not have an accurate name and caused confusion while
    reading the code. The more accurate name is BAM_FIFO_SIZE.

    Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
    Reviewed-by: Andy Gross <andy.gross@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: qcom: bam_dma: use correct pipe FIFO size  (Stanimir Varbanov, 2016-04-19, 1 file, -1/+1)

    The pipe FIFO size register must instruct the BAM hardware how many
    hardware descriptors can be pushed to the FIFO. Currently we instruct
    the hardware with 32 KBytes, but in bam_start_dma we wrap the tail in
    BAM_P_EVNT_REG at 4095, i.e. 32760 bytes. This leads to stalled
    transactions when the tail wraps.

    Fix this by using the correct FIFO size in the BAM_P_FIFO_SIZES
    register, i.e. 32K - 8.

    Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
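    For reference, a minimal sketch of the sizing described above (the
    constant name follows the rename in the previous patch; the 8-byte
    descriptor size is inferred from the 4095 * 8 = 32760 arithmetic in the
    commit text):

        #include <linux/sizes.h>

        #define BAM_FIFO_SIZE   (SZ_32K - 8)    /* 4095 descriptors * 8 bytes = 32760 */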
| | * | dmaengine: qcom: bam_dma: add controlled-remotely dt property  (Stanimir Varbanov, 2016-04-19, 1 file, -0/+7)

    Some of the peripherals have a BAM that is controlled by a remote
    processor, thus the BAM DMA driver must avoid the register writes which
    initialise the BAM hardware block. Those registers are protected by an
    xPU block and any writes to them will lead to a secure violation and a
    system reboot. Add the controlled_remotely flag in the BAM driver to
    avoid the non-permitted register writes in the bam_init function.

    Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
    Reviewed-by: Andy Gross <andy.gross@linaro.org>
    Tested-by: Pramod Gurav <gpramod@codeaurora.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: qcom: bam_dma: clear BAM interrupt only if it is raised  (Stanimir Varbanov, 2016-04-19, 1 file, -4/+8)

    Currently we write the BAM_IRQ_CLR register with zero even when no BAM
    IRQ has occurred. This write has bad side effects when the BAM instance
    is for the crypto engine: in that case some of the BAM registers are
    xPU protected and cannot be controlled by the driver.

    Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
    Reviewed-by: Andy Gross <andy.gross@linaro.org>
    Tested-by: Pramod Gurav <gpramod@codeaurora.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: qcom: bam_dma: fix dma free memory on remove  (Stanimir Varbanov, 2016-04-19, 1 file, -0/+3)

    Building the driver as a module and then removing the already inserted
    module gives the oops below:

        [ 1389.392788] Unable to handle kernel paging request at virtual address ffffffbdc000001c
        [ 1389.421321] pgd = ffffffc02fa87000
        [ 1389.447899] [ffffffbdc000001c] *pgd=0000000000000000, *pud=0000000000000000
        [ 1389.460142] Internal error: Oops: 96000006 [#1] PREEMPT SMP
        [ 1389.466963] Modules linked in: qcom_bam_dma(-)
        [ 1389.486608] CPU: 2 PID: 2442 Comm: rmmod Not tainted 4.2.0+ #407
        [ 1389.493885] Hardware name: Qualcomm Technologies, Inc. APQ 8016 SBC (DT)
        [ 1389.501196] task: ffffffc035bae2c0 ti: ffffffc0368a8000 task.ti: ffffffc0368a8000
        [ 1389.508566] PC is at __free_pages+0xc/0x40
        [ 1389.515893] LR is at free_pages.part.93+0x30/0x38
        [ 1389.523141] pc : [<ffffffc00016180c>] lr : [<ffffffc00016197c>] pstate: 80000145
        [ 1389.530602] sp : ffffffc0368abc20
        [ 1389.537931] x29: ffffffc0368abc20 x28: ffffffc0368a8000
        [ 1389.549153] x27: 0000000000000000 x26: 0000000000000000
        [ 1389.560412] x25: ffffffc000cb2000 x24: 0000000000000170
        [ 1389.571530] x23: 0000000000000004 x22: ffffffc036bc5010
        [ 1389.582721] x21: ffffffc036bc5010 x20: 0000000000000000
        [ 1389.593981] x19: 0000000000000002 x18: 0000007fcbc8e8b0
        [ 1389.605301] x17: 0000007f9b8226ec x16: ffffffc0002089e8
        [ 1389.616647] x15: 0000007f9b8a0588 x14: 0ffffffffffffffc
        [ 1389.628039] x13: 0000000000000030 x12: 0000000000000000
        [ 1389.639436] x11: 0000000000000008 x10: ffffffc000ecc000
        [ 1389.650872] x9 : ffffffc035bae2c0 x8 : ffffffc035bae9a8
        [ 1389.662367] x7 : ffffffc035bae9a0 x6 : 0000000000000000
        [ 1389.673906] x5 : ffffffbdc000001c x4 : 0000000080000000
        [ 1389.685475] x3 : ffffffbdc0000000 x2 : 0000004080000000
        [ 1389.697049] x1 : 0000000000000003 x0 : ffffffbdc0000000

    The memory has already been freed by bam_free_chan(), so fix this by
    skipping memory that has already been freed.

    Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
    Reviewed-by: Andy Gross <andy.gross@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
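    A sketch of the guard this kind of fix typically adds in the remove
    path (the field and constant names are illustrative, not the driver's
    exact code):

        /* Only release the per-channel FIFO if bam_free_chan() has not
         * already freed it. */
        if (bchan->fifo_virt)
                dma_free_wc(bdev->dev, BAM_DESC_FIFO_SIZE,
                            bchan->fifo_virt, bchan->fifo_phys);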
| * | Merge branch 'topic/pxa' into for-linus  (Vinod Koul, 2016-05-17, 1 file, -2/+14)
| |\ \
| | * | dmaengine: pxa_dma: remove duplicate const qualifier  (Eric Engestrom, 2016-04-26, 1 file, -1/+1)

    Signed-off-by: Eric Engestrom <eric.engestrom@imgtec.com>
    Acked-by: Robert Jarzmik <robert.jarzmik@free.fr>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: pxa: handle bus errors  (Robert Jarzmik, 2016-04-26, 1 file, -1/+13)

    In the current state, upon a bus error the driver will spin endlessly,
    relaunching the last tx, which will fail again and again:

    - a bus error happens
    - pxad_chan_handler() is called
    - as PXA_DCSR_STOPSTATE is true, the last non-terminated transaction is
      launched, which is the one triggering the bus error, as it didn't
      terminate
    - moreover, the STOP interrupt fires anew, as STOPIRQEN is still active

    Break this logic by stopping the automatic relaunch of a dma channel
    upon a bus error, even if there are still pending issued requests on
    it.

    As dma_cookie_status() seems unable to return DMA_ERROR in its current
    form, i.e. there seems to be no way to mark a DMA_ERROR on a
    per-async-tx basis, this patch chooses to remember on the channel which
    transaction failed, and to report it in pxad_tx_status().

    It's a bit misleading because if T1, T2, T3 and T4 were queued, and T1
    was completed while T2 causes a bus error, the status of T3 and T4 will
    be reported as DMA_IN_PROGRESS, while the channel is actually stopped.

    Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
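    A simplified sketch of the reporting side described above (not the
    driver's exact code; the bus_error field and the to_pxad_chan() helper
    are assumed here for illustration):

        #include <linux/dmaengine.h>
        #include "dmaengine.h"          /* driver-local cookie helpers */

        static enum dma_status pxad_tx_status(struct dma_chan *dchan,
                                              dma_cookie_t cookie,
                                              struct dma_tx_state *txstate)
        {
                struct pxad_chan *chan = to_pxad_chan(dchan);

                /* The IRQ path records the cookie of the failed transaction. */
                if (cookie == chan->bus_error)
                        return DMA_ERROR;

                /* Everything else falls back to the generic cookie bookkeeping. */
                return dma_cookie_status(dchan, cookie, txstate);
        }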
| * | Merge branch 'topic/pl08x' into for-linus  (Vinod Koul, 2016-05-17, 1 file, -28/+58)
| |\ \
| | * | dmaengine: pl08x: allocate OF slave channel data at probe time  (Linus Walleij, 2016-04-06, 1 file, -28/+58)

    The current OF translation of channels can never work with any DMA
    client using the DMA channels directly: the only way to get the
    channels initialized properly is in the dma_async_device_register()
    call, where chan->dev etc is allocated and initialized.

    Allocate and initialize all possible DMA channels and only augment a
    target channel with the periph_buses at of_xlate(). Remove some const
    settings to make things work.

    Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
    Tested-by: Joachim Eastwood <manabian@gmail.com>
    Tested-by: Johannes Stezenbach <js@sig21.net>
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/mv_xor' into for-linus  (Vinod Koul, 2016-05-17, 3 files, -21/+80)
| |\ \
| | * | dmaengine: mv_xor: Allow selecting mv_xor for mvebu only compatible SoC  (Gregory CLEMENT, 2016-05-03, 1 file, -1/+1)

    The Armada 3700 SoC uses the mv_xor driver but no longer selects the
    PLAT_ORION symbol. This commit extends the dependency of the mv_xor
    driver to the more modern SoCs that are only compatible with
    ARCH_MVEBU, which allows using it with the Armada 3700 SoC. At the same
    time it also adds the COMPILE_TEST dependency, allowing wider test
    coverage.

    Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: mv_xor: add support for Armada 3700 SoC  (Marcin Wojtas, 2016-05-03, 1 file, -7/+49)

    The Armada 3700 SoC comprises a single XOR engine compliant with the
    ones used in older Marvell SoCs like Armada XP or 38x. The only thing
    that needs modification is the Mbus configuration, which has to be done
    on two levels: global and in-device. The first one is inherited from
    the bootloader. The latter can be opened in a default way, leaving
    arbitration to the bus controller. Hence a filled mbus_dram_target_info
    structure is not needed.

    The patch "dmaengine: mv_xor: optimize performance by using a subset of
    the XOR channels" introduced a limitation on the XOR engines and
    channels used versus the number of available CPUs. Those constraints do
    not fit the Armada 3700 architecture with its two possible CPUs and
    single, dual-channel engine, so this commit adds an adjustment for
    setting the maximum available channels.

    This patch enables XOR access to DRAM by opening a default window to
    4GB space with a specific attribute.

    Signed-off-by: Marcin Wojtas <mw@semihalf.com>
    Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
    Acked-by: Rob Herring <robh@kernel.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: mv_xor: use SoC type instead of directly the operation mode  (Gregory CLEMENT, 2016-05-03, 2 files, -12/+29)

    Currently the main difference between the legacy XOR engine and the
    newer one is the way the engine modes are set up (either in the
    descriptor or through the controller registers). In order to take into
    account the new generation of the XOR engine for the ARM64 SoC, we need
    to identify the engines by type, and then, depending on the type, the
    engine setup will be selected.

    Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: mv_xor: make the code 64 bits compliant  (Gregory CLEMENT, 2016-05-03, 1 file, -2/+2)

    Fix two warnings which appear when building for a 64 bits target:

        drivers/dma/mv_xor.c: In function ‘mv_xor_prep_dma_xor’:
        drivers/dma/mv_xor.c:480:3: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 6 has type ‘size_t {aka long unsigned int}’ [-Wformat=]
           "%s src_cnt: %d len: %u dest %pad flags: %ld\n",
           ^

        drivers/dma/mv_xor.c: In function ‘mv_xor_probe’:
        drivers/dma/mv_xor.c:1223:17: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
          op_in_desc = (int)of_id->data;
                       ^

    Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/mpc512x' into for-linus  (Vinod Koul, 2016-05-17, 1 file, -64/+110)
| |\ \
| | * | dmaengine: mpc512x: Fix code style  (Mario Six, 2016-04-04, 1 file, -17/+13)

    Signed-off-by: Mario Six <mario.six@gdsys.cc>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: mpc512x: Implement additional chunk sizes for DMA transfers  (Mario Six, 2016-04-04, 1 file, -36/+76)

    This patch extends the capabilities of the driver to handle DMA
    transfers to and from devices of 1, 2, 4, 16 (for MPC512x), and 32 byte
    widths.

    Signed-off-by: Mario Six <mario.six@gdsys.cc>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: mpc512x: Fix hanging DMA device transfer for MPC8308  (Mario Six, 2016-04-04, 1 file, -14/+24)

    Since the MPC8308 has no external request lines to initiate DMA
    transfers, all transfers must be triggered by software. Because of
    this, the current implementation of DMA transfers from and to devices
    on MPC8308 SoCs using major and minor loops is faulty: after the
    completion of the first major loop, the DMA engine resets the start
    flag in the channel's TCD, thus halting the transfer. The driver would
    have to set the start bit again to trigger the next iteration of the
    major loop; on MPC512x SoCs, this is done via the external request
    lines, so in that case the driver doesn't have to interfere in any way.

    This has the effect that on MPC8308s, every DMA transfer to or from a
    device hangs after executing the first major loop.

    The patch fixes this behavior by using just one major loop for the
    whole DMA transfer on MPC8308s.

    Signed-off-by: Mario Six <mario.six@gdsys.cc>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/hsu' into for-linus  (Vinod Koul, 2016-05-17, 2 files, -3/+9)
| |\ \
| | * | dmaengine: hsu: set maximum allowed segment size for DMA  (Andy Shevchenko, 2016-04-04, 2 files, -0/+6)

    This tells, for example, the IOMMU the maximum size of a segment that
    the DMA controller can send.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
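    A minimal sketch of the generic pattern for advertising such a limit
    (the dma_parms storage and the size value here are illustrative, not
    the values this patch uses):

        #include <linux/device.h>
        #include <linux/dma-mapping.h>
        #include <linux/sizes.h>

        static int example_advertise_seg_size(struct device *dev,
                                              struct device_dma_parameters *parms)
        {
                dev->dma_parms = parms;         /* storage for the per-device limit */
                return dma_set_max_seg_size(dev, SZ_64K /* example limit */);
        }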
| | * | dmaengine: hsu: don't check direction of timeouted channel  (Andy Shevchenko, 2016-04-04, 1 file, -1/+1)

    The timeout capability is only available on the so-called DMA write
    channels, i.e. those associated with the UART Rx FIFO. It means we
    don't need to check the direction of the channel to handle timeouts.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: hsu: allow more than 3 descriptors  (Andy Shevchenko, 2016-04-04, 1 file, -2/+2)

    The current code allows only up to 3 descriptors to be programmed to
    the hardware because it uses the wrong calculation. Change the % to
    min_t() to allow as many descriptors as the user supplied; due to
    hardware limitations, up to 4 descriptors can be programmed at once.

    The issue was found under a stress test, so it might not bother
    ordinary users.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
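    An illustrative before/after of the calculation described above
    (variable and constant names are placeholders, not the driver's exact
    identifiers; the hardware limit of 4 comes from the commit text):

        #include <linux/kernel.h>       /* min_t() */

        #define EXAMPLE_CHAN_NR_DESC    4

        /* before: nents == 4 yields 0, so nothing gets programmed */
        count = nents % EXAMPLE_CHAN_NR_DESC;

        /* after: program as many as supplied, capped at the hardware limit */
        count = min_t(unsigned int, nents, EXAMPLE_CHAN_NR_DESC);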
| * | Merge branch 'topic/dw' into for-linus  (Vinod Koul, 2016-05-17, 4 files, -333/+293)
| |\ \