author    | Linus Torvalds <torvalds@linux-foundation.org> | 2020-06-05 04:56:20 +0200
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2020-06-05 04:56:20 +0200
commit    | 828f3e18e1cb98c68fc6db4d5113513d4a267775 (patch)
tree      | e1d813f2122dee697e896166917d58f46dff82c5
parent    | Merge tag 'arm-defconfig-5.8' of git://git.kernel.org/pub/scm/linux/kernel/gi... (diff)
parent    | clk: sprd: fix compile-testing (diff)
Merge tag 'arm-drivers-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
Pull ARM/SoC driver updates from Arnd Bergmann:
"These are updates to SoC specific drivers that did not have another
subsystem maintainer tree to go through for some reason:
- Some bus and memory drivers for the MIPS P5600 based Baikal-T1 SoC
that is getting added through the MIPS tree.
- There are new soc_device identification drivers for TI K3 and Qualcomm
  MSM8939.
- New reset controller drivers for NXP i.MX8MP, Renesas RZ/G1H, and
Hisilicon hi6220
- The SCMI firmware interface can now use ARM SMC/HVC as a transport
  (see the sketch below the quoted summary).
- Mediatek platforms now use a new driver for their "MMSYS" hardware
  block, which controls clocks and some other aspects on behalf of the
  media and GPU drivers.
- Some Tegra processors gain improved power management support,
  including wake-up by the PMIC and cluster power-down during idle.
- A new v4l staging driver for Tegra is added.
- Cleanups and minor bugfixes for TI, NXP, Hisilicon, Mediatek, and
Tegra"
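
For reference, a minimal devicetree sketch of the new SCMI SMC/HVC transport mentioned in the summary above. Only the "arm,scmi-smc" compatible and the arm,smc-id property come from the binding change in this pull; the SMC function ID and the clock-protocol child node are illustrative placeholders, and the shared-memory ("shmem") description required by the base binding is omitted:

    firmware {
        scmi {
            compatible = "arm,scmi-smc";
            /* Placeholder SMC function ID; the real value is platform-specific */
            arm,smc-id = <0x82000010>;
            #address-cells = <1>;
            #size-cells = <0>;

            /* Example protocol child node (SCMI clock protocol, id 0x14) */
            scmi_clk: protocol@14 {
                reg = <0x14>;
                #clock-cells = <1>;
            };
        };
    };

The existing mailbox-based transport keeps compatible = "arm,scmi" and its mboxes properties unchanged.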
* tag 'arm-drivers-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (155 commits)
clk: sprd: fix compile-testing
bus: bt1-axi: Build the driver into the kernel
bus: bt1-apb: Build the driver into the kernel
bus: bt1-axi: Use sysfs_streq instead of strncmp
bus: bt1-axi: Optimize the return points in the driver
bus: bt1-apb: Use sysfs_streq instead of strncmp
bus: bt1-apb: Use PTR_ERR_OR_ZERO to return from request-regs method
bus: bt1-apb: Fix show/store callback identations
bus: bt1-apb: Include linux/io.h
dt-bindings: memory: Add Baikal-T1 L2-cache Control Block binding
memory: Add Baikal-T1 L2-cache Control Block driver
bus: Add Baikal-T1 APB-bus driver
bus: Add Baikal-T1 AXI-bus driver
dt-bindings: bus: Add Baikal-T1 APB-bus binding
dt-bindings: bus: Add Baikal-T1 AXI-bus binding
staging: tegra-video: fix V4L2 dependency
tee: fix crypto select
drivers: soc: ti: knav_qmss_queue: Make knav_gp_range_ops static
soc: ti: add k3 platforms chipid module driver
dt-bindings: soc: ti: add binding for k3 platforms chipid module
...
150 files changed, 7996 insertions, 1231 deletions
diff --git a/Documentation/devicetree/bindings/arm/arm,scmi.txt b/Documentation/devicetree/bindings/arm/arm,scmi.txt index dc102c4e4a78..1f293ea24cd8 100644 --- a/Documentation/devicetree/bindings/arm/arm,scmi.txt +++ b/Documentation/devicetree/bindings/arm/arm,scmi.txt @@ -14,7 +14,7 @@ Required properties: The scmi node with the following properties shall be under the /firmware/ node. -- compatible : shall be "arm,scmi" +- compatible : shall be "arm,scmi" or "arm,scmi-smc" for smc/hvc transports - mboxes: List of phandle and mailbox channel specifiers. It should contain exactly one or two mailboxes, one for transmitting messages("tx") and another optional for receiving the notifications("rx") if @@ -25,6 +25,7 @@ The scmi node with the following properties shall be under the /firmware/ node. protocol identifier for a given sub-node. - #size-cells : should be '0' as 'reg' property doesn't have any size associated with it. +- arm,smc-id : SMC id required when using smc or hvc transports Optional properties: diff --git a/Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.txt b/Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.txt index 301eefbe1618..8d6a9d98e7a6 100644 --- a/Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.txt +++ b/Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.txt @@ -1,7 +1,8 @@ Mediatek mmsys controller ============================ -The Mediatek mmsys controller provides various clocks to the system. +The Mediatek mmsys system controller provides clock control, routing control, +and miscellaneous control in mmsys partition. Required Properties: @@ -15,13 +16,13 @@ Required Properties: - "mediatek,mt8183-mmsys", "syscon" - #clock-cells: Must be 1 -The mmsys controller uses the common clk binding from +For the clock control, the mmsys controller uses the common clk binding from Documentation/devicetree/bindings/clock/clock-bindings.txt The available clocks are defined in dt-bindings/clock/mt*-clk.h. Example: -mmsys: clock-controller@14000000 { +mmsys: syscon@14000000 { compatible = "mediatek,mt8173-mmsys", "syscon"; reg = <0 0x14000000 0 0x1000>; #clock-cells = <1>; diff --git a/Documentation/devicetree/bindings/bus/baikal,bt1-apb.yaml b/Documentation/devicetree/bindings/bus/baikal,bt1-apb.yaml new file mode 100644 index 000000000000..d6a3b71ea835 --- /dev/null +++ b/Documentation/devicetree/bindings/bus/baikal,bt1-apb.yaml @@ -0,0 +1,90 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +# Copyright (C) 2020 BAIKAL ELECTRONICS, JSC +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/bus/baikal,bt1-apb.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Baikal-T1 APB-bus + +maintainers: + - Serge Semin <fancer.lancer@gmail.com> + +description: | + Baikal-T1 CPU or DMAC MMIO requests are handled by the AMBA 3 AXI Interconnect + which routes them to the AXI-APB bridge. This interface is a single master + multiple slaves bus in turn serializing IO accesses and routing them to the + addressed APB slave devices. In case of any APB protocol collisions, slave + device not responding on timeout an IRQ is raised with an erroneous address + reported to the APB terminator (APB Errors Handler Block). 
+ +allOf: + - $ref: /schemas/simple-bus.yaml# + +properties: + compatible: + contains: + const: baikal,bt1-apb + + reg: + items: + - description: APB EHB MMIO registers + - description: APB MMIO region with no any device mapped + + reg-names: + items: + - const: ehb + - const: nodev + + interrupts: + maxItems: 1 + + clocks: + items: + - description: APB reference clock + + clock-names: + items: + - const: pclk + + resets: + items: + - description: APB domain reset line + + reset-names: + items: + - const: prst + +unevaluatedProperties: false + +required: + - compatible + - reg + - reg-names + - interrupts + - clocks + - clock-names + +examples: + - | + #include <dt-bindings/interrupt-controller/mips-gic.h> + + bus@1f059000 { + compatible = "baikal,bt1-apb", "simple-bus"; + reg = <0 0x1f059000 0 0x1000>, + <0 0x1d000000 0 0x2040000>; + reg-names = "ehb", "nodev"; + #address-cells = <1>; + #size-cells = <1>; + + ranges; + + interrupts = <GIC_SHARED 16 IRQ_TYPE_LEVEL_HIGH>; + + clocks = <&ccu_sys 1>; + clock-names = "pclk"; + + resets = <&ccu_sys 1>; + reset-names = "prst"; + }; +... diff --git a/Documentation/devicetree/bindings/bus/baikal,bt1-axi.yaml b/Documentation/devicetree/bindings/bus/baikal,bt1-axi.yaml new file mode 100644 index 000000000000..203bc0e5346b --- /dev/null +++ b/Documentation/devicetree/bindings/bus/baikal,bt1-axi.yaml @@ -0,0 +1,107 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +# Copyright (C) 2020 BAIKAL ELECTRONICS, JSC +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/bus/baikal,bt1-axi.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Baikal-T1 AXI-bus + +maintainers: + - Serge Semin <fancer.lancer@gmail.com> + +description: | + AXI3-bus is the main communication bus of Baikal-T1 SoC connecting all + high-speed peripheral IP-cores with RAM controller and with MIPS P5600 + cores. Traffic arbitration is done by means of DW AXI Interconnect (so + called AXI Main Interconnect) routing IO requests from one block to + another: from CPU to SoC peripherals and between some SoC peripherals + (mostly between peripheral devices and RAM, but also between DMA and + some peripherals). In case of any protocol error, device not responding + an IRQ is raised and a faulty situation is reported to the AXI EHB + (Errors Handler Block) embedded on top of the DW AXI Interconnect and + accessible by means of the Baikal-T1 System Controller. 
+ +allOf: + - $ref: /schemas/simple-bus.yaml# + +properties: + compatible: + contains: + const: baikal,bt1-axi + + reg: + minItems: 1 + items: + - description: Synopsys DesignWare AXI Interconnect QoS registers + - description: AXI EHB MMIO system controller registers + + reg-names: + minItems: 1 + items: + - const: qos + - const: ehb + + '#interconnect-cells': + const: 1 + + syscon: + $ref: /schemas/types.yaml#definitions/phandle + description: Phandle to the Baikal-T1 System Controller DT node + + interrupts: + maxItems: 1 + + clocks: + items: + - description: Main Interconnect uplink reference clock + + clock-names: + items: + - const: aclk + + resets: + items: + - description: Main Interconnect reset line + + reset-names: + items: + - const: arst + +unevaluatedProperties: false + +required: + - compatible + - reg + - reg-names + - syscon + - interrupts + - clocks + - clock-names + +examples: + - | + #include <dt-bindings/interrupt-controller/mips-gic.h> + + bus@1f05a000 { + compatible = "baikal,bt1-axi", "simple-bus"; + reg = <0 0x1f05a000 0 0x1000>, + <0 0x1f04d110 0 0x8>; + reg-names = "qos", "ehb"; + #address-cells = <1>; + #size-cells = <1>; + #interconnect-cells = <1>; + + syscon = <&syscon>; + + ranges; + + interrupts = <GIC_SHARED 127 IRQ_TYPE_LEVEL_HIGH>; + + clocks = <&ccu_axi 0>; + clock-names = "aclk"; + + resets = <&ccu_axi 0>; + reset-names = "arst"; + }; +... diff --git a/Documentation/devicetree/bindings/cpufreq/nvidia,tegra20-cpufreq.txt b/Documentation/devicetree/bindings/cpufreq/nvidia,tegra20-cpufreq.txt new file mode 100644 index 000000000000..daeca6ae6b76 --- /dev/null +++ b/Documentation/devicetree/bindings/cpufreq/nvidia,tegra20-cpufreq.txt @@ -0,0 +1,56 @@ +Binding for NVIDIA Tegra20 CPUFreq +================================== + +Required properties: +- clocks: Must contain an entry for the CPU clock. + See ../clocks/clock-bindings.txt for details. +- operating-points-v2: See ../bindings/opp/opp.txt for details. +- #cooling-cells: Should be 2. See ../thermal/thermal.txt for details. + +For each opp entry in 'operating-points-v2' table: +- opp-supported-hw: Two bitfields indicating: + On Tegra20: + 1. CPU process ID mask + 2. SoC speedo ID mask + + On Tegra30: + 1. CPU process ID mask + 2. CPU speedo ID mask + + A bitwise AND is performed against these values and if any bit + matches, the OPP gets enabled. + +- opp-microvolt: CPU voltage triplet. + +Optional properties: +- cpu-supply: Phandle to the CPU power supply. + +Example: + regulators { + cpu_reg: regulator0 { + regulator-name = "vdd_cpu"; + }; + }; + + cpu0_opp_table: opp_table0 { + compatible = "operating-points-v2"; + + opp@456000000 { + clock-latency-ns = <125000>; + opp-microvolt = <825000 825000 1125000>; + opp-supported-hw = <0x03 0x0001>; + opp-hz = /bits/ 64 <456000000>; + }; + + ... 
+ }; + + cpus { + cpu@0 { + compatible = "arm,cortex-a9"; + clocks = <&tegra_car TEGRA20_CLK_CCLK>; + operating-points-v2 = <&cpu0_opp_table>; + cpu-supply = <&cpu_reg>; + #cooling-cells = <2>; + }; + }; diff --git a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt index 9999255ac5b6..47319214b5f6 100644 --- a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt +++ b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt @@ -40,14 +40,30 @@ of the following host1x client modules: Required properties: - compatible: "nvidia,tegra<chip>-vi" - - reg: Physical base address and length of the controller's registers. + - reg: Physical base address and length of the controller registers. - interrupts: The interrupt outputs from the controller. - - clocks: Must contain one entry, for the module clock. + - clocks: clocks: Must contain one entry, for the module clock. See ../clocks/clock-bindings.txt for details. - - resets: Must contain an entry for each entry in reset-names. - See ../reset/reset.txt for details. - - reset-names: Must include the following entries: - - vi + - Tegra20/Tegra30/Tegra114/Tegra124: + - resets: Must contain an entry for each entry in reset-names. + See ../reset/reset.txt for details. + - reset-names: Must include the following entries: + - vi + - Tegra210: + - power-domains: Must include venc powergate node as vi is in VE partition. + - Tegra210 has CSI part of VI sharing same host interface and register space. + So, VI device node should have CSI child node. + + - csi: mipi csi interface to vi + + Required properties: + - compatible: "nvidia,tegra210-csi" + - reg: Physical base address offset to parent and length of the controller + registers. + - clocks: Must contain entries csi, cilab, cilcd, cile, csi_tpg clocks. + See ../clocks/clock-bindings.txt for details. + - power-domains: Must include sor powergate node as csicil is in + SOR partition. 
- epp: encoder pre-processor @@ -309,13 +325,44 @@ Example: reset-names = "mpe"; }; - vi { - compatible = "nvidia,tegra20-vi"; - reg = <0x54080000 0x00040000>; - interrupts = <0 69 0x04>; - clocks = <&tegra_car TEGRA20_CLK_VI>; - resets = <&tegra_car 100>; - reset-names = "vi"; + vi@54080000 { + compatible = "nvidia,tegra210-vi"; + reg = <0x0 0x54080000 0x0 0x700>; + interrupts = <GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>; + assigned-clocks = <&tegra_car TEGRA210_CLK_VI>; + assigned-clock-parents = <&tegra_car TEGRA210_CLK_PLL_C4_OUT0>; + + clocks = <&tegra_car TEGRA210_CLK_VI>; + power-domains = <&pd_venc>; + + #address-cells = <1>; + #size-cells = <1>; + + ranges = <0x0 0x0 0x54080000 0x2000>; + + csi@838 { + compatible = "nvidia,tegra210-csi"; + reg = <0x838 0x1300>; + assigned-clocks = <&tegra_car TEGRA210_CLK_CILAB>, + <&tegra_car TEGRA210_CLK_CILCD>, + <&tegra_car TEGRA210_CLK_CILE>, + <&tegra_car TEGRA210_CLK_CSI_TPG>; + assigned-clock-parents = <&tegra_car TEGRA210_CLK_PLL_P>, + <&tegra_car TEGRA210_CLK_PLL_P>, + <&tegra_car TEGRA210_CLK_PLL_P>; + assigned-clock-rates = <102000000>, + <102000000>, + <102000000>, + <972000000>; + + clocks = <&tegra_car TEGRA210_CLK_CSI>, + <&tegra_car TEGRA210_CLK_CILAB>, + <&tegra_car TEGRA210_CLK_CILCD>, + <&tegra_car TEGRA210_CLK_CILE>, + <&tegra_car TEGRA210_CLK_CSI_TPG>; + clock-names = "csi", "cilab", "cilcd", "cile", "csi_tpg"; + power-domains = <&pd_sor>; + }; }; epp { diff --git a/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.txt b/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.txt index f64064f8bdc2..18c0de362451 100644 --- a/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.txt +++ b/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.txt @@ -35,6 +35,12 @@ Required properties: Due to above changes, Tegra114 I2C driver makes incompatible with previous hardware driver. Hence, tegra114 I2C controller is compatible with "nvidia,tegra114-i2c". + nvidia,tegra210-i2c-vi: Tegra210 has one I2C controller that is part of the + host1x domain and typically used for camera use-cases. This VI I2C + controller is mostly compatible with the programming model of the + regular I2C controllers with a few exceptions. The I2C registers start + at an offset of 0xc00 (instead of 0), registers are 16 bytes apart + (rather than 4) and the controller does not support slave mode. - reg: Should contain I2C controller registers physical address and length. - interrupts: Should contain I2C controller interrupts. - address-cells: Address cells for I2C device address. diff --git a/Documentation/devicetree/bindings/memory-controllers/baikal,bt1-l2-ctl.yaml b/Documentation/devicetree/bindings/memory-controllers/baikal,bt1-l2-ctl.yaml new file mode 100644 index 000000000000..1fca282f64a2 --- /dev/null +++ b/Documentation/devicetree/bindings/memory-controllers/baikal,bt1-l2-ctl.yaml @@ -0,0 +1,63 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +# Copyright (C) 2020 BAIKAL ELECTRONICS, JSC +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/memory-controllers/baikal,bt1-l2-ctl.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Baikal-T1 L2-cache Control Block + +maintainers: + - Serge Semin <fancer.lancer@gmail.com> + +description: | + By means of the System Controller Baikal-T1 SoC exposes a few settings to + tune the MIPS P5600 CM2 L2 cache performance up. In particular it's possible + to change the Tag, Data and Way-select RAM access latencies. Baikal-T1 + L2-cache controller block is responsible for the tuning. 
Its DT node is + supposed to be a child of the system controller. + +properties: + compatible: + const: baikal,bt1-l2-ctl + + reg: + maxItems: 1 + + baikal,l2-ws-latency: + $ref: /schemas/types.yaml#/definitions/uint32 + description: Cycles of latency for Way-select RAM accesses + default: 0 + minimum: 0 + maximum: 3 + + baikal,l2-tag-latency: + $ref: /schemas/types.yaml#/definitions/uint32 + description: Cycles of latency for Tag RAM accesses + default: 0 + minimum: 0 + maximum: 3 + + baikal,l2-data-latency: + $ref: /schemas/types.yaml#/definitions/uint32 + description: Cycles of latency for Data RAM accesses + default: 1 + minimum: 0 + maximum: 3 + +additionalProperties: false + +required: + - compatible + +examples: + - | + l2@1f04d028 { + compatible = "baikal,bt1-l2-ctl"; + reg = <0x1f04d028 0x004>; + + baikal,l2-ws-latency = <1>; + baikal,l2-tag-latency = <1>; + baikal,l2-data-latency = <2>; + }; +... diff --git a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra210-emc.yaml b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra210-emc.yaml new file mode 100644 index 000000000000..49ab09252e52 --- /dev/null +++ b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra210-emc.yaml @@ -0,0 +1,82 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/memory-controllers/nvidia,tegra210-emc.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: NVIDIA Tegra210 SoC External Memory Controller + +maintainers: + - Thierry Reding <thierry.reding@gmail.com> + - Jon Hunter <jonathanh@nvidia.com> + +description: | + The EMC interfaces with the off-chip SDRAM to service the request stream + sent from the memory controller. + +properties: + compatible: + const: nvidia,tegra210-emc + + reg: + maxItems: 3 + + clocks: + items: + - description: external memory clock + + clock-names: + items: + - const: emc + + interrupts: + items: + - description: EMC general interrupt + + memory-region: + $ref: /schemas/types.yaml#/definitions/phandle + description: + phandle to a reserved memory region describing the table of EMC + frequencies trained by the firmware + + nvidia,memory-controller: + $ref: /schemas/types.yaml#/definitions/phandle + description: + phandle of the memory controller node + +required: + - compatible + - reg + - clocks + - clock-names + - nvidia,memory-controller + +additionalProperties: false + +examples: + - | + #include <dt-bindings/clock/tegra210-car.h> + #include <dt-bindings/interrupt-controller/arm-gic.h> + + reserved-memory { + #address-cells = <1>; + #size-cells = <1>; + ranges; + + emc_table: emc-table@83400000 { + compatible = "nvidia,tegra210-emc-table"; + reg = <0x83400000 0x10000>; + }; + }; + + external-memory-controller@7001b000 { + compatible = "nvidia,tegra210-emc"; + reg = <0x7001b000 0x1000>, + <0x7001e000 0x1000>, + <0x7001f000 0x1000>; + clocks = <&tegra_car TEGRA210_CLK_EMC>; + clock-names = "emc"; + interrupts = <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>; + memory-region = <&emc_table>; + nvidia,memory-controller = <&mc>; + }; diff --git a/Documentation/devicetree/bindings/power/amlogic,meson-ee-pwrc.yaml b/Documentation/devicetree/bindings/power/amlogic,meson-ee-pwrc.yaml index 6c6079fe1351..51a6fac892e3 100644 --- a/Documentation/devicetree/bindings/power/amlogic,meson-ee-pwrc.yaml +++ b/Documentation/devicetree/bindings/power/amlogic,meson-ee-pwrc.yaml @@ -23,33 +23,31 @@ description: |+ properties: compatible: enum: + - amlogic,meson8-pwrc + - 
amlogic,meson8b-pwrc + - amlogic,meson8m2-pwrc + - amlogic,meson-gxbb-pwrc - amlogic,meson-g12a-pwrc - amlogic,meson-sm1-pwrc clocks: - minItems: 2 + minItems: 1 + maxItems: 2 clock-names: + minItems: 1 + maxItems: 2 items: - const: vpu - const: vapb resets: minItems: 11 + maxItems: 12 reset-names: - items: - - const: viu - - const: venc - - const: vcbus - - const: bt656 - - const: rdma - - const: venci - - const: vencp - - const: vdac - - const: vdi6 - - const: vencl - - const: vid_lock + minItems: 11 + maxItems: 12 "#power-domain-cells": const: 1 @@ -59,12 +57,86 @@ properties: allOf: - $ref: /schemas/types.yaml#/definitions/phandle +allOf: + - if: + properties: + compatible: + enum: + - amlogic,meson8b-pwrc + - amlogic,meson8m2-pwrc + then: + properties: + reset-names: + items: + - const: dblk + - const: pic_dc + - const: hdmi_apb + - const: hdmi_system + - const: venci + - const: vencp + - const: vdac + - const: vencl + - const: viu + - const: venc + - const: rdma + required: + - resets + - reset-names + + - if: + properties: + compatible: + enum: + - amlogic,meson-gxbb-pwrc + then: + properties: + reset-names: + items: + - const: viu + - const: venc + - const: vcbus + - const: bt656 + - const: dvin + - const: rdma + - const: venci + - const: vencp + - const: vdac + - const: vdi6 + - const: vencl + - const: vid_lock + required: + - resets + - reset-names + + - if: + properties: + compatible: + enum: + - amlogic,meson-g12a-pwrc + - amlogic,meson-sm1-pwrc + then: + properties: + reset-names: + items: + - const: viu + - const: venc + - const: vcbus + - const: bt656 + - const: rdma + - const: venci + - const: vencp + - const: vdac + - const: vdi6 + - const: vencl + - const: vid_lock + required: + - resets + - reset-names + required: - compatible - clocks - clock-names - - resets - - reset-names - "#power-domain-cells" - amlogic,ao-sysctrl diff --git a/Documentation/devicetree/bindings/power/qcom,rpmpd.yaml b/Documentation/devicetree/bindings/power/qcom,rpmpd.yaml index ba605310abeb..8058955fb3b9 100644 --- a/Documentation/devicetree/bindings/power/qcom,rpmpd.yaml +++ b/Documentation/devicetree/bindings/power/qcom,rpmpd.yaml @@ -23,6 +23,7 @@ properties: - qcom,sc7180-rpmhpd - qcom,sdm845-rpmhpd - qcom,sm8150-rpmhpd + - qcom,sm8250-rpmhpd '#power-domain-cells': const: 1 diff --git a/Documentation/devicetree/bindings/reset/fsl,imx7-src.txt b/Documentation/devicetree/bindings/reset/fsl,imx7-src.txt index c2489e41a801..e10502d9153e 100644 --- a/Documentation/devicetree/bindings/reset/fsl,imx7-src.txt +++ b/Documentation/devicetree/bindings/reset/fsl,imx7-src.txt @@ -9,6 +9,8 @@ Required properties: - For i.MX7 SoCs should be "fsl,imx7d-src", "syscon" - For i.MX8MQ SoCs should be "fsl,imx8mq-src", "syscon" - For i.MX8MM SoCs should be "fsl,imx8mm-src", "fsl,imx8mq-src", "syscon" + - For i.MX8MN SoCs should be "fsl,imx8mn-src", "fsl,imx8mq-src", "syscon" + - For i.MX8MP SoCs should be "fsl,imx8mp-src", "syscon" - reg: should be register base and length as documented in the datasheet - interrupts: Should contain SRC interrupt @@ -49,4 +51,6 @@ Example: For list of all valid reset indices see <dt-bindings/reset/imx7-reset.h> for i.MX7, <dt-bindings/reset/imx8mq-reset.h> for i.MX8MQ and -<dt-bindings/reset/imx8mq-reset.h> for i.MX8MM +<dt-bindings/reset/imx8mq-reset.h> for i.MX8MM and +<dt-bindings/reset/imx8mq-reset.h> for i.MX8MN and +<dt-bindings/reset/imx8mp-reset.h> for i.MX8MP diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.txt 
b/Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.txt index 4fc571e78f01..953add19e937 100644 --- a/Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.txt +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.txt @@ -19,6 +19,7 @@ power-domains. "qcom,sc7180-aoss-qmp" "qcom,sdm845-aoss-qmp" "qcom,sm8150-aoss-qmp" + "qcom,sm8250-aoss-qmp" - reg: Usage: required diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt b/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt index f8fa71f5d84b..2e2f6dc351c0 100644 --- a/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt @@ -65,30 +65,30 @@ which uses apr as communication between Apps and QDSP. compatible = "qcom,apr-v2"; qcom,apr-domain = <APR_DOMAIN_ADSP>; - q6core@3 { + apr-service@3 { compatible = "qcom,q6core"; reg = <APR_SVC_ADSP_CORE>; }; - q6afe@4 { + apr-service@4 { compatible = "qcom,q6afe"; reg = <APR_SVC_AFE>; dais { #sound-dai-cells = <1>; - hdmi@1 { - reg = <1>; + dai@1 { + reg = <HDMI_RX>; }; }; }; - q6asm@7 { + apr-service@7 { compatible = "qcom,q6asm"; reg = <APR_SVC_ASM>; ... }; - q6adm@8 { + apr-service@8 { compatible = "qcom,q6adm"; reg = <APR_SVC_ADM>; ... @@ -106,26 +106,26 @@ have no such dependency. qcom,glink-channels = "apr_audio_svc"; qcom,apr-domain = <APR_DOMAIN_ADSP>; - q6core { + apr-service@3 { compatible = "qcom,q6core"; reg = <APR_SVC_ADSP_CORE>; }; - q6afe: q6afe { + q6afe: apr-service@4 { compatible = "qcom,q6afe"; reg = <APR_SVC_AFE>; qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd"; ... }; - q6asm: q6asm { + q6asm: apr-service@7 { compatible = "qcom,q6asm"; reg = <APR_SVC_ASM>; qcom,protection-domain = "tms/servreg", "msm/slpi/sensor_pd"; ... }; - q6adm: q6adm { + q6adm: apr-service@8 { compatible = "qcom,q6adm"; reg = <APR_SVC_ADM>; qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd"; diff --git a/Documentation/devicetree/bindings/soc/ti/k3-socinfo.yaml b/Documentation/devicetree/bindings/soc/ti/k3-socinfo.yaml new file mode 100644 index 000000000000..a1a8423b2e2e --- /dev/null +++ b/Documentation/devicetree/bindings/soc/ti/k3-socinfo.yaml @@ -0,0 +1,40 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/soc/ti/k3-socinfo.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Texas Instruments K3 Multicore SoC platforms chipid module + +maintainers: + - Tero Kristo <t-kristo@ti.com> + - Nishanth Menon <nm@ti.com> + +description: | + Texas Instruments (ARM64) K3 Multicore SoC platforms chipid module is + represented by CTRLMMR_xxx_JTAGID register which contains information about + SoC id and revision. 
+ +properties: + $nodename: + pattern: "^chipid@[0-9a-f]+$" + + compatible: + items: + - const: ti,am654-chipid + + reg: + maxItems: 1 + +required: + - compatible + - reg + +additionalProperties: false + +examples: + - | + chipid@43000014 { + compatible = "ti,am654-chipid"; + reg = <0x43000014 0x4>; + }; diff --git a/MAINTAINERS b/MAINTAINERS index a5dac5c4e78d..3d2a9236284b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -16729,6 +16729,16 @@ M: Laxman Dewangan <ldewangan@nvidia.com> S: Supported F: drivers/spi/spi-tegra* +TEGRA VIDEO DRIVER +M: Thierry Reding <thierry.reding@gmail.com> +M: Jonathan Hunter <jonathanh@nvidia.com> +M: Sowjanya Komatineni <skomatineni@nvidia.com> +L: linux-media@vger.kernel.org +L: linux-tegra@vger.kernel.org +S: Maintained +F: Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt +F: drivers/staging/media/tegra-video/ + TEGRA XUSB PADCTL DRIVER M: JC Kuo <jckuo@nvidia.com> S: Supported diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms index 25cbb556d863..44537dcfd251 100644 --- a/arch/arm64/Kconfig.platforms +++ b/arch/arm64/Kconfig.platforms @@ -248,7 +248,7 @@ config ARCH_TEGRA This enables support for the NVIDIA Tegra SoC family. config ARCH_SPRD - tristate "Spreadtrum SoC platform" + bool "Spreadtrum SoC platform" help Support for Spreadtrum ARM based SoCs diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig index 2d73d4b6c9a6..c8818e3b1079 100644 --- a/drivers/bus/Kconfig +++ b/drivers/bus/Kconfig @@ -38,6 +38,36 @@ config BRCMSTB_GISB_ARB arbiter. This driver provides timeout and target abort error handling and internal bus master decoding. +config BT1_APB + bool "Baikal-T1 APB-bus driver" + depends on MIPS_BAIKAL_T1 || COMPILE_TEST + select REGMAP_MMIO + help + Baikal-T1 AXI-APB bridge is used to access the SoC subsystem CSRs. + IO requests are routed to this bus by means of the DW AMBA 3 AXI + Interconnect. In case of any APB protocol collisions, slave device + not responding on timeout an IRQ is raised with an erroneous address + reported to the APB terminator (APB Errors Handler Block). This + driver provides the interrupt handler to detect the erroneous + address, prints an error message about the address fault, updates an + errors counter. The counter and the APB-bus operations timeout can be + accessed via corresponding sysfs nodes. + +config BT1_AXI + bool "Baikal-T1 AXI-bus driver" + depends on MIPS_BAIKAL_T1 || COMPILE_TEST + select MFD_SYSCON + help + AXI3-bus is the main communication bus connecting all high-speed + peripheral IP-cores with RAM controller and with MIPS P5600 cores on + Baikal-T1 SoC. Traffic arbitration is done by means of DW AMBA 3 AXI + Interconnect (so called AXI Main Interconnect) routing IO requests + from one SoC block to another. This driver provides a way to detect + any bus protocol errors and device not responding situations by + means of an embedded on top of the interconnect errors handler + block (EHB). AXI Interconnect QoS arbitration tuning is currently + unsupported. 
+ config MOXTET tristate "CZ.NIC Turris Mox module configuration bus" depends on SPI_MASTER && OF diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile index 97552b427f12..397e35392bff 100644 --- a/drivers/bus/Makefile +++ b/drivers/bus/Makefile @@ -13,6 +13,8 @@ obj-$(CONFIG_MOXTET) += moxtet.o # DPAA2 fsl-mc bus obj-$(CONFIG_FSL_MC_BUS) += fsl-mc/ +obj-$(CONFIG_BT1_APB) += bt1-apb.o +obj-$(CONFIG_BT1_AXI) += bt1-axi.o obj-$(CONFIG_IMX_WEIM) += imx-weim.o obj-$(CONFIG_MIPS_CDMM) += mips_cdmm.o obj-$(CONFIG_MVEBU_MBUS) += mvebu-mbus.o diff --git a/drivers/bus/bt1-apb.c b/drivers/bus/bt1-apb.c new file mode 100644 index 000000000000..b25ff941e7c7 --- /dev/null +++ b/drivers/bus/bt1-apb.c @@ -0,0 +1,421 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 BAIKAL ELECTRONICS, JSC + * + * Authors: + * Serge Semin <Sergey.Semin@baikalelectronics.ru> + * + * Baikal-T1 APB-bus driver + */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/types.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/platform_device.h> +#include <linux/interrupt.h> +#include <linux/io.h> +#include <linux/nmi.h> +#include <linux/of.h> +#include <linux/regmap.h> +#include <linux/clk.h> +#include <linux/reset.h> +#include <linux/time64.h> +#include <linux/clk.h> +#include <linux/sysfs.h> + +#define APB_EHB_ISR 0x00 +#define APB_EHB_ISR_PENDING BIT(0) +#define APB_EHB_ISR_MASK BIT(1) +#define APB_EHB_ADDR 0x04 +#define APB_EHB_TIMEOUT 0x08 + +#define APB_EHB_TIMEOUT_MIN 0x000003FFU +#define APB_EHB_TIMEOUT_MAX 0xFFFFFFFFU + +/* + * struct bt1_apb - Baikal-T1 APB EHB private data + * @dev: Pointer to the device structure. + * @regs: APB EHB registers map. + * @res: No-device error injection memory region. + * @irq: Errors IRQ number. + * @rate: APB-bus reference clock rate. + * @pclk: APB-reference clock. + * @prst: APB domain reset line. + * @count: Number of errors detected. + */ +struct bt1_apb { + struct device *dev; + + struct regmap *regs; + void __iomem *res; + int irq; + + unsigned long rate; + struct clk *pclk; + + struct reset_control *prst; + + atomic_t count; +}; + +static const struct regmap_config bt1_apb_regmap_cfg = { + .reg_bits = 32, + .val_bits = 32, + .reg_stride = 4, + .max_register = APB_EHB_TIMEOUT, + .fast_io = true +}; + +static inline unsigned long bt1_apb_n_to_timeout_us(struct bt1_apb *apb, u32 n) +{ + u64 timeout = (u64)n * USEC_PER_SEC; + + do_div(timeout, apb->rate); + + return timeout; + +} + +static inline unsigned long bt1_apb_timeout_to_n_us(struct bt1_apb *apb, + unsigned long timeout) +{ + u64 n = (u64)timeout * apb->rate; + + do_div(n, USEC_PER_SEC); + + return n; + +} + +static irqreturn_t bt1_apb_isr(int irq, void *data) +{ + struct bt1_apb *apb = data; + u32 addr = 0; + + regmap_read(apb->regs, APB_EHB_ADDR, &addr); + + dev_crit_ratelimited(apb->dev, + "APB-bus fault %d: Slave access timeout at 0x%08x\n", + atomic_inc_return(&apb->count), + addr); + + /* + * Print backtrace on each CPU. This might be pointless if the fault + * has happened on the same CPU as the IRQ handler is executed or + * the other core proceeded further execution despite the error. + * But if it's not, by looking at the trace we would get straight to + * the cause of the problem. 
+ */ + trigger_all_cpu_backtrace(); + + regmap_update_bits(apb->regs, APB_EHB_ISR, APB_EHB_ISR_PENDING, 0); + + return IRQ_HANDLED; +} + +static void bt1_apb_clear_data(void *data) +{ + struct bt1_apb *apb = data; + struct platform_device *pdev = to_platform_device(apb->dev); + + platform_set_drvdata(pdev, NULL); +} + +static struct bt1_apb *bt1_apb_create_data(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct bt1_apb *apb; + int ret; + + apb = devm_kzalloc(dev, sizeof(*apb), GFP_KERNEL); + if (!apb) + return ERR_PTR(-ENOMEM); + + ret = devm_add_action(dev, bt1_apb_clear_data, apb); + if (ret) { + dev_err(dev, "Can't add APB EHB data clear action\n"); + return ERR_PTR(ret); + } + + apb->dev = dev; + atomic_set(&apb->count, 0); + platform_set_drvdata(pdev, apb); + + return apb; +} + +static int bt1_apb_request_regs(struct bt1_apb *apb) +{ + struct platform_device *pdev = to_platform_device(apb->dev); + void __iomem *regs; + + regs = devm_platform_ioremap_resource_byname(pdev, "ehb"); + if (IS_ERR(regs)) { + dev_err(apb->dev, "Couldn't map APB EHB registers\n"); + return PTR_ERR(regs); + } + + apb->regs = devm_regmap_init_mmio(apb->dev, regs, &bt1_apb_regmap_cfg); + if (IS_ERR(apb->regs)) { + dev_err(apb->dev, "Couldn't create APB EHB regmap\n"); + return PTR_ERR(apb->regs); + } + + apb->res = devm_platform_ioremap_resource_byname(pdev, "nodev"); + if (IS_ERR(apb->res)) + dev_err(apb->dev, "Couldn't map reserved region\n"); + + return PTR_ERR_OR_ZERO(apb->res); +} + +static int bt1_apb_request_rst(struct bt1_apb *apb) +{ + int ret; + + apb->prst = devm_reset_control_get_optional_exclusive(apb->dev, "prst"); + if (IS_ERR(apb->prst)) { + dev_warn(apb->dev, "Couldn't get reset control line\n"); + return PTR_ERR(apb->prst); + } + + ret = reset_control_deassert(apb->prst); + if (ret) + dev_err(apb->dev, "Failed to deassert the reset line\n"); + + return ret; +} + +static void bt1_apb_disable_clk(void *data) +{ + struct bt1_apb *apb = data; + + clk_disable_unprepare(apb->pclk); +} + +static int bt1_apb_request_clk(struct bt1_apb *apb) +{ + int ret; + + apb->pclk = devm_clk_get(apb->dev, "pclk"); + if (IS_ERR(apb->pclk)) { + dev_err(apb->dev, "Couldn't get APB clock descriptor\n"); + return PTR_ERR(apb->pclk); + } + + ret = clk_prepare_enable(apb->pclk); + if (ret) { + dev_err(apb->dev, "Couldn't enable the APB clock\n"); + return ret; + } + + ret = devm_add_action_or_reset(apb->dev, bt1_apb_disable_clk, apb); + if (ret) { + dev_err(apb->dev, "Can't add APB EHB clocks disable action\n"); + return ret; + } + + apb->rate = clk_get_rate(apb->pclk); + if (!apb->rate) { + dev_err(apb->dev, "Invalid clock rate\n"); + return -EINVAL; + } + + return 0; +} + +static void bt1_apb_clear_irq(void *data) +{ + struct bt1_apb *apb = data; + + regmap_update_bits(apb->regs, APB_EHB_ISR, APB_EHB_ISR_MASK, 0); +} + +static int bt1_apb_request_irq(struct bt1_apb *apb) +{ + struct platform_device *pdev = to_platform_device(apb->dev); + int ret; + + apb->irq = platform_get_irq(pdev, 0); + if (apb->irq < 0) + return apb->irq; + + ret = devm_request_irq(apb->dev, apb->irq, bt1_apb_isr, IRQF_SHARED, + "bt1-apb", apb); + if (ret) { + dev_err(apb->dev, "Couldn't request APB EHB IRQ\n"); + return ret; + } + + ret = devm_add_action(apb->dev, bt1_apb_clear_irq, apb); + if (ret) { + dev_err(apb->dev, "Can't add APB EHB IRQs clear action\n"); + return ret; + } + + /* Unmask IRQ and clear it' pending flag. 
*/ + regmap_update_bits(apb->regs, APB_EHB_ISR, + APB_EHB_ISR_PENDING | APB_EHB_ISR_MASK, + APB_EHB_ISR_MASK); + + return 0; +} + +static ssize_t count_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct bt1_apb *apb = dev_get_drvdata(dev); + + return scnprintf(buf, PAGE_SIZE, "%d\n", atomic_read(&apb->count)); +} +static DEVICE_ATTR_RO(count); + +static ssize_t timeout_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct bt1_apb *apb = dev_get_drvdata(dev); + unsigned long timeout; + int ret; + u32 n; + + ret = regmap_read(apb->regs, APB_EHB_TIMEOUT, &n); + if (ret) + return ret; + + timeout = bt1_apb_n_to_timeout_us(apb, n); + + return scnprintf(buf, PAGE_SIZE, "%lu\n", timeout); +} + +static ssize_t timeout_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct bt1_apb *apb = dev_get_drvdata(dev); + unsigned long timeout; + int ret; + u32 n; + + if (kstrtoul(buf, 0, &timeout) < 0) + return -EINVAL; + + n = bt1_apb_timeout_to_n_us(apb, timeout); + n = clamp(n, APB_EHB_TIMEOUT_MIN, APB_EHB_TIMEOUT_MAX); + + ret = regmap_write(apb->regs, APB_EHB_TIMEOUT, n); + + return ret ?: count; +} +static DEVICE_ATTR_RW(timeout); + +static ssize_t inject_error_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return scnprintf(buf, PAGE_SIZE, "Error injection: nodev irq\n"); +} + +static ssize_t inject_error_store(struct device *dev, + struct device_attribute *attr, + const char *data, size_t count) +{ + struct bt1_apb *apb = dev_get_drvdata(dev); + + /* + * Either dummy read from the unmapped address in the APB IO area + * or manually set the IRQ status. + */ + if (sysfs_streq(data, "nodev")) + readl(apb->res); + else if (sysfs_streq(data, "irq")) + regmap_update_bits(apb->regs, APB_EHB_ISR, APB_EHB_ISR_PENDING, + APB_EHB_ISR_PENDING); + else + return -EINVAL; + + return count; +} +static DEVICE_ATTR_RW(inject_error); + +static struct attribute *bt1_apb_sysfs_attrs[] = { + &dev_attr_count.attr, + &dev_attr_timeout.attr, + &dev_attr_inject_error.attr, + NULL +}; +ATTRIBUTE_GROUPS(bt1_apb_sysfs); + +static void bt1_apb_remove_sysfs(void *data) +{ + struct bt1_apb *apb = data; + + device_remove_groups(apb->dev, bt1_apb_sysfs_groups); +} + +static int bt1_apb_init_sysfs(struct bt1_apb *apb) +{ + int ret; + + ret = device_add_groups(apb->dev, bt1_apb_sysfs_groups); + if (ret) { + dev_err(apb->dev, "Failed to create EHB APB sysfs nodes\n"); + return ret; + } + + ret = devm_add_action_or_reset(apb->dev, bt1_apb_remove_sysfs, apb); + if (ret) + dev_err(apb->dev, "Can't add APB EHB sysfs remove action\n"); + + return ret; +} + +static int bt1_apb_probe(struct platform_device *pdev) +{ + struct bt1_apb *apb; + int ret; + + apb = bt1_apb_create_data(pdev); + if (IS_ERR(apb)) + return PTR_ERR(apb); + + ret = bt1_apb_request_regs(apb); + if (ret) + return ret; + + ret = bt1_apb_request_rst(apb); + if (ret) + return ret; + + ret = bt1_apb_request_clk(apb); + if (ret) + return ret; + + ret = bt1_apb_request_irq(apb); + if (ret) + return ret; + + ret = bt1_apb_init_sysfs(apb); + if (ret) + return ret; + + return 0; +} + +static const struct of_device_id bt1_apb_of_match[] = { + { .compatible = "baikal,bt1-apb" }, + { } +}; +MODULE_DEVICE_TABLE(of, bt1_apb_of_match); + +static struct platform_driver bt1_apb_driver = { + .probe = bt1_apb_probe, + .driver = { + .name = "bt1-apb", + .of_match_table = bt1_apb_of_match + } +}; +module_platform_driver(bt1_apb_driver); + +MODULE_AUTHOR("Serge 
Semin <Sergey.Semin@baikalelectronics.ru>"); +MODULE_DESCRIPTION("Baikal-T1 APB-bus driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/bus/bt1-axi.c b/drivers/bus/bt1-axi.c new file mode 100644 index 000000000000..e7a6744acc7b --- /dev/null +++ b/drivers/bus/bt1-axi.c @@ -0,0 +1,314 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 BAIKAL ELECTRONICS, JSC + * + * Authors: + * Serge Semin <Sergey.Semin@baikalelectronics.ru> + * + * Baikal-T1 AXI-bus driver + */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/types.h> +#include <linux/bitfield.h> +#include <linux/device.h> +#include <linux/atomic.h> +#include <linux/regmap.h> +#include <linux/platform_device.h> +#include <linux/mfd/syscon.h> +#include <linux/interrupt.h> +#include <linux/io.h> +#include <linux/nmi.h> +#include <linux/of.h> +#include <linux/clk.h> +#include <linux/reset.h> +#include <linux/sysfs.h> + +#define BT1_AXI_WERRL 0x110 +#define BT1_AXI_WERRH 0x114 +#define BT1_AXI_WERRH_TYPE BIT(23) +#define BT1_AXI_WERRH_ADDR_FLD 24 +#define BT1_AXI_WERRH_ADDR_MASK GENMASK(31, BT1_AXI_WERRH_ADDR_FLD) + +/* + * struct bt1_axi - Baikal-T1 AXI-bus private data + * @dev: Pointer to the device structure. + * @qos_regs: AXI Interconnect QoS tuning registers. + * @sys_regs: Baikal-T1 System Controller registers map. + * @irq: Errors IRQ number. + * @aclk: AXI reference clock. + * @arst: AXI Interconnect reset line. + * @count: Number of errors detected. + */ +struct bt1_axi { + struct device *dev; + + void __iomem *qos_regs; + struct regmap *sys_regs; + int irq; + + struct clk *aclk; + + struct reset_control *arst; + + atomic_t count; +}; + +static irqreturn_t bt1_axi_isr(int irq, void *data) +{ + struct bt1_axi *axi = data; + u32 low = 0, high = 0; + + regmap_read(axi->sys_regs, BT1_AXI_WERRL, &low); + regmap_read(axi->sys_regs, BT1_AXI_WERRH, &high); + + dev_crit_ratelimited(axi->dev, + "AXI-bus fault %d: %s at 0x%x%08x\n", + atomic_inc_return(&axi->count), + high & BT1_AXI_WERRH_TYPE ? "no slave" : "slave protocol error", + high, low); + + /* + * Print backtrace on each CPU. This might be pointless if the fault + * has happened on the same CPU as the IRQ handler is executed or + * the other core proceeded further execution despite the error. + * But if it's not, by looking at the trace we would get straight to + * the cause of the problem. 
+ */ + trigger_all_cpu_backtrace(); + + return IRQ_HANDLED; +} + +static void bt1_axi_clear_data(void *data) +{ + struct bt1_axi *axi = data; + struct platform_device *pdev = to_platform_device(axi->dev); + + platform_set_drvdata(pdev, NULL); +} + +static struct bt1_axi *bt1_axi_create_data(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct bt1_axi *axi; + int ret; + + axi = devm_kzalloc(dev, sizeof(*axi), GFP_KERNEL); + if (!axi) + return ERR_PTR(-ENOMEM); + + ret = devm_add_action(dev, bt1_axi_clear_data, axi); + if (ret) { + dev_err(dev, "Can't add AXI EHB data clear action\n"); + return ERR_PTR(ret); + } + + axi->dev = dev; + atomic_set(&axi->count, 0); + platform_set_drvdata(pdev, axi); + + return axi; +} + +static int bt1_axi_request_regs(struct bt1_axi *axi) +{ + struct platform_device *pdev = to_platform_device(axi->dev); + struct device *dev = axi->dev; + + axi->sys_regs = syscon_regmap_lookup_by_phandle(dev->of_node, "syscon"); + if (IS_ERR(axi->sys_regs)) { + dev_err(dev, "Couldn't find syscon registers\n"); + return PTR_ERR(axi->sys_regs); + } + + axi->qos_regs = devm_platform_ioremap_resource_byname(pdev, "qos"); + if (IS_ERR(axi->qos_regs)) + dev_err(dev, "Couldn't map AXI-bus QoS registers\n"); + + return PTR_ERR_OR_ZERO(axi->qos_regs); +} + +static int bt1_axi_request_rst(struct bt1_axi *axi) +{ + int ret; + + axi->arst = devm_reset_control_get_optional_exclusive(axi->dev, "arst"); + if (IS_ERR(axi->arst)) { + dev_warn(axi->dev, "Couldn't get reset control line\n"); + return PTR_ERR(axi->arst); + } + + ret = reset_control_deassert(axi->arst); + if (ret) + dev_err(axi->dev, "Failed to deassert the reset line\n"); + + return ret; +} + +static void bt1_axi_disable_clk(void *data) +{ + struct bt1_axi *axi = data; + + clk_disable_unprepare(axi->aclk); +} + +static int bt1_axi_request_clk(struct bt1_axi *axi) +{ + int ret; + + axi->aclk = devm_clk_get(axi->dev, "aclk"); + if (IS_ERR(axi->aclk)) { + dev_err(axi->dev, "Couldn't get AXI Interconnect clock\n"); + return PTR_ERR(axi->aclk); + } + + ret = clk_prepare_enable(axi->aclk); + if (ret) { + dev_err(axi->dev, "Couldn't enable the AXI clock\n"); + return ret; + } + + ret = devm_add_action_or_reset(axi->dev, bt1_axi_disable_clk, axi); + if (ret) + dev_err(axi->dev, "Can't add AXI clock disable action\n"); + + return ret; +} + +static int bt1_axi_request_irq(struct bt1_axi *axi) +{ + struct platform_device *pdev = to_platform_device(axi->dev); + int ret; + + axi->irq = platform_get_irq(pdev, 0); + if (axi->irq < 0) + return axi->irq; + + ret = devm_request_irq(axi->dev, axi->irq, bt1_axi_isr, IRQF_SHARED, + "bt1-axi", axi); + if (ret) + dev_err(axi->dev, "Couldn't request AXI EHB IRQ\n"); + + return ret; +} + +static ssize_t count_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct bt1_axi *axi = dev_get_drvdata(dev); + + return scnprintf(buf, PAGE_SIZE, "%d\n", atomic_read(&axi->count)); +} +static DEVICE_ATTR_RO(count); + +static ssize_t inject_error_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return scnprintf(buf, PAGE_SIZE, "Error injection: bus unaligned\n"); +} + +static ssize_t inject_error_store(struct device *dev, + struct device_attribute *attr, + const char *data, size_t count) +{ + struct bt1_axi *axi = dev_get_drvdata(dev); + + /* + * Performing unaligned read from the memory will cause the CM2 bus + * error while unaligned writing - the AXI bus write error handled + * by this driver. 
+ */ + if (sysfs_streq(data, "bus")) + readb(axi->qos_regs); + else if (sysfs_streq(data, "unaligned")) + writeb(0, axi->qos_regs); + else + return -EINVAL; + + return count; +} +static DEVICE_ATTR_RW(inject_error); + +static struct attribute *bt1_axi_sysfs_attrs[] = { + &dev_attr_count.attr, + &dev_attr_inject_error.attr, + NULL +}; +ATTRIBUTE_GROUPS(bt1_axi_sysfs); + +static void bt1_axi_remove_sysfs(void *data) +{ + struct bt1_axi *axi = data; + + device_remove_groups(axi->dev, bt1_axi_sysfs_groups); +} + +static int bt1_axi_init_sysfs(struct bt1_axi *axi) +{ + int ret; + + ret = device_add_groups(axi->dev, bt1_axi_sysfs_groups); + if (ret) { + dev_err(axi->dev, "Failed to add sysfs files group\n"); + return ret; + } + + ret = devm_add_action_or_reset(axi->dev, bt1_axi_remove_sysfs, axi); + if (ret) + dev_err(axi->dev, "Can't add AXI EHB sysfs remove action\n"); + + return ret; +} + +static int bt1_axi_probe(struct platform_device *pdev) +{ + struct bt1_axi *axi; + int ret; + + axi = bt1_axi_create_data(pdev); + if (IS_ERR(axi)) + return PTR_ERR(axi); + + ret = bt1_axi_request_regs(axi); + if (ret) + return ret; + + ret = bt1_axi_request_rst(axi); + if (ret) + return ret; + + ret = bt1_axi_request_clk(axi); + if (ret) + return ret; + + ret = bt1_axi_request_irq(axi); + if (ret) + return ret; + + ret = bt1_axi_init_sysfs(axi); + if (ret) + return ret; + + return 0; +} + +static const struct of_device_id bt1_axi_of_match[] = { + { .compatible = "baikal,bt1-axi" }, + { } +}; +MODULE_DEVICE_TABLE(of, bt1_axi_of_match); + +static struct platform_driver bt1_axi_driver = { + .probe = bt1_axi_probe, + .driver = { + .name = "bt1-axi", + .of_match_table = bt1_axi_of_match + } +}; +module_platform_driver(bt1_axi_driver); + +MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>"); +MODULE_DESCRIPTION("Baikal-T1 AXI-bus driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile index fb30c16e1596..1b431110b6f5 100644 --- a/drivers/clk/Makefile +++ b/drivers/clk/Makefile @@ -105,7 +105,7 @@ obj-$(CONFIG_CLK_SIFIVE) += sifive/ obj-$(CONFIG_ARCH_SIRF) += sirf/ obj-$(CONFIG_ARCH_SOCFPGA) += socfpga/ obj-$(CONFIG_PLAT_SPEAR) += spear/ -obj-$(CONFIG_ARCH_SPRD) += sprd/ +obj-y += sprd/ obj-$(CONFIG_ARCH_STI) += st/ obj-$(CONFIG_ARCH_STRATIX10) += socfpga/ obj-$(CONFIG_ARCH_SUNXI) += sunxi/ diff --git a/drivers/clk/mediatek/Kconfig b/drivers/clk/mediatek/Kconfig index ea3c70d1307e..9e28db8125cd 100644 --- a/drivers/clk/mediatek/Kconfig +++ b/drivers/clk/mediatek/Kconfig @@ -274,6 +274,13 @@ config COMMON_CLK_MT8173 ---help--- This driver supports MediaTek MT8173 clocks. +config COMMON_CLK_MT8173_MMSYS + bool "Clock driver for MediaTek MT8173 mmsys" + depends on COMMON_CLK_MT8173 + default COMMON_CLK_MT8173 + help + This driver supports MediaTek MT8173 mmsys clocks. 
+ config COMMON_CLK_MT8183 bool "Clock driver for MediaTek MT8183" depends on (ARCH_MEDIATEK && ARM64) || COMPILE_TEST diff --git a/drivers/clk/mediatek/Makefile b/drivers/clk/mediatek/Makefile index 8cdb76a5cd71..bb0536942075 100644 --- a/drivers/clk/mediatek/Makefile +++ b/drivers/clk/mediatek/Makefile @@ -41,6 +41,7 @@ obj-$(CONFIG_COMMON_CLK_MT7629_ETHSYS) += clk-mt7629-eth.o obj-$(CONFIG_COMMON_CLK_MT7629_HIFSYS) += clk-mt7629-hif.o obj-$(CONFIG_COMMON_CLK_MT8135) += clk-mt8135.o obj-$(CONFIG_COMMON_CLK_MT8173) += clk-mt8173.o +obj-$(CONFIG_COMMON_CLK_MT8173_MMSYS) += clk-mt8173-mm.o obj-$(CONFIG_COMMON_CLK_MT8183) += clk-mt8183.o obj-$(CONFIG_COMMON_CLK_MT8183_AUDIOSYS) += clk-mt8183-audio.o obj-$(CONFIG_COMMON_CLK_MT8183_CAMSYS) += clk-mt8183-cam.o diff --git a/drivers/clk/mediatek/clk-mt2701-mm.c b/drivers/clk/mediatek/clk-mt2701-mm.c index 054b597d4a73..cb18e1849492 100644 --- a/drivers/clk/mediatek/clk-mt2701-mm.c +++ b/drivers/clk/mediatek/clk-mt2701-mm.c @@ -79,16 +79,12 @@ static const struct mtk_gate mm_clks[] = { GATE_DISP1(CLK_MM_TVE_FMM, "mm_tve_fmm", "mm_sel", 14), }; -static const struct of_device_id of_match_clk_mt2701_mm[] = { - { .compatible = "mediatek,mt2701-mmsys", }, - {} -}; - static int clk_mt2701_mm_probe(struct platform_device *pdev) { + struct device *dev = &pdev->dev; + struct device_node *node = dev->parent->of_node; struct clk_onecell_data *clk_data; int r; - struct device_node *node = pdev->dev.of_node; clk_data = mtk_alloc_clk_data(CLK_MM_NR); @@ -108,7 +104,6 @@ static struct platform_driver clk_mt2701_mm_drv = { .probe = clk_mt2701_mm_probe, .driver = { .name = "clk-mt2701-mm", - .of_match_table = of_match_clk_mt2701_mm, }, }; diff --git a/drivers/clk/mediatek/clk-mt2712-mm.c b/drivers/clk/mediatek/clk-mt2712-mm.c index 1c5948be35f3..5519c3d68c1f 100644 --- a/drivers/clk/mediatek/clk-mt2712-mm.c +++ b/drivers/clk/mediatek/clk-mt2712-mm.c @@ -128,9 +128,10 @@ static const struct mtk_gate mm_clks[] = { static int clk_mt2712_mm_probe(struct platform_device *pdev) { + struct device *dev = &pdev->dev; + struct device_node *node = dev->parent->of_node; struct clk_onecell_data *clk_data; int r; - struct device_node *node = pdev->dev.of_node; clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK); @@ -146,16 +147,10 @@ static int clk_mt2712_mm_probe(struct platform_device *pdev) return r; } -static const struct of_device_id of_match_clk_mt2712_mm[] = { - { .compatible = "mediatek,mt2712-mmsys", }, - {} -}; - static struct platform_driver clk_mt2712_mm_drv = { .probe = clk_mt2712_mm_probe, .driver = { .name = "clk-mt2712-mm", - .of_match_table = of_match_clk_mt2712_mm, }, }; diff --git a/drivers/clk/mediatek/clk-mt6779-mm.c b/drivers/clk/mediatek/clk-mt6779-mm.c index fb5fbb8e3e41..059c1a41ac7a 100644 --- a/drivers/clk/mediatek/clk-mt6779-mm.c +++ b/drivers/clk/mediatek/clk-mt6779-mm.c @@ -84,15 +84,11 @@ static const struct mtk_gate mm_clks[] = { GATE_MM1(CLK_MM_DISP_OVL_FBDC, "mm_disp_ovl_fbdc", "mm_sel", 16), }; -static const struct of_device_id of_match_clk_mt6779_mm[] = { - { .compatible = "mediatek,mt6779-mmsys", }, - {} -}; - static int clk_mt6779_mm_probe(struct platform_device *pdev) { + struct device *dev = &pdev->dev; + struct device_node *node = dev->parent->of_node; struct clk_onecell_data *clk_data; - struct device_node *node = pdev->dev.of_node; clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK); @@ -106,7 +102,6 @@ static struct platform_driver clk_mt6779_mm_drv = { .probe = clk_mt6779_mm_probe, .driver = { .name = "clk-mt6779-mm", - .of_match_table = 
of_match_clk_mt6779_mm, }, }; diff --git a/drivers/clk/mediatek/clk-mt6797-mm.c b/drivers/clk/mediatek/clk-mt6797-mm.c index 8f05653b387d..01fdce287247 100644 --- a/drivers/clk/mediatek/clk-mt6797-mm.c +++ b/drivers/clk/mediatek/clk-mt6797-mm.c @@ -92,16 +92,12 @@ static const struct mtk_gate mm_clks[] = { "clk26m", 3), }; -static const struct of_device_id of_match_clk_mt6797_mm[] = { - { .compatible = "mediatek,mt6797-mmsys", }, - {} -}; - static int clk_mt6797_mm_probe(struct platform_device *pdev) { + struct device *dev = &pdev->dev; + struct device_node *node = dev->parent->of_node; struct clk_onecell_data *clk_data; int r; - struct device_node *node = pdev->dev.of_node; clk_data = mtk_alloc_clk_data(CLK_MM_NR); @@ -121,7 +117,6 @@ static struct platform_driver clk_mt6797_mm_drv = { .probe = clk_mt6797_mm_probe, .driver = { .name = "clk-mt6797-mm", - .of_match_table = of_match_clk_mt6797_mm, }, }; diff --git a/drivers/clk/mediatek/clk-mt8173-mm.c b/drivers/clk/mediatek/clk-mt8173-mm.c new file mode 100644 index 000000000000..36fa20be77b6 --- /dev/null +++ b/drivers/clk/mediatek/clk-mt8173-mm.c @@ -0,0 +1,146 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2014 MediaTek Inc. + * Author: James Liao <jamesjj.liao@mediatek.com> + */ + +#include <linux/clk-provider.h> +#include <linux/of_device.h> +#include <linux/platform_device.h> + +#include "clk-gate.h" +#include "clk-mtk.h" + +#include <dt-bindings/clock/mt8173-clk.h> + +static const struct mtk_gate_regs mm0_cg_regs = { + .set_ofs = 0x0104, + .clr_ofs = 0x0108, + .sta_ofs = 0x0100, +}; + +static const struct mtk_gate_regs mm1_cg_regs = { + .set_ofs = 0x0114, + .clr_ofs = 0x0118, + .sta_ofs = 0x0110, +}; + +#define GATE_MM0(_id, _name, _parent, _shift) { \ + .id = _id, \ + .name = _name, \ + .parent_name = _parent, \ + .regs = &mm0_cg_regs, \ + .shift = _shift, \ + .ops = &mtk_clk_gate_ops_setclr, \ + } + +#define GATE_MM1(_id, _name, _parent, _shift) { \ + .id = _id, \ + .name = _name, \ + .parent_name = _parent, \ + .regs = &mm1_cg_regs, \ + .shift = _shift, \ + .ops = &mtk_clk_gate_ops_setclr, \ + } + +static const struct mtk_gate mt8173_mm_clks[] = { + /* MM0 */ + GATE_MM0(CLK_MM_SMI_COMMON, "mm_smi_common", "mm_sel", 0), + GATE_MM0(CLK_MM_SMI_LARB0, "mm_smi_larb0", "mm_sel", 1), + GATE_MM0(CLK_MM_CAM_MDP, "mm_cam_mdp", "mm_sel", 2), + GATE_MM0(CLK_MM_MDP_RDMA0, "mm_mdp_rdma0", "mm_sel", 3), + GATE_MM0(CLK_MM_MDP_RDMA1, "mm_mdp_rdma1", "mm_sel", 4), + GATE_MM0(CLK_MM_MDP_RSZ0, "mm_mdp_rsz0", "mm_sel", 5), + GATE_MM0(CLK_MM_MDP_RSZ1, "mm_mdp_rsz1", "mm_sel", 6), + GATE_MM0(CLK_MM_MDP_RSZ2, "mm_mdp_rsz2", "mm_sel", 7), + GATE_MM0(CLK_MM_MDP_TDSHP0, "mm_mdp_tdshp0", "mm_sel", 8), + GATE_MM0(CLK_MM_MDP_TDSHP1, "mm_mdp_tdshp1", "mm_sel", 9), + GATE_MM0(CLK_MM_MDP_WDMA, "mm_mdp_wdma", "mm_sel", 11), + GATE_MM0(CLK_MM_MDP_WROT0, "mm_mdp_wrot0", "mm_sel", 12), + GATE_MM0(CLK_MM_MDP_WROT1, "mm_mdp_wrot1", "mm_sel", 13), + GATE_MM0(CLK_MM_FAKE_ENG, "mm_fake_eng", "mm_sel", 14), + GATE_MM0(CLK_MM_MUTEX_32K, "mm_mutex_32k", "rtc_sel", 15), + GATE_MM0(CLK_MM_DISP_OVL0, "mm_disp_ovl0", "mm_sel", 16), + GATE_MM0(CLK_MM_DISP_OVL1, "mm_disp_ovl1", "mm_sel", 17), + GATE_MM0(CLK_MM_DISP_RDMA0, "mm_disp_rdma0", "mm_sel", 18), + GATE_MM0(CLK_MM_DISP_RDMA1, "mm_disp_rdma1", "mm_sel", 19), + GATE_MM0(CLK_MM_DISP_RDMA2, "mm_disp_rdma2", "mm_sel", 20), + GATE_MM0(CLK_MM_DISP_WDMA0, "mm_disp_wdma0", "mm_sel", 21), + GATE_MM0(CLK_MM_DISP_WDMA1, "mm_disp_wdma1", "mm_sel", 22), + GATE_MM0(CLK_MM_DISP_COLOR0, "mm_disp_color0", "mm_sel", 23), + 
GATE_MM0(CLK_MM_DISP_COLOR1, "mm_disp_color1", "mm_sel", 24), + GATE_MM0(CLK_MM_DISP_AAL, "mm_disp_aal", "mm_sel", 25), + GATE_MM0(CLK_MM_DISP_GAMMA, "mm_disp_gamma", "mm_sel", 26), + GATE_MM0(CLK_MM_DISP_UFOE, "mm_disp_ufoe", "mm_sel", 27), + GATE_MM0(CLK_MM_DISP_SPLIT0, "mm_disp_split0", "mm_sel", 28), + GATE_MM0(CLK_MM_DISP_SPLIT1, "mm_disp_split1", "mm_sel", 29), + GATE_MM0(CLK_MM_DISP_MERGE, "mm_disp_merge", "mm_sel", 30), + GATE_MM0(CLK_MM_DISP_OD, "mm_disp_od", "mm_sel", 31), + /* MM1 */ + GATE_MM1(CLK_MM_DISP_PWM0MM, "mm_disp_pwm0mm", "mm_sel", 0), + GATE_MM1(CLK_MM_DISP_PWM026M, "mm_disp_pwm026m", "pwm_sel", 1), + GATE_MM1(CLK_MM_DISP_PWM1MM, "mm_disp_pwm1mm", "mm_sel", 2), + GATE_MM1(CLK_MM_DISP_PWM126M, "mm_disp_pwm126m", "pwm_sel", 3), + GATE_MM1(CLK_MM_DSI0_ENGINE, "mm_dsi0_engine", "mm_sel", 4), + GATE_MM1(CLK_MM_DSI0_DIGITAL, "mm_dsi0_digital", "dsi0_dig", 5), + GATE_MM1(CLK_MM_DSI1_ENGINE, "mm_dsi1_engine", "mm_sel", 6), + GATE_MM1(CLK_MM_DSI1_DIGITAL, "mm_dsi1_digital", "dsi1_dig", 7), + GATE_MM1(CLK_MM_DPI_PIXEL, "mm_dpi_pixel", "dpi0_sel", 8), + GATE_MM1(CLK_MM_DPI_ENGINE, "mm_dpi_engine", "mm_sel", 9), + GATE_MM1(CLK_MM_DPI1_PIXEL, "mm_dpi1_pixel", "lvds_pxl", 10), + GATE_MM1(CLK_MM_DPI1_ENGINE, "mm_dpi1_engine", "mm_sel", 11), + GATE_MM1(CLK_MM_HDMI_PIXEL, "mm_hdmi_pixel", "dpi0_sel", 12), + GATE_MM1(CLK_MM_HDMI_PLLCK, "mm_hdmi_pllck", "hdmi_sel", 13), + GATE_MM1(CLK_MM_HDMI_AUDIO, "mm_hdmi_audio", "apll1", 14), + GATE_MM1(CLK_MM_HDMI_SPDIF, "mm_hdmi_spdif", "apll2", 15), + GATE_MM1(CLK_MM_LVDS_PIXEL, "mm_lvds_pixel", "lvds_pxl", 16), + GATE_MM1(CLK_MM_LVDS_CTS, "mm_lvds_cts", "lvds_cts", 17), + GATE_MM1(CLK_MM_SMI_LARB4, "mm_smi_larb4", "mm_sel", 18), + GATE_MM1(CLK_MM_HDMI_HDCP, "mm_hdmi_hdcp", "hdcp_sel", 19), + GATE_MM1(CLK_MM_HDMI_HDCP24M, "mm_hdmi_hdcp24m", "hdcp_24m_sel", 20), +}; + +struct clk_mt8173_mm_driver_data { + const struct mtk_gate *gates_clk; + int gates_num; +}; + +static const struct clk_mt8173_mm_driver_data mt8173_mmsys_driver_data = { + .gates_clk = mt8173_mm_clks, + .gates_num = ARRAY_SIZE(mt8173_mm_clks), +}; + +static int clk_mt8173_mm_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct device_node *node = dev->parent->of_node; + const struct clk_mt8173_mm_driver_data *data; + struct clk_onecell_data *clk_data; + int ret; + + clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK); + if (!clk_data) + return -ENOMEM; + + data = &mt8173_mmsys_driver_data; + + ret = mtk_clk_register_gates(node, data->gates_clk, data->gates_num, + clk_data); + if (ret) + return ret; + + ret = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data); + if (ret) + return ret; + + return 0; +} + +static struct platform_driver clk_mt8173_mm_drv = { + .driver = { + .name = "clk-mt8173-mm", + }, + .probe = clk_mt8173_mm_probe, +}; + +builtin_platform_driver(clk_mt8173_mm_drv); diff --git a/drivers/clk/mediatek/clk-mt8173.c b/drivers/clk/mediatek/clk-mt8173.c index 537a7f49b0f7..8f898ac476c0 100644 --- a/drivers/clk/mediatek/clk-mt8173.c +++ b/drivers/clk/mediatek/clk-mt8173.c @@ -753,93 +753,6 @@ static const struct mtk_gate img_clks[] __initconst = { GATE_IMG(CLK_IMG_FD, "img_fd", "mm_sel", 11), }; -static const struct mtk_gate_regs mm0_cg_regs __initconst = { - .set_ofs = 0x0104, - .clr_ofs = 0x0108, - .sta_ofs = 0x0100, -}; - -static const struct mtk_gate_regs mm1_cg_regs __initconst = { - .set_ofs = 0x0114, - .clr_ofs = 0x0118, - .sta_ofs = 0x0110, -}; - -#define GATE_MM0(_id, _name, _parent, _shift) { \ - .id = _id, \ - .name = _name, \ - 
.parent_name = _parent, \ - .regs = &mm0_cg_regs, \ - .shift = _shift, \ - .ops = &mtk_clk_gate_ops_setclr, \ - } - -#define GATE_MM1(_id, _name, _parent, _shift) { \ - .id = _id, \ - .name = _name, \ - .parent_name = _parent, \ - .regs = &mm1_cg_regs, \ - .shift = _shift, \ - .ops = &mtk_clk_gate_ops_setclr, \ - } - -static const struct mtk_gate mm_clks[] __initconst = { - /* MM0 */ - GATE_MM0(CLK_MM_SMI_COMMON, "mm_smi_common", "mm_sel", 0), - GATE_MM0(CLK_MM_SMI_LARB0, "mm_smi_larb0", "mm_sel", 1), - GATE_MM0(CLK_MM_CAM_MDP, "mm_cam_mdp", "mm_sel", 2), - GATE_MM0(CLK_MM_MDP_RDMA0, "mm_mdp_rdma0", "mm_sel", 3), - GATE_MM0(CLK_MM_MDP_RDMA1, "mm_mdp_rdma1", "mm_sel", 4), - GATE_MM0(CLK_MM_MDP_RSZ0, "mm_mdp_rsz0", "mm_sel", 5), - GATE_MM0(CLK_MM_MDP_RSZ1, "mm_mdp_rsz1", "mm_sel", 6), - GATE_MM0(CLK_MM_MDP_RSZ2, "mm_mdp_rsz2", "mm_sel", 7), - GATE_MM0(CLK_MM_MDP_TDSHP0, "mm_mdp_tdshp0", "mm_sel", 8), - GATE_MM0(CLK_MM_MDP_TDSHP1, "mm_mdp_tdshp1", "mm_sel", 9), - GATE_MM0(CLK_MM_MDP_WDMA, "mm_mdp_wdma", "mm_sel", 11), - GATE_MM0(CLK_MM_MDP_WROT0, "mm_mdp_wrot0", "mm_sel", 12), - GATE_MM0(CLK_MM_MDP_WROT1, "mm_mdp_wrot1", "mm_sel", 13), - GATE_MM0(CLK_MM_FAKE_ENG, "mm_fake_eng", "mm_sel", 14), - GATE_MM0(CLK_MM_MUTEX_32K, "mm_mutex_32k", "rtc_sel", 15), - GATE_MM0(CLK_MM_DISP_OVL0, "mm_disp_ovl0", "mm_sel", 16), - GATE_MM0(CLK_MM_DISP_OVL1, "mm_disp_ovl1", "mm_sel", 17), - GATE_MM0(CLK_MM_DISP_RDMA0, "mm_disp_rdma0", "mm_sel", 18), - GATE_MM0(CLK_MM_DISP_RDMA1, "mm_disp_rdma1", "mm_sel", 19), - GATE_MM0(CLK_MM_DISP_RDMA2, "mm_disp_rdma2", "mm_sel", 20), - GATE_MM0(CLK_MM_DISP_WDMA0, "mm_disp_wdma0", "mm_sel", 21), - GATE_MM0(CLK_MM_DISP_WDMA1, "mm_disp_wdma1", "mm_sel", 22), - GATE_MM0(CLK_MM_DISP_COLOR0, "mm_disp_color0", "mm_sel", 23), - GATE_MM0(CLK_MM_DISP_COLOR1, "mm_disp_color1", "mm_sel", 24), - GATE_MM0(CLK_MM_DISP_AAL, "mm_disp_aal", "mm_sel", 25), - GATE_MM0(CLK_MM_DISP_GAMMA, "mm_disp_gamma", "mm_sel", 26), - GATE_MM0(CLK_MM_DISP_UFOE, "mm_disp_ufoe", "mm_sel", 27), - GATE_MM0(CLK_MM_DISP_SPLIT0, "mm_disp_split0", "mm_sel", 28), - GATE_MM0(CLK_MM_DISP_SPLIT1, "mm_disp_split1", "mm_sel", 29), - GATE_MM0(CLK_MM_DISP_MERGE, "mm_disp_merge", "mm_sel", 30), - GATE_MM0(CLK_MM_DISP_OD, "mm_disp_od", "mm_sel", 31), - /* MM1 */ - GATE_MM1(CLK_MM_DISP_PWM0MM, "mm_disp_pwm0mm", "mm_sel", 0), - GATE_MM1(CLK_MM_DISP_PWM026M, "mm_disp_pwm026m", "pwm_sel", 1), - GATE_MM1(CLK_MM_DISP_PWM1MM, "mm_disp_pwm1mm", "mm_sel", 2), - GATE_MM1(CLK_MM_DISP_PWM126M, "mm_disp_pwm126m", "pwm_sel", 3), - GATE_MM1(CLK_MM_DSI0_ENGINE, "mm_dsi0_engine", "mm_sel", 4), - GATE_MM1(CLK_MM_DSI0_DIGITAL, "mm_dsi0_digital", "dsi0_dig", 5), - GATE_MM1(CLK_MM_DSI1_ENGINE, "mm_dsi1_engine", "mm_sel", 6), - GATE_MM1(CLK_MM_DSI1_DIGITAL, "mm_dsi1_digital", "dsi1_dig", 7), - GATE_MM1(CLK_MM_DPI_PIXEL, "mm_dpi_pixel", "dpi0_sel", 8), - GATE_MM1(CLK_MM_DPI_ENGINE, "mm_dpi_engine", "mm_sel", 9), - GATE_MM1(CLK_MM_DPI1_PIXEL, "mm_dpi1_pixel", "lvds_pxl", 10), - GATE_MM1(CLK_MM_DPI1_ENGINE, "mm_dpi1_engine", "mm_sel", 11), - GATE_MM1(CLK_MM_HDMI_PIXEL, "mm_hdmi_pixel", "dpi0_sel", 12), - GATE_MM1(CLK_MM_HDMI_PLLCK, "mm_hdmi_pllck", "hdmi_sel", 13), - GATE_MM1(CLK_MM_HDMI_AUDIO, "mm_hdmi_audio", "apll1", 14), - GATE_MM1(CLK_MM_HDMI_SPDIF, "mm_hdmi_spdif", "apll2", 15), - GATE_MM1(CLK_MM_LVDS_PIXEL, "mm_lvds_pixel", "lvds_pxl", 16), - GATE_MM1(CLK_MM_LVDS_CTS, "mm_lvds_cts", "lvds_cts", 17), - GATE_MM1(CLK_MM_SMI_LARB4, "mm_smi_larb4", "mm_sel", 18), - GATE_MM1(CLK_MM_HDMI_HDCP, "mm_hdmi_hdcp", "hdcp_sel", 19), - 
GATE_MM1(CLK_MM_HDMI_HDCP24M, "mm_hdmi_hdcp24m", "hdcp_24m_sel", 20), -}; - static const struct mtk_gate_regs vdec0_cg_regs __initconst = { .set_ofs = 0x0000, .clr_ofs = 0x0004, @@ -1144,23 +1057,6 @@ static void __init mtk_imgsys_init(struct device_node *node) } CLK_OF_DECLARE(mtk_imgsys, "mediatek,mt8173-imgsys", mtk_imgsys_init); -static void __init mtk_mmsys_init(struct device_node *node) -{ - struct clk_onecell_data *clk_data; - int r; - - clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK); - - mtk_clk_register_gates(node, mm_clks, ARRAY_SIZE(mm_clks), - clk_data); - - r = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data); - if (r) - pr_err("%s(): could not register clock provider: %d\n", - __func__, r); -} -CLK_OF_DECLARE(mtk_mmsys, "mediatek,mt8173-mmsys", mtk_mmsys_init); - static void __init mtk_vdecsys_init(struct device_node *node) { struct clk_onecell_data *clk_data; diff --git a/drivers/clk/mediatek/clk-mt8183-mm.c b/drivers/clk/mediatek/clk-mt8183-mm.c index 720c696b506d..9d60e09619c1 100644 --- a/drivers/clk/mediatek/clk-mt8183-mm.c +++ b/drivers/clk/mediatek/clk-mt8183-mm.c @@ -84,8 +84,9 @@ static const struct mtk_gate mm_clks[] = { static int clk_mt8183_mm_probe(struct platform_device *pdev) { + struct device *dev = &pdev->dev; + struct device_node *node = dev->parent->of_node; struct clk_onecell_data *clk_data; - struct device_node *node = pdev->dev.of_node; clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK); @@ -95,16 +96,10 @@ static int clk_mt8183_mm_probe(struct platform_device *pdev) return of_clk_add_provider(node, of_clk_src_onecell_get, clk_data); } -static const struct of_device_id of_match_clk_mt8183_mm[] = { - { .compatible = "mediatek,mt8183-mmsys", }, - {} -}; - static struct platform_driver clk_mt8183_mm_drv = { .probe = clk_mt8183_mm_probe, .driver = { .name = "clk-mt8183-mm", - .of_match_table = of_match_clk_mt8183_mm, }, }; diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm index 9481292981f0..c6cbfc8baf72 100644 --- a/drivers/cpufreq/Kconfig.arm +++ b/drivers/cpufreq/Kconfig.arm @@ -295,11 +295,11 @@ config ARM_TANGO_CPUFREQ default y config ARM_TEGRA20_CPUFREQ - tristate "Tegra20 CPUFreq support" - depends on ARCH_TEGRA + tristate "Tegra20/30 CPUFreq support" + depends on ARCH_TEGRA && CPUFREQ_DT default y help - This adds the CPUFreq driver support for Tegra20 SOCs. + This adds the CPUFreq driver support for Tegra20/30 SOCs. 
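As the reworked probe in drivers/cpufreq/tegra20-cpufreq.c further below shows, the driver now only validates the SoC's fused process/speedo bin and hands the actual frequency scaling over to the generic cpufreq-dt driver, so the board's device tree must carry an operating-points-v2 table for the CPU (otherwise the probe fails with "please update your device tree"). A minimal sketch of such a node is shown here; the node labels, frequency and opp-supported-hw masks are illustrative assumptions, not part of this merge:

cpus {
	cpu@0 {
		compatible = "arm,cortex-a9";
		device_type = "cpu";
		reg = <0>;
		operating-points-v2 = <&cpu0_opp_table>;
	};
};

cpu0_opp_table: opp-table-cpu0 {
	compatible = "operating-points-v2";
	opp-shared;

	opp-456000000 {
		opp-hz = /bits/ 64 <456000000>;
		/* two cells, matched against the BIT(process id) / BIT(speedo id)
		 * pair the driver registers as supported hardware versions */
		opp-supported-hw = <0x0f 0x03>;
	};
};

The two opp-supported-hw cells correspond to the versions[] pair the probe passes to dev_pm_opp_set_supported_hw(), so only the OPP entries valid for the chip's fused process and speedo IDs get enabled.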
config ARM_TEGRA124_CPUFREQ bool "Tegra124 CPUFreq support" diff --git a/drivers/cpufreq/tegra20-cpufreq.c b/drivers/cpufreq/tegra20-cpufreq.c index f84ecd22f488..8c893043953e 100644 --- a/drivers/cpufreq/tegra20-cpufreq.c +++ b/drivers/cpufreq/tegra20-cpufreq.c @@ -7,201 +7,96 @@ * Based on arch/arm/plat-omap/cpu-omap.c, (C) 2005 Nokia Corporation */ -#include <linux/clk.h> -#include <linux/cpufreq.h> +#include <linux/bits.h> +#include <linux/cpu.h> #include <linux/err.h> #include <linux/init.h> #include <linux/module.h> +#include <linux/of_device.h> #include <linux/platform_device.h> +#include <linux/pm_opp.h> #include <linux/types.h> -static struct cpufreq_frequency_table freq_table[] = { - { .frequency = 216000 }, - { .frequency = 312000 }, - { .frequency = 456000 }, - { .frequency = 608000 }, - { .frequency = 760000 }, - { .frequency = 816000 }, - { .frequency = 912000 }, - { .frequency = 1000000 }, - { .frequency = CPUFREQ_TABLE_END }, -}; - -struct tegra20_cpufreq { - struct device *dev; - struct cpufreq_driver driver; - struct clk *cpu_clk; - struct clk *pll_x_clk; - struct clk *pll_p_clk; - bool pll_x_prepared; -}; +#include <soc/tegra/common.h> +#include <soc/tegra/fuse.h> -static unsigned int tegra_get_intermediate(struct cpufreq_policy *policy, - unsigned int index) +static bool cpu0_node_has_opp_v2_prop(void) { - struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data(); - unsigned int ifreq = clk_get_rate(cpufreq->pll_p_clk) / 1000; - - /* - * Don't switch to intermediate freq if: - * - we are already at it, i.e. policy->cur == ifreq - * - index corresponds to ifreq - */ - if (freq_table[index].frequency == ifreq || policy->cur == ifreq) - return 0; - - return ifreq; -} + struct device_node *np = of_cpu_device_node_get(0); + bool ret = false; -static int tegra_target_intermediate(struct cpufreq_policy *policy, - unsigned int index) -{ - struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data(); - int ret; - - /* - * Take an extra reference to the main pll so it doesn't turn - * off when we move the cpu off of it as enabling it again while we - * switch to it from tegra_target() would take additional time. - * - * When target-freq is equal to intermediate freq we don't need to - * switch to an intermediate freq and so this routine isn't called. - * Also, we wouldn't be using pll_x anymore and must not take extra - * reference to it, as it can be disabled now to save some power. - */ - clk_prepare_enable(cpufreq->pll_x_clk); - - ret = clk_set_parent(cpufreq->cpu_clk, cpufreq->pll_p_clk); - if (ret) - clk_disable_unprepare(cpufreq->pll_x_clk); - else - cpufreq->pll_x_prepared = true; + if (of_get_property(np, "operating-points-v2", NULL)) + ret = true; + of_node_put(np); return ret; } -static int tegra_target(struct cpufreq_policy *policy, unsigned int index) -{ - struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data(); - unsigned long rate = freq_table[index].frequency; - unsigned int ifreq = clk_get_rate(cpufreq->pll_p_clk) / 1000; - int ret; - - /* - * target freq == pll_p, don't need to take extra reference to pll_x_clk - * as it isn't used anymore. - */ - if (rate == ifreq) - return clk_set_parent(cpufreq->cpu_clk, cpufreq->pll_p_clk); - - ret = clk_set_rate(cpufreq->pll_x_clk, rate * 1000); - /* Restore to earlier frequency on error, i.e. 
pll_x */ - if (ret) - dev_err(cpufreq->dev, "Failed to change pll_x to %lu\n", rate); - - ret = clk_set_parent(cpufreq->cpu_clk, cpufreq->pll_x_clk); - /* This shouldn't fail while changing or restoring */ - WARN_ON(ret); - - /* - * Drop count to pll_x clock only if we switched to intermediate freq - * earlier while transitioning to a target frequency. - */ - if (cpufreq->pll_x_prepared) { - clk_disable_unprepare(cpufreq->pll_x_clk); - cpufreq->pll_x_prepared = false; - } - - return ret; -} - -static int tegra_cpu_init(struct cpufreq_policy *policy) -{ - struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data(); - - clk_prepare_enable(cpufreq->cpu_clk); - - /* FIXME: what's the actual transition time? */ - cpufreq_generic_init(policy, freq_table, 300 * 1000); - policy->clk = cpufreq->cpu_clk; - policy->suspend_freq = freq_table[0].frequency; - return 0; -} - -static int tegra_cpu_exit(struct cpufreq_policy *policy) -{ - struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data(); - - clk_disable_unprepare(cpufreq->cpu_clk); - return 0; -} - static int tegra20_cpufreq_probe(struct platform_device *pdev) { - struct tegra20_cpufreq *cpufreq; + struct platform_device *cpufreq_dt; + struct opp_table *opp_table; + struct device *cpu_dev; + u32 versions[2]; int err; - cpufreq = devm_kzalloc(&pdev->dev, sizeof(*cpufreq), GFP_KERNEL); - if (!cpufreq) - return -ENOMEM; + if (!cpu0_node_has_opp_v2_prop()) { + dev_err(&pdev->dev, "operating points not found\n"); + dev_err(&pdev->dev, "please update your device tree\n"); + return -ENODEV; + } + + if (of_machine_is_compatible("nvidia,tegra20")) { + versions[0] = BIT(tegra_sku_info.cpu_process_id); + versions[1] = BIT(tegra_sku_info.soc_speedo_id); + } else { + versions[0] = BIT(tegra_sku_info.cpu_process_id); + versions[1] = BIT(tegra_sku_info.cpu_speedo_id); + } + + dev_info(&pdev->dev, "hardware version 0x%x 0x%x\n", + versions[0], versions[1]); - cpufreq->cpu_clk = clk_get_sys(NULL, "cclk"); - if (IS_ERR(cpufreq->cpu_clk)) - return PTR_ERR(cpufreq->cpu_clk); + cpu_dev = get_cpu_device(0); + if (WARN_ON(!cpu_dev)) + return -ENODEV; - cpufreq->pll_x_clk = clk_get_sys(NULL, "pll_x"); - if (IS_ERR(cpufreq->pll_x_clk)) { - err = PTR_ERR(cpufreq->pll_x_clk); - goto put_cpu; + opp_table = dev_pm_opp_set_supported_hw(cpu_dev, versions, 2); + err = PTR_ERR_OR_ZERO(opp_table); + if (err) { + dev_err(&pdev->dev, "failed to set supported hw: %d\n", err); + return err; } - cpufreq->pll_p_clk = clk_get_sys(NULL, "pll_p"); - if (IS_ERR(cpufreq->pll_p_clk)) { - err = PTR_ERR(cpufreq->pll_p_clk); - goto put_pll_x; + cpufreq_dt = platform_device_register_simple("cpufreq-dt", -1, NULL, 0); + err = PTR_ERR_OR_ZERO(cpufreq_dt); + if (err) { + dev_err(&pdev->dev, + "failed to create cpufreq-dt device: %d\n", err); + goto err_put_supported_hw; } - cpufreq->dev = &pdev->dev; - cpufreq->driver.get = cpufreq_generic_get; - cpufreq->driver.attr = cpufreq_generic_attr; - cpufreq->driver.init = tegra_cpu_init; - cpufreq->driver.exit = tegra_cpu_exit; - cpufreq->driver.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK; - cpufreq->driver.verify = cpufreq_generic_frequency_table_verify; - cpufreq->driver.suspend = cpufreq_generic_suspend; - cpufreq->driver.driver_data = cpufreq; - cpufreq->driver.target_index = tegra_target; - cpufreq->driver.get_intermediate = tegra_get_intermediate; - cpufreq->driver.target_intermediate = tegra_target_intermediate; - snprintf(cpufreq->driver.name, CPUFREQ_NAME_LEN, "tegra"); - - err = cpufreq_register_driver(&cpufreq->driver); - if (err) - goto 
put_pll_p; - - platform_set_drvdata(pdev, cpufreq); + platform_set_drvdata(pdev, cpufreq_dt); return 0; -put_pll_p: - clk_put(cpufreq->pll_p_clk); -put_pll_x: - clk_put(cpufreq->pll_x_clk); -put_cpu: - clk_put(cpufreq->cpu_clk); +err_put_supported_hw: + dev_pm_opp_put_supported_hw(opp_table); return err; } static int tegra20_cpufreq_remove(struct platform_device *pdev) { - struct tegra20_cpufreq *cpufreq = platform_get_drvdata(pdev); + struct platform_device *cpufreq_dt; + struct opp_table *opp_table; - cpufreq_unregister_driver(&cpufreq->driver); + cpufreq_dt = platform_get_drvdata(pdev); + platform_device_unregister(cpufreq_dt); - clk_put(cpufreq->pll_p_clk); - clk_put(cpufreq->pll_x_clk); - clk_put(cpufreq->cpu_clk); + opp_table = dev_pm_opp_get_opp_table(get_cpu_device(0)); + dev_pm_opp_put_supported_hw(opp_table); + dev_pm_opp_put_opp_table(opp_table); return 0; } diff --git a/drivers/cpuidle/cpuidle-tegra.c b/drivers/cpuidle/cpuidle-tegra.c index 313b0290e97b..150045849d78 100644 --- a/drivers/cpuidle/cpuidle-tegra.c +++ b/drivers/cpuidle/cpuidle-tegra.c @@ -365,7 +365,6 @@ static int tegra_cpuidle_probe(struct platform_device *pdev) break; case TEGRA30: - tegra_cpuidle_disable_state(TEGRA_CC6); break; case TEGRA114: diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile index 6694d0d908d6..1cad32b38b29 100644 --- a/drivers/firmware/arm_scmi/Makefile +++ b/drivers/firmware/arm_scmi/Makefile @@ -2,6 +2,8 @@ obj-y = scmi-bus.o scmi-driver.o scmi-protocols.o scmi-transport.o scmi-bus-y = bus.o scmi-driver-y = driver.o -scmi-transport-y = mailbox.o shmem.o +scmi-transport-y = shmem.o +scmi-transport-$(CONFIG_MAILBOX) += mailbox.o +scmi-transport-$(CONFIG_ARM_PSCI_FW) += smc.o scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c index f804e8af6521..ce7d9203e41b 100644 --- a/drivers/firmware/arm_scmi/base.c +++ b/drivers/firmware/arm_scmi/base.c @@ -14,6 +14,13 @@ enum scmi_base_protocol_cmd { BASE_DISCOVER_LIST_PROTOCOLS = 0x6, BASE_DISCOVER_AGENT = 0x7, BASE_NOTIFY_ERRORS = 0x8, + BASE_SET_DEVICE_PERMISSIONS = 0x9, + BASE_SET_PROTOCOL_PERMISSIONS = 0xa, + BASE_RESET_AGENT_CONFIGURATION = 0xb, +}; + +enum scmi_base_protocol_notify { + BASE_ERROR_EVENT = 0x0, }; struct scmi_msg_resp_base_attributes { diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h index 5ac06469b01c..31fe5a22a011 100644 --- a/drivers/firmware/arm_scmi/common.h +++ b/drivers/firmware/arm_scmi/common.h @@ -178,6 +178,8 @@ struct scmi_chan_info { * @send_message: Callback to send a message * @mark_txdone: Callback to mark tx as done * @fetch_response: Callback to fetch response + * @fetch_notification: Callback to fetch notification + * @clear_channel: Callback to clear a channel * @poll_done: Callback to poll transfer status */ struct scmi_transport_ops { @@ -190,6 +192,9 @@ struct scmi_transport_ops { void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret); void (*fetch_response)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer); + void (*fetch_notification)(struct scmi_chan_info *cinfo, + size_t max_len, struct scmi_xfer *xfer); + void (*clear_channel)(struct scmi_chan_info *cinfo); bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer); }; @@ -210,6 +215,9 @@ struct scmi_desc { }; extern const struct scmi_desc scmi_mailbox_desc; +#ifdef CONFIG_HAVE_ARM_SMCCC +extern const struct 
scmi_desc scmi_smc_desc; +#endif void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr); void scmi_free_channel(struct scmi_chan_info *cinfo, struct idr *idr, int id); @@ -222,5 +230,8 @@ void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem, u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem); void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem, struct scmi_xfer *xfer); +void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem, + size_t max_len, struct scmi_xfer *xfer); +void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem); bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem, struct scmi_xfer *xfer); diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c index dbec767222e9..7483cacf63f9 100644 --- a/drivers/firmware/arm_scmi/driver.c +++ b/drivers/firmware/arm_scmi/driver.c @@ -76,6 +76,7 @@ struct scmi_xfers_info { * implementation version and (sub-)vendor identification. * @handle: Instance of SCMI handle to send to clients * @tx_minfo: Universal Transmit Message management info + * @rx_minfo: Universal Receive Message management info * @tx_idr: IDR object to map protocol id to Tx channel info pointer * @rx_idr: IDR object to map protocol id to Rx channel info pointer * @protocols_imp: List of protocols implemented, currently maximum of @@ -89,6 +90,7 @@ struct scmi_info { struct scmi_revision_info version; struct scmi_handle handle; struct scmi_xfers_info tx_minfo; + struct scmi_xfers_info rx_minfo; struct idr tx_idr; struct idr rx_idr; u8 *protocols_imp; @@ -200,37 +202,66 @@ __scmi_xfer_put(struct scmi_xfers_info *minfo, struct scmi_xfer *xfer) spin_unlock_irqrestore(&minfo->xfer_lock, flags); } -/** - * scmi_rx_callback() - callback for receiving messages - * - * @cinfo: SCMI channel info - * @msg_hdr: Message header - * - * Processes one received message to appropriate transfer information and - * signals completion of the transfer. - * - * NOTE: This function will be invoked in IRQ context, hence should be - * as optimal as possible. 
- */ -void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr) +static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr) { - struct scmi_info *info = handle_to_scmi_info(cinfo->handle); - struct scmi_xfers_info *minfo = &info->tx_minfo; - u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr); - u8 msg_type = MSG_XTRACT_TYPE(msg_hdr); - struct device *dev = cinfo->dev; struct scmi_xfer *xfer; + struct device *dev = cinfo->dev; + struct scmi_info *info = handle_to_scmi_info(cinfo->handle); + struct scmi_xfers_info *minfo = &info->rx_minfo; - if (msg_type == MSG_TYPE_NOTIFICATION) - return; /* Notifications not yet supported */ + xfer = scmi_xfer_get(cinfo->handle, minfo); + if (IS_ERR(xfer)) { + dev_err(dev, "failed to get free message slot (%ld)\n", + PTR_ERR(xfer)); + info->desc->ops->clear_channel(cinfo); + return; + } + + unpack_scmi_header(msg_hdr, &xfer->hdr); + scmi_dump_header_dbg(dev, &xfer->hdr); + info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size, + xfer); + + trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id, + xfer->hdr.protocol_id, xfer->hdr.seq, + MSG_TYPE_NOTIFICATION); + + __scmi_xfer_put(minfo, xfer); + + info->desc->ops->clear_channel(cinfo); +} + +static void scmi_handle_response(struct scmi_chan_info *cinfo, + u16 xfer_id, u8 msg_type) +{ + struct scmi_xfer *xfer; + struct device *dev = cinfo->dev; + struct scmi_info *info = handle_to_scmi_info(cinfo->handle); + struct scmi_xfers_info *minfo = &info->tx_minfo; /* Are we even expecting this? */ if (!test_bit(xfer_id, minfo->xfer_alloc_table)) { dev_err(dev, "message for %d is not expected!\n", xfer_id); + info->desc->ops->clear_channel(cinfo); return; } xfer = &minfo->xfer_block[xfer_id]; + /* + * Even if a response was indeed expected on this slot at this point, + * a buggy platform could wrongly reply feeding us an unexpected + * delayed response we're not prepared to handle: bail-out safely + * blaming firmware. + */ + if (unlikely(msg_type == MSG_TYPE_DELAYED_RESP && !xfer->async_done)) { + dev_err(dev, + "Delayed Response for %d not expected! Buggy F/W ?\n", + xfer_id); + info->desc->ops->clear_channel(cinfo); + /* It was unexpected, so nobody will clear the xfer if not us */ + __scmi_xfer_put(minfo, xfer); + return; + } scmi_dump_header_dbg(dev, &xfer->hdr); @@ -240,10 +271,43 @@ void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr) xfer->hdr.protocol_id, xfer->hdr.seq, msg_type); - if (msg_type == MSG_TYPE_DELAYED_RESP) + if (msg_type == MSG_TYPE_DELAYED_RESP) { + info->desc->ops->clear_channel(cinfo); complete(xfer->async_done); - else + } else { complete(&xfer->done); + } +} + +/** + * scmi_rx_callback() - callback for receiving messages + * + * @cinfo: SCMI channel info + * @msg_hdr: Message header + * + * Processes one received message to appropriate transfer information and + * signals completion of the transfer. + * + * NOTE: This function will be invoked in IRQ context, hence should be + * as optimal as possible. 
+ */ +void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr) +{ + u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr); + u8 msg_type = MSG_XTRACT_TYPE(msg_hdr); + + switch (msg_type) { + case MSG_TYPE_NOTIFICATION: + scmi_handle_notification(cinfo, msg_hdr); + break; + case MSG_TYPE_COMMAND: + case MSG_TYPE_DELAYED_RESP: + scmi_handle_response(cinfo, xfer_id, msg_type); + break; + default: + WARN_ONCE(1, "received unknown msg_type:%d\n", msg_type); + break; + } } /** @@ -525,13 +589,13 @@ int scmi_handle_put(const struct scmi_handle *handle) return 0; } -static int scmi_xfer_info_init(struct scmi_info *sinfo) +static int __scmi_xfer_info_init(struct scmi_info *sinfo, + struct scmi_xfers_info *info) { int i; struct scmi_xfer *xfer; struct device *dev = sinfo->dev; const struct scmi_desc *desc = sinfo->desc; - struct scmi_xfers_info *info = &sinfo->tx_minfo; /* Pre-allocated messages, no more than what hdr.seq can support */ if (WARN_ON(desc->max_msg >= MSG_TOKEN_MAX)) { @@ -566,6 +630,16 @@ static int scmi_xfer_info_init(struct scmi_info *sinfo) return 0; } +static int scmi_xfer_info_init(struct scmi_info *sinfo) +{ + int ret = __scmi_xfer_info_init(sinfo, &sinfo->tx_minfo); + + if (!ret && idr_find(&sinfo->rx_idr, SCMI_PROTOCOL_BASE)) + ret = __scmi_xfer_info_init(sinfo, &sinfo->rx_minfo); + + return ret; +} + static int scmi_chan_setup(struct scmi_info *info, struct device *dev, int prot_id, bool tx) { @@ -699,10 +773,6 @@ static int scmi_probe(struct platform_device *pdev) info->desc = desc; INIT_LIST_HEAD(&info->node); - ret = scmi_xfer_info_init(info); - if (ret) - return ret; - platform_set_drvdata(pdev, info); idr_init(&info->tx_idr); idr_init(&info->rx_idr); @@ -715,6 +785,10 @@ static int scmi_probe(struct platform_device *pdev) if (ret) return ret; + ret = scmi_xfer_info_init(info); + if (ret) + return ret; + ret = scmi_base_protocol_init(handle); if (ret) { dev_err(dev, "unable to communicate with SCMI(%d)\n", ret); @@ -827,6 +901,9 @@ ATTRIBUTE_GROUPS(versions); /* Each compatible listed below must have descriptor associated with it */ static const struct of_device_id scmi_of_match[] = { { .compatible = "arm,scmi", .data = &scmi_mailbox_desc }, +#ifdef CONFIG_ARM_PSCI_FW + { .compatible = "arm,scmi-smc", .data = &scmi_smc_desc}, +#endif { /* Sentinel */ }, }; diff --git a/drivers/firmware/arm_scmi/mailbox.c b/drivers/firmware/arm_scmi/mailbox.c index 73077bbc4ad9..6998dc86b5ce 100644 --- a/drivers/firmware/arm_scmi/mailbox.c +++ b/drivers/firmware/arm_scmi/mailbox.c @@ -158,6 +158,21 @@ static void mailbox_fetch_response(struct scmi_chan_info *cinfo, shmem_fetch_response(smbox->shmem, xfer); } +static void mailbox_fetch_notification(struct scmi_chan_info *cinfo, + size_t max_len, struct scmi_xfer *xfer) +{ + struct scmi_mailbox *smbox = cinfo->transport_info; + + shmem_fetch_notification(smbox->shmem, max_len, xfer); +} + +static void mailbox_clear_channel(struct scmi_chan_info *cinfo) +{ + struct scmi_mailbox *smbox = cinfo->transport_info; + + shmem_clear_channel(smbox->shmem); +} + static bool mailbox_poll_done(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer) { @@ -173,6 +188,8 @@ static struct scmi_transport_ops scmi_mailbox_ops = { .send_message = mailbox_send_message, .mark_txdone = mailbox_mark_txdone, .fetch_response = mailbox_fetch_response, + .fetch_notification = mailbox_fetch_notification, + .clear_channel = mailbox_clear_channel, .poll_done = mailbox_poll_done, }; diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c index 
34f3a917dd8d..eadc171e254b 100644 --- a/drivers/firmware/arm_scmi/perf.c +++ b/drivers/firmware/arm_scmi/perf.c @@ -27,6 +27,11 @@ enum scmi_performance_protocol_cmd { PERF_DESCRIBE_FASTCHANNEL = 0xb, }; +enum scmi_performance_protocol_notify { + PERFORMANCE_LIMITS_CHANGED = 0x0, + PERFORMANCE_LEVEL_CHANGED = 0x1, +}; + struct scmi_opp { u32 perf; u32 power; diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c index 214886ce84f1..cf7f0312381b 100644 --- a/drivers/firmware/arm_scmi/power.c +++ b/drivers/firmware/arm_scmi/power.c @@ -12,6 +12,12 @@ enum scmi_power_protocol_cmd { POWER_STATE_SET = 0x4, POWER_STATE_GET = 0x5, POWER_STATE_NOTIFY = 0x6, + POWER_STATE_CHANGE_REQUESTED_NOTIFY = 0x7, +}; + +enum scmi_power_protocol_notify { + POWER_STATE_CHANGED = 0x0, + POWER_STATE_CHANGE_REQUESTED = 0x1, }; struct scmi_msg_resp_power_attributes { diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c index eba61b9c1f53..db1b1ab303da 100644 --- a/drivers/firmware/arm_scmi/sensors.c +++ b/drivers/firmware/arm_scmi/sensors.c @@ -14,6 +14,10 @@ enum scmi_sensor_protocol_cmd { SENSOR_READING_GET = 0x6, }; +enum scmi_sensor_protocol_notify { + SENSOR_TRIP_POINT_EVENT = 0x0, +}; + struct scmi_msg_resp_sensor_attributes { __le16 num_sensors; u8 max_requests; diff --git a/drivers/firmware/arm_scmi/shmem.c b/drivers/firmware/arm_scmi/shmem.c index e1e816e0018c..0e3eaea5d852 100644 --- a/drivers/firmware/arm_scmi/shmem.c +++ b/drivers/firmware/arm_scmi/shmem.c @@ -67,6 +67,21 @@ void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem, memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len); } +void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem, + size_t max_len, struct scmi_xfer *xfer) +{ + /* Skip only the length of header in shmem area i.e 4 bytes */ + xfer->rx.len = min_t(size_t, max_len, ioread32(&shmem->length) - 4); + + /* Take a copy to the rx buffer.. 
*/ + memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len); +} + +void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem) +{ + iowrite32(SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE, &shmem->channel_status); +} + bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem, struct scmi_xfer *xfer) { diff --git a/drivers/firmware/arm_scmi/smc.c b/drivers/firmware/arm_scmi/smc.c new file mode 100644 index 000000000000..49bc4b0e8428 --- /dev/null +++ b/drivers/firmware/arm_scmi/smc.c @@ -0,0 +1,153 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * System Control and Management Interface (SCMI) Message SMC/HVC + * Transport driver + * + * Copyright 2020 NXP + */ + +#include <linux/arm-smccc.h> +#include <linux/device.h> +#include <linux/err.h> +#include <linux/mutex.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/slab.h> + +#include "common.h" + +/** + * struct scmi_smc - Structure representing a SCMI smc transport + * + * @cinfo: SCMI channel info + * @shmem: Transmit/Receive shared memory area + * @func_id: smc/hvc call function id + */ + +struct scmi_smc { + struct scmi_chan_info *cinfo; + struct scmi_shared_mem __iomem *shmem; + struct mutex shmem_lock; + u32 func_id; +}; + +static bool smc_chan_available(struct device *dev, int idx) +{ + struct device_node *np = of_parse_phandle(dev->of_node, "shmem", 0); + if (!np) + return false; + + of_node_put(np); + return true; +} + +static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, + bool tx) +{ + struct device *cdev = cinfo->dev; + struct scmi_smc *scmi_info; + resource_size_t size; + struct resource res; + struct device_node *np; + u32 func_id; + int ret; + + if (!tx) + return -ENODEV; + + scmi_info = devm_kzalloc(dev, sizeof(*scmi_info), GFP_KERNEL); + if (!scmi_info) + return -ENOMEM; + + np = of_parse_phandle(cdev->of_node, "shmem", 0); + ret = of_address_to_resource(np, 0, &res); + of_node_put(np); + if (ret) { + dev_err(cdev, "failed to get SCMI Tx shared memory\n"); + return ret; + } + + size = resource_size(&res); + scmi_info->shmem = devm_ioremap(dev, res.start, size); + if (!scmi_info->shmem) { + dev_err(dev, "failed to ioremap SCMI Tx shared memory\n"); + return -EADDRNOTAVAIL; + } + + ret = of_property_read_u32(dev->of_node, "arm,smc-id", &func_id); + if (ret < 0) + return ret; + + scmi_info->func_id = func_id; + scmi_info->cinfo = cinfo; + mutex_init(&scmi_info->shmem_lock); + cinfo->transport_info = scmi_info; + + return 0; +} + +static int smc_chan_free(int id, void *p, void *data) +{ + struct scmi_chan_info *cinfo = p; + struct scmi_smc *scmi_info = cinfo->transport_info; + + cinfo->transport_info = NULL; + scmi_info->cinfo = NULL; + + scmi_free_channel(cinfo, data, id); + + return 0; +} + +static int smc_send_message(struct scmi_chan_info *cinfo, + struct scmi_xfer *xfer) +{ + struct scmi_smc *scmi_info = cinfo->transport_info; + struct arm_smccc_res res; + + mutex_lock(&scmi_info->shmem_lock); + + shmem_tx_prepare(scmi_info->shmem, xfer); + + arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res); + scmi_rx_callback(scmi_info->cinfo, shmem_read_header(scmi_info->shmem)); + + mutex_unlock(&scmi_info->shmem_lock); + + /* Only SMCCC_RET_NOT_SUPPORTED is valid error code */ + if (res.a0) + return -EOPNOTSUPP; + return 0; +} + +static void smc_fetch_response(struct scmi_chan_info *cinfo, + struct scmi_xfer *xfer) +{ + struct scmi_smc *scmi_info = cinfo->transport_info; + + shmem_fetch_response(scmi_info->shmem, xfer); +} + +static bool +smc_poll_done(struct 
scmi_chan_info *cinfo, struct scmi_xfer *xfer) +{ + struct scmi_smc *scmi_info = cinfo->transport_info; + + return shmem_poll_done(scmi_info->shmem, xfer); +} + +static struct scmi_transport_ops scmi_smc_ops = { + .chan_available = smc_chan_available, + .chan_setup = smc_chan_setup, + .chan_free = smc_chan_free, + .send_message = smc_send_message, + .fetch_response = smc_fetch_response, + .poll_done = smc_poll_done, +}; + +const struct scmi_desc scmi_smc_desc = { + .ops = &scmi_smc_ops, + .max_rx_timeout_ms = 30, + .max_msg = 1, + .max_msg_size = 128, +}; diff --git a/drivers/firmware/imx/imx-scu.c b/drivers/firmware/imx/imx-scu.c index f71eaa5bf52d..2ab048222fe9 100644 --- a/drivers/firmware/imx/imx-scu.c +++ b/drivers/firmware/imx/imx-scu.c @@ -8,7 +8,6 @@ */ #include <linux/err.h> -#include <linux/firmware/imx/types.h> #include <linux/firmware/imx/ipc.h> #include <linux/firmware/imx/sci.h> #include <linux/interrupt.h> @@ -38,6 +37,7 @@ struct imx_sc_ipc { struct device *dev; struct mutex lock; struct completion done; + bool fast_ipc; /* temporarily store the SCU msg */ u32 *msg; @@ -115,6 +115,7 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg) struct imx_sc_ipc *sc_ipc = sc_chan->sc_ipc; struct imx_sc_rpc_msg *hdr; u32 *data = msg; + int i; if (!sc_ipc->msg) { dev_warn(sc_ipc->dev, "unexpected rx idx %d 0x%08x, ignore!\n", @@ -122,6 +123,19 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg) return; } + if (sc_ipc->fast_ipc) { + hdr = msg; + sc_ipc->rx_size = hdr->size; + sc_ipc->msg[0] = *data++; + + for (i = 1; i < sc_ipc->rx_size; i++) + sc_ipc->msg[i] = *data++; + + complete(&sc_ipc->done); + + return; + } + if (sc_chan->idx == 0) { hdr = msg; sc_ipc->rx_size = hdr->size; @@ -143,20 +157,22 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg) static int imx_scu_ipc_write(struct imx_sc_ipc *sc_ipc, void *msg) { - struct imx_sc_rpc_msg *hdr = msg; + struct imx_sc_rpc_msg hdr = *(struct imx_sc_rpc_msg *)msg; struct imx_sc_chan *sc_chan; u32 *data = msg; int ret; + int size; int i; /* Check size */ - if (hdr->size > IMX_SC_RPC_MAX_MSG) + if (hdr.size > IMX_SC_RPC_MAX_MSG) return -EINVAL; - dev_dbg(sc_ipc->dev, "RPC SVC %u FUNC %u SIZE %u\n", hdr->svc, - hdr->func, hdr->size); + dev_dbg(sc_ipc->dev, "RPC SVC %u FUNC %u SIZE %u\n", hdr.svc, + hdr.func, hdr.size); - for (i = 0; i < hdr->size; i++) { + size = sc_ipc->fast_ipc ? 1 : hdr.size; + for (i = 0; i < size; i++) { sc_chan = &sc_ipc->chans[i % 4]; /* @@ -168,8 +184,10 @@ static int imx_scu_ipc_write(struct imx_sc_ipc *sc_ipc, void *msg) * Wait for tx_done before every send to ensure that no * queueing happens at the mailbox channel level. 
*/ - wait_for_completion(&sc_chan->tx_done); - reinit_completion(&sc_chan->tx_done); + if (!sc_ipc->fast_ipc) { + wait_for_completion(&sc_chan->tx_done); + reinit_completion(&sc_chan->tx_done); + } ret = mbox_send_message(sc_chan->ch, &data[i]); if (ret < 0) @@ -246,6 +264,8 @@ static int imx_scu_probe(struct platform_device *pdev) struct imx_sc_chan *sc_chan; struct mbox_client *cl; char *chan_name; + struct of_phandle_args args; + int num_channel; int ret; int i; @@ -253,11 +273,20 @@ static int imx_scu_probe(struct platform_device *pdev) if (!sc_ipc) return -ENOMEM; - for (i = 0; i < SCU_MU_CHAN_NUM; i++) { - if (i < 4) + ret = of_parse_phandle_with_args(pdev->dev.of_node, "mboxes", + "#mbox-cells", 0, &args); + if (ret) + return ret; + + sc_ipc->fast_ipc = of_device_is_compatible(args.np, "fsl,imx8-mu-scu"); + + num_channel = sc_ipc->fast_ipc ? 2 : SCU_MU_CHAN_NUM; + for (i = 0; i < num_channel; i++) { + if (i < num_channel / 2) chan_name = kasprintf(GFP_KERNEL, "tx%d", i); else - chan_name = kasprintf(GFP_KERNEL, "rx%d", i - 4); + chan_name = kasprintf(GFP_KERNEL, "rx%d", + i - num_channel / 2); if (!chan_name) return -ENOMEM; @@ -269,19 +298,22 @@ static int imx_scu_probe(struct platform_device *pdev) cl->knows_txdone = true; cl->rx_callback = imx_scu_rx_callback; - /* Initial tx_done completion as "done" */ - cl->tx_done = imx_scu_tx_done; - init_completion(&sc_chan->tx_done); - complete(&sc_chan->tx_done); + if (!sc_ipc->fast_ipc) { + /* Initial tx_done completion as "done" */ + cl->tx_done = imx_scu_tx_done; + init_completion(&sc_chan->tx_done); + complete(&sc_chan->tx_done); + } sc_chan->sc_ipc = sc_ipc; - sc_chan->idx = i % 4; + sc_chan->idx = i % (num_channel / 2); sc_chan->ch = mbox_request_channel_byname(cl, chan_name); if (IS_ERR(sc_chan->ch)) { ret = PTR_ERR(sc_chan->ch); if (ret != -EPROBE_DEFER) dev_err(dev, "Failed to request mbox chan %s ret %d\n", chan_name, ret); + kfree(chan_name); return ret; } diff --git a/drivers/firmware/qcom_scm-legacy.c b/drivers/firmware/qcom_scm-legacy.c index 8532e7c78ef7..eba6b60bfb61 100644 --- a/drivers/firmware/qcom_scm-legacy.c +++ b/drivers/firmware/qcom_scm-legacy.c @@ -56,7 +56,7 @@ struct scm_legacy_command { __le32 buf_offset; __le32 resp_hdr_offset; __le32 id; - __le32 buf[0]; + __le32 buf[]; }; /** diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c index 059bb0fbae9e..0e7233a20f34 100644 --- a/drivers/firmware/qcom_scm.c +++ b/drivers/firmware/qcom_scm.c @@ -6,7 +6,6 @@ #include <linux/init.h> #include <linux/cpumask.h> #include <linux/export.h> -#include <linux/dma-direct.h> #include <linux/dma-mapping.h> #include <linux/module.h> #include <linux/types.h> @@ -806,8 +805,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz, struct qcom_scm_mem_map_info *mem_to_map; phys_addr_t mem_to_map_phys; phys_addr_t dest_phys; - phys_addr_t ptr_phys; - dma_addr_t ptr_dma; + dma_addr_t ptr_phys; size_t mem_to_map_sz; size_t dest_sz; size_t src_sz; @@ -824,10 +822,9 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz, ptr_sz = ALIGN(src_sz, SZ_64) + ALIGN(mem_to_map_sz, SZ_64) + ALIGN(dest_sz, SZ_64); - ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_dma, GFP_KERNEL); + ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_phys, GFP_KERNEL); if (!ptr) return -ENOMEM; - ptr_phys = dma_to_phys(__scm->dev, ptr_dma); /* Fill source vmid detail */ src = ptr; @@ -855,7 +852,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz, ret = __qcom_scm_assign_mem(__scm->dev, mem_to_map_phys, 
mem_to_map_sz, ptr_phys, src_sz, dest_phys, dest_sz); - dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_dma); + dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_phys); if (ret) { dev_err(__scm->dev, "Assign memory protection call failed %d\n", ret); @@ -943,7 +940,7 @@ bool qcom_scm_hdcp_available(void) qcom_scm_clk_disable(); - return ret > 0 ? true : false; + return ret > 0; } EXPORT_SYMBOL(qcom_scm_hdcp_available); diff --git a/drivers/firmware/tegra/bpmp-tegra186.c b/drivers/firmware/tegra/bpmp-tegra186.c index ea308751635f..63ab21d89c2c 100644 --- a/drivers/firmware/tegra/bpmp-tegra186.c +++ b/drivers/firmware/tegra/bpmp-tegra186.c @@ -176,7 +176,7 @@ static int tegra186_bpmp_init(struct tegra_bpmp *bpmp) priv->tx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 0); if (!priv->tx.pool) { dev_err(bpmp->dev, "TX shmem pool not found\n"); - return -ENOMEM; + return -EPROBE_DEFER; } priv->tx.virt = gen_pool_dma_alloc(priv->tx.pool, 4096, &priv->tx.phys); @@ -188,7 +188,7 @@ static int tegra186_bpmp_init(struct tegra_bpmp *bpmp) priv->rx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 1); if (!priv->rx.pool) { dev_err(bpmp->dev, "RX shmem pool not found\n"); - err = -ENOMEM; + err = -EPROBE_DEFER; goto free_tx; } diff --git a/drivers/gpu/drm/mediatek/Kconfig b/drivers/gpu/drm/mediatek/Kconfig index fa5ffc4fe823..c420f5a3d33b 100644 --- a/drivers/gpu/drm/mediatek/Kconfig +++ b/drivers/gpu/drm/mediatek/Kconfig @@ -11,6 +11,7 @@ config DRM_MEDIATEK select DRM_MIPI_DSI select DRM_PANEL select MEMORY + select MTK_MMSYS select MTK_SMI select VIDEOMODE_HELPERS help diff --git a/drivers/gpu/drm/mediatek/mtk_disp_color.c b/drivers/gpu/drm/mediatek/mtk_disp_color.c index 6fb0d6983a4a..3ae9c810845b 100644 --- a/drivers/gpu/drm/mediatek/mtk_disp_color.c +++ b/drivers/gpu/drm/mediatek/mtk_disp_color.c @@ -119,7 +119,10 @@ static int mtk_disp_color_probe(struct platform_device *pdev) ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id, &mtk_disp_color_funcs); if (ret) { - dev_err(dev, "Failed to initialize component: %d\n", ret); + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to initialize component: %d\n", + ret); + return ret; } diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c index 891d80c73e04..28651bc579bc 100644 --- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c +++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c @@ -386,7 +386,10 @@ static int mtk_disp_ovl_probe(struct platform_device *pdev) ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id, &mtk_disp_ovl_funcs); if (ret) { - dev_err(dev, "Failed to initialize component: %d\n", ret); + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to initialize component: %d\n", + ret); + return ret; } diff --git a/drivers/gpu/drm/mediatek/mtk_disp_rdma.c b/drivers/gpu/drm/mediatek/mtk_disp_rdma.c index 0cb848d64206..e04319fedf46 100644 --- a/drivers/gpu/drm/mediatek/mtk_disp_rdma.c +++ b/drivers/gpu/drm/mediatek/mtk_disp_rdma.c @@ -294,7 +294,10 @@ static int mtk_disp_rdma_probe(struct platform_device *pdev) ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id, &mtk_disp_rdma_funcs); if (ret) { - dev_err(dev, "Failed to initialize component: %d\n", ret); + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to initialize component: %d\n", + ret); + return ret; } diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c index 945c3ac92998..d4f0fb7ad312 100644 --- a/drivers/gpu/drm/mediatek/mtk_dpi.c +++ b/drivers/gpu/drm/mediatek/mtk_dpi.c @@ 
-739,21 +739,27 @@ static int mtk_dpi_probe(struct platform_device *pdev) dpi->engine_clk = devm_clk_get(dev, "engine"); if (IS_ERR(dpi->engine_clk)) { ret = PTR_ERR(dpi->engine_clk); - dev_err(dev, "Failed to get engine clock: %d\n", ret); + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to get engine clock: %d\n", ret); + return ret; } dpi->pixel_clk = devm_clk_get(dev, "pixel"); if (IS_ERR(dpi->pixel_clk)) { ret = PTR_ERR(dpi->pixel_clk); - dev_err(dev, "Failed to get pixel clock: %d\n", ret); + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to get pixel clock: %d\n", ret); + return ret; } dpi->tvd_clk = devm_clk_get(dev, "pll"); if (IS_ERR(dpi->tvd_clk)) { ret = PTR_ERR(dpi->tvd_clk); - dev_err(dev, "Failed to get tvdpll clock: %d\n", ret); + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to get tvdpll clock: %d\n", ret); + return ret; } diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c index fe85e487e477..fe46c4bac64d 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c +++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c @@ -6,6 +6,7 @@ #include <linux/clk.h> #include <linux/pm_runtime.h> #include <linux/soc/mediatek/mtk-cmdq.h> +#include <linux/soc/mediatek/mtk-mmsys.h> #include <asm/barrier.h> #include <soc/mediatek/smi.h> @@ -28,7 +29,7 @@ * @enabled: records whether crtc_enable succeeded * @planes: array of 4 drm_plane structures, one for each overlay plane * @pending_planes: whether any plane has pending changes to be applied - * @config_regs: memory mapped mmsys configuration register space + * @mmsys_dev: pointer to the mmsys device for configuration registers * @mutex: handle to one of the ten disp_mutex streams * @ddp_comp_nr: number of components in ddp_comp * @ddp_comp: array of pointers the mtk_ddp_comp structures used by this crtc @@ -50,7 +51,7 @@ struct mtk_drm_crtc { u32 cmdq_event; #endif - void __iomem *config_regs; + struct device *mmsys_dev; struct mtk_disp_mutex *mutex; unsigned int ddp_comp_nr; struct mtk_ddp_comp **ddp_comp; @@ -300,9 +301,9 @@ static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc) DRM_DEBUG_DRIVER("mediatek_ddp_ddp_path_setup\n"); for (i = 0; i < mtk_crtc->ddp_comp_nr - 1; i++) { - mtk_ddp_add_comp_to_path(mtk_crtc->config_regs, - mtk_crtc->ddp_comp[i]->id, - mtk_crtc->ddp_comp[i + 1]->id); + mtk_mmsys_ddp_connect(mtk_crtc->mmsys_dev, + mtk_crtc->ddp_comp[i]->id, + mtk_crtc->ddp_comp[i + 1]->id); mtk_disp_mutex_add_comp(mtk_crtc->mutex, mtk_crtc->ddp_comp[i]->id); } @@ -360,9 +361,9 @@ static void mtk_crtc_ddp_hw_fini(struct mtk_drm_crtc *mtk_crtc) mtk_crtc->ddp_comp[i]->id); mtk_disp_mutex_disable(mtk_crtc->mutex); for (i = 0; i < mtk_crtc->ddp_comp_nr - 1; i++) { - mtk_ddp_remove_comp_from_path(mtk_crtc->config_regs, - mtk_crtc->ddp_comp[i]->id, - mtk_crtc->ddp_comp[i + 1]->id); + mtk_mmsys_ddp_disconnect(mtk_crtc->mmsys_dev, + mtk_crtc->ddp_comp[i]->id, + mtk_crtc->ddp_comp[i + 1]->id); mtk_disp_mutex_remove_comp(mtk_crtc->mutex, mtk_crtc->ddp_comp[i]->id); } @@ -766,7 +767,7 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev, if (!mtk_crtc) return -ENOMEM; - mtk_crtc->config_regs = priv->config_regs; + mtk_crtc->mmsys_dev = priv->mmsys_dev; mtk_crtc->ddp_comp_nr = path_len; mtk_crtc->ddp_comp = devm_kmalloc_array(dev, mtk_crtc->ddp_comp_nr, sizeof(*mtk_crtc->ddp_comp), diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp.c b/drivers/gpu/drm/mediatek/mtk_drm_ddp.c index 13035c906035..014c1bbe1df2 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_ddp.c +++ 
b/drivers/gpu/drm/mediatek/mtk_drm_ddp.c @@ -13,26 +13,6 @@ #include "mtk_drm_ddp.h" #include "mtk_drm_ddp_comp.h" -#define DISP_REG_CONFIG_DISP_OVL0_MOUT_EN 0x040 -#define DISP_REG_CONFIG_DISP_OVL1_MOUT_EN 0x044 -#define DISP_REG_CONFIG_DISP_OD_MOUT_EN 0x048 -#define DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN 0x04c -#define DISP_REG_CONFIG_DISP_UFOE_MOUT_EN 0x050 -#define DISP_REG_CONFIG_DISP_COLOR0_SEL_IN 0x084 -#define DISP_REG_CONFIG_DISP_COLOR1_SEL_IN 0x088 -#define DISP_REG_CONFIG_DSIE_SEL_IN 0x0a4 -#define DISP_REG_CONFIG_DSIO_SEL_IN 0x0a8 -#define DISP_REG_CONFIG_DPI_SEL_IN 0x0ac -#define DISP_REG_CONFIG_DISP_RDMA2_SOUT 0x0b8 -#define DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN 0x0c4 -#define DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN 0x0c8 -#define DISP_REG_CONFIG_MMSYS_CG_CON0 0x100 - -#define DISP_REG_CONFIG_DISP_OVL_MOUT_EN 0x030 -#define DISP_REG_CONFIG_OUT_SEL 0x04c -#define DISP_REG_CONFIG_DSI_SEL 0x050 -#define DISP_REG_CONFIG_DPI_SEL 0x064 - #define MT2701_DISP_MUTEX0_MOD0 0x2c #define MT2701_DISP_MUTEX0_SOF0 0x30 @@ -94,48 +74,6 @@ #define MUTEX_SOF_DSI2 5 #define MUTEX_SOF_DSI3 6 -#define OVL0_MOUT_EN_COLOR0 0x1 -#define OD_MOUT_EN_RDMA0 0x1 -#define OD1_MOUT_EN_RDMA1 BIT(16) -#define UFOE_MOUT_EN_DSI0 0x1 -#define COLOR0_SEL_IN_OVL0 0x1 -#define OVL1_MOUT_EN_COLOR1 0x1 -#define GAMMA_MOUT_EN_RDMA1 0x1 -#define RDMA0_SOUT_DPI0 0x2 -#define RDMA0_SOUT_DPI1 0x3 -#define RDMA0_SOUT_DSI1 0x1 -#define RDMA0_SOUT_DSI2 0x4 -#define RDMA0_SOUT_DSI3 0x5 -#define RDMA1_SOUT_DPI0 0x2 -#define RDMA1_SOUT_DPI1 0x3 -#define RDMA1_SOUT_DSI1 0x1 -#define RDMA1_SOUT_DSI2 0x4 -#define RDMA1_SOUT_DSI3 0x5 -#define RDMA2_SOUT_DPI0 0x2 -#define RDMA2_SOUT_DPI1 0x3 -#define RDMA2_SOUT_DSI1 0x1 -#define RDMA2_SOUT_DSI2 0x4 -#define RDMA2_SOUT_DSI3 0x5 -#define DPI0_SEL_IN_RDMA1 0x1 -#define DPI0_SEL_IN_RDMA2 0x3 -#define DPI1_SEL_IN_RDMA1 (0x1 << 8) -#define DPI1_SEL_IN_RDMA2 (0x3 << 8) -#define DSI0_SEL_IN_RDMA1 0x1 -#define DSI0_SEL_IN_RDMA2 0x4 -#define DSI1_SEL_IN_RDMA1 0x1 -#define DSI1_SEL_IN_RDMA2 0x4 -#define DSI2_SEL_IN_RDMA1 (0x1 << 16) -#define DSI2_SEL_IN_RDMA2 (0x4 << 16) -#define DSI3_SEL_IN_RDMA1 (0x1 << 16) -#define DSI3_SEL_IN_RDMA2 (0x4 << 16) -#define COLOR1_SEL_IN_OVL1 0x1 - -#define OVL_MOUT_EN_RDMA 0x1 -#define BLS_TO_DSI_RDMA1_TO_DPI1 0x8 -#define BLS_TO_DPI_RDMA1_TO_DSI 0x2 -#define DSI_SEL_IN_BLS 0x0 -#define DPI_SEL_IN_BLS 0x0 -#define DSI_SEL_IN_RDMA 0x1 struct mtk_disp_mutex { int id; @@ -246,200 +184,6 @@ static const struct mtk_ddp_data mt8173_ddp_driver_data = { .mutex_sof_reg = MT2701_DISP_MUTEX0_SOF0, }; -static unsigned int mtk_ddp_mout_en(enum mtk_ddp_comp_id cur, - enum mtk_ddp_comp_id next, - unsigned int *addr) -{ - unsigned int value; - - if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) { - *addr = DISP_REG_CONFIG_DISP_OVL0_MOUT_EN; - value = OVL0_MOUT_EN_COLOR0; - } else if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_RDMA0) { - *addr = DISP_REG_CONFIG_DISP_OVL_MOUT_EN; - value = OVL_MOUT_EN_RDMA; - } else if (cur == DDP_COMPONENT_OD0 && next == DDP_COMPONENT_RDMA0) { - *addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN; - value = OD_MOUT_EN_RDMA0; - } else if (cur == DDP_COMPONENT_UFOE && next == DDP_COMPONENT_DSI0) { - *addr = DISP_REG_CONFIG_DISP_UFOE_MOUT_EN; - value = UFOE_MOUT_EN_DSI0; - } else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) { - *addr = DISP_REG_CONFIG_DISP_OVL1_MOUT_EN; - value = OVL1_MOUT_EN_COLOR1; - } else if (cur == DDP_COMPONENT_GAMMA && next == DDP_COMPONENT_RDMA1) { - *addr = DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN; - value = 
GAMMA_MOUT_EN_RDMA1; - } else if (cur == DDP_COMPONENT_OD1 && next == DDP_COMPONENT_RDMA1) { - *addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN; - value = OD1_MOUT_EN_RDMA1; - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI0) { - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; - value = RDMA0_SOUT_DPI0; - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI1) { - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; - value = RDMA0_SOUT_DPI1; - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI1) { - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; - value = RDMA0_SOUT_DSI1; - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI2) { - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; - value = RDMA0_SOUT_DSI2; - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI3) { - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; - value = RDMA0_SOUT_DSI3; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) { - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; - value = RDMA1_SOUT_DSI1; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) { - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; - value = RDMA1_SOUT_DSI2; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) { - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; - value = RDMA1_SOUT_DSI3; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) { - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; - value = RDMA1_SOUT_DPI0; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) { - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; - value = RDMA1_SOUT_DPI1; - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) { - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; - value = RDMA2_SOUT_DPI0; - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) { - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; - value = RDMA2_SOUT_DPI1; - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) { - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; - value = RDMA2_SOUT_DSI1; - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) { - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; - value = RDMA2_SOUT_DSI2; - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) { - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; - value = RDMA2_SOUT_DSI3; - } else { - value = 0; - } - - return value; -} - -static unsigned int mtk_ddp_sel_in(enum mtk_ddp_comp_id cur, - enum mtk_ddp_comp_id next, - unsigned int *addr) -{ - unsigned int value; - - if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) { - *addr = DISP_REG_CONFIG_DISP_COLOR0_SEL_IN; - value = COLOR0_SEL_IN_OVL0; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) { - *addr = DISP_REG_CONFIG_DPI_SEL_IN; - value = DPI0_SEL_IN_RDMA1; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) { - *addr = DISP_REG_CONFIG_DPI_SEL_IN; - value = DPI1_SEL_IN_RDMA1; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI0) { - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; - value = DSI0_SEL_IN_RDMA1; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) { - *addr = DISP_REG_CONFIG_DSIO_SEL_IN; - value = DSI1_SEL_IN_RDMA1; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) { - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; - value = DSI2_SEL_IN_RDMA1; - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) { - *addr = DISP_REG_CONFIG_DSIO_SEL_IN; - value = DSI3_SEL_IN_RDMA1; - } else if (cur == 
DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) { - *addr = DISP_REG_CONFIG_DPI_SEL_IN; - value = DPI0_SEL_IN_RDMA2; - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) { - *addr = DISP_REG_CONFIG_DPI_SEL_IN; - value = DPI1_SEL_IN_RDMA2; - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI0) { - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; - value = DSI0_SEL_IN_RDMA2; - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) { - *addr = DISP_REG_CONFIG_DSIO_SEL_IN; - value = DSI1_SEL_IN_RDMA2; - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) { - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; - value = DSI2_SEL_IN_RDMA2; - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) { - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; - value = DSI3_SEL_IN_RDMA2; - } else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) { - *addr = DISP_REG_CONFIG_DISP_COLOR1_SEL_IN; - value = COLOR1_SEL_IN_OVL1; - } else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) { - *addr = DISP_REG_CONFIG_DSI_SEL; - value = DSI_SEL_IN_BLS; - } else { - value = 0; - } - - return value; -} - -static void mtk_ddp_sout_sel(void __iomem *config_regs, - enum mtk_ddp_comp_id cur, - enum mtk_ddp_comp_id next) -{ - if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) { - writel_relaxed(BLS_TO_DSI_RDMA1_TO_DPI1, - config_regs + DISP_REG_CONFIG_OUT_SEL); - } else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DPI0) { - writel_relaxed(BLS_TO_DPI_RDMA1_TO_DSI, - config_regs + DISP_REG_CONFIG_OUT_SEL); - writel_relaxed(DSI_SEL_IN_RDMA, - config_regs + DISP_REG_CONFIG_DSI_SEL); - writel_relaxed(DPI_SEL_IN_BLS, - config_regs + DISP_REG_CONFIG_DPI_SEL); - } -} - -void mtk_ddp_add_comp_to_path(void __iomem *config_regs, - enum mtk_ddp_comp_id cur, - enum mtk_ddp_comp_id next) -{ - unsigned int addr, value, reg; - - value = mtk_ddp_mout_en(cur, next, &addr); - if (value) { - reg = readl_relaxed(config_regs + addr) | value; - writel_relaxed(reg, config_regs + addr); - } - - mtk_ddp_sout_sel(config_regs, cur, next); - - value = mtk_ddp_sel_in(cur, next, &addr); - if (value) { - reg = readl_relaxed(config_regs + addr) | value; - writel_relaxed(reg, config_regs + addr); - } -} - -void mtk_ddp_remove_comp_from_path(void __iomem *config_regs, - enum mtk_ddp_comp_id cur, - enum mtk_ddp_comp_id next) -{ - unsigned int addr, value, reg; - - value = mtk_ddp_mout_en(cur, next, &addr); - if (value) { - reg = readl_relaxed(config_regs + addr) & ~value; - writel_relaxed(reg, config_regs + addr); - } - - value = mtk_ddp_sel_in(cur, next, &addr); - if (value) { - reg = readl_relaxed(config_regs + addr) & ~value; - writel_relaxed(reg, config_regs + addr); - } -} - struct mtk_disp_mutex *mtk_disp_mutex_get(struct device *dev, unsigned int id) { struct mtk_ddp *ddp = dev_get_drvdata(dev); @@ -628,7 +372,8 @@ static int mtk_ddp_probe(struct platform_device *pdev) if (!ddp->data->no_clk) { ddp->clk = devm_clk_get(dev, NULL); if (IS_ERR(ddp->clk)) { - dev_err(dev, "Failed to get clock\n"); + if (PTR_ERR(ddp->clk) != -EPROBE_DEFER) + dev_err(dev, "Failed to get clock\n"); return PTR_ERR(ddp->clk); } } diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp.h b/drivers/gpu/drm/mediatek/mtk_drm_ddp.h index 827be424a148..6b691a57be4a 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_ddp.h +++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp.h @@ -12,13 +12,6 @@ struct regmap; struct device; struct mtk_disp_mutex; -void mtk_ddp_add_comp_to_path(void __iomem *config_regs, - enum 
mtk_ddp_comp_id cur, - enum mtk_ddp_comp_id next); -void mtk_ddp_remove_comp_from_path(void __iomem *config_regs, - enum mtk_ddp_comp_id cur, - enum mtk_ddp_comp_id next); - struct mtk_disp_mutex *mtk_disp_mutex_get(struct device *dev, unsigned int id); int mtk_disp_mutex_prepare(struct mtk_disp_mutex *mutex); void mtk_disp_mutex_add_comp(struct mtk_disp_mutex *mutex, diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c index ce570283b55f..6bd369434d9d 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c +++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c @@ -10,6 +10,7 @@ #include <linux/of_address.h> #include <linux/of_platform.h> #include <linux/pm_runtime.h> +#include <linux/soc/mediatek/mtk-mmsys.h> #include <linux/dma-mapping.h> #include <drm/drm_atomic.h> @@ -418,11 +419,22 @@ static const struct of_device_id mtk_ddp_comp_dt_ids[] = { { } }; +static const struct of_device_id mtk_drm_of_ids[] = { + { .compatible = "mediatek,mt2701-mmsys", + .data = &mt2701_mmsys_driver_data}, + { .compatible = "mediatek,mt2712-mmsys", + .data = &mt2712_mmsys_driver_data}, + { .compatible = "mediatek,mt8173-mmsys", + .data = &mt8173_mmsys_driver_data}, + { } +}; + static int mtk_drm_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; + struct device_node *phandle = dev->parent->of_node; + const struct of_device_id *of_id; struct mtk_drm_private *private; - struct resource *mem; struct device_node *node; struct component_match *match = NULL; int ret; @@ -433,18 +445,20 @@ static int mtk_drm_probe(struct platform_device *pdev) return -ENOMEM; private->data = of_device_get_match_data(dev); - - mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); - private->config_regs = devm_ioremap_resource(dev, mem); - if (IS_ERR(private->config_regs)) { - ret = PTR_ERR(private->config_regs); - dev_err(dev, "Failed to ioremap mmsys-config resource: %d\n", - ret); - return ret; + private->mmsys_dev = dev->parent; + if (!private->mmsys_dev) { + dev_err(dev, "Failed to get MMSYS device\n"); + return -ENODEV; } + of_id = of_match_node(mtk_drm_of_ids, phandle); + if (!of_id) + return -ENODEV; + + private->data = of_id->data; + /* Iterate over sibling DISP function blocks */ - for_each_child_of_node(dev->of_node->parent, node) { + for_each_child_of_node(phandle->parent, node) { const struct of_device_id *of_id; enum mtk_ddp_comp_type comp_type; int comp_id; @@ -578,22 +592,11 @@ static int mtk_drm_sys_resume(struct device *dev) static SIMPLE_DEV_PM_OPS(mtk_drm_pm_ops, mtk_drm_sys_suspend, mtk_drm_sys_resume); -static const struct of_device_id mtk_drm_of_ids[] = { - { .compatible = "mediatek,mt2701-mmsys", - .data = &mt2701_mmsys_driver_data}, - { .compatible = "mediatek,mt2712-mmsys", - .data = &mt2712_mmsys_driver_data}, - { .compatible = "mediatek,mt8173-mmsys", - .data = &mt8173_mmsys_driver_data}, - { } -}; - static struct platform_driver mtk_drm_platform_driver = { .probe = mtk_drm_probe, .remove = mtk_drm_remove, .driver = { .name = "mediatek-drm", - .of_match_table = mtk_drm_of_ids, .pm = &mtk_drm_pm_ops, }, }; diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.h b/drivers/gpu/drm/mediatek/mtk_drm_drv.h index 17bc99b9f5d4..b5be63e53176 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_drv.h +++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.h @@ -39,7 +39,7 @@ struct mtk_drm_private { struct device_node *mutex_node; struct device *mutex_dev; - void __iomem *config_regs; + struct device *mmsys_dev; struct device_node *comp_node[DDP_COMPONENT_ID_MAX]; struct mtk_ddp_comp 
*ddp_comp[DDP_COMPONENT_ID_MAX]; const struct mtk_mmsys_driver_data *data; diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c index a9a25087112f..270bf22c98fe 100644 --- a/drivers/gpu/drm/mediatek/mtk_dsi.c +++ b/drivers/gpu/drm/mediatek/mtk_dsi.c @@ -1186,14 +1186,18 @@ static int mtk_dsi_probe(struct platform_device *pdev) dsi->engine_clk = devm_clk_get(dev, "engine"); if (IS_ERR(dsi->engine_clk)) { ret = PTR_ERR(dsi->engine_clk); - dev_err(dev, "Failed to get engine clock: %d\n", ret); + + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to get engine clock: %d\n", ret); goto err_unregister_host; } dsi->digital_clk = devm_clk_get(dev, "digital"); if (IS_ERR(dsi->digital_clk)) { ret = PTR_ERR(dsi->digital_clk); - dev_err(dev, "Failed to get digital clock: %d\n", ret); + + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to get digital clock: %d\n", ret); goto err_unregister_host; } diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c index 7bc086ec74f7..5feb760617cb 100644 --- a/drivers/gpu/drm/mediatek/mtk_hdmi.c +++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c @@ -1470,7 +1470,9 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi, ret = mtk_hdmi_get_all_clk(hdmi, np); if (ret) { - dev_err(dev, "Failed to get clocks: %d\n", ret); + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to get clocks: %d\n", ret); + return ret; } diff --git a/drivers/memory/Kconfig b/drivers/memory/Kconfig index 9bddca292330..04368ee2a809 100644 --- a/drivers/memory/Kconfig +++ b/drivers/memory/Kconfig @@ -46,6 +46,17 @@ config ATMEL_EBI tree is used. This bus supports NANDs, external ethernet controller, SRAMs, ATA devices, etc. +config BT1_L2_CTL + bool "Baikal-T1 CM2 L2-RAM Cache Control Block" + depends on MIPS_BAIKAL_T1 || COMPILE_TEST + select MFD_SYSCON + help + Baikal-T1 CPU is based on the MIPS P5600 Warrior IP-core. The CPU + resides Coherency Manager v2 with embedded 1MB L2-cache. It's + possible to tune the L2 cache performance up by setting the data, + tags and way-select latencies of RAM access. This driver provides a + dt properties-based and sysfs interface for it. + config TI_AEMIF tristate "Texas Instruments AEMIF driver" depends on (ARCH_DAVINCI || ARCH_KEYSTONE) && OF diff --git a/drivers/memory/Makefile b/drivers/memory/Makefile index 27b493435e61..6d7e3e64ba62 100644 --- a/drivers/memory/Makefile +++ b/drivers/memory/Makefile @@ -11,6 +11,7 @@ obj-$(CONFIG_ARM_PL172_MPMC) += pl172.o obj-$(CONFIG_ATMEL_SDRAMC) += atmel-sdramc.o obj-$(CONFIG_ATMEL_EBI) += atmel-ebi.o obj-$(CONFIG_ARCH_BRCMSTB) += brcmstb_dpfe.o +obj-$(CONFIG_BT1_L2_CTL) += bt1-l2-ctl.o obj-$(CONFIG_TI_AEMIF) += ti-aemif.o obj-$(CONFIG_TI_EMIF) += emif.o obj-$(CONFIG_OMAP_GPMC) += omap-gpmc.o diff --git a/drivers/memory/bt1-l2-ctl.c b/drivers/memory/bt1-l2-ctl.c new file mode 100644 index 000000000000..633fea6a4edf --- /dev/null +++ b/drivers/memory/bt1-l2-ctl.c @@ -0,0 +1,322 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 BAIKAL ELECTRONICS, JSC + * + * Authors: + * Serge Semin <Sergey.Semin@baikalelectronics.ru> + * + * Baikal-T1 CM2 L2-cache Control Block driver. 
+ */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/bitfield.h> +#include <linux/types.h> +#include <linux/device.h> +#include <linux/platform_device.h> +#include <linux/regmap.h> +#include <linux/mfd/syscon.h> +#include <linux/sysfs.h> +#include <linux/of.h> + +#define L2_CTL_REG 0x028 +#define L2_CTL_DATA_STALL_FLD 0 +#define L2_CTL_DATA_STALL_MASK GENMASK(1, L2_CTL_DATA_STALL_FLD) +#define L2_CTL_TAG_STALL_FLD 2 +#define L2_CTL_TAG_STALL_MASK GENMASK(3, L2_CTL_TAG_STALL_FLD) +#define L2_CTL_WS_STALL_FLD 4 +#define L2_CTL_WS_STALL_MASK GENMASK(5, L2_CTL_WS_STALL_FLD) +#define L2_CTL_SET_CLKRATIO BIT(13) +#define L2_CTL_CLKRATIO_LOCK BIT(31) + +#define L2_CTL_STALL_MIN 0 +#define L2_CTL_STALL_MAX 3 +#define L2_CTL_STALL_SET_DELAY_US 1 +#define L2_CTL_STALL_SET_TOUT_US 1000 + +/* + * struct l2_ctl - Baikal-T1 L2 Control block private data. + * @dev: Pointer to the device structure. + * @sys_regs: Baikal-T1 System Controller registers map. + */ +struct l2_ctl { + struct device *dev; + + struct regmap *sys_regs; +}; + +/* + * enum l2_ctl_stall - Baikal-T1 L2-cache-RAM stall identifier. + * @L2_WSSTALL: Way-select latency. + * @L2_TAGSTALL: Tag latency. + * @L2_DATASTALL: Data latency. + */ +enum l2_ctl_stall { + L2_WS_STALL, + L2_TAG_STALL, + L2_DATA_STALL +}; + +/* + * struct l2_ctl_device_attribute - Baikal-T1 L2-cache device attribute. + * @dev_attr: Actual sysfs device attribute. + * @id: L2-cache stall field identifier. + */ +struct l2_ctl_device_attribute { + struct device_attribute dev_attr; + enum l2_ctl_stall id; +}; +#define to_l2_ctl_dev_attr(_dev_attr) \ + container_of(_dev_attr, struct l2_ctl_device_attribute, dev_attr) + +#define L2_CTL_ATTR_RW(_name, _prefix, _id) \ + struct l2_ctl_device_attribute l2_ctl_attr_##_name = \ + { __ATTR(_name, 0644, _prefix##_show, _prefix##_store), _id } + +static int l2_ctl_get_latency(struct l2_ctl *l2, enum l2_ctl_stall id, u32 *val) +{ + u32 data = 0; + int ret; + + ret = regmap_read(l2->sys_regs, L2_CTL_REG, &data); + if (ret) + return ret; + + switch (id) { + case L2_WS_STALL: + *val = FIELD_GET(L2_CTL_WS_STALL_MASK, data); + break; + case L2_TAG_STALL: + *val = FIELD_GET(L2_CTL_TAG_STALL_MASK, data); + break; + case L2_DATA_STALL: + *val = FIELD_GET(L2_CTL_DATA_STALL_MASK, data); + break; + default: + return -EINVAL; + } + + return 0; +} + +static int l2_ctl_set_latency(struct l2_ctl *l2, enum l2_ctl_stall id, u32 val) +{ + u32 mask = 0, data = 0; + int ret; + + val = clamp_val(val, L2_CTL_STALL_MIN, L2_CTL_STALL_MAX); + + switch (id) { + case L2_WS_STALL: + data = FIELD_PREP(L2_CTL_WS_STALL_MASK, val); + mask = L2_CTL_WS_STALL_MASK; + break; + case L2_TAG_STALL: + data = FIELD_PREP(L2_CTL_TAG_STALL_MASK, val); + mask = L2_CTL_TAG_STALL_MASK; + break; + case L2_DATA_STALL: + data = FIELD_PREP(L2_CTL_DATA_STALL_MASK, val); + mask = L2_CTL_DATA_STALL_MASK; + break; + default: + return -EINVAL; + } + + data |= L2_CTL_SET_CLKRATIO; + mask |= L2_CTL_SET_CLKRATIO; + + ret = regmap_update_bits(l2->sys_regs, L2_CTL_REG, mask, data); + if (ret) + return ret; + + return regmap_read_poll_timeout(l2->sys_regs, L2_CTL_REG, data, + data & L2_CTL_CLKRATIO_LOCK, + L2_CTL_STALL_SET_DELAY_US, + L2_CTL_STALL_SET_TOUT_US); +} + +static void l2_ctl_clear_data(void *data) +{ + struct l2_ctl *l2 = data; + struct platform_device *pdev = to_platform_device(l2->dev); + + platform_set_drvdata(pdev, NULL); +} + +static struct l2_ctl *l2_ctl_create_data(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct l2_ctl *l2; + 
int ret; + + l2 = devm_kzalloc(dev, sizeof(*l2), GFP_KERNEL); + if (!l2) + return ERR_PTR(-ENOMEM); + + ret = devm_add_action(dev, l2_ctl_clear_data, l2); + if (ret) { + dev_err(dev, "Can't add L2 CTL data clear action\n"); + return ERR_PTR(ret); + } + + l2->dev = dev; + platform_set_drvdata(pdev, l2); + + return l2; +} + +static int l2_ctl_find_sys_regs(struct l2_ctl *l2) +{ + l2->sys_regs = syscon_node_to_regmap(l2->dev->of_node->parent); + if (IS_ERR(l2->sys_regs)) { + dev_err(l2->dev, "Couldn't get L2 CTL register map\n"); + return PTR_ERR(l2->sys_regs); + } + + return 0; +} + +static int l2_ctl_of_parse_property(struct l2_ctl *l2, enum l2_ctl_stall id, + const char *propname) +{ + int ret = 0; + u32 data; + + if (!of_property_read_u32(l2->dev->of_node, propname, &data)) { + ret = l2_ctl_set_latency(l2, id, data); + if (ret) + dev_err(l2->dev, "Invalid value of '%s'\n", propname); + } + + return ret; +} + +static int l2_ctl_of_parse(struct l2_ctl *l2) +{ + int ret; + + ret = l2_ctl_of_parse_property(l2, L2_WS_STALL, "baikal,l2-ws-latency"); + if (ret) + return ret; + + ret = l2_ctl_of_parse_property(l2, L2_TAG_STALL, "baikal,l2-tag-latency"); + if (ret) + return ret; + + return l2_ctl_of_parse_property(l2, L2_DATA_STALL, + "baikal,l2-data-latency"); +} + +static ssize_t l2_ctl_latency_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct l2_ctl_device_attribute *devattr = to_l2_ctl_dev_attr(attr); + struct l2_ctl *l2 = dev_get_drvdata(dev); + u32 data; + int ret; + + ret = l2_ctl_get_latency(l2, devattr->id, &data); + if (ret) + return ret; + + return scnprintf(buf, PAGE_SIZE, "%u\n", data); +} + +static ssize_t l2_ctl_latency_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct l2_ctl_device_attribute *devattr = to_l2_ctl_dev_attr(attr); + struct l2_ctl *l2 = dev_get_drvdata(dev); + u32 data; + int ret; + + if (kstrtouint(buf, 0, &data) < 0) + return -EINVAL; + + ret = l2_ctl_set_latency(l2, devattr->id, data); + if (ret) + return ret; + + return count; +} +static L2_CTL_ATTR_RW(l2_ws_latency, l2_ctl_latency, L2_WS_STALL); +static L2_CTL_ATTR_RW(l2_tag_latency, l2_ctl_latency, L2_TAG_STALL); +static L2_CTL_ATTR_RW(l2_data_latency, l2_ctl_latency, L2_DATA_STALL); + +static struct attribute *l2_ctl_sysfs_attrs[] = { + &l2_ctl_attr_l2_ws_latency.dev_attr.attr, + &l2_ctl_attr_l2_tag_latency.dev_attr.attr, + &l2_ctl_attr_l2_data_latency.dev_attr.attr, + NULL +}; +ATTRIBUTE_GROUPS(l2_ctl_sysfs); + +static void l2_ctl_remove_sysfs(void *data) +{ + struct l2_ctl *l2 = data; + + device_remove_groups(l2->dev, l2_ctl_sysfs_groups); +} + +static int l2_ctl_init_sysfs(struct l2_ctl *l2) +{ + int ret; + + ret = device_add_groups(l2->dev, l2_ctl_sysfs_groups); + if (ret) { + dev_err(l2->dev, "Failed to create L2 CTL sysfs nodes\n"); + return ret; + } + + ret = devm_add_action_or_reset(l2->dev, l2_ctl_remove_sysfs, l2); + if (ret) + dev_err(l2->dev, "Can't add L2 CTL sysfs remove action\n"); + + return ret; +} + +static int l2_ctl_probe(struct platform_device *pdev) +{ + struct l2_ctl *l2; + int ret; + + l2 = l2_ctl_create_data(pdev); + if (IS_ERR(l2)) + return PTR_ERR(l2); + + ret = l2_ctl_find_sys_regs(l2); + if (ret) + return ret; + + ret = l2_ctl_of_parse(l2); + if (ret) + return ret; + + ret = l2_ctl_init_sysfs(l2); + if (ret) + return ret; + + return 0; +} + +static const struct of_device_id l2_ctl_of_match[] = { + { .compatible = "baikal,bt1-l2-ctl" }, + { } +}; +MODULE_DEVICE_TABLE(of, l2_ctl_of_match); + +static 
struct platform_driver l2_ctl_driver = { + .probe = l2_ctl_probe, + .driver = { + .name = "bt1-l2-ctl", + .of_match_table = l2_ctl_of_match + } +}; +module_platform_driver(l2_ctl_driver); + +MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>"); +MODULE_DESCRIPTION("Baikal-T1 L2-cache driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c index 81a1b1d01683..25196d6268e2 100644 --- a/drivers/memory/samsung/exynos5422-dmc.c +++ b/drivers/memory/samsung/exynos5422-dmc.c @@ -1091,7 +1091,7 @@ static int create_timings_aligned(struct exynos5_dmc *dmc, u32 *reg_timing_row, /* power related timings */ val = dmc->timings->tFAW / clk_period_ps; val += dmc->timings->tFAW % clk_period_ps ? 1 : 0; - val = max(val, dmc->min_tck->tXP); + val = max(val, dmc->min_tck->tFAW); reg = &timing_power[0]; *reg_timing_power |= TIMING_VAL2REG(reg, val); @@ -1346,15 +1346,13 @@ static irqreturn_t dmc_irq_thread(int irq, void *priv) struct exynos5_dmc *dmc = priv; mutex_lock(&dmc->df->lock); - exynos5_dmc_perf_events_check(dmc); - res = update_devfreq(dmc->df); + mutex_unlock(&dmc->df->lock); + if (res) dev_warn(dmc->dev, "devfreq failed with %d\n", res); - mutex_unlock(&dmc->df->lock); - return IRQ_HANDLED; } diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c index 1a84bc0d5fa8..f61e8739502a 100644 --- a/drivers/of/of_reserved_mem.c +++ b/drivers/of/of_reserved_mem.c @@ -358,6 +358,25 @@ int of_reserved_mem_device_init_by_idx(struct device *dev, EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_idx); /** + * of_reserved_mem_device_init_by_name() - assign named reserved memory region + * to given device + * @dev: pointer to the device to configure + * @np: pointer to the device node with 'memory-region' property + * @name: name of the selected memory region + * + * Returns: 0 on success or a negative error-code on failure. 
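[Editorial sketch, not part of this commit: the new helper documented here lets a driver attach one named reserved-memory region at probe time, provided its node carries matching 'memory-region' and 'memory-region-names' properties. The driver callbacks and the "fb" region name below are assumptions used only for illustration.]

	#include <linux/of_reserved_mem.h>
	#include <linux/platform_device.h>

	static int foo_probe(struct platform_device *pdev)
	{
		int ret;

		/* Looks up "fb" in memory-region-names, then defers to _by_idx(). */
		ret = of_reserved_mem_device_init_by_name(&pdev->dev,
							  pdev->dev.of_node, "fb");
		if (ret)
			return ret;

		/* ... allocate from the assigned region, e.g. dma_alloc_coherent() ... */
		return 0;
	}

	static int foo_remove(struct platform_device *pdev)
	{
		/* Releases every region previously assigned to this device. */
		of_reserved_mem_device_release(&pdev->dev);
		return 0;
	}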
+ */ +int of_reserved_mem_device_init_by_name(struct device *dev, + struct device_node *np, + const char *name) +{ + int idx = of_property_match_string(np, "memory-region-names", name); + + return of_reserved_mem_device_init_by_idx(dev, np, idx); +} +EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_name); + +/** * of_reserved_mem_device_release() - release reserved memory device structures * @dev: Pointer to the device to deconfigure * @@ -366,24 +385,22 @@ EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_idx); */ void of_reserved_mem_device_release(struct device *dev) { - struct rmem_assigned_device *rd; - struct reserved_mem *rmem = NULL; + struct rmem_assigned_device *rd, *tmp; + LIST_HEAD(release_list); mutex_lock(&of_rmem_assigned_device_mutex); - list_for_each_entry(rd, &of_rmem_assigned_device_list, list) { - if (rd->dev == dev) { - rmem = rd->rmem; - list_del(&rd->list); - kfree(rd); - break; - } + list_for_each_entry_safe(rd, tmp, &of_rmem_assigned_device_list, list) { + if (rd->dev == dev) + list_move_tail(&rd->list, &release_list); } mutex_unlock(&of_rmem_assigned_device_mutex); - if (!rmem || !rmem->ops || !rmem->ops->device_release) - return; + list_for_each_entry_safe(rd, tmp, &release_list, list) { + if (rd->rmem && rd->rmem->ops && rd->rmem->ops->device_release) + rd->rmem->ops->device_release(rd->rmem, dev); - rmem->ops->device_release(rmem, dev); + kfree(rd); + } } EXPORT_SYMBOL_GPL(of_reserved_mem_device_release); diff --git a/drivers/reset/hisilicon/hi6220_reset.c b/drivers/reset/hisilicon/hi6220_reset.c index 24e6d420b26b..19926506d033 100644 --- a/drivers/reset/hisilicon/hi6220_reset.c +++ b/drivers/reset/hisilicon/hi6220_reset.c @@ -33,6 +33,7 @@ enum hi6220_reset_ctrl_type { PERIPHERAL, MEDIA, + AO, }; struct hi6220_reset_data { @@ -92,6 +93,65 @@ static const struct reset_control_ops hi6220_media_reset_ops = { .deassert = hi6220_media_deassert, }; +#define AO_SCTRL_SC_PW_CLKEN0 0x800 +#define AO_SCTRL_SC_PW_CLKDIS0 0x804 + +#define AO_SCTRL_SC_PW_RSTEN0 0x810 +#define AO_SCTRL_SC_PW_RSTDIS0 0x814 + +#define AO_SCTRL_SC_PW_ISOEN0 0x820 +#define AO_SCTRL_SC_PW_ISODIS0 0x824 +#define AO_MAX_INDEX 12 + +static int hi6220_ao_assert(struct reset_controller_dev *rc_dev, + unsigned long idx) +{ + struct hi6220_reset_data *data = to_reset_data(rc_dev); + struct regmap *regmap = data->regmap; + int ret; + + ret = regmap_write(regmap, AO_SCTRL_SC_PW_RSTEN0, BIT(idx)); + if (ret) + return ret; + + ret = regmap_write(regmap, AO_SCTRL_SC_PW_ISOEN0, BIT(idx)); + if (ret) + return ret; + + ret = regmap_write(regmap, AO_SCTRL_SC_PW_CLKDIS0, BIT(idx)); + return ret; +} + +static int hi6220_ao_deassert(struct reset_controller_dev *rc_dev, + unsigned long idx) +{ + struct hi6220_reset_data *data = to_reset_data(rc_dev); + struct regmap *regmap = data->regmap; + int ret; + + /* + * It was suggested to disable isolation before enabling + * the clocks and deasserting reset, to avoid glitches. + * But this order is preserved to keep it matching the + * vendor code. 
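[Editorial sketch, not part of this commit: the new hi6220 always-on (AO) reset lines are driven through the generic reset API, so a consumer only sees reset_control_assert()/deassert(); the probe function name below and the NULL reset id are assumptions, with the line index coming from the consumer's DT node.]

	#include <linux/platform_device.h>
	#include <linux/reset.h>

	static int foo_ao_probe(struct platform_device *pdev)
	{
		struct reset_control *rst;
		int ret;

		rst = devm_reset_control_get_exclusive(&pdev->dev, NULL);
		if (IS_ERR(rst))
			return PTR_ERR(rst);

		ret = reset_control_assert(rst);	/* -> hi6220_ao_assert() */
		if (ret)
			return ret;

		return reset_control_deassert(rst);	/* -> hi6220_ao_deassert() */
	}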
+ */ + ret = regmap_write(regmap, AO_SCTRL_SC_PW_RSTDIS0, BIT(idx)); + if (ret) + return ret; + + ret = regmap_write(regmap, AO_SCTRL_SC_PW_ISODIS0, BIT(idx)); + if (ret) + return ret; + + ret = regmap_write(regmap, AO_SCTRL_SC_PW_CLKEN0, BIT(idx)); + return ret; +} + +static const struct reset_control_ops hi6220_ao_reset_ops = { + .assert = hi6220_ao_assert, + .deassert = hi6220_ao_deassert, +}; + static int hi6220_reset_probe(struct platform_device *pdev) { struct device_node *np = pdev->dev.of_node; @@ -117,9 +177,12 @@ static int hi6220_reset_probe(struct platform_device *pdev) if (type == MEDIA) { data->rc_dev.ops = &hi6220_media_reset_ops; data->rc_dev.nr_resets = MEDIA_MAX_INDEX; - } else { + } else if (type == PERIPHERAL) { data->rc_dev.ops = &hi6220_peripheral_reset_ops; data->rc_dev.nr_resets = PERIPH_MAX_INDEX; + } else { + data->rc_dev.ops = &hi6220_ao_reset_ops; + data->rc_dev.nr_resets = AO_MAX_INDEX; } return reset_controller_register(&data->rc_dev); @@ -134,6 +197,10 @@ static const struct of_device_id hi6220_reset_match[] = { .compatible = "hisilicon,hi6220-mediactrl", .data = (void *)MEDIA, }, + { + .compatible = "hisilicon,hi6220-aoctrl", + .data = (void *)AO, + }, { /* sentinel */ }, }; MODULE_DEVICE_TABLE(of, hi6220_reset_match); diff --git a/drivers/reset/reset-imx7.c b/drivers/reset/reset-imx7.c index 1443a55a0c29..d170fe663210 100644 --- a/drivers/reset/reset-imx7.c +++ b/drivers/reset/reset-imx7.c @@ -15,6 +15,7 @@ #include <linux/regmap.h> #include <dt-bindings/reset/imx7-reset.h> #include <dt-bindings/reset/imx8mq-reset.h> +#include <dt-bindings/reset/imx8mp-reset.h> struct imx7_src_signal { unsigned int offset, bit; @@ -145,6 +146,18 @@ enum imx8mq_src_registers { SRC_DDRC2_RCR = 0x1004, }; +enum imx8mp_src_registers { + SRC_SUPERMIX_RCR = 0x0018, + SRC_AUDIOMIX_RCR = 0x001c, + SRC_MLMIX_RCR = 0x0028, + SRC_GPU2D_RCR = 0x0038, + SRC_GPU3D_RCR = 0x003c, + SRC_VPU_G1_RCR = 0x0048, + SRC_VPU_G2_RCR = 0x004c, + SRC_VPUVC8KE_RCR = 0x0050, + SRC_NOC_RCR = 0x0054, +}; + static const struct imx7_src_signal imx8mq_src_signals[IMX8MQ_RESET_NUM] = { [IMX8MQ_RESET_A53_CORE_POR_RESET0] = { SRC_A53RCR0, BIT(0) }, [IMX8MQ_RESET_A53_CORE_POR_RESET1] = { SRC_A53RCR0, BIT(1) }, @@ -253,6 +266,93 @@ static const struct imx7_src_variant variant_imx8mq = { }, }; +static const struct imx7_src_signal imx8mp_src_signals[IMX8MP_RESET_NUM] = { + [IMX8MP_RESET_A53_CORE_POR_RESET0] = { SRC_A53RCR0, BIT(0) }, + [IMX8MP_RESET_A53_CORE_POR_RESET1] = { SRC_A53RCR0, BIT(1) }, + [IMX8MP_RESET_A53_CORE_POR_RESET2] = { SRC_A53RCR0, BIT(2) }, + [IMX8MP_RESET_A53_CORE_POR_RESET3] = { SRC_A53RCR0, BIT(3) }, + [IMX8MP_RESET_A53_CORE_RESET0] = { SRC_A53RCR0, BIT(4) }, + [IMX8MP_RESET_A53_CORE_RESET1] = { SRC_A53RCR0, BIT(5) }, + [IMX8MP_RESET_A53_CORE_RESET2] = { SRC_A53RCR0, BIT(6) }, + [IMX8MP_RESET_A53_CORE_RESET3] = { SRC_A53RCR0, BIT(7) }, + [IMX8MP_RESET_A53_DBG_RESET0] = { SRC_A53RCR0, BIT(8) }, + [IMX8MP_RESET_A53_DBG_RESET1] = { SRC_A53RCR0, BIT(9) }, + [IMX8MP_RESET_A53_DBG_RESET2] = { SRC_A53RCR0, BIT(10) }, + [IMX8MP_RESET_A53_DBG_RESET3] = { SRC_A53RCR0, BIT(11) }, + [IMX8MP_RESET_A53_ETM_RESET0] = { SRC_A53RCR0, BIT(12) }, + [IMX8MP_RESET_A53_ETM_RESET1] = { SRC_A53RCR0, BIT(13) }, + [IMX8MP_RESET_A53_ETM_RESET2] = { SRC_A53RCR0, BIT(14) }, + [IMX8MP_RESET_A53_ETM_RESET3] = { SRC_A53RCR0, BIT(15) }, + [IMX8MP_RESET_A53_SOC_DBG_RESET] = { SRC_A53RCR0, BIT(20) }, + [IMX8MP_RESET_A53_L2RESET] = { SRC_A53RCR0, BIT(21) }, + [IMX8MP_RESET_SW_NON_SCLR_M7C_RST] = { SRC_M4RCR, BIT(0) }, + 
[IMX8MP_RESET_OTG1_PHY_RESET] = { SRC_USBOPHY1_RCR, BIT(0) }, + [IMX8MP_RESET_OTG2_PHY_RESET] = { SRC_USBOPHY2_RCR, BIT(0) }, + [IMX8MP_RESET_SUPERMIX_RESET] = { SRC_SUPERMIX_RCR, BIT(0) }, + [IMX8MP_RESET_AUDIOMIX_RESET] = { SRC_AUDIOMIX_RCR, BIT(0) }, + [IMX8MP_RESET_MLMIX_RESET] = { SRC_MLMIX_RCR, BIT(0) }, + [IMX8MP_RESET_PCIEPHY] = { SRC_PCIEPHY_RCR, BIT(2) }, + [IMX8MP_RESET_PCIEPHY_PERST] = { SRC_PCIEPHY_RCR, BIT(3) }, + [IMX8MP_RESET_PCIE_CTRL_APPS_EN] = { SRC_PCIEPHY_RCR, BIT(6) }, + [IMX8MP_RESET_PCIE_CTRL_APPS_TURNOFF] = { SRC_PCIEPHY_RCR, BIT(11) }, + [IMX8MP_RESET_HDMI_PHY_APB_RESET] = { SRC_HDMI_RCR, BIT(0) }, + [IMX8MP_RESET_MEDIA_RESET] = { SRC_DISP_RCR, BIT(0) }, + [IMX8MP_RESET_GPU2D_RESET] = { SRC_GPU2D_RCR, BIT(0) }, + [IMX8MP_RESET_GPU3D_RESET] = { SRC_GPU3D_RCR, BIT(0) }, + [IMX8MP_RESET_GPU_RESET] = { SRC_GPU_RCR, BIT(0) }, + [IMX8MP_RESET_VPU_RESET] = { SRC_VPU_RCR, BIT(0) }, + [IMX8MP_RESET_VPU_G1_RESET] = { SRC_VPU_G1_RCR, BIT(0) }, + [IMX8MP_RESET_VPU_G2_RESET] = { SRC_VPU_G2_RCR, BIT(0) }, + [IMX8MP_RESET_VPUVC8KE_RESET] = { SRC_VPUVC8KE_RCR, BIT(0) }, + [IMX8MP_RESET_NOC_RESET] = { SRC_NOC_RCR, BIT(0) }, +}; + +static int imx8mp_reset_set(struct reset_controller_dev *rcdev, + unsigned long id, bool assert) +{ + struct imx7_src *imx7src = to_imx7_src(rcdev); + const unsigned int bit = imx7src->signals[id].bit; + unsigned int value = assert ? bit : 0; + + switch (id) { + case IMX8MP_RESET_PCIEPHY: + /* + * wait for more than 10us to release phy g_rst and + * btnrst + */ + if (!assert) + udelay(10); + break; + + case IMX8MP_RESET_PCIE_CTRL_APPS_EN: + value = assert ? 0 : bit; + break; + } + + return imx7_reset_update(imx7src, id, value); +} + +static int imx8mp_reset_assert(struct reset_controller_dev *rcdev, + unsigned long id) +{ + return imx8mp_reset_set(rcdev, id, true); +} + +static int imx8mp_reset_deassert(struct reset_controller_dev *rcdev, + unsigned long id) +{ + return imx8mp_reset_set(rcdev, id, false); +} + +static const struct imx7_src_variant variant_imx8mp = { + .signals = imx8mp_src_signals, + .signals_num = ARRAY_SIZE(imx8mp_src_signals), + .ops = { + .assert = imx8mp_reset_assert, + .deassert = imx8mp_reset_deassert, + }, +}; + static int imx7_reset_probe(struct platform_device *pdev) { struct imx7_src *imx7src; @@ -283,6 +383,7 @@ static int imx7_reset_probe(struct platform_device *pdev) static const struct of_device_id imx7_reset_dt_ids[] = { { .compatible = "fsl,imx7d-src", .data = &variant_imx7 }, { .compatible = "fsl,imx8mq-src", .data = &variant_imx8mq }, + { .compatible = "fsl,imx8mp-src", .data = &variant_imx8mp }, { /* sentinel */ }, }; diff --git a/drivers/soc/amlogic/meson-ee-pwrc.c b/drivers/soc/amlogic/meson-ee-pwrc.c index 3f0261d53ad9..43665b77aa9e 100644 --- a/drivers/soc/amlogic/meson-ee-pwrc.c +++ b/drivers/soc/amlogic/meson-ee-pwrc.c @@ -14,13 +14,23 @@ #include <linux/reset-controller.h> #include <linux/reset.h> #include <linux/clk.h> +#include <dt-bindings/power/meson8-power.h> #include <dt-bindings/power/meson-g12a-power.h> +#include <dt-bindings/power/meson-gxbb-power.h> #include <dt-bindings/power/meson-sm1-power.h> /* AO Offsets */ -#define AO_RTI_GEN_PWR_SLEEP0 (0x3a << 2) -#define AO_RTI_GEN_PWR_ISO0 (0x3b << 2) +#define GX_AO_RTI_GEN_PWR_SLEEP0 (0x3a << 2) +#define GX_AO_RTI_GEN_PWR_ISO0 (0x3b << 2) + +/* + * Meson8/Meson8b/Meson8m2 only expose the power management registers of the + * AO-bus as syscon. 0x3a from GX translates to 0x02, 0x3b translates to 0x03 + * and so on. 
+ */ +#define MESON8_AO_RTI_GEN_PWR_SLEEP0 (0x02 << 2) +#define MESON8_AO_RTI_GEN_PWR_ISO0 (0x03 << 2) /* HHI Offsets */ @@ -66,18 +76,25 @@ struct meson_ee_pwrc_domain_data { /* TOP Power Domains */ -static struct meson_ee_pwrc_top_domain g12a_pwrc_vpu = { - .sleep_reg = AO_RTI_GEN_PWR_SLEEP0, +static struct meson_ee_pwrc_top_domain gx_pwrc_vpu = { + .sleep_reg = GX_AO_RTI_GEN_PWR_SLEEP0, + .sleep_mask = BIT(8), + .iso_reg = GX_AO_RTI_GEN_PWR_SLEEP0, + .iso_mask = BIT(9), +}; + +static struct meson_ee_pwrc_top_domain meson8_pwrc_vpu = { + .sleep_reg = MESON8_AO_RTI_GEN_PWR_SLEEP0, .sleep_mask = BIT(8), - .iso_reg = AO_RTI_GEN_PWR_SLEEP0, + .iso_reg = MESON8_AO_RTI_GEN_PWR_SLEEP0, .iso_mask = BIT(9), }; #define SM1_EE_PD(__bit) \ { \ - .sleep_reg = AO_RTI_GEN_PWR_SLEEP0, \ + .sleep_reg = GX_AO_RTI_GEN_PWR_SLEEP0, \ .sleep_mask = BIT(__bit), \ - .iso_reg = AO_RTI_GEN_PWR_ISO0, \ + .iso_reg = GX_AO_RTI_GEN_PWR_ISO0, \ .iso_mask = BIT(__bit), \ } @@ -124,10 +141,26 @@ static struct meson_ee_pwrc_mem_domain g12a_pwrc_mem_vpu[] = { VPU_HHI_MEMPD(HHI_MEM_PD_REG0), }; -static struct meson_ee_pwrc_mem_domain g12a_pwrc_mem_eth[] = { +static struct meson_ee_pwrc_mem_domain gxbb_pwrc_mem_vpu[] = { + VPU_MEMPD(HHI_VPU_MEM_PD_REG0), + VPU_MEMPD(HHI_VPU_MEM_PD_REG1), + VPU_HHI_MEMPD(HHI_MEM_PD_REG0), +}; + +static struct meson_ee_pwrc_mem_domain meson_pwrc_mem_eth[] = { { HHI_MEM_PD_REG0, GENMASK(3, 2) }, }; +static struct meson_ee_pwrc_mem_domain meson8_pwrc_audio_dsp_mem[] = { + { HHI_MEM_PD_REG0, GENMASK(1, 0) }, +}; + +static struct meson_ee_pwrc_mem_domain meson8_pwrc_mem_vpu[] = { + VPU_MEMPD(HHI_VPU_MEM_PD_REG0), + VPU_MEMPD(HHI_VPU_MEM_PD_REG1), + VPU_HHI_MEMPD(HHI_MEM_PD_REG0), +}; + static struct meson_ee_pwrc_mem_domain sm1_pwrc_mem_vpu[] = { VPU_MEMPD(HHI_VPU_MEM_PD_REG0), VPU_MEMPD(HHI_VPU_MEM_PD_REG1), @@ -199,9 +232,35 @@ static struct meson_ee_pwrc_mem_domain sm1_pwrc_mem_audio[] = { static bool pwrc_ee_get_power(struct meson_ee_pwrc_domain *pwrc_domain); static struct meson_ee_pwrc_domain_desc g12a_pwrc_domains[] = { - [PWRC_G12A_VPU_ID] = VPU_PD("VPU", &g12a_pwrc_vpu, g12a_pwrc_mem_vpu, + [PWRC_G12A_VPU_ID] = VPU_PD("VPU", &gx_pwrc_vpu, g12a_pwrc_mem_vpu, pwrc_ee_get_power, 11, 2), - [PWRC_G12A_ETH_ID] = MEM_PD("ETH", g12a_pwrc_mem_eth), + [PWRC_G12A_ETH_ID] = MEM_PD("ETH", meson_pwrc_mem_eth), +}; + +static struct meson_ee_pwrc_domain_desc gxbb_pwrc_domains[] = { + [PWRC_GXBB_VPU_ID] = VPU_PD("VPU", &gx_pwrc_vpu, gxbb_pwrc_mem_vpu, + pwrc_ee_get_power, 12, 2), + [PWRC_GXBB_ETHERNET_MEM_ID] = MEM_PD("ETH", meson_pwrc_mem_eth), +}; + +static struct meson_ee_pwrc_domain_desc meson8_pwrc_domains[] = { + [PWRC_MESON8_VPU_ID] = VPU_PD("VPU", &meson8_pwrc_vpu, + meson8_pwrc_mem_vpu, pwrc_ee_get_power, + 0, 1), + [PWRC_MESON8_ETHERNET_MEM_ID] = MEM_PD("ETHERNET_MEM", + meson_pwrc_mem_eth), + [PWRC_MESON8_AUDIO_DSP_MEM_ID] = MEM_PD("AUDIO_DSP_MEM", + meson8_pwrc_audio_dsp_mem), +}; + +static struct meson_ee_pwrc_domain_desc meson8b_pwrc_domains[] = { + [PWRC_MESON8_VPU_ID] = VPU_PD("VPU", &meson8_pwrc_vpu, + meson8_pwrc_mem_vpu, pwrc_ee_get_power, + 11, 1), + [PWRC_MESON8_ETHERNET_MEM_ID] = MEM_PD("ETHERNET_MEM", + meson_pwrc_mem_eth), + [PWRC_MESON8_AUDIO_DSP_MEM_ID] = MEM_PD("AUDIO_DSP_MEM", + meson8_pwrc_audio_dsp_mem), }; static struct meson_ee_pwrc_domain_desc sm1_pwrc_domains[] = { @@ -216,7 +275,7 @@ static struct meson_ee_pwrc_domain_desc sm1_pwrc_domains[] = { [PWRC_SM1_GE2D_ID] = TOP_PD("GE2D", &sm1_pwrc_ge2d, sm1_pwrc_mem_ge2d, pwrc_ee_get_power), [PWRC_SM1_AUDIO_ID] = 
MEM_PD("AUDIO", sm1_pwrc_mem_audio), - [PWRC_SM1_ETH_ID] = MEM_PD("ETH", g12a_pwrc_mem_eth), + [PWRC_SM1_ETH_ID] = MEM_PD("ETH", meson_pwrc_mem_eth), }; struct meson_ee_pwrc_domain { @@ -470,6 +529,21 @@ static struct meson_ee_pwrc_domain_data meson_ee_g12a_pwrc_data = { .domains = g12a_pwrc_domains, }; +static struct meson_ee_pwrc_domain_data meson_ee_gxbb_pwrc_data = { + .count = ARRAY_SIZE(gxbb_pwrc_domains), + .domains = gxbb_pwrc_domains, +}; + +static struct meson_ee_pwrc_domain_data meson_ee_m8_pwrc_data = { + .count = ARRAY_SIZE(meson8_pwrc_domains), + .domains = meson8_pwrc_domains, +}; + +static struct meson_ee_pwrc_domain_data meson_ee_m8b_pwrc_data = { + .count = ARRAY_SIZE(meson8b_pwrc_domains), + .domains = meson8b_pwrc_domains, +}; + static struct meson_ee_pwrc_domain_data meson_ee_sm1_pwrc_data = { .count = ARRAY_SIZE(sm1_pwrc_domains), .domains = sm1_pwrc_domains, @@ -477,6 +551,22 @@ static struct meson_ee_pwrc_domain_data meson_ee_sm1_pwrc_data = { static const struct of_device_id meson_ee_pwrc_match_table[] = { { + .compatible = "amlogic,meson8-pwrc", + .data = &meson_ee_m8_pwrc_data, + }, + { + .compatible = "amlogic,meson8b-pwrc", + .data = &meson_ee_m8b_pwrc_data, + }, + { + .compatible = "amlogic,meson8m2-pwrc", + .data = &meson_ee_m8b_pwrc_data, + }, + { + .compatible = "amlogic,meson-gxbb-pwrc", + .data = &meson_ee_gxbb_pwrc_data, + }, + { .compatible = "amlogic,meson-g12a-pwrc", .data = &meson_ee_g12a_pwrc_data, }, diff --git a/drivers/soc/fsl/dpio/dpio-service.c b/drivers/soc/fsl/dpio/dpio-service.c index bcdcd3e7d7f1..7351f3030550 100644 --- a/drivers/soc/fsl/dpio/dpio-service.c +++ b/drivers/soc/fsl/dpio/dpio-service.c @@ -58,7 +58,7 @@ static inline struct dpaa2_io *service_select_by_cpu(struct dpaa2_io *d, * If cpu == -1, choose the current cpu, with no guarantees about * potentially being migrated away. 
*/ - if (unlikely(cpu < 0)) + if (cpu < 0) cpu = smp_processor_id(); /* If a specific cpu was requested, pick it up immediately */ @@ -70,6 +70,10 @@ static inline struct dpaa2_io *service_select(struct dpaa2_io *d) if (d) return d; + d = service_select_by_cpu(d, -1); + if (d) + return d; + spin_lock(&dpio_list_lock); d = list_entry(dpio_list.next, struct dpaa2_io, node); list_del(&d->node); diff --git a/drivers/soc/fsl/dpio/qbman-portal.c b/drivers/soc/fsl/dpio/qbman-portal.c index 23a1377971f4..0ab85bfb116f 100644 --- a/drivers/soc/fsl/dpio/qbman-portal.c +++ b/drivers/soc/fsl/dpio/qbman-portal.c @@ -572,18 +572,6 @@ void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, u32 qdid, #define EQAR_VB(eqar) ((eqar) & 0x80) #define EQAR_SUCCESS(eqar) ((eqar) & 0x100) -static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p, - u8 idx) -{ - if (idx < 16) - qbman_write_register(p, QBMAN_CINH_SWP_EQCR_AM_RT + idx * 4, - QMAN_RT_MODE); - else - qbman_write_register(p, QBMAN_CINH_SWP_EQCR_AM_RT2 + - (idx - 16) * 4, - QMAN_RT_MODE); -} - #define QB_RT_BIT ((u32)0x100) /** * qbman_swp_enqueue_direct() - Issue an enqueue command diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c index 1e164e03410a..9888a7061873 100644 --- a/drivers/soc/fsl/qbman/qman.c +++ b/drivers/soc/fsl/qbman/qman.c @@ -449,11 +449,6 @@ static inline int qm_eqcr_init(struct qm_portal *portal, return 0; } -static inline unsigned int qm_eqcr_get_ci_stashing(struct qm_portal *portal) -{ - return (qm_in(portal, QM_REG_CFG) >> 28) & 0x7; -} - static inline void qm_eqcr_finish(struct qm_portal *portal) { struct qm_eqcr *eqcr = &portal->eqcr; diff --git a/drivers/soc/fsl/qe/qe.c b/drivers/soc/fsl/qe/qe.c index 447146861c2c..2df20d6f85fa 100644 --- a/drivers/soc/fsl/qe/qe.c +++ b/drivers/soc/fsl/qe/qe.c @@ -448,7 +448,7 @@ int qe_upload_firmware(const struct qe_firmware *firmware) unsigned int i; unsigned int j; u32 crc; - size_t calc_size = sizeof(struct qe_firmware); + size_t calc_size; size_t length; const struct qe_header *hdr; @@ -480,7 +480,7 @@ int qe_upload_firmware(const struct qe_firmware *firmware) } /* Validate the length and check if there's a CRC */ - calc_size += (firmware->count - 1) * sizeof(struct qe_microcode); + calc_size = struct_size(firmware, microcode, firmware->count); for (i = 0; i < firmware->count; i++) /* diff --git a/drivers/soc/fsl/qe/ucc.c b/drivers/soc/fsl/qe/ucc.c index d6c93970df4d..cac0fb7693a0 100644 --- a/drivers/soc/fsl/qe/ucc.c +++ b/drivers/soc/fsl/qe/ucc.c @@ -519,7 +519,7 @@ int ucc_set_tdm_rxtx_clk(u32 tdm_num, enum qe_clock clock, int clock_bits; u32 shift; struct qe_mux __iomem *qe_mux_reg; - __be32 __iomem *cmxs1cr; + __be32 __iomem *cmxs1cr; qe_mux_reg = &qe_immr->qmx; diff --git a/drivers/soc/imx/soc-imx8m.c b/drivers/soc/imx/soc-imx8m.c index 719e1f189ebf..7b0759adb47d 100644 --- a/drivers/soc/imx/soc-imx8m.c +++ b/drivers/soc/imx/soc-imx8m.c @@ -53,11 +53,11 @@ static u32 __init imx8mq_soc_revision(void) struct device_node *np; void __iomem *ocotp_base; u32 magic; - u32 rev = 0; + u32 rev; np = of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp"); if (!np) - goto out; + return 0; ocotp_base = of_iomap(np, 0); WARN_ON(!ocotp_base); @@ -78,9 +78,8 @@ static u32 __init imx8mq_soc_revision(void) soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW); iounmap(ocotp_base); - -out: of_node_put(np); + return rev; } diff --git a/drivers/soc/mediatek/Kconfig b/drivers/soc/mediatek/Kconfig index 2114b563478c..59a56cd790ec 100644 --- a/drivers/soc/mediatek/Kconfig +++ 
b/drivers/soc/mediatek/Kconfig @@ -44,4 +44,11 @@ config MTK_SCPSYS Say yes here to add support for the MediaTek SCPSYS power domain driver. +config MTK_MMSYS + bool "MediaTek MMSYS Support" + default ARCH_MEDIATEK + help + Say yes here to add support for the MediaTek Multimedia + Subsystem (MMSYS). + endmenu diff --git a/drivers/soc/mediatek/Makefile b/drivers/soc/mediatek/Makefile index b01733074ad6..01f9f873634a 100644 --- a/drivers/soc/mediatek/Makefile +++ b/drivers/soc/mediatek/Makefile @@ -3,3 +3,4 @@ obj-$(CONFIG_MTK_CMDQ) += mtk-cmdq-helper.o obj-$(CONFIG_MTK_INFRACFG) += mtk-infracfg.o obj-$(CONFIG_MTK_PMIC_WRAP) += mtk-pmic-wrap.o obj-$(CONFIG_MTK_SCPSYS) += mtk-scpsys.o +obj-$(CONFIG_MTK_MMSYS) += mtk-mmsys.o diff --git a/drivers/soc/mediatek/mtk-mmsys.c b/drivers/soc/mediatek/mtk-mmsys.c new file mode 100644 index 000000000000..a55f25511173 --- /dev/null +++ b/drivers/soc/mediatek/mtk-mmsys.c @@ -0,0 +1,378 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2014 MediaTek Inc. + * Author: James Liao <jamesjj.liao@mediatek.com> + */ + +#include <linux/device.h> +#include <linux/of_device.h> +#include <linux/platform_device.h> +#include <linux/soc/mediatek/mtk-mmsys.h> + +#include "../../gpu/drm/mediatek/mtk_drm_ddp.h" +#include "../../gpu/drm/mediatek/mtk_drm_ddp_comp.h" + +#define DISP_REG_CONFIG_DISP_OVL0_MOUT_EN 0x040 +#define DISP_REG_CONFIG_DISP_OVL1_MOUT_EN 0x044 +#define DISP_REG_CONFIG_DISP_OD_MOUT_EN 0x048 +#define DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN 0x04c +#define DISP_REG_CONFIG_DISP_UFOE_MOUT_EN 0x050 +#define DISP_REG_CONFIG_DISP_COLOR0_SEL_IN 0x084 +#define DISP_REG_CONFIG_DISP_COLOR1_SEL_IN 0x088 +#define DISP_REG_CONFIG_DSIE_SEL_IN 0x0a4 +#define DISP_REG_CONFIG_DSIO_SEL_IN 0x0a8 +#define DISP_REG_CONFIG_DPI_SEL_IN 0x0ac +#define DISP_REG_CONFIG_DISP_RDMA2_SOUT 0x0b8 +#define DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN 0x0c4 +#define DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN 0x0c8 +#define DISP_REG_CONFIG_MMSYS_CG_CON0 0x100 + +#define DISP_REG_CONFIG_DISP_OVL_MOUT_EN 0x030 +#define DISP_REG_CONFIG_OUT_SEL 0x04c +#define DISP_REG_CONFIG_DSI_SEL 0x050 +#define DISP_REG_CONFIG_DPI_SEL 0x064 + +#define OVL0_MOUT_EN_COLOR0 0x1 +#define OD_MOUT_EN_RDMA0 0x1 +#define OD1_MOUT_EN_RDMA1 BIT(16) +#define UFOE_MOUT_EN_DSI0 0x1 +#define COLOR0_SEL_IN_OVL0 0x1 +#define OVL1_MOUT_EN_COLOR1 0x1 +#define GAMMA_MOUT_EN_RDMA1 0x1 +#define RDMA0_SOUT_DPI0 0x2 +#define RDMA0_SOUT_DPI1 0x3 +#define RDMA0_SOUT_DSI1 0x1 +#define RDMA0_SOUT_DSI2 0x4 +#define RDMA0_SOUT_DSI3 0x5 +#define RDMA1_SOUT_DPI0 0x2 +#define RDMA1_SOUT_DPI1 0x3 +#define RDMA1_SOUT_DSI1 0x1 +#define RDMA1_SOUT_DSI2 0x4 +#define RDMA1_SOUT_DSI3 0x5 +#define RDMA2_SOUT_DPI0 0x2 +#define RDMA2_SOUT_DPI1 0x3 +#define RDMA2_SOUT_DSI1 0x1 +#define RDMA2_SOUT_DSI2 0x4 +#define RDMA2_SOUT_DSI3 0x5 +#define DPI0_SEL_IN_RDMA1 0x1 +#define DPI0_SEL_IN_RDMA2 0x3 +#define DPI1_SEL_IN_RDMA1 (0x1 << 8) +#define DPI1_SEL_IN_RDMA2 (0x3 << 8) +#define DSI0_SEL_IN_RDMA1 0x1 +#define DSI0_SEL_IN_RDMA2 0x4 +#define DSI1_SEL_IN_RDMA1 0x1 +#define DSI1_SEL_IN_RDMA2 0x4 +#define DSI2_SEL_IN_RDMA1 (0x1 << 16) +#define DSI2_SEL_IN_RDMA2 (0x4 << 16) +#define DSI3_SEL_IN_RDMA1 (0x1 << 16) +#define DSI3_SEL_IN_RDMA2 (0x4 << 16) +#define COLOR1_SEL_IN_OVL1 0x1 + +#define OVL_MOUT_EN_RDMA 0x1 +#define BLS_TO_DSI_RDMA1_TO_DPI1 0x8 +#define BLS_TO_DPI_RDMA1_TO_DSI 0x2 +#define DSI_SEL_IN_BLS 0x0 +#define DPI_SEL_IN_BLS 0x0 +#define DSI_SEL_IN_RDMA 0x1 + +struct mtk_mmsys_driver_data { + const char *clk_driver; +}; + +static const struct 
mtk_mmsys_driver_data mt2701_mmsys_driver_data = { + .clk_driver = "clk-mt2701-mm", +}; + +static const struct mtk_mmsys_driver_data mt2712_mmsys_driver_data = { + .clk_driver = "clk-mt2712-mm", +}; + +static const struct mtk_mmsys_driver_data mt6779_mmsys_driver_data = { + .clk_driver = "clk-mt6779-mm", +}; + +static const struct mtk_mmsys_driver_data mt6797_mmsys_driver_data = { + .clk_driver = "clk-mt6797-mm", +}; + +static const struct mtk_mmsys_driver_data mt8173_mmsys_driver_data = { + .clk_driver = "clk-mt8173-mm", +}; + +static const struct mtk_mmsys_driver_data mt8183_mmsys_driver_data = { + .clk_driver = "clk-mt8183-mm", +}; + +static unsigned int mtk_mmsys_ddp_mout_en(enum mtk_ddp_comp_id cur, + enum mtk_ddp_comp_id next, + unsigned int *addr) +{ + unsigned int value; + + if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) { + *addr = DISP_REG_CONFIG_DISP_OVL0_MOUT_EN; + value = OVL0_MOUT_EN_COLOR0; + } else if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_RDMA0) { + *addr = DISP_REG_CONFIG_DISP_OVL_MOUT_EN; + value = OVL_MOUT_EN_RDMA; + } else if (cur == DDP_COMPONENT_OD0 && next == DDP_COMPONENT_RDMA0) { + *addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN; + value = OD_MOUT_EN_RDMA0; + } else if (cur == DDP_COMPONENT_UFOE && next == DDP_COMPONENT_DSI0) { + *addr = DISP_REG_CONFIG_DISP_UFOE_MOUT_EN; + value = UFOE_MOUT_EN_DSI0; + } else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) { + *addr = DISP_REG_CONFIG_DISP_OVL1_MOUT_EN; + value = OVL1_MOUT_EN_COLOR1; + } else if (cur == DDP_COMPONENT_GAMMA && next == DDP_COMPONENT_RDMA1) { + *addr = DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN; + value = GAMMA_MOUT_EN_RDMA1; + } else if (cur == DDP_COMPONENT_OD1 && next == DDP_COMPONENT_RDMA1) { + *addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN; + value = OD1_MOUT_EN_RDMA1; + } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI0) { + *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; + value = RDMA0_SOUT_DPI0; + } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI1) { + *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; + value = RDMA0_SOUT_DPI1; + } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI1) { + *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; + value = RDMA0_SOUT_DSI1; + } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI2) { + *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; + value = RDMA0_SOUT_DSI2; + } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI3) { + *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; + value = RDMA0_SOUT_DSI3; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) { + *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; + value = RDMA1_SOUT_DSI1; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) { + *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; + value = RDMA1_SOUT_DSI2; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) { + *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; + value = RDMA1_SOUT_DSI3; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) { + *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; + value = RDMA1_SOUT_DPI0; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) { + *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; + value = RDMA1_SOUT_DPI1; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) { + *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; + value = RDMA2_SOUT_DPI0; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) { + *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; + value = 
RDMA2_SOUT_DPI1; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) { + *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; + value = RDMA2_SOUT_DSI1; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) { + *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; + value = RDMA2_SOUT_DSI2; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) { + *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; + value = RDMA2_SOUT_DSI3; + } else { + value = 0; + } + + return value; +} + +static unsigned int mtk_mmsys_ddp_sel_in(enum mtk_ddp_comp_id cur, + enum mtk_ddp_comp_id next, + unsigned int *addr) +{ + unsigned int value; + + if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) { + *addr = DISP_REG_CONFIG_DISP_COLOR0_SEL_IN; + value = COLOR0_SEL_IN_OVL0; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) { + *addr = DISP_REG_CONFIG_DPI_SEL_IN; + value = DPI0_SEL_IN_RDMA1; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) { + *addr = DISP_REG_CONFIG_DPI_SEL_IN; + value = DPI1_SEL_IN_RDMA1; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI0) { + *addr = DISP_REG_CONFIG_DSIE_SEL_IN; + value = DSI0_SEL_IN_RDMA1; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) { + *addr = DISP_REG_CONFIG_DSIO_SEL_IN; + value = DSI1_SEL_IN_RDMA1; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) { + *addr = DISP_REG_CONFIG_DSIE_SEL_IN; + value = DSI2_SEL_IN_RDMA1; + } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) { + *addr = DISP_REG_CONFIG_DSIO_SEL_IN; + value = DSI3_SEL_IN_RDMA1; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) { + *addr = DISP_REG_CONFIG_DPI_SEL_IN; + value = DPI0_SEL_IN_RDMA2; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) { + *addr = DISP_REG_CONFIG_DPI_SEL_IN; + value = DPI1_SEL_IN_RDMA2; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI0) { + *addr = DISP_REG_CONFIG_DSIE_SEL_IN; + value = DSI0_SEL_IN_RDMA2; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) { + *addr = DISP_REG_CONFIG_DSIO_SEL_IN; + value = DSI1_SEL_IN_RDMA2; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) { + *addr = DISP_REG_CONFIG_DSIE_SEL_IN; + value = DSI2_SEL_IN_RDMA2; + } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) { + *addr = DISP_REG_CONFIG_DSIE_SEL_IN; + value = DSI3_SEL_IN_RDMA2; + } else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) { + *addr = DISP_REG_CONFIG_DISP_COLOR1_SEL_IN; + value = COLOR1_SEL_IN_OVL1; + } else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) { + *addr = DISP_REG_CONFIG_DSI_SEL; + value = DSI_SEL_IN_BLS; + } else { + value = 0; + } + + return value; +} + +static void mtk_mmsys_ddp_sout_sel(void __iomem *config_regs, + enum mtk_ddp_comp_id cur, + enum mtk_ddp_comp_id next) +{ + if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) { + writel_relaxed(BLS_TO_DSI_RDMA1_TO_DPI1, + config_regs + DISP_REG_CONFIG_OUT_SEL); + } else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DPI0) { + writel_relaxed(BLS_TO_DPI_RDMA1_TO_DSI, + config_regs + DISP_REG_CONFIG_OUT_SEL); + writel_relaxed(DSI_SEL_IN_RDMA, + config_regs + DISP_REG_CONFIG_DSI_SEL); + writel_relaxed(DPI_SEL_IN_BLS, + config_regs + DISP_REG_CONFIG_DPI_SEL); + } +} + +void mtk_mmsys_ddp_connect(struct device *dev, + enum mtk_ddp_comp_id cur, + enum mtk_ddp_comp_id next) +{ + void __iomem *config_regs = dev_get_drvdata(dev); 
+ unsigned int addr, value, reg; + + value = mtk_mmsys_ddp_mout_en(cur, next, &addr); + if (value) { + reg = readl_relaxed(config_regs + addr) | value; + writel_relaxed(reg, config_regs + addr); + } + + mtk_mmsys_ddp_sout_sel(config_regs, cur, next); + + value = mtk_mmsys_ddp_sel_in(cur, next, &addr); + if (value) { + reg = readl_relaxed(config_regs + addr) | value; + writel_relaxed(reg, config_regs + addr); + } +} +EXPORT_SYMBOL_GPL(mtk_mmsys_ddp_connect); + +void mtk_mmsys_ddp_disconnect(struct device *dev, + enum mtk_ddp_comp_id cur, + enum mtk_ddp_comp_id next) +{ + void __iomem *config_regs = dev_get_drvdata(dev); + unsigned int addr, value, reg; + + value = mtk_mmsys_ddp_mout_en(cur, next, &addr); + if (value) { + reg = readl_relaxed(config_regs + addr) & ~value; + writel_relaxed(reg, config_regs + addr); + } + + value = mtk_mmsys_ddp_sel_in(cur, next, &addr); + if (value) { + reg = readl_relaxed(config_regs + addr) & ~value; + writel_relaxed(reg, config_regs + addr); + } +} +EXPORT_SYMBOL_GPL(mtk_mmsys_ddp_disconnect); + +static int mtk_mmsys_probe(struct platform_device *pdev) +{ + const struct mtk_mmsys_driver_data *data; + struct device *dev = &pdev->dev; + struct platform_device *clks; + struct platform_device *drm; + void __iomem *config_regs; + struct resource *mem; + int ret; + + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); + config_regs = devm_ioremap_resource(dev, mem); + if (IS_ERR(config_regs)) { + ret = PTR_ERR(config_regs); + dev_err(dev, "Failed to ioremap mmsys-config resource: %d\n", + ret); + return ret; + } + + platform_set_drvdata(pdev, config_regs); + + data = of_device_get_match_data(&pdev->dev); + + clks = platform_device_register_data(&pdev->dev, data->clk_driver, + PLATFORM_DEVID_AUTO, NULL, 0); + if (IS_ERR(clks)) + return PTR_ERR(clks); + + drm = platform_device_register_data(&pdev->dev, "mediatek-drm", + PLATFORM_DEVID_AUTO, NULL, 0); + if (IS_ERR(drm)) { + platform_device_unregister(clks); + return PTR_ERR(drm); + } + + return 0; +} + +static const struct of_device_id of_match_mtk_mmsys[] = { + { + .compatible = "mediatek,mt2701-mmsys", + .data = &mt2701_mmsys_driver_data, + }, + { + .compatible = "mediatek,mt2712-mmsys", + .data = &mt2712_mmsys_driver_data, + }, + { + .compatible = "mediatek,mt6779-mmsys", + .data = &mt6779_mmsys_driver_data, + }, + { + .compatible = "mediatek,mt6797-mmsys", + .data = &mt6797_mmsys_driver_data, + }, + { + .compatible = "mediatek,mt8173-mmsys", + .data = &mt8173_mmsys_driver_data, + }, + { + .compatible = "mediatek,mt8183-mmsys", + .data = &mt8183_mmsys_driver_data, + }, + { } +}; + +static struct platform_driver mtk_mmsys_drv = { + .driver = { + .name = "mtk-mmsys", + .of_match_table = of_match_mtk_mmsys, + }, + .probe = mtk_mmsys_probe, +}; + +builtin_platform_driver(mtk_mmsys_drv); diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig index 285baa7e474e..250a393f255d 100644 --- a/drivers/soc/qcom/Kconfig +++ b/drivers/soc/qcom/Kconfig @@ -107,7 +107,7 @@ config QCOM_RPMH help apply the aggregated state on the resource. config QCOM_RPMHPD - bool "Qualcomm RPMh Power domain driver" + tristate "Qualcomm RPMh Power domain driver" depends on QCOM_RPMH && QCOM_COMMAND_DB help QCOM RPMh Power domain driver to support power-domains with @@ -116,8 +116,8 @@ config QCOM_RPMHPD for the voltage rail. 
config QCOM_RPMPD - bool "Qualcomm RPM Power domain driver" - depends on QCOM_SMD_RPM=y + tristate "Qualcomm RPM Power domain driver" + depends on QCOM_SMD_RPM help QCOM RPM Power domain driver to support power-domains with performance states. The driver communicates a performance state diff --git a/drivers/soc/qcom/cmd-db.c b/drivers/soc/qcom/cmd-db.c index f6c3d17b05c7..fc5610603b17 100644 --- a/drivers/soc/qcom/cmd-db.c +++ b/drivers/soc/qcom/cmd-db.c @@ -1,12 +1,13 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. */ +#include <linux/debugfs.h> #include <linux/kernel.h> #include <linux/of.h> #include <linux/of_address.h> -#include <linux/of_platform.h> #include <linux/of_reserved_mem.h> #include <linux/platform_device.h> +#include <linux/seq_file.h> #include <linux/types.h> #include <soc/qcom/cmd-db.h> @@ -236,6 +237,77 @@ enum cmd_db_hw_type cmd_db_read_slave_id(const char *id) } EXPORT_SYMBOL(cmd_db_read_slave_id); +#ifdef CONFIG_DEBUG_FS +static int cmd_db_debugfs_dump(struct seq_file *seq, void *p) +{ + int i, j; + const struct rsc_hdr *rsc; + const struct entry_header *ent; + const char *name; + u16 len, version; + u8 major, minor; + + seq_puts(seq, "Command DB DUMP\n"); + + for (i = 0; i < MAX_SLV_ID; i++) { + rsc = &cmd_db_header->header[i]; + if (!rsc->slv_id) + break; + + switch (le16_to_cpu(rsc->slv_id)) { + case CMD_DB_HW_ARC: + name = "ARC"; + break; + case CMD_DB_HW_VRM: + name = "VRM"; + break; + case CMD_DB_HW_BCM: + name = "BCM"; + break; + default: + name = "Unknown"; + break; + } + + version = le16_to_cpu(rsc->version); + major = version >> 8; + minor = version; + + seq_printf(seq, "Slave %s (v%u.%u)\n", name, major, minor); + seq_puts(seq, "-------------------------\n"); + + ent = rsc_to_entry_header(rsc); + for (j = 0; j < le16_to_cpu(rsc->cnt); j++, ent++) { + seq_printf(seq, "0x%05x: %*pEp", le32_to_cpu(ent->addr), + (int)sizeof(ent->id), ent->id); + + len = le16_to_cpu(ent->len); + if (len) { + seq_printf(seq, " [%*ph]", + len, rsc_offset(rsc, ent)); + } + seq_putc(seq, '\n'); + } + } + + return 0; +} + +static int open_cmd_db_debugfs(struct inode *inode, struct file *file) +{ + return single_open(file, cmd_db_debugfs_dump, inode->i_private); +} +#endif + +static const struct file_operations cmd_db_debugfs_ops = { +#ifdef CONFIG_DEBUG_FS + .open = open_cmd_db_debugfs, +#endif + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + static int cmd_db_dev_probe(struct platform_device *pdev) { struct reserved_mem *rmem; @@ -259,12 +331,14 @@ static int cmd_db_dev_probe(struct platform_device *pdev) return -EINVAL; } + debugfs_create_file("cmd-db", 0400, NULL, NULL, &cmd_db_debugfs_ops); + return 0; } static const struct of_device_id cmd_db_match_table[] = { { .compatible = "qcom,cmd-db" }, - { }, + { } }; static struct platform_driver cmd_db_dev_driver = { diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c index 17ad3b8698e1..bdcf16f88a97 100644 --- a/drivers/soc/qcom/pdr_interface.c +++ b/drivers/soc/qcom/pdr_interface.c @@ -155,10 +155,6 @@ static int pdr_register_listener(struct pdr_handle *pdr, return ret; } - if ((int)resp.curr_state < INT_MIN || (int)resp.curr_state > INT_MAX) - pr_err("PDR: %s notification state invalid: 0x%x\n", - pds->service_path, resp.curr_state); - pds->state = resp.curr_state; return 0; diff --git a/drivers/soc/qcom/qcom_aoss.c b/drivers/soc/qcom/qcom_aoss.c index f43a2e07ee83..ed2c687c16b3 100644 --- 
a/drivers/soc/qcom/qcom_aoss.c +++ b/drivers/soc/qcom/qcom_aoss.c @@ -599,6 +599,7 @@ static const struct of_device_id qmp_dt_match[] = { { .compatible = "qcom,sc7180-aoss-qmp", }, { .compatible = "qcom,sdm845-aoss-qmp", }, { .compatible = "qcom,sm8150-aoss-qmp", }, + { .compatible = "qcom,sm8250-aoss-qmp", }, {} }; MODULE_DEVICE_TABLE(of, qmp_dt_match); diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h index 6eec32b97f83..ef60e790a750 100644 --- a/drivers/soc/qcom/rpmh-internal.h +++ b/drivers/soc/qcom/rpmh-internal.h @@ -22,16 +22,23 @@ struct rsc_drv; * struct tcs_group: group of Trigger Command Sets (TCS) to send state requests * to the controller * - * @drv: the controller - * @type: type of the TCS in this group - active, sleep, wake - * @mask: mask of the TCSes relative to all the TCSes in the RSC - * @offset: start of the TCS group relative to the TCSes in the RSC - * @num_tcs: number of TCSes in this type - * @ncpt: number of commands in each TCS - * @lock: lock for synchronizing this TCS writes - * @req: requests that are sent from the TCS - * @cmd_cache: flattened cache of cmds in sleep/wake TCS - * @slots: indicates which of @cmd_addr are occupied + * @drv: The controller. + * @type: Type of the TCS in this group - active, sleep, wake. + * @mask: Mask of the TCSes relative to all the TCSes in the RSC. + * @offset: Start of the TCS group relative to the TCSes in the RSC. + * @num_tcs: Number of TCSes in this type. + * @ncpt: Number of commands in each TCS. + * @req: Requests that are sent from the TCS; only used for ACTIVE_ONLY + * transfers (could be on a wake/sleep TCS if we are borrowing for + * an ACTIVE_ONLY transfer). + * Start: grab drv->lock, set req, set tcs_in_use, drop drv->lock, + * trigger + * End: get irq, access req, + * grab drv->lock, clear tcs_in_use, drop drv->lock + * @slots: Indicates which of @cmd_addr are occupied; only used for + * SLEEP / WAKE TCSs. Things are tightly packed in the + * case that (ncpt < MAX_CMDS_PER_TCS). That is if ncpt = 2 and + * MAX_CMDS_PER_TCS = 16 then bit[2] = the first bit in 2nd TCS. */ struct tcs_group { struct rsc_drv *drv; @@ -40,9 +47,7 @@ struct tcs_group { u32 offset; int num_tcs; int ncpt; - spinlock_t lock; const struct tcs_request *req[MAX_TCS_PER_TYPE]; - u32 *cmd_cache; DECLARE_BITMAP(slots, MAX_TCS_SLOTS); }; @@ -84,20 +89,32 @@ struct rpmh_ctrlr { * struct rsc_drv: the Direct Resource Voter (DRV) of the * Resource State Coordinator controller (RSC) * - * @name: controller identifier - * @tcs_base: start address of the TCS registers in this controller - * @id: instance id in the controller (Direct Resource Voter) - * @num_tcs: number of TCSes in this DRV - * @tcs: TCS groups - * @tcs_in_use: s/w state of the TCS - * @lock: synchronize state of the controller - * @client: handle to the DRV's client. + * @name: Controller identifier. + * @tcs_base: Start address of the TCS registers in this controller. + * @id: Instance id in the controller (Direct Resource Voter). + * @num_tcs: Number of TCSes in this DRV. + * @rsc_pm: CPU PM notifier for controller. + * Used when solver mode is not present. + * @cpus_in_pm: Number of CPUs not in idle power collapse. + * Used when solver mode is not present. + * @tcs: TCS groups. + * @tcs_in_use: S/W state of the TCS; only set for ACTIVE_ONLY + * transfers, but might show a sleep/wake TCS in use if + * it was borrowed for an active_only transfer. You + * must hold the lock in this struct (AKA drv->lock) in + * order to update this. 
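[Editorial sketch, not part of this commit: one way to read the @slots packing described in the tcs_group comment above is that slot/bit s of a sleep or wake group maps to TCS (s / ncpt) within the group and command (s % ncpt) inside it, which is how ncpt = 2 makes bit 2 the first command of the second TCS. The helper name is made up for illustration.]

	static void slot_to_tcs_cmd(unsigned int slot, unsigned int ncpt,
				    unsigned int *tcs_in_group, unsigned int *cmd_id)
	{
		*tcs_in_group = slot / ncpt;	/* which TCS of the group */
		*cmd_id = slot % ncpt;		/* which command inside it */
	}
	/* With ncpt = 2: slot 2 -> TCS 1, command 0, as documented above. */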
+ * @lock: Synchronize state of the controller. If RPMH's cache + * lock will also be held, the order is: drv->lock then + * cache_lock. + * @client: Handle to the DRV's client. */ struct rsc_drv { const char *name; void __iomem *tcs_base; int id; int num_tcs; + struct notifier_block rsc_pm; + atomic_t cpus_in_pm; struct tcs_group tcs[TCS_TYPE_NR]; DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR); spinlock_t lock; @@ -107,7 +124,7 @@ struct rsc_drv { int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg); int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg); -int rpmh_rsc_invalidate(struct rsc_drv *drv); +void rpmh_rsc_invalidate(struct rsc_drv *drv); void rpmh_tx_done(const struct tcs_request *msg, int r); int rpmh_flush(struct rpmh_ctrlr *ctrlr); diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c index b71822131f59..076fd27f3081 100644 --- a/drivers/soc/qcom/rpmh-rsc.c +++ b/drivers/soc/qcom/rpmh-rsc.c @@ -6,9 +6,11 @@ #define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME #include <linux/atomic.h> +#include <linux/cpu_pm.h> #include <linux/delay.h> #include <linux/interrupt.h> #include <linux/io.h> +#include <linux/iopoll.h> #include <linux/kernel.h> #include <linux/list.h> #include <linux/of.h> @@ -30,21 +32,41 @@ #define RSC_DRV_TCS_OFFSET 672 #define RSC_DRV_CMD_OFFSET 20 -/* DRV Configuration Information Register */ +/* DRV HW Solver Configuration Information Register */ +#define DRV_SOLVER_CONFIG 0x04 +#define DRV_HW_SOLVER_MASK 1 +#define DRV_HW_SOLVER_SHIFT 24 + +/* DRV TCS Configuration Information Register */ #define DRV_PRNT_CHLD_CONFIG 0x0C #define DRV_NUM_TCS_MASK 0x3F #define DRV_NUM_TCS_SHIFT 6 #define DRV_NCPT_MASK 0x1F #define DRV_NCPT_SHIFT 27 -/* Register offsets */ +/* Offsets for common TCS Registers, one bit per TCS */ #define RSC_DRV_IRQ_ENABLE 0x00 #define RSC_DRV_IRQ_STATUS 0x04 -#define RSC_DRV_IRQ_CLEAR 0x08 -#define RSC_DRV_CMD_WAIT_FOR_CMPL 0x10 +#define RSC_DRV_IRQ_CLEAR 0x08 /* w/o; write 1 to clear */ + +/* + * Offsets for per TCS Registers. + * + * TCSes start at 0x10 from tcs_base and are stored one after another. + * Multiply tcs_id by RSC_DRV_TCS_OFFSET to find a given TCS and add one + * of the below to find a register. + */ +#define RSC_DRV_CMD_WAIT_FOR_CMPL 0x10 /* 1 bit per command */ #define RSC_DRV_CONTROL 0x14 -#define RSC_DRV_STATUS 0x18 -#define RSC_DRV_CMD_ENABLE 0x1C +#define RSC_DRV_STATUS 0x18 /* zero if tcs is busy */ +#define RSC_DRV_CMD_ENABLE 0x1C /* 1 bit per command */ + +/* + * Offsets for per command in a TCS. + * + * Commands (up to 16) start at 0x30 in a TCS; multiply command index + * by RSC_DRV_CMD_OFFSET and add one of the below to find a register. + */ #define RSC_DRV_CMD_MSGID 0x30 #define RSC_DRV_CMD_ADDR 0x34 #define RSC_DRV_CMD_DATA 0x38 @@ -61,94 +83,179 @@ #define CMD_STATUS_ISSUED BIT(8) #define CMD_STATUS_COMPL BIT(16) -static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id) +/* + * Here's a high level overview of how all the registers in RPMH work + * together: + * + * - The main rpmh-rsc address is the base of a register space that can + * be used to find overall configuration of the hardware + * (DRV_PRNT_CHLD_CONFIG). Also found within the rpmh-rsc register + * space are all the TCS blocks. The offset of the TCS blocks is + * specified in the device tree by "qcom,tcs-offset" and used to + * compute tcs_base. + * - TCS blocks come one after another. Type, count, and order are + * specified by the device tree as "qcom,tcs-config". 
+ * - Each TCS block has some registers, then space for up to 16 commands. + * Note that though address space is reserved for 16 commands, fewer + * might be present. See ncpt (num cmds per TCS). + * + * Here's a picture: + * + * +---------------------------------------------------+ + * |RSC | + * | ctrl | + * | | + * | Drvs: | + * | +-----------------------------------------------+ | + * | |DRV0 | | + * | | ctrl/config | | + * | | IRQ | | + * | | | | + * | | TCSes: | | + * | | +------------------------------------------+ | | + * | | |TCS0 | | | | | | | | | | | | | | | + * | | | ctrl | 0| 1| 2| 3| 4| 5| .| .| .| .|14|15| | | + * | | | | | | | | | | | | | | | | | | + * | | +------------------------------------------+ | | + * | | +------------------------------------------+ | | + * | | |TCS1 | | | | | | | | | | | | | | | + * | | | ctrl | 0| 1| 2| 3| 4| 5| .| .| .| .|14|15| | | + * | | | | | | | | | | | | | | | | | | + * | | +------------------------------------------+ | | + * | | +------------------------------------------+ | | + * | | |TCS2 | | | | | | | | | | | | | | | + * | | | ctrl | 0| 1| 2| 3| 4| 5| .| .| .| .|14|15| | | + * | | | | | | | | | | | | | | | | | | + * | | +------------------------------------------+ | | + * | | ...... | | + * | +-----------------------------------------------+ | + * | +-----------------------------------------------+ | + * | |DRV1 | | + * | | (same as DRV0) | | + * | +-----------------------------------------------+ | + * | ...... | + * +---------------------------------------------------+ + */ + +static inline void __iomem * +tcs_reg_addr(const struct rsc_drv *drv, int reg, int tcs_id) { - return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id + - RSC_DRV_CMD_OFFSET * cmd_id); + return drv->tcs_base + RSC_DRV_TCS_OFFSET * tcs_id + reg; } -static void write_tcs_cmd(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id, - u32 data) +static inline void __iomem * +tcs_cmd_addr(const struct rsc_drv *drv, int reg, int tcs_id, int cmd_id) { - writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id + - RSC_DRV_CMD_OFFSET * cmd_id); + return tcs_reg_addr(drv, reg, tcs_id) + RSC_DRV_CMD_OFFSET * cmd_id; } -static void write_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, u32 data) +static u32 read_tcs_cmd(const struct rsc_drv *drv, int reg, int tcs_id, + int cmd_id) { - writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id); + return readl_relaxed(tcs_cmd_addr(drv, reg, tcs_id, cmd_id)); } -static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int tcs_id, - u32 data) +static u32 read_tcs_reg(const struct rsc_drv *drv, int reg, int tcs_id) { - writel(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id); - for (;;) { - if (data == readl(drv->tcs_base + reg + - RSC_DRV_TCS_OFFSET * tcs_id)) - break; - udelay(1); - } + return readl_relaxed(tcs_reg_addr(drv, reg, tcs_id)); } -static bool tcs_is_free(struct rsc_drv *drv, int tcs_id) +static void write_tcs_cmd(const struct rsc_drv *drv, int reg, int tcs_id, + int cmd_id, u32 data) { - return !test_bit(tcs_id, drv->tcs_in_use) && - read_tcs_reg(drv, RSC_DRV_STATUS, tcs_id, 0); + writel_relaxed(data, tcs_cmd_addr(drv, reg, tcs_id, cmd_id)); } -static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type) +static void write_tcs_reg(const struct rsc_drv *drv, int reg, int tcs_id, + u32 data) { - return &drv->tcs[type]; + writel_relaxed(data, tcs_reg_addr(drv, reg, tcs_id)); } -static int tcs_invalidate(struct rsc_drv *drv, int type) +static void 
write_tcs_reg_sync(const struct rsc_drv *drv, int reg, int tcs_id, + u32 data) { - int m; - struct tcs_group *tcs; + u32 new_data; - tcs = get_tcs_of_type(drv, type); + writel(data, tcs_reg_addr(drv, reg, tcs_id)); + if (readl_poll_timeout_atomic(tcs_reg_addr(drv, reg, tcs_id), new_data, + new_data == data, 1, USEC_PER_SEC)) + pr_err("%s: error writing %#x to %d:%#x\n", drv->name, + data, tcs_id, reg); +} - spin_lock(&tcs->lock); - if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS)) { - spin_unlock(&tcs->lock); - return 0; - } +/** + * tcs_is_free() - Return if a TCS is totally free. + * @drv: The RSC controller. + * @tcs_id: The global ID of this TCS. + * + * Returns true if nobody has claimed this TCS (by setting tcs_in_use). + * + * Context: Must be called with the drv->lock held. + * + * Return: true if the given TCS is free. + */ +static bool tcs_is_free(struct rsc_drv *drv, int tcs_id) +{ + return !test_bit(tcs_id, drv->tcs_in_use); +} + +/** + * tcs_invalidate() - Invalidate all TCSes of the given type (sleep or wake). + * @drv: The RSC controller. + * @type: SLEEP_TCS or WAKE_TCS + * + * This will clear the "slots" variable of the given tcs_group and also + * tell the hardware to forget about all entries. + * + * The caller must ensure that no other RPMH actions are happening when this + * function is called, since otherwise the device may immediately become + * used again even before this function exits. + */ +static void tcs_invalidate(struct rsc_drv *drv, int type) +{ + int m; + struct tcs_group *tcs = &drv->tcs[type]; + + /* Caller ensures nobody else is running so no lock */ + if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS)) + return; for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) { - if (!tcs_is_free(drv, m)) { - spin_unlock(&tcs->lock); - return -EAGAIN; - } write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, m, 0); write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0); } bitmap_zero(tcs->slots, MAX_TCS_SLOTS); - spin_unlock(&tcs->lock); - - return 0; } /** - * rpmh_rsc_invalidate - Invalidate sleep and wake TCSes + * rpmh_rsc_invalidate() - Invalidate sleep and wake TCSes. + * @drv: The RSC controller. * - * @drv: the RSC controller + * The caller must ensure that no other RPMH actions are happening when this + * function is called, since otherwise the device may immediately become + * used again even before this function exits. */ -int rpmh_rsc_invalidate(struct rsc_drv *drv) +void rpmh_rsc_invalidate(struct rsc_drv *drv) { - int ret; - - ret = tcs_invalidate(drv, SLEEP_TCS); - if (!ret) - ret = tcs_invalidate(drv, WAKE_TCS); - - return ret; + tcs_invalidate(drv, SLEEP_TCS); + tcs_invalidate(drv, WAKE_TCS); } +/** + * get_tcs_for_msg() - Get the tcs_group used to send the given message. + * @drv: The RSC controller. + * @msg: The message we want to send. + * + * This is normally pretty straightforward except if we are trying to send + * an ACTIVE_ONLY message but don't have any active_only TCSes. + * + * Return: A pointer to a tcs_group or an ERR_PTR. + */ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv, const struct tcs_request *msg) { - int type, ret; + int type; struct tcs_group *tcs; switch (msg->state) { @@ -168,24 +275,33 @@ static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv, /* * If we are making an active request on a RSC that does not have a * dedicated TCS for active state use, then re-purpose a wake TCS to - * send active votes. 
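/*
 * Editor's aside (not part of the patch): the write-then-poll idiom that
 * write_tcs_reg_sync() above relies on (readl_poll_timeout_atomic), modelled
 * on plain memory instead of MMIO so it runs stand-alone. The poll count and
 * the missing udelay() are simplifications.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool write_reg_sync(volatile uint32_t *reg, uint32_t val, int max_polls)
{
	*reg = val;
	while (max_polls--) {
		if (*reg == val)	/* the "hardware" has latched the value */
			return true;
		/* the real code waits ~1 us between polls, for up to a second */
	}
	return false;
}

int main(void)
{
	volatile uint32_t fake_reg = 0;

	if (!write_reg_sync(&fake_reg, 0xA5, 1000))
		fprintf(stderr, "error: register write did not stick\n");
	return 0;
}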
- * NOTE: The driver must be aware that this RSC does not have a - * dedicated AMC, and therefore would invalidate the sleep and wake - * TCSes before making an active state request. + * send active votes. This is safe because we ensure any active-only + * transfers have finished before we use it (maybe by running from + * the last CPU in PM code). */ - tcs = get_tcs_of_type(drv, type); - if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) { - tcs = get_tcs_of_type(drv, WAKE_TCS); - if (tcs->num_tcs) { - ret = rpmh_rsc_invalidate(drv); - if (ret) - return ERR_PTR(ret); - } - } + tcs = &drv->tcs[type]; + if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) + tcs = &drv->tcs[WAKE_TCS]; return tcs; } +/** + * get_req_from_tcs() - Get a stashed request that was xfering on the given TCS. + * @drv: The RSC controller. + * @tcs_id: The global ID of this TCS. + * + * For ACTIVE_ONLY transfers we want to call back into the client when the + * transfer finishes. To do this we need the "request" that the client + * originally provided us. This function grabs the request that we stashed + * when we started the transfer. + * + * This only makes sense for ACTIVE_ONLY transfers since those are the only + * ones we track sending (the only ones we enable interrupts for and the only + * ones we call back to the client for). + * + * Return: The stashed request. + */ static const struct tcs_request *get_req_from_tcs(struct rsc_drv *drv, int tcs_id) { @@ -202,7 +318,76 @@ static const struct tcs_request *get_req_from_tcs(struct rsc_drv *drv, } /** - * tcs_tx_done: TX Done interrupt handler + * __tcs_set_trigger() - Start xfer on a TCS or unset trigger on a borrowed TCS + * @drv: The controller. + * @tcs_id: The global ID of this TCS. + * @trigger: If true then untrigger/retrigger. If false then just untrigger. + * + * In the normal case we only ever call with "trigger=true" to start a + * transfer. That will un-trigger/disable the TCS from the last transfer + * then trigger/enable for this transfer. + * + * If we borrowed a wake TCS for an active-only transfer we'll also call + * this function with "trigger=false" to just do the un-trigger/disable + * before using the TCS for wake purposes again. + * + * Note that the AP is only in charge of triggering active-only transfers. + * The AP never triggers sleep/wake values using this function. + */ +static void __tcs_set_trigger(struct rsc_drv *drv, int tcs_id, bool trigger) +{ + u32 enable; + + /* + * HW req: Clear the DRV_CONTROL and enable TCS again + * While clearing ensure that the AMC mode trigger is cleared + * and then the mode enable is cleared. + */ + enable = read_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id); + enable &= ~TCS_AMC_MODE_TRIGGER; + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); + enable &= ~TCS_AMC_MODE_ENABLE; + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); + + if (trigger) { + /* Enable the AMC mode on the TCS and then trigger the TCS */ + enable = TCS_AMC_MODE_ENABLE; + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); + enable |= TCS_AMC_MODE_TRIGGER; + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); + } +} + +/** + * enable_tcs_irq() - Enable or disable interrupts on the given TCS. + * @drv: The controller. + * @tcs_id: The global ID of this TCS. + * @enable: If true then enable; if false then disable + * + * We only ever call this when we borrow a wake TCS for an active-only + * transfer. For active-only TCSes interrupts are always left enabled. 
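/*
 * Editor's aside (not part of the patch): the untrigger/retrigger ordering
 * that __tcs_set_trigger() above applies to RSC_DRV_CONTROL, shown on a plain
 * variable. The two bit positions are illustrative placeholders, not the real
 * TCS_AMC_MODE_* values.
 */
#include <stdbool.h>
#include <stdio.h>

#define FAKE_AMC_MODE_ENABLE	(1u << 16)	/* placeholder bit positions */
#define FAKE_AMC_MODE_TRIGGER	(1u << 24)

static unsigned int tcs_set_trigger(unsigned int ctrl, bool trigger)
{
	ctrl &= ~FAKE_AMC_MODE_TRIGGER;		/* 1. clear the old trigger */
	ctrl &= ~FAKE_AMC_MODE_ENABLE;		/* 2. then clear the enable */
	if (trigger) {
		ctrl = FAKE_AMC_MODE_ENABLE;	/* 3. enable AMC mode ...   */
		ctrl |= FAKE_AMC_MODE_TRIGGER;	/* 4. ... then fire the TCS */
	}
	return ctrl;
}

int main(void)
{
	printf("0x%x\n", tcs_set_trigger(0, true));
	return 0;
}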
+ */ +static void enable_tcs_irq(struct rsc_drv *drv, int tcs_id, bool enable) +{ + u32 data; + + data = readl_relaxed(drv->tcs_base + RSC_DRV_IRQ_ENABLE); + if (enable) + data |= BIT(tcs_id); + else + data &= ~BIT(tcs_id); + writel_relaxed(data, drv->tcs_base + RSC_DRV_IRQ_ENABLE); +} + +/** + * tcs_tx_done() - TX Done interrupt handler. + * @irq: The IRQ number (ignored). + * @p: Pointer to "struct rsc_drv". + * + * Called for ACTIVE_ONLY transfers (those are the only ones we enable the + * IRQ for) when a transfer is done. + * + * Return: IRQ_HANDLED */ static irqreturn_t tcs_tx_done(int irq, void *p) { @@ -212,7 +397,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p) const struct tcs_request *req; struct tcs_cmd *cmd; - irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0); + irq_status = readl_relaxed(drv->tcs_base + RSC_DRV_IRQ_STATUS); for_each_set_bit(i, &irq_status, BITS_PER_LONG) { req = get_req_from_tcs(drv, i); @@ -226,7 +411,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p) u32 sts; cmd = &req->cmds[j]; - sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, i, j); + sts = read_tcs_cmd(drv, RSC_DRV_CMD_STATUS, i, j); if (!(sts & CMD_STATUS_ISSUED) || ((req->wait_for_compl || cmd->wait) && !(sts & CMD_STATUS_COMPL))) { @@ -237,13 +422,28 @@ static irqreturn_t tcs_tx_done(int irq, void *p) } trace_rpmh_tx_done(drv, i, req, err); + + /* + * If wake tcs was re-purposed for sending active + * votes, clear AMC trigger & enable modes and + * disable interrupt for this TCS + */ + if (!drv->tcs[ACTIVE_TCS].num_tcs) + __tcs_set_trigger(drv, i, false); skip: /* Reclaim the TCS */ write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i, 0); write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, i, 0); - write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, BIT(i)); + writel_relaxed(BIT(i), drv->tcs_base + RSC_DRV_IRQ_CLEAR); spin_lock(&drv->lock); clear_bit(i, drv->tcs_in_use); + /* + * Disable interrupt for WAKE TCS to avoid being + * spammed with interrupts coming when the solver + * sends its wake votes. + */ + if (!drv->tcs[ACTIVE_TCS].num_tcs) + enable_tcs_irq(drv, i, false); spin_unlock(&drv->lock); if (req) rpmh_tx_done(req, err); @@ -252,6 +452,16 @@ skip: return IRQ_HANDLED; } +/** + * __tcs_buffer_write() - Write to TCS hardware from a request; don't trigger. + * @drv: The controller. + * @tcs_id: The global ID of this TCS. + * @cmd_id: The index within the TCS to start writing. + * @msg: The message we want to send, which will contain several addr/data + * pairs to program (but few enough that they all fit in one TCS). + * + * This is used for all types of transfers (active, sleep, and wake). + */ static void __tcs_buffer_write(struct rsc_drv *drv, int tcs_id, int cmd_id, const struct tcs_request *msg) { @@ -265,7 +475,7 @@ static void __tcs_buffer_write(struct rsc_drv *drv, int tcs_id, int cmd_id, cmd_msgid |= msg->wait_for_compl ? 
CMD_MSGID_RESP_REQ : 0; cmd_msgid |= CMD_MSGID_WRITE; - cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0); + cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id); for (i = 0, j = cmd_id; i < msg->num_cmds; i++, j++) { cmd = &msg->cmds[i]; @@ -281,32 +491,30 @@ static void __tcs_buffer_write(struct rsc_drv *drv, int tcs_id, int cmd_id, } write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, cmd_complete); - cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0); + cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id); write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, cmd_enable); } -static void __tcs_trigger(struct rsc_drv *drv, int tcs_id) -{ - u32 enable; - - /* - * HW req: Clear the DRV_CONTROL and enable TCS again - * While clearing ensure that the AMC mode trigger is cleared - * and then the mode enable is cleared. - */ - enable = read_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, 0); - enable &= ~TCS_AMC_MODE_TRIGGER; - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); - enable &= ~TCS_AMC_MODE_ENABLE; - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); - - /* Enable the AMC mode on the TCS and then trigger the TCS */ - enable = TCS_AMC_MODE_ENABLE; - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); - enable |= TCS_AMC_MODE_TRIGGER; - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); -} - +/** + * check_for_req_inflight() - Look to see if conflicting cmds are in flight. + * @drv: The controller. + * @tcs: A pointer to the tcs_group used for ACTIVE_ONLY transfers. + * @msg: The message we want to send, which will contain several addr/data + * pairs to program (but few enough that they all fit in one TCS). + * + * This will walk through the TCSes in the group and check if any of them + * appear to be sending to addresses referenced in the message. If it finds + * one it'll return -EBUSY. + * + * Only for use for active-only transfers. + * + * Must be called with the drv->lock held since that protects tcs_in_use. + * + * Return: 0 if nothing in flight or -EBUSY if we should try again later. + * The caller must re-enable interrupts between tries since that's + * the only way tcs_is_free() will ever return true and the only way + * RSC_DRV_CMD_ENABLE will ever be cleared. + */ static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs, const struct tcs_request *msg) { @@ -319,10 +527,10 @@ static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs, if (tcs_is_free(drv, tcs_id)) continue; - curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0); + curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id); for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) { - addr = read_tcs_reg(drv, RSC_DRV_CMD_ADDR, tcs_id, j); + addr = read_tcs_cmd(drv, RSC_DRV_CMD_ADDR, tcs_id, j); for (k = 0; k < msg->num_cmds; k++) { if (addr == msg->cmds[k].addr) return -EBUSY; @@ -333,6 +541,15 @@ static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs, return 0; } +/** + * find_free_tcs() - Find free tcs in the given tcs_group; only for active. + * @tcs: A pointer to the active-only tcs_group (or the wake tcs_group if + * we borrowed it because there are zero active-only ones). + * + * Must be called with the drv->lock held since that protects tcs_in_use. + * + * Return: The first tcs that's free. 
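/*
 * Editor's aside (not part of the patch): the address-conflict scan done by
 * check_for_req_inflight() above, reduced to plain arrays instead of TCS
 * registers. It answers "is any in-flight command touching an address the
 * new request also wants?".
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct fake_tcs {
	bool busy;			/* stand-in for tcs_in_use */
	size_t num_cmds;
	uint32_t addr[16];		/* stand-in for RSC_DRV_CMD_ADDR */
};

static bool req_inflight(const struct fake_tcs *tcs, size_t num_tcs,
			 const uint32_t *new_addr, size_t new_cnt)
{
	for (size_t t = 0; t < num_tcs; t++) {
		if (!tcs[t].busy)
			continue;
		for (size_t i = 0; i < tcs[t].num_cmds; i++)
			for (size_t j = 0; j < new_cnt; j++)
				if (tcs[t].addr[i] == new_addr[j])
					return true;	/* caller must retry */
	}
	return false;
}

int main(void)
{
	struct fake_tcs tcs[2] = {
		{ .busy = true, .num_cmds = 1, .addr = { 0x30000 } },
		{ .busy = false },
	};
	uint32_t want[] = { 0x30000, 0x30010 };

	printf("conflict: %d\n", req_inflight(tcs, 2, want, 2));
	return 0;
}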
+ */ static int find_free_tcs(struct tcs_group *tcs) { int i; @@ -345,6 +562,20 @@ static int find_free_tcs(struct tcs_group *tcs) return -EBUSY; } +/** + * tcs_write() - Store messages into a TCS right now, or return -EBUSY. + * @drv: The controller. + * @msg: The data to be sent. + * + * Grabs a TCS for ACTIVE_ONLY transfers and writes the messages to it. + * + * If there are no free TCSes for ACTIVE_ONLY transfers or if a command for + * the same address is already transferring returns -EBUSY which means the + * client should retry shortly. + * + * Return: 0 on success, -EBUSY if client should retry, or an error. + * Client should have interrupts enabled for a bit before retrying. + */ static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg) { struct tcs_group *tcs; @@ -356,57 +587,77 @@ static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg) if (IS_ERR(tcs)) return PTR_ERR(tcs); - spin_lock_irqsave(&tcs->lock, flags); - spin_lock(&drv->lock); + spin_lock_irqsave(&drv->lock, flags); /* * The h/w does not like if we send a request to the same address, * when one is already in-flight or being processed. */ ret = check_for_req_inflight(drv, tcs, msg); - if (ret) { - spin_unlock(&drv->lock); - goto done_write; - } + if (ret) + goto unlock; - tcs_id = find_free_tcs(tcs); - if (tcs_id < 0) { - ret = tcs_id; - spin_unlock(&drv->lock); - goto done_write; - } + ret = find_free_tcs(tcs); + if (ret < 0) + goto unlock; + tcs_id = ret; tcs->req[tcs_id - tcs->offset] = msg; set_bit(tcs_id, drv->tcs_in_use); - spin_unlock(&drv->lock); + if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS) { + /* + * Clear previously programmed WAKE commands in selected + * repurposed TCS to avoid triggering them. tcs->slots will be + * cleaned from rpmh_flush() by invoking rpmh_rsc_invalidate() + */ + write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0); + write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0); + enable_tcs_irq(drv, tcs_id, true); + } + spin_unlock_irqrestore(&drv->lock, flags); + /* + * These two can be done after the lock is released because: + * - We marked "tcs_in_use" under lock. + * - Once "tcs_in_use" has been marked nobody else could be writing + * to these registers until the interrupt goes off. + * - The interrupt can't go off until we trigger w/ the last line + * of __tcs_set_trigger() below. + */ __tcs_buffer_write(drv, tcs_id, 0, msg); - __tcs_trigger(drv, tcs_id); + __tcs_set_trigger(drv, tcs_id, true); -done_write: - spin_unlock_irqrestore(&tcs->lock, flags); + return 0; +unlock: + spin_unlock_irqrestore(&drv->lock, flags); return ret; } /** - * rpmh_rsc_send_data: Validate the incoming message and write to the - * appropriate TCS block. + * rpmh_rsc_send_data() - Write / trigger active-only message. + * @drv: The controller. + * @msg: The data to be sent. * - * @drv: the controller - * @msg: the data to be sent + * NOTES: + * - This is only used for "ACTIVE_ONLY" since the limitations of this + * function don't make sense for sleep/wake cases. + * - To do the transfer, we will grab a whole TCS for ourselves--we don't + * try to share. If there are none available we'll wait indefinitely + * for a free one. + * - This function will not wait for the commands to be finished, only for + * data to be programmed into the RPMh. See rpmh_tx_done() which will + * be called when the transfer is fully complete. + * - This function must be called with interrupts enabled. 
If the hardware + * is busy doing someone else's transfer we need that transfer to fully + * finish so that we can have the hardware, and to fully finish it needs + * the interrupt handler to run. If the interrupts is set to run on the + * active CPU this can never happen if interrupts are disabled. * * Return: 0 on success, -EINVAL on error. - * Note: This call blocks until a valid data is written to the TCS. */ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg) { int ret; - if (!msg || !msg->cmds || !msg->num_cmds || - msg->num_cmds > MAX_RPMH_PAYLOAD) { - WARN_ON(1); - return -EINVAL; - } - do { ret = tcs_write(drv, msg); if (ret == -EBUSY) { @@ -419,43 +670,28 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg) return ret; } -static int find_match(const struct tcs_group *tcs, const struct tcs_cmd *cmd, - int len) -{ - int i, j; - - /* Check for already cached commands */ - for_each_set_bit(i, tcs->slots, MAX_TCS_SLOTS) { - if (tcs->cmd_cache[i] != cmd[0].addr) - continue; - if (i + len >= tcs->num_tcs * tcs->ncpt) - goto seq_err; - for (j = 0; j < len; j++) { - if (tcs->cmd_cache[i + j] != cmd[j].addr) - goto seq_err; - } - return i; - } - - return -ENODATA; - -seq_err: - WARN(1, "Message does not match previous sequence.\n"); - return -EINVAL; -} - +/** + * find_slots() - Find a place to write the given message. + * @tcs: The tcs group to search. + * @msg: The message we want to find room for. + * @tcs_id: If we return 0 from the function, we return the global ID of the + * TCS to write to here. + * @cmd_id: If we return 0 from the function, we return the index of + * the command array of the returned TCS where the client should + * start writing the message. + * + * Only for use on sleep/wake TCSes since those are the only ones we maintain + * tcs->slots for. + * + * Return: -ENOMEM if there was no room, else 0. + */ static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg, int *tcs_id, int *cmd_id) { int slot, offset; int i = 0; - /* Find if we already have the msg in our TCS */ - slot = find_match(tcs, msg->cmds, msg->num_cmds); - if (slot >= 0) - goto copy_data; - - /* Do over, until we can fit the full payload in a TCS */ + /* Do over, until we can fit the full payload in a single TCS */ do { slot = bitmap_find_next_zero_area(tcs->slots, MAX_TCS_SLOTS, i, msg->num_cmds, 0); @@ -464,11 +700,7 @@ static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg, i += tcs->ncpt; } while (slot + msg->num_cmds - 1 >= i); -copy_data: bitmap_set(tcs->slots, slot, msg->num_cmds); - /* Copy the addresses of the resources over to the slots */ - for (i = 0; i < msg->num_cmds; i++) - tcs->cmd_cache[slot + i] = msg->cmds[i].addr; offset = slot / tcs->ncpt; *tcs_id = offset + tcs->offset; @@ -477,52 +709,157 @@ copy_data: return 0; } -static int tcs_ctrl_write(struct rsc_drv *drv, const struct tcs_request *msg) +/** + * rpmh_rsc_write_ctrl_data() - Write request to controller but don't trigger. + * @drv: The controller. + * @msg: The data to be written to the controller. + * + * This should only be called for for sleep/wake state, never active-only + * state. + * + * The caller must ensure that no other RPMH actions are happening and the + * controller is idle when this function is called since it runs lockless. + * + * Return: 0 if no error; else -error. 
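/*
 * Editor's aside (not part of the patch): a simplified equivalent of the slot
 * search in find_slots() above -- find a run of free command slots that fits
 * entirely inside one TCS (ncpt commands per TCS) -- using a bool array in
 * place of the kernel bitmap helpers.
 */
#include <stdbool.h>
#include <stdio.h>

/* Returns the first usable slot index, or -1 if no single TCS can take it. */
static int find_slots(const bool *used, int num_tcs, int ncpt, int num_cmds)
{
	for (int tcs = 0; tcs < num_tcs; tcs++) {
		int base = tcs * ncpt;

		for (int start = base; start + num_cmds <= base + ncpt; start++) {
			bool free = true;

			for (int i = 0; i < num_cmds; i++)
				free &= !used[start + i];
			if (free)
				return start;
		}
	}
	return -1;
}

int main(void)
{
	bool used[2 * 16] = { [0] = true, [1] = true };	/* 2 TCSes, ncpt = 16 */

	printf("slot %d\n", find_slots(used, 2, 16, 3));	/* prints "slot 2" */
	return 0;
}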
+ */ +int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg) { struct tcs_group *tcs; int tcs_id = 0, cmd_id = 0; - unsigned long flags; int ret; tcs = get_tcs_for_msg(drv, msg); if (IS_ERR(tcs)) return PTR_ERR(tcs); - spin_lock_irqsave(&tcs->lock, flags); /* find the TCS id and the command in the TCS to write to */ ret = find_slots(tcs, msg, &tcs_id, &cmd_id); if (!ret) __tcs_buffer_write(drv, tcs_id, cmd_id, msg); - spin_unlock_irqrestore(&tcs->lock, flags); return ret; } /** - * rpmh_rsc_write_ctrl_data: Write request to the controller + * rpmh_rsc_ctrlr_is_busy() - Check if any of the AMCs are busy. + * @drv: The controller + * + * Checks if any of the AMCs are busy in handling ACTIVE sets. + * This is called from the last cpu powering down before flushing + * SLEEP and WAKE sets. If AMCs are busy, controller can not enter + * power collapse, so deny from the last cpu's pm notification. * - * @drv: the controller - * @msg: the data to be written to the controller + * Context: Must be called with the drv->lock held. * - * There is no response returned for writing the request to the controller. + * Return: + * * False - AMCs are idle + * * True - AMCs are busy */ -int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg) +static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv) { - if (!msg || !msg->cmds || !msg->num_cmds || - msg->num_cmds > MAX_RPMH_PAYLOAD) { - pr_err("Payload error\n"); - return -EINVAL; + int m; + struct tcs_group *tcs = &drv->tcs[ACTIVE_TCS]; + + /* + * If we made an active request on a RSC that does not have a + * dedicated TCS for active state use, then re-purposed wake TCSes + * should be checked for not busy, because we used wake TCSes for + * active requests in this case. + */ + if (!tcs->num_tcs) + tcs = &drv->tcs[WAKE_TCS]; + + for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) { + if (!tcs_is_free(drv, m)) + return true; } - /* Data sent to this API will not be sent immediately */ - if (msg->state == RPMH_ACTIVE_ONLY_STATE) - return -EINVAL; + return false; +} + +/** + * rpmh_rsc_cpu_pm_callback() - Check if any of the AMCs are busy. + * @nfb: Pointer to the notifier block in struct rsc_drv. + * @action: CPU_PM_ENTER, CPU_PM_ENTER_FAILED, or CPU_PM_EXIT. + * @v: Unused + * + * This function is given to cpu_pm_register_notifier so we can be informed + * about when CPUs go down. When all CPUs go down we know no more active + * transfers will be started so we write sleep/wake sets. This function gets + * called from cpuidle code paths and also at system suspend time. + * + * If its last CPU going down and AMCs are not busy then writes cached sleep + * and wake messages to TCSes. The firmware then takes care of triggering + * them when entering deepest low power modes. + * + * Return: See cpu_pm_register_notifier() + */ +static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb, + unsigned long action, void *v) +{ + struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm); + int ret = NOTIFY_OK; + int cpus_in_pm; - return tcs_ctrl_write(drv, msg); + switch (action) { + case CPU_PM_ENTER: + cpus_in_pm = atomic_inc_return(&drv->cpus_in_pm); + /* + * NOTE: comments for num_online_cpus() point out that it's + * only a snapshot so we need to be careful. It should be OK + * for us to use, though. 
It's important for us not to miss + * if we're the last CPU going down so it would only be a + * problem if a CPU went offline right after we did the check + * AND that CPU was not idle AND that CPU was the last non-idle + * CPU. That can't happen. CPUs would have to come out of idle + * before the CPU could go offline. + */ + if (cpus_in_pm < num_online_cpus()) + return NOTIFY_OK; + break; + case CPU_PM_ENTER_FAILED: + case CPU_PM_EXIT: + atomic_dec(&drv->cpus_in_pm); + return NOTIFY_OK; + default: + return NOTIFY_DONE; + } + + /* + * It's likely we're on the last CPU. Grab the drv->lock and write + * out the sleep/wake commands to RPMH hardware. Grabbing the lock + * means that if we race with another CPU coming up we are still + * guaranteed to be safe. If another CPU came up just after we checked + * and has grabbed the lock or started an active transfer then we'll + * notice we're busy and abort. If another CPU comes up after we start + * flushing it will be blocked from starting an active transfer until + * we're done flushing. If another CPU starts an active transfer after + * we release the lock we're still OK because we're no longer the last + * CPU. + */ + if (spin_trylock(&drv->lock)) { + if (rpmh_rsc_ctrlr_is_busy(drv) || rpmh_flush(&drv->client)) + ret = NOTIFY_BAD; + spin_unlock(&drv->lock); + } else { + /* Another CPU must be up */ + return NOTIFY_OK; + } + + if (ret == NOTIFY_BAD) { + /* Double-check if we're here because someone else is up */ + if (cpus_in_pm < num_online_cpus()) + ret = NOTIFY_OK; + else + /* We won't be called w/ CPU_PM_ENTER_FAILED */ + atomic_dec(&drv->cpus_in_pm); + } + + return ret; } static int rpmh_probe_tcs_config(struct platform_device *pdev, - struct rsc_drv *drv) + struct rsc_drv *drv, void __iomem *base) { struct tcs_type_config { u32 type; @@ -532,15 +869,6 @@ static int rpmh_probe_tcs_config(struct platform_device *pdev, u32 config, max_tcs, ncpt, offset; int i, ret, n, st = 0; struct tcs_group *tcs; - struct resource *res; - void __iomem *base; - char drv_id[10] = {0}; - - snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id); - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, drv_id); - base = devm_ioremap_resource(&pdev->dev, res); - if (IS_ERR(base)) - return PTR_ERR(base); ret = of_property_read_u32(dn, "qcom,tcs-offset", &offset); if (ret) @@ -584,7 +912,6 @@ static int rpmh_probe_tcs_config(struct platform_device *pdev, tcs->type = tcs_cfg[i].type; tcs->num_tcs = tcs_cfg[i].n; tcs->ncpt = ncpt; - spin_lock_init(&tcs->lock); if (!tcs->num_tcs || tcs->type == CONTROL_TCS) continue; @@ -596,19 +923,6 @@ static int rpmh_probe_tcs_config(struct platform_device *pdev, tcs->mask = ((1 << tcs->num_tcs) - 1) << st; tcs->offset = st; st += tcs->num_tcs; - - /* - * Allocate memory to cache sleep and wake requests to - * avoid reading TCS register memory. 
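/*
 * Editor's aside (not part of the patch): the "am I the last CPU going idle?"
 * accounting used by rpmh_rsc_cpu_pm_callback() above, reduced to C11 atomics
 * and a fixed CPU count so it can run in user space.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_ONLINE_CPUS 4		/* stand-in for num_online_cpus() */

static atomic_int cpus_in_pm;

/* CPU_PM_ENTER: returns true when the caller should flush sleep/wake sets. */
static bool cpu_pm_enter_cb(void)
{
	int in_pm = atomic_fetch_add(&cpus_in_pm, 1) + 1;

	return in_pm >= NUM_ONLINE_CPUS;
}

/* CPU_PM_EXIT or CPU_PM_ENTER_FAILED: undo the count. */
static void cpu_pm_exit_cb(void)
{
	atomic_fetch_sub(&cpus_in_pm, 1);
}

int main(void)
{
	for (int cpu = 0; cpu < NUM_ONLINE_CPUS; cpu++)
		printf("cpu%d last=%d\n", cpu, cpu_pm_enter_cb());
	cpu_pm_exit_cb();
	return 0;
}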
- */ - if (tcs->type == ACTIVE_TCS) - continue; - - tcs->cmd_cache = devm_kcalloc(&pdev->dev, - tcs->num_tcs * ncpt, sizeof(u32), - GFP_KERNEL); - if (!tcs->cmd_cache) - return -ENOMEM; } drv->num_tcs = st; @@ -620,7 +934,11 @@ static int rpmh_rsc_probe(struct platform_device *pdev) { struct device_node *dn = pdev->dev.of_node; struct rsc_drv *drv; + struct resource *res; + char drv_id[10] = {0}; int ret, irq; + u32 solver_config; + void __iomem *base; /* * Even though RPMh doesn't directly use cmd-db, all of its children @@ -646,7 +964,13 @@ static int rpmh_rsc_probe(struct platform_device *pdev) if (!drv->name) drv->name = dev_name(&pdev->dev); - ret = rpmh_probe_tcs_config(pdev, drv); + snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id); + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, drv_id); + base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(base)) + return PTR_ERR(base); + + ret = rpmh_probe_tcs_config(pdev, drv, base); if (ret) return ret; @@ -663,8 +987,22 @@ static int rpmh_rsc_probe(struct platform_device *pdev) if (ret) return ret; + /* + * CPU PM notification are not required for controllers that support + * 'HW solver' mode where they can be in autonomous mode executing low + * power mode to power down. + */ + solver_config = readl_relaxed(base + DRV_SOLVER_CONFIG); + solver_config &= DRV_HW_SOLVER_MASK << DRV_HW_SOLVER_SHIFT; + solver_config = solver_config >> DRV_HW_SOLVER_SHIFT; + if (!solver_config) { + drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback; + cpu_pm_register_notifier(&drv->rsc_pm); + } + /* Enable the active TCS to send requests immediately */ - write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, drv->tcs[ACTIVE_TCS].mask); + writel_relaxed(drv->tcs[ACTIVE_TCS].mask, + drv->tcs_base + RSC_DRV_IRQ_ENABLE); spin_lock_init(&drv->client.cache_lock); INIT_LIST_HEAD(&drv->client.cache); diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c index eb0ded059d2e..f2b5b46ccd1f 100644 --- a/drivers/soc/qcom/rpmh.c +++ b/drivers/soc/qcom/rpmh.c @@ -9,6 +9,7 @@ #include <linux/jiffies.h> #include <linux/kernel.h> #include <linux/list.h> +#include <linux/lockdep.h> #include <linux/module.h> #include <linux/of.h> #include <linux/platform_device.h> @@ -119,6 +120,7 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr, { struct cache_req *req; unsigned long flags; + u32 old_sleep_val, old_wake_val; spin_lock_irqsave(&ctrlr->cache_lock, flags); req = __find_req(ctrlr, cmd->addr); @@ -133,26 +135,27 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr, req->addr = cmd->addr; req->sleep_val = req->wake_val = UINT_MAX; - INIT_LIST_HEAD(&req->list); list_add_tail(&req->list, &ctrlr->cache); existing: + old_sleep_val = req->sleep_val; + old_wake_val = req->wake_val; + switch (state) { case RPMH_ACTIVE_ONLY_STATE: - if (req->sleep_val != UINT_MAX) - req->wake_val = cmd->data; - break; case RPMH_WAKE_ONLY_STATE: req->wake_val = cmd->data; break; case RPMH_SLEEP_STATE: req->sleep_val = cmd->data; break; - default: - break; } - ctrlr->dirty = true; + ctrlr->dirty |= (req->sleep_val != old_sleep_val || + req->wake_val != old_wake_val) && + req->sleep_val != UINT_MAX && + req->wake_val != UINT_MAX; + unlock: spin_unlock_irqrestore(&ctrlr->cache_lock, flags); @@ -287,6 +290,7 @@ static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req) spin_lock_irqsave(&ctrlr->cache_lock, flags); list_add_tail(&req->list, &ctrlr->batch_cache); + ctrlr->dirty = true; spin_unlock_irqrestore(&ctrlr->cache_lock, flags); } @@ 
-294,12 +298,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr) { struct batch_cache_req *req; const struct rpmh_request *rpm_msg; - unsigned long flags; int ret = 0; int i; /* Send Sleep/Wake requests to the controller, expect no response */ - spin_lock_irqsave(&ctrlr->cache_lock, flags); list_for_each_entry(req, &ctrlr->batch_cache, list) { for (i = 0; i < req->count; i++) { rpm_msg = req->rpm_msgs + i; @@ -309,23 +311,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr) break; } } - spin_unlock_irqrestore(&ctrlr->cache_lock, flags); return ret; } -static void invalidate_batch(struct rpmh_ctrlr *ctrlr) -{ - struct batch_cache_req *req, *tmp; - unsigned long flags; - - spin_lock_irqsave(&ctrlr->cache_lock, flags); - list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list) - kfree(req); - INIT_LIST_HEAD(&ctrlr->batch_cache); - spin_unlock_irqrestore(&ctrlr->cache_lock, flags); -} - /** * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the * batch to finish. @@ -442,36 +431,42 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state, } /** - * rpmh_flush: Flushes the buffered active and sleep sets to TCS - * - * @ctrlr: controller making request to flush cached data + * rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes * - * Return: -EBUSY if the controller is busy, probably waiting on a response - * to a RPMH request sent earlier. + * @ctrlr: Controller making request to flush cached data * - * This function is always called from the sleep code from the last CPU - * that is powering down the entire system. Since no other RPMH API would be - * executing at this time, it is safe to run lockless. + * Return: + * * 0 - Success + * * Error code - Otherwise */ int rpmh_flush(struct rpmh_ctrlr *ctrlr) { struct cache_req *p; - int ret; + int ret = 0; + + lockdep_assert_irqs_disabled(); + + /* + * Currently rpmh_flush() is only called when we think we're running + * on the last processor. If the lock is busy it means another + * processor is up and it's better to abort than spin. + */ + if (!spin_trylock(&ctrlr->cache_lock)) + return -EBUSY; if (!ctrlr->dirty) { pr_debug("Skipping flush, TCS has latest data.\n"); - return 0; + goto exit; } + /* Invalidate the TCSes first to avoid stale data */ + rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr)); + /* First flush the cached batch requests */ ret = flush_batch(ctrlr); if (ret) - return ret; + goto exit; - /* - * Nobody else should be calling this function other than system PM, - * hence we can run without locks. - */ list_for_each_entry(p, &ctrlr->cache, list) { if (!is_req_valid(p)) { pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x", @@ -481,38 +476,40 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr) ret = send_single(ctrlr, RPMH_SLEEP_STATE, p->addr, p->sleep_val); if (ret) - return ret; + goto exit; ret = send_single(ctrlr, RPMH_WAKE_ONLY_STATE, p->addr, p->wake_val); if (ret) - return ret; + goto exit; } ctrlr->dirty = false; - return 0; +exit: + spin_unlock(&ctrlr->cache_lock); + return ret; } /** - * rpmh_invalidate: Invalidate all sleep and active sets - * sets. + * rpmh_invalidate: Invalidate sleep and wake sets in batch_cache * * @dev: The device making the request * - * Invalidate the sleep and active values in the TCS blocks. + * Invalidate the sleep and wake values in batch_cache. 
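/*
 * Editor's aside (not part of the patch): the shape of the reworked
 * rpmh_flush() above -- do nothing unless the cache is dirty, then write the
 * sleep value followed by the wake value for every cached address that has
 * both values set. Locking, batch requests and the hardware path are stubbed
 * out; the validity check is a simplified assumption, not the driver's exact
 * is_req_valid().
 */
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cache_req {
	uint32_t addr, sleep_val, wake_val;
};

static int send_single(const char *state, uint32_t addr, uint32_t val)
{
	printf("%s: addr=0x%x val=%u\n", state, (unsigned int)addr,
	       (unsigned int)val);	/* stub for the TCS write */
	return 0;
}

static int flush(const struct cache_req *cache, int n, bool *dirty)
{
	if (!*dirty)
		return 0;	/* TCSes already hold the latest data */

	for (int i = 0; i < n; i++) {
		if (cache[i].sleep_val == UINT_MAX || cache[i].wake_val == UINT_MAX)
			continue;	/* only half of the pair was ever set */
		if (send_single("sleep", cache[i].addr, cache[i].sleep_val) ||
		    send_single("wake", cache[i].addr, cache[i].wake_val))
			return -1;
	}
	*dirty = false;
	return 0;
}

int main(void)
{
	struct cache_req cache[] = { { 0x30000, 1, 3 } };
	bool dirty = true;

	return flush(cache, 1, &dirty);
}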
*/ int rpmh_invalidate(const struct device *dev) { struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev); - int ret; + struct batch_cache_req *req, *tmp; + unsigned long flags; - invalidate_batch(ctrlr); + spin_lock_irqsave(&ctrlr->cache_lock, flags); + list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list) + kfree(req); + INIT_LIST_HEAD(&ctrlr->batch_cache); ctrlr->dirty = true; + spin_unlock_irqrestore(&ctrlr->cache_lock, flags); - do { - ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr)); - } while (ret == -EAGAIN); - - return ret; + return 0; } EXPORT_SYMBOL(rpmh_invalidate); diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c index 4d264d0672c4..e72426221a69 100644 --- a/drivers/soc/qcom/rpmhpd.c +++ b/drivers/soc/qcom/rpmhpd.c @@ -4,6 +4,7 @@ #include <linux/err.h> #include <linux/init.h> #include <linux/kernel.h> +#include <linux/module.h> #include <linux/mutex.h> #include <linux/pm_domain.h> #include <linux/slab.h> @@ -166,6 +167,24 @@ static const struct rpmhpd_desc sm8150_desc = { .num_pds = ARRAY_SIZE(sm8150_rpmhpds), }; +static struct rpmhpd *sm8250_rpmhpds[] = { + [SM8250_CX] = &sdm845_cx, + [SM8250_CX_AO] = &sdm845_cx_ao, + [SM8250_EBI] = &sdm845_ebi, + [SM8250_GFX] = &sdm845_gfx, + [SM8250_LCX] = &sdm845_lcx, + [SM8250_LMX] = &sdm845_lmx, + [SM8250_MMCX] = &sm8150_mmcx, + [SM8250_MMCX_AO] = &sm8150_mmcx_ao, + [SM8250_MX] = &sdm845_mx, + [SM8250_MX_AO] = &sdm845_mx_ao, +}; + +static const struct rpmhpd_desc sm8250_desc = { + .rpmhpds = sm8250_rpmhpds, + .num_pds = ARRAY_SIZE(sm8250_rpmhpds), +}; + /* SC7180 RPMH powerdomains */ static struct rpmhpd *sc7180_rpmhpds[] = { [SC7180_CX] = &sdm845_cx, @@ -187,8 +206,10 @@ static const struct of_device_id rpmhpd_match_table[] = { { .compatible = "qcom,sc7180-rpmhpd", .data = &sc7180_desc }, { .compatible = "qcom,sdm845-rpmhpd", .data = &sdm845_desc }, { .compatible = "qcom,sm8150-rpmhpd", .data = &sm8150_desc }, + { .compatible = "qcom,sm8250-rpmhpd", .data = &sm8250_desc }, { } }; +MODULE_DEVICE_TABLE(of, rpmhpd_match_table); static int rpmhpd_send_corner(struct rpmhpd *pd, int state, unsigned int corner, bool sync) @@ -460,3 +481,6 @@ static int __init rpmhpd_init(void) return platform_driver_register(&rpmhpd_driver); } core_initcall(rpmhpd_init); + +MODULE_DESCRIPTION("Qualcomm Technologies, Inc. RPMh Power Domain Driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/soc/qcom/rpmpd.c b/drivers/soc/qcom/rpmpd.c index 2b1834c5609a..f2168e4259b2 100644 --- a/drivers/soc/qcom/rpmpd.c +++ b/drivers/soc/qcom/rpmpd.c @@ -4,6 +4,7 @@ #include <linux/err.h> #include <linux/init.h> #include <linux/kernel.h> +#include <linux/module.h> #include <linux/mutex.h> #include <linux/pm_domain.h> #include <linux/of.h> @@ -226,6 +227,7 @@ static const struct of_device_id rpmpd_match_table[] = { { .compatible = "qcom,qcs404-rpmpd", .data = &qcs404_desc }, { } }; +MODULE_DEVICE_TABLE(of, rpmpd_match_table); static int rpmpd_send_enable(struct rpmpd *pd, bool enable) { @@ -422,3 +424,6 @@ static int __init rpmpd_init(void) return platform_driver_register(&rpmpd_driver); } core_initcall(rpmpd_init); + +MODULE_DESCRIPTION("Qualcomm Technologies, Inc. 
RPM Power Domain Driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/soc/qcom/smp2p.c b/drivers/soc/qcom/smp2p.c index c7300d54e444..07183d731d74 100644 --- a/drivers/soc/qcom/smp2p.c +++ b/drivers/soc/qcom/smp2p.c @@ -474,10 +474,8 @@ static int qcom_smp2p_probe(struct platform_device *pdev) goto report_read_failure; irq = platform_get_irq(pdev, 0); - if (irq < 0) { - dev_err(&pdev->dev, "unable to acquire smp2p interrupt\n"); + if (irq < 0) return irq; - } smp2p->mbox_client.dev = &pdev->dev; smp2p->mbox_client.knows_txdone = true; diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c index ebb49aee179b..5983c6ffb078 100644 --- a/drivers/soc/qcom/socinfo.c +++ b/drivers/soc/qcom/socinfo.c @@ -188,6 +188,10 @@ static const struct soc_id soc_id[] = { { 216, "MSM8674PRO" }, { 217, "MSM8974-AA" }, { 218, "MSM8974-AB" }, + { 233, "MSM8936" }, + { 239, "MSM8939" }, + { 240, "APQ8036" }, + { 241, "APQ8039" }, { 246, "MSM8996" }, { 247, "APQ8016" }, { 248, "MSM8216" }, @@ -430,6 +434,8 @@ static int qcom_socinfo_probe(struct platform_device *pdev) qs->attr.family = "Snapdragon"; qs->attr.machine = socinfo_machine(&pdev->dev, le32_to_cpu(info->id)); + qs->attr.soc_id = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%u", + le32_to_cpu(info->id)); qs->attr.revision = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%u.%u", SOCINFO_MAJOR(le32_to_cpu(info->ver)), SOCINFO_MINOR(le32_to_cpu(info->ver))); diff --git a/drivers/soc/renesas/Kconfig b/drivers/soc/renesas/Kconfig index 1982c7fb45fa..53cd8d2d0cd2 100644 --- a/drivers/soc/renesas/Kconfig +++ b/drivers/soc/renesas/Kconfig @@ -83,6 +83,13 @@ config ARCH_R8A7740 select ARM_ERRATA_754322 select RENESAS_INTC_IRQPIN +config ARCH_R8A7742 + bool "RZ/G1H (R8A77420)" + select ARCH_RCAR_GEN2 + select ARM_ERRATA_798181 if SMP + select ARM_ERRATA_814220 + select SYSC_R8A7742 + config ARCH_R8A7743 bool "RZ/G1M (R8A77430)" select ARCH_RCAR_GEN2 @@ -261,6 +268,10 @@ config ARCH_R8A77995 endif # ARM64 # SoC +config SYSC_R8A7742 + bool "RZ/G1H System Controller support" if COMPILE_TEST + select SYSC_RCAR + config SYSC_R8A7743 bool "RZ/G1M System Controller support" if COMPILE_TEST select SYSC_RCAR diff --git a/drivers/soc/renesas/Makefile b/drivers/soc/renesas/Makefile index e595c3c3bd10..08296d78e2ad 100644 --- a/drivers/soc/renesas/Makefile +++ b/drivers/soc/renesas/Makefile @@ -3,6 +3,7 @@ obj-$(CONFIG_SOC_RENESAS) += renesas-soc.o # SoC +obj-$(CONFIG_SYSC_R8A7742) += r8a7742-sysc.o obj-$(CONFIG_SYSC_R8A7743) += r8a7743-sysc.o obj-$(CONFIG_SYSC_R8A7745) += r8a7745-sysc.o obj-$(CONFIG_SYSC_R8A77470) += r8a77470-sysc.o diff --git a/drivers/soc/renesas/r8a7742-sysc.c b/drivers/soc/renesas/r8a7742-sysc.c new file mode 100644 index 000000000000..219a675f83f4 --- /dev/null +++ b/drivers/soc/renesas/r8a7742-sysc.c @@ -0,0 +1,42 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Renesas RZ/G1H System Controller + * + * Copyright (C) 2020 Renesas Electronics Corp. 
+ */ + +#include <linux/kernel.h> + +#include <dt-bindings/power/r8a7742-sysc.h> + +#include "rcar-sysc.h" + +static const struct rcar_sysc_area r8a7742_areas[] __initconst = { + { "always-on", 0, 0, R8A7742_PD_ALWAYS_ON, -1, PD_ALWAYS_ON }, + { "ca15-scu", 0x180, 0, R8A7742_PD_CA15_SCU, R8A7742_PD_ALWAYS_ON, + PD_SCU }, + { "ca15-cpu0", 0x40, 0, R8A7742_PD_CA15_CPU0, R8A7742_PD_CA15_SCU, + PD_CPU_NOCR }, + { "ca15-cpu1", 0x40, 1, R8A7742_PD_CA15_CPU1, R8A7742_PD_CA15_SCU, + PD_CPU_NOCR }, + { "ca15-cpu2", 0x40, 2, R8A7742_PD_CA15_CPU2, R8A7742_PD_CA15_SCU, + PD_CPU_NOCR }, + { "ca15-cpu3", 0x40, 3, R8A7742_PD_CA15_CPU3, R8A7742_PD_CA15_SCU, + PD_CPU_NOCR }, + { "ca7-scu", 0x100, 0, R8A7742_PD_CA7_SCU, R8A7742_PD_ALWAYS_ON, + PD_SCU }, + { "ca7-cpu0", 0x1c0, 0, R8A7742_PD_CA7_CPU0, R8A7742_PD_CA7_SCU, + PD_CPU_NOCR }, + { "ca7-cpu1", 0x1c0, 1, R8A7742_PD_CA7_CPU1, R8A7742_PD_CA7_SCU, + PD_CPU_NOCR }, + { "ca7-cpu2", 0x1c0, 2, R8A7742_PD_CA7_CPU2, R8A7742_PD_CA7_SCU, + PD_CPU_NOCR }, + { "ca7-cpu3", 0x1c0, 3, R8A7742_PD_CA7_CPU3, R8A7742_PD_CA7_SCU, + PD_CPU_NOCR }, + { "rgx", 0xc0, 0, R8A7742_PD_RGX, R8A7742_PD_ALWAYS_ON }, +}; + +const struct rcar_sysc_info r8a7742_sysc_info __initconst = { + .areas = r8a7742_areas, + .num_areas = ARRAY_SIZE(r8a7742_areas), +}; diff --git a/drivers/soc/renesas/rcar-rst.c b/drivers/soc/renesas/rcar-rst.c index 2af2e0dd83fe..a2b2b1768768 100644 --- a/drivers/soc/renesas/rcar-rst.c +++ b/drivers/soc/renesas/rcar-rst.c @@ -39,6 +39,7 @@ static const struct rst_config rcar_rst_gen3 __initconst = { static const struct of_device_id rcar_rst_matches[] __initconst = { /* RZ/G1 is handled like R-Car Gen2 */ + { .compatible = "renesas,r8a7742-rst", .data = &rcar_rst_gen2 }, { .compatible = "renesas,r8a7743-rst", .data = &rcar_rst_gen2 }, { .compatible = "renesas,r8a7744-rst", .data = &rcar_rst_gen2 }, { .compatible = "renesas,r8a7745-rst", .data = &rcar_rst_gen2 }, diff --git a/drivers/soc/renesas/rcar-sysc.c b/drivers/soc/renesas/rcar-sysc.c index f0b291e02b8a..04ea87a188f1 100644 --- a/drivers/soc/renesas/rcar-sysc.c +++ b/drivers/soc/renesas/rcar-sysc.c @@ -273,6 +273,9 @@ finalize: } static const struct of_device_id rcar_sysc_matches[] __initconst = { +#ifdef CONFIG_SYSC_R8A7742 + { .compatible = "renesas,r8a7742-sysc", .data = &r8a7742_sysc_info }, +#endif #ifdef CONFIG_SYSC_R8A7743 { .compatible = "renesas,r8a7743-sysc", .data = &r8a7743_sysc_info }, /* RZ/G1N is identical to RZ/G2M w.r.t. power domains. 
*/ diff --git a/drivers/soc/renesas/rcar-sysc.h b/drivers/soc/renesas/rcar-sysc.h index 0fc3b119930a..e417f26fe155 100644 --- a/drivers/soc/renesas/rcar-sysc.h +++ b/drivers/soc/renesas/rcar-sysc.h @@ -49,6 +49,7 @@ struct rcar_sysc_info { u32 extmask_val; /* SYSCEXTMASK register mask value */ }; +extern const struct rcar_sysc_info r8a7742_sysc_info; extern const struct rcar_sysc_info r8a7743_sysc_info; extern const struct rcar_sysc_info r8a7745_sysc_info; extern const struct rcar_sysc_info r8a77470_sysc_info; diff --git a/drivers/soc/tegra/Kconfig b/drivers/soc/tegra/Kconfig index 3693532949b8..6bc603d0b9d9 100644 --- a/drivers/soc/tegra/Kconfig +++ b/drivers/soc/tegra/Kconfig @@ -133,6 +133,7 @@ config SOC_TEGRA_FLOWCTRL config SOC_TEGRA_PMC bool + select GENERIC_PINCONF config SOC_TEGRA_POWERGATE_BPMP def_bool y diff --git a/drivers/soc/tegra/fuse/fuse-tegra.c b/drivers/soc/tegra/fuse/fuse-tegra.c index 802717b9f6a3..d1f8dd0289e6 100644 --- a/drivers/soc/tegra/fuse/fuse-tegra.c +++ b/drivers/soc/tegra/fuse/fuse-tegra.c @@ -300,6 +300,59 @@ static void tegra_enable_fuse_clk(void __iomem *base) writel(reg, base + 0x14); } +static ssize_t major_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + return sprintf(buf, "%d\n", tegra_get_major_rev()); +} + +static DEVICE_ATTR_RO(major); + +static ssize_t minor_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + return sprintf(buf, "%d\n", tegra_get_minor_rev()); +} + +static DEVICE_ATTR_RO(minor); + +static struct attribute *tegra_soc_attr[] = { + &dev_attr_major.attr, + &dev_attr_minor.attr, + NULL, +}; + +const struct attribute_group tegra_soc_attr_group = { + .attrs = tegra_soc_attr, +}; + +#ifdef CONFIG_ARCH_TEGRA_194_SOC +static ssize_t platform_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + /* + * Displays the value in the 'pre_si_platform' field of the HIDREV + * register for Tegra194 devices. A value of 0 indicates that the + * platform type is silicon and all other non-zero values indicate + * the type of simulation platform is being used. 
+ */ + return sprintf(buf, "%d\n", (tegra_read_chipid() >> 20) & 0xf); +} + +static DEVICE_ATTR_RO(platform); + +static struct attribute *tegra194_soc_attr[] = { + &dev_attr_major.attr, + &dev_attr_minor.attr, + &dev_attr_platform.attr, + NULL, +}; + +const struct attribute_group tegra194_soc_attr_group = { + .attrs = tegra194_soc_attr, +}; +#endif + struct device * __init tegra_soc_device_register(void) { struct soc_device_attribute *attr; @@ -310,8 +363,10 @@ struct device * __init tegra_soc_device_register(void) return NULL; attr->family = kasprintf(GFP_KERNEL, "Tegra"); - attr->revision = kasprintf(GFP_KERNEL, "%d", tegra_sku_info.revision); + attr->revision = kasprintf(GFP_KERNEL, "%s", + tegra_revision_name[tegra_sku_info.revision]); attr->soc_id = kasprintf(GFP_KERNEL, "%u", tegra_get_chip_id()); + attr->custom_attr_group = fuse->soc->soc_attr_group; dev = soc_device_register(attr); if (IS_ERR(dev)) { diff --git a/drivers/soc/tegra/fuse/fuse-tegra20.c b/drivers/soc/tegra/fuse/fuse-tegra20.c index d4aef9c4a94c..16aaa28573ac 100644 --- a/drivers/soc/tegra/fuse/fuse-tegra20.c +++ b/drivers/soc/tegra/fuse/fuse-tegra20.c @@ -164,4 +164,5 @@ const struct tegra_fuse_soc tegra20_fuse_soc = { .speedo_init = tegra20_init_speedo_data, .probe = tegra20_fuse_probe, .info = &tegra20_fuse_info, + .soc_attr_group = &tegra_soc_attr_group, }; diff --git a/drivers/soc/tegra/fuse/fuse-tegra30.c b/drivers/soc/tegra/fuse/fuse-tegra30.c index e6037f900fb7..85accef41fa1 100644 --- a/drivers/soc/tegra/fuse/fuse-tegra30.c +++ b/drivers/soc/tegra/fuse/fuse-tegra30.c @@ -111,6 +111,7 @@ const struct tegra_fuse_soc tegra30_fuse_soc = { .init = tegra30_fuse_init, .speedo_init = tegra30_init_speedo_data, .info = &tegra30_fuse_info, + .soc_attr_group = &tegra_soc_attr_group, }; #endif @@ -125,6 +126,7 @@ const struct tegra_fuse_soc tegra114_fuse_soc = { .init = tegra30_fuse_init, .speedo_init = tegra114_init_speedo_data, .info = &tegra114_fuse_info, + .soc_attr_group = &tegra_soc_attr_group, }; #endif @@ -205,6 +207,7 @@ const struct tegra_fuse_soc tegra124_fuse_soc = { .info = &tegra124_fuse_info, .lookups = tegra124_fuse_lookups, .num_lookups = ARRAY_SIZE(tegra124_fuse_lookups), + .soc_attr_group = &tegra_soc_attr_group, }; #endif @@ -290,6 +293,7 @@ const struct tegra_fuse_soc tegra210_fuse_soc = { .info = &tegra210_fuse_info, .lookups = tegra210_fuse_lookups, .num_lookups = ARRAY_SIZE(tegra210_fuse_lookups), + .soc_attr_group = &tegra_soc_attr_group, }; #endif @@ -319,6 +323,7 @@ const struct tegra_fuse_soc tegra186_fuse_soc = { .info = &tegra186_fuse_info, .lookups = tegra186_fuse_lookups, .num_lookups = ARRAY_SIZE(tegra186_fuse_lookups), + .soc_attr_group = &tegra_soc_attr_group, }; #endif @@ -348,5 +353,6 @@ const struct tegra_fuse_soc tegra194_fuse_soc = { .info = &tegra194_fuse_info, .lookups = tegra194_fuse_lookups, .num_lookups = ARRAY_SIZE(tegra194_fuse_lookups), + .soc_attr_group = &tegra194_soc_attr_group, }; #endif diff --git a/drivers/soc/tegra/fuse/fuse.h b/drivers/soc/tegra/fuse/fuse.h index 94a059e577a1..9d4fc315a007 100644 --- a/drivers/soc/tegra/fuse/fuse.h +++ b/drivers/soc/tegra/fuse/fuse.h @@ -32,6 +32,8 @@ struct tegra_fuse_soc { const struct nvmem_cell_lookup *lookups; unsigned int num_lookups; + + const struct attribute_group *soc_attr_group; }; struct tegra_fuse { @@ -64,6 +66,11 @@ void tegra_init_apbmisc(void); bool __init tegra_fuse_read_spare(unsigned int spare); u32 __init tegra_fuse_read_early(unsigned int offset); +u8 tegra_get_major_rev(void); +u8 tegra_get_minor_rev(void); + 
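/*
 * Editor's aside (not part of the patch): how the Tegra HIDREV word is carved
 * up by the helpers declared above and defined in tegra-apbmisc.c below --
 * chip id in bits 15:8, major revision in bits 7:4, minor revision in bits
 * 19:16 and, on Tegra194, the pre_si_platform field in bits 23:20. The input
 * value in main() is made up.
 */
#include <stdint.h>
#include <stdio.h>

struct hidrev {
	uint8_t chip_id, major, minor, pre_si_platform;
};

static struct hidrev decode_hidrev(uint32_t chipid)
{
	return (struct hidrev){
		.chip_id         = (chipid >> 8) & 0xff,
		.major           = (chipid >> 4) & 0xf,
		.minor           = (chipid >> 16) & 0xf,
		.pre_si_platform = (chipid >> 20) & 0xf,
	};
}

int main(void)
{
	struct hidrev r = decode_hidrev(0x00012110);

	printf("chip=0x%x major=%u minor=%u pre_si=%u\n",
	       (unsigned int)r.chip_id, (unsigned int)r.major,
	       (unsigned int)r.minor, (unsigned int)r.pre_si_platform);
	return 0;
}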
+extern const struct attribute_group tegra_soc_attr_group; + #ifdef CONFIG_ARCH_TEGRA_2x_SOC void tegra20_init_speedo_data(struct tegra_sku_info *sku_info); #endif @@ -110,6 +117,7 @@ extern const struct tegra_fuse_soc tegra186_fuse_soc; #ifdef CONFIG_ARCH_TEGRA_194_SOC extern const struct tegra_fuse_soc tegra194_fuse_soc; +extern const struct attribute_group tegra194_soc_attr_group; #endif #endif diff --git a/drivers/soc/tegra/fuse/tegra-apbmisc.c b/drivers/soc/tegra/fuse/tegra-apbmisc.c index 089d9340564b..3cdd69d1bd4d 100644 --- a/drivers/soc/tegra/fuse/tegra-apbmisc.c +++ b/drivers/soc/tegra/fuse/tegra-apbmisc.c @@ -37,6 +37,16 @@ u8 tegra_get_chip_id(void) return (tegra_read_chipid() >> 8) & 0xff; } +u8 tegra_get_major_rev(void) +{ + return (tegra_read_chipid() >> 4) & 0xf; +} + +u8 tegra_get_minor_rev(void) +{ + return (tegra_read_chipid() >> 16) & 0xf; +} + u32 tegra_read_straps(void) { WARN(!chipid, "Tegra ABP MISC not yet available\n"); @@ -65,36 +75,32 @@ static const struct of_device_id apbmisc_match[] __initconst = { void __init tegra_init_revision(void) { - u32 id, chip_id, minor_rev; - int rev; + u8 chip_id, minor_rev; - id = tegra_read_chipid(); - chip_id = (id >> 8) & 0xff; - minor_rev = (id >> 16) & 0xf; + chip_id = tegra_get_chip_id(); + minor_rev = tegra_get_minor_rev(); switch (minor_rev) { case 1: - rev = TEGRA_REVISION_A01; + tegra_sku_info.revision = TEGRA_REVISION_A01; break; case 2: - rev = TEGRA_REVISION_A02; + tegra_sku_info.revision = TEGRA_REVISION_A02; break; case 3: if (chip_id == TEGRA20 && (tegra_fuse_read_spare(18) || tegra_fuse_read_spare(19))) - rev = TEGRA_REVISION_A03p; + tegra_sku_info.revision = TEGRA_REVISION_A03p; else - rev = TEGRA_REVISION_A03; + tegra_sku_info.revision = TEGRA_REVISION_A03; break; case 4: - rev = TEGRA_REVISION_A04; + tegra_sku_info.revision = TEGRA_REVISION_A04; break; default: - rev = TEGRA_REVISION_UNKNOWN; + tegra_sku_info.revision = TEGRA_REVISION_UNKNOWN; } - tegra_sku_info.revision = rev; - tegra_sku_info.sku_id = tegra_fuse_read_early(FUSE_SKU_INFO); } diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c index 1c533a969f54..42cf37a0556b 100644 --- a/drivers/soc/tegra/pmc.c +++ b/drivers/soc/tegra/pmc.c @@ -3063,6 +3063,7 @@ static const struct pinctrl_pin_desc tegra210_pin_descs[] = { static const struct tegra_wake_event tegra210_wake_events[] = { TEGRA_WAKE_IRQ("rtc", 16, 2), + TEGRA_WAKE_IRQ("pmu", 51, 86), }; static const struct tegra_pmc_soc tegra210_pmc_soc = { @@ -3193,6 +3194,7 @@ static void tegra186_pmc_setup_irq_polarity(struct tegra_pmc *pmc, } static const struct tegra_wake_event tegra186_wake_events[] = { + TEGRA_WAKE_IRQ("pmu", 24, 209), TEGRA_WAKE_GPIO("power", 29, 1, TEGRA186_AON_GPIO(FF, 0)), TEGRA_WAKE_IRQ("rtc", 73, 10), }; @@ -3325,6 +3327,7 @@ static const char * const tegra194_reset_sources[] = { }; static const struct tegra_wake_event tegra194_wake_events[] = { + TEGRA_WAKE_IRQ("pmu", 24, 209), TEGRA_WAKE_GPIO("power", 29, 1, TEGRA194_AON_GPIO(EE, 4)), TEGRA_WAKE_IRQ("rtc", 73, 10), }; diff --git a/drivers/soc/ti/Kconfig b/drivers/soc/ti/Kconfig index 4486e055794c..e192fb788836 100644 --- a/drivers/soc/ti/Kconfig +++ b/drivers/soc/ti/Kconfig @@ -91,6 +91,16 @@ config TI_K3_RINGACC and a consumer. There is one RINGACC module per NAVSS on TI AM65x SoCs If unsure, say N. 
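/*
 * Editor's aside (not part of the patch): the JTAGID decode performed by the
 * k3-socinfo driver added below -- manufacturer in bits 11:1 (0x17 for TI),
 * part number in bits 27:12 and the variant field in bits 31:28, which the
 * driver reports as silicon revision "SR<variant+1>.0". The sample value in
 * main() is constructed to match the AM65X part number from the driver's
 * table.
 */
#include <stdint.h>
#include <stdio.h>

#define JTAGID_MFG_TI 0x17

static int decode_jtagid(uint32_t jtag_id)
{
	unsigned int mfg = (jtag_id >> 1) & 0x7ff;	/* GENMASK(11, 1)  */
	unsigned int partno = (jtag_id >> 12) & 0xffff;	/* GENMASK(27, 12) */
	unsigned int variant = ((jtag_id >> 28) & 0xf) + 1;

	if (mfg != JTAGID_MFG_TI)
		return -1;	/* not a TI device */

	printf("partno=0x%04X rev=SR%u.0\n", partno, variant);
	return 0;
}

int main(void)
{
	return decode_jtagid(0x0bb5a02f);	/* prints "partno=0xBB5A rev=SR1.0" */
}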
+config TI_K3_SOCINFO + bool + depends on ARCH_K3 || COMPILE_TEST + select SOC_BUS + select MFD_SYSCON + help + Include support for the SoC bus socinfo for the TI K3 Multicore SoC + platforms to provide information about the SoC family and + variant to user space. + endif # SOC_TI config TI_SCI_INTA_MSI_DOMAIN diff --git a/drivers/soc/ti/Makefile b/drivers/soc/ti/Makefile index bec827937a5f..1110e5c98685 100644 --- a/drivers/soc/ti/Makefile +++ b/drivers/soc/ti/Makefile @@ -11,3 +11,4 @@ obj-$(CONFIG_WKUP_M3_IPC) += wkup_m3_ipc.o obj-$(CONFIG_TI_SCI_PM_DOMAINS) += ti_sci_pm_domains.o obj-$(CONFIG_TI_SCI_INTA_MSI_DOMAIN) += ti_sci_inta_msi.o obj-$(CONFIG_TI_K3_RINGACC) += k3-ringacc.o +obj-$(CONFIG_TI_K3_SOCINFO) += k3-socinfo.o diff --git a/drivers/soc/ti/k3-socinfo.c b/drivers/soc/ti/k3-socinfo.c new file mode 100644 index 000000000000..af0ba5288e58 --- /dev/null +++ b/drivers/soc/ti/k3-socinfo.c @@ -0,0 +1,152 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * TI K3 SoC info driver + * + * Copyright (C) 2020 Texas Instruments Incorporated - http://www.ti.com + */ + +#include <linux/mfd/syscon.h> +#include <linux/of.h> +#include <linux/of_address.h> +#include <linux/regmap.h> +#include <linux/platform_device.h> +#include <linux/slab.h> +#include <linux/string.h> +#include <linux/sys_soc.h> + +#define CTRLMMR_WKUP_JTAGID_REG 0 +/* + * Bits: + * 31-28 VARIANT Device variant + * 27-12 PARTNO Part number + * 11-1 MFG Indicates TI as manufacturer (0x17) + * 1 Always 1 + */ +#define CTRLMMR_WKUP_JTAGID_VARIANT_SHIFT (28) +#define CTRLMMR_WKUP_JTAGID_VARIANT_MASK GENMASK(31, 28) + +#define CTRLMMR_WKUP_JTAGID_PARTNO_SHIFT (12) +#define CTRLMMR_WKUP_JTAGID_PARTNO_MASK GENMASK(27, 12) + +#define CTRLMMR_WKUP_JTAGID_MFG_SHIFT (1) +#define CTRLMMR_WKUP_JTAGID_MFG_MASK GENMASK(11, 1) + +#define CTRLMMR_WKUP_JTAGID_MFG_TI 0x17 + +static const struct k3_soc_id { + unsigned int id; + const char *family_name; +} k3_soc_ids[] = { + { 0xBB5A, "AM65X" }, + { 0xBB64, "J721E" }, +}; + +static int +k3_chipinfo_partno_to_names(unsigned int partno, + struct soc_device_attribute *soc_dev_attr) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(k3_soc_ids); i++) + if (partno == k3_soc_ids[i].id) { + soc_dev_attr->family = k3_soc_ids[i].family_name; + return 0; + } + + return -EINVAL; +} + +static int k3_chipinfo_probe(struct platform_device *pdev) +{ + struct device_node *node = pdev->dev.of_node; + struct soc_device_attribute *soc_dev_attr; + struct device *dev = &pdev->dev; + struct soc_device *soc_dev; + struct regmap *regmap; + u32 partno_id; + u32 variant; + u32 jtag_id; + u32 mfg; + int ret; + + regmap = device_node_to_regmap(node); + if (IS_ERR(regmap)) + return PTR_ERR(regmap); + + ret = regmap_read(regmap, CTRLMMR_WKUP_JTAGID_REG, &jtag_id); + if (ret < 0) + return ret; + + mfg = (jtag_id & CTRLMMR_WKUP_JTAGID_MFG_MASK) >> + CTRLMMR_WKUP_JTAGID_MFG_SHIFT; + + if (mfg != CTRLMMR_WKUP_JTAGID_MFG_TI) { + dev_err(dev, "Invalid MFG SoC\n"); + return -ENODEV; + } + + variant = (jtag_id & CTRLMMR_WKUP_JTAGID_VARIANT_MASK) >> + CTRLMMR_WKUP_JTAGID_VARIANT_SHIFT; + variant++; + + partno_id = (jtag_id & CTRLMMR_WKUP_JTAGID_PARTNO_MASK) >> + CTRLMMR_WKUP_JTAGID_PARTNO_SHIFT; + + soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); + if (!soc_dev_attr) + return -ENOMEM; + + soc_dev_attr->revision = kasprintf(GFP_KERNEL, "SR%x.0", variant); + if (!soc_dev_attr->revision) { + ret = -ENOMEM; + goto err; + } + + ret = k3_chipinfo_partno_to_names(partno_id, soc_dev_attr); + if (ret) { + dev_err(dev, "Unknown SoC 
JTAGID[0x%08X]\n", jtag_id); + ret = -ENODEV; + goto err_free_rev; + } + + node = of_find_node_by_path("/"); + of_property_read_string(node, "model", &soc_dev_attr->machine); + of_node_put(node); + + soc_dev = soc_device_register(soc_dev_attr); + if (IS_ERR(soc_dev)) { + ret = PTR_ERR(soc_dev); + goto err_free_rev; + } + + dev_info(dev, "Family:%s rev:%s JTAGID[0x%08x] Detected\n", + soc_dev_attr->family, + soc_dev_attr->revision, jtag_id); + + return 0; + +err_free_rev: + kfree(soc_dev_attr->revision); +err: + kfree(soc_dev_attr); + return ret; +} + +static const struct of_device_id k3_chipinfo_of_match[] = { + { .compatible = "ti,am654-chipid", }, + { /* sentinel */ }, +}; + +static struct platform_driver k3_chipinfo_driver = { + .driver = { + .name = "k3-chipinfo", + .of_match_table = k3_chipinfo_of_match, + }, + .probe = k3_chipinfo_probe, +}; + +static int __init k3_chipinfo_init(void) +{ + return platform_driver_register(&k3_chipinfo_driver); +} +subsys_initcall(k3_chipinfo_init); diff --git a/drivers/soc/ti/knav_qmss_queue.c b/drivers/soc/ti/knav_qmss_queue.c index 37f3db6c041c..aa071d96ef36 100644 --- a/drivers/soc/ti/knav_qmss_queue.c +++ b/drivers/soc/ti/knav_qmss_queue.c @@ -409,7 +409,7 @@ static int knav_gp_close_queue(struct knav_range_info *range, return 0; } -struct knav_range_ops knav_gp_range_ops = { +static struct knav_range_ops knav_gp_range_ops = { .set_notify = knav_gp_set_notify, .open_queue = knav_gp_open_queue, .close_queue = knav_gp_close_queue, diff --git a/drivers/staging/media/Kconfig b/drivers/staging/media/Kconfig index 053f485eb994..4bb1eca6f597 100644 --- a/drivers/staging/media/Kconfig +++ b/drivers/staging/media/Kconfig @@ -38,6 +38,8 @@ source "drivers/staging/media/sunxi/Kconfig" source "drivers/staging/media/tegra-vde/Kconfig" +source "drivers/staging/media/tegra-video/Kconfig" + source "drivers/staging/media/ipu3/Kconfig" source "drivers/staging/media/soc_camera/Kconfig" diff --git a/drivers/staging/media/Makefile b/drivers/staging/media/Makefile index e01f13a1b4a2..71a47b61836d 100644 --- a/drivers/staging/media/Makefile +++ b/drivers/staging/media/Makefile @@ -6,6 +6,7 @@ obj-$(CONFIG_VIDEO_MESON_VDEC) += meson/vdec/ obj-$(CONFIG_VIDEO_OMAP4) += omap4iss/ obj-$(CONFIG_VIDEO_ROCKCHIP_VDEC) += rkvdec/ obj-$(CONFIG_VIDEO_SUNXI) += sunxi/ +obj-$(CONFIG_VIDEO_TEGRA) += tegra-video/ obj-$(CONFIG_TEGRA_VDE) += tegra-vde/ obj-$(CONFIG_VIDEO_HANTRO) += hantro/ obj-$(CONFIG_VIDEO_IPU3_IMGU) += ipu3/ diff --git a/drivers/staging/media/tegra-video/Kconfig b/drivers/staging/media/tegra-video/Kconfig new file mode 100644 index 000000000000..f6c61ec74386 --- /dev/null +++ b/drivers/staging/media/tegra-video/Kconfig @@ -0,0 +1,12 @@ +# SPDX-License-Identifier: GPL-2.0-only +config VIDEO_TEGRA + tristate "NVIDIA Tegra VI driver" + depends on TEGRA_HOST1X + depends on VIDEO_V4L2 + select MEDIA_CONTROLLER + select VIDEOBUF2_DMA_CONTIG + help + Choose this option if you have an NVIDIA Tegra SoC. + + To compile this driver as a module, choose M here: the module + will be called tegra-video. 
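/*
 * Editor's aside (not part of the patch): the (media bus code, width, height)
 * -> timing lookup that csi_get_frmrate_table_index() in csi.c below performs
 * against the per-SoC TPG framerate table. All numbers here, including the
 * format code, are placeholders rather than real Tegra210 values.
 */
#include <stdint.h>
#include <stdio.h>

#define FAKE_SRGGB10_CODE 0x1001	/* placeholder media bus code */

struct tpg_entry {
	uint32_t code, width, height;
	uint32_t h_blank, v_blank, framerate;
};

static const struct tpg_entry table[] = {
	{ FAKE_SRGGB10_CODE, 1280,  720, 100, 20, 120 },
	{ FAKE_SRGGB10_CODE, 1920, 1080, 100, 20,  60 },
	{ FAKE_SRGGB10_CODE, 3840, 2160,  80, 20,  30 },
};

static int find_tpg_entry(uint32_t code, uint32_t width, uint32_t height)
{
	for (unsigned int i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if (table[i].code == code && table[i].width == width &&
		    table[i].height == height)
			return i;
	return -1;	/* unsupported combination */
}

int main(void)
{
	int i = find_tpg_entry(FAKE_SRGGB10_CODE, 1920, 1080);

	if (i >= 0)
		printf("%u fps\n", (unsigned int)table[i].framerate);
	return 0;
}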
diff --git a/drivers/staging/media/tegra-video/Makefile b/drivers/staging/media/tegra-video/Makefile new file mode 100644 index 000000000000..dfa2ef8f99ef --- /dev/null +++ b/drivers/staging/media/tegra-video/Makefile @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0 +tegra-video-objs := \ + video.o \ + vi.o \ + csi.o + +tegra-video-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210.o +obj-$(CONFIG_VIDEO_TEGRA) += tegra-video.o diff --git a/drivers/staging/media/tegra-video/TODO b/drivers/staging/media/tegra-video/TODO new file mode 100644 index 000000000000..6ceb7549c218 --- /dev/null +++ b/drivers/staging/media/tegra-video/TODO @@ -0,0 +1,11 @@ +TODO list +* Currently driver supports Tegra build-in TPG only with direct media links + from CSI to VI. Add kernel config CONFIG_VIDEO_TEGRA_TPG and update the + driver to do TPG Vs Sensor media links based on CONFIG_VIDEO_TEGRA_TPG. +* Add real camera sensor capture support. +* Add Tegra CSI MIPI pads calibration. +* Add MIPI clock Settle time computation based on the data rate. +* Add support for Ganged mode. +* Add RAW10 packed video format support to Tegra210 video formats. +* Add support for suspend and resume. +* Make sure v4l2-compliance tests pass with all of the above implementations. diff --git a/drivers/staging/media/tegra-video/csi.c b/drivers/staging/media/tegra-video/csi.c new file mode 100644 index 000000000000..40ea195d141d --- /dev/null +++ b/drivers/staging/media/tegra-video/csi.c @@ -0,0 +1,539 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved. + */ + +#include <linux/clk.h> +#include <linux/clk/tegra.h> +#include <linux/device.h> +#include <linux/host1x.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/of_device.h> +#include <linux/platform_device.h> +#include <linux/pm_runtime.h> + +#include "csi.h" +#include "video.h" + +static inline struct tegra_csi * +host1x_client_to_csi(struct host1x_client *client) +{ + return container_of(client, struct tegra_csi, client); +} + +static inline struct tegra_csi_channel *to_csi_chan(struct v4l2_subdev *subdev) +{ + return container_of(subdev, struct tegra_csi_channel, subdev); +} + +/* + * CSI is a separate subdevice which has 6 source pads to generate + * test pattern. CSI subdevice pad ops are used only for TPG and + * allows below TPG formats. 
+ */ +static const struct v4l2_mbus_framefmt tegra_csi_tpg_fmts[] = { + { + TEGRA_DEF_WIDTH, + TEGRA_DEF_HEIGHT, + MEDIA_BUS_FMT_SRGGB10_1X10, + V4L2_FIELD_NONE, + V4L2_COLORSPACE_SRGB + }, + { + TEGRA_DEF_WIDTH, + TEGRA_DEF_HEIGHT, + MEDIA_BUS_FMT_RGB888_1X32_PADHI, + V4L2_FIELD_NONE, + V4L2_COLORSPACE_SRGB + }, +}; + +static const struct v4l2_frmsize_discrete tegra_csi_tpg_sizes[] = { + { 1280, 720 }, + { 1920, 1080 }, + { 3840, 2160 }, +}; + +/* + * V4L2 Subdevice Pad Operations + */ +static int csi_enum_bus_code(struct v4l2_subdev *subdev, + struct v4l2_subdev_pad_config *cfg, + struct v4l2_subdev_mbus_code_enum *code) +{ + if (code->index >= ARRAY_SIZE(tegra_csi_tpg_fmts)) + return -EINVAL; + + code->code = tegra_csi_tpg_fmts[code->index].code; + + return 0; +} + +static int csi_get_format(struct v4l2_subdev *subdev, + struct v4l2_subdev_pad_config *cfg, + struct v4l2_subdev_format *fmt) +{ + struct tegra_csi_channel *csi_chan = to_csi_chan(subdev); + + fmt->format = csi_chan->format; + + return 0; +} + +static int csi_get_frmrate_table_index(struct tegra_csi *csi, u32 code, + u32 width, u32 height) +{ + const struct tpg_framerate *frmrate; + unsigned int i; + + frmrate = csi->soc->tpg_frmrate_table; + for (i = 0; i < csi->soc->tpg_frmrate_table_size; i++) { + if (frmrate[i].code == code && + frmrate[i].frmsize.width == width && + frmrate[i].frmsize.height == height) { + return i; + } + } + + return -EINVAL; +} + +static void csi_chan_update_blank_intervals(struct tegra_csi_channel *csi_chan, + u32 code, u32 width, u32 height) +{ + struct tegra_csi *csi = csi_chan->csi; + const struct tpg_framerate *frmrate = csi->soc->tpg_frmrate_table; + int index; + + index = csi_get_frmrate_table_index(csi_chan->csi, code, + width, height); + if (index >= 0) { + csi_chan->h_blank = frmrate[index].h_blank; + csi_chan->v_blank = frmrate[index].v_blank; + csi_chan->framerate = frmrate[index].framerate; + } +} + +static int csi_enum_framesizes(struct v4l2_subdev *subdev, + struct v4l2_subdev_pad_config *cfg, + struct v4l2_subdev_frame_size_enum *fse) +{ + unsigned int i; + + if (fse->index >= ARRAY_SIZE(tegra_csi_tpg_sizes)) + return -EINVAL; + + for (i = 0; i < ARRAY_SIZE(tegra_csi_tpg_fmts); i++) + if (fse->code == tegra_csi_tpg_fmts[i].code) + break; + + if (i == ARRAY_SIZE(tegra_csi_tpg_fmts)) + return -EINVAL; + + fse->min_width = tegra_csi_tpg_sizes[fse->index].width; + fse->max_width = tegra_csi_tpg_sizes[fse->index].width; + fse->min_height = tegra_csi_tpg_sizes[fse->index].height; + fse->max_height = tegra_csi_tpg_sizes[fse->index].height; + + return 0; +} + +static int csi_enum_frameintervals(struct v4l2_subdev *subdev, + struct v4l2_subdev_pad_config *cfg, + struct v4l2_subdev_frame_interval_enum *fie) +{ + struct tegra_csi_channel *csi_chan = to_csi_chan(subdev); + struct tegra_csi *csi = csi_chan->csi; + const struct tpg_framerate *frmrate = csi->soc->tpg_frmrate_table; + int index; + + /* one framerate per format and resolution */ + if (fie->index > 0) + return -EINVAL; + + index = csi_get_frmrate_table_index(csi_chan->csi, fie->code, + fie->width, fie->height); + if (index < 0) + return -EINVAL; + + fie->interval.numerator = 1; + fie->interval.denominator = frmrate[index].framerate; + + return 0; +} + +static int csi_set_format(struct v4l2_subdev *subdev, + struct v4l2_subdev_pad_config *cfg, + struct v4l2_subdev_format *fmt) +{ + struct tegra_csi_channel *csi_chan = to_csi_chan(subdev); + struct v4l2_mbus_framefmt *format = &fmt->format; + const struct v4l2_frmsize_discrete *sizes; + 
unsigned int i; + + sizes = v4l2_find_nearest_size(tegra_csi_tpg_sizes, + ARRAY_SIZE(tegra_csi_tpg_sizes), + width, height, + format->width, format->width); + format->width = sizes->width; + format->height = sizes->height; + + for (i = 0; i < ARRAY_SIZE(tegra_csi_tpg_fmts); i++) + if (format->code == tegra_csi_tpg_fmts[i].code) + break; + + if (i == ARRAY_SIZE(tegra_csi_tpg_fmts)) + i = 0; + + format->code = tegra_csi_tpg_fmts[i].code; + format->field = V4L2_FIELD_NONE; + + if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) + return 0; + + /* update blanking intervals from frame rate table and format */ + csi_chan_update_blank_intervals(csi_chan, format->code, + format->width, format->height); + csi_chan->format = *format; + + return 0; +} + +/* + * V4L2 Subdevice Video Operations + */ +static int tegra_csi_g_frame_interval(struct v4l2_subdev *subdev, + struct v4l2_subdev_frame_interval *vfi) +{ + struct tegra_csi_channel *csi_chan = to_csi_chan(subdev); + + vfi->interval.numerator = 1; + vfi->interval.denominator = csi_chan->framerate; + + return 0; +} + +static int tegra_csi_s_stream(struct v4l2_subdev *subdev, int enable) +{ + struct tegra_vi_channel *chan = v4l2_get_subdev_hostdata(subdev); + struct tegra_csi_channel *csi_chan = to_csi_chan(subdev); + struct tegra_csi *csi = csi_chan->csi; + int ret = 0; + + csi_chan->pg_mode = chan->pg_mode; + if (enable) { + ret = pm_runtime_get_sync(csi->dev); + if (ret < 0) { + dev_err(csi->dev, + "failed to get runtime PM: %d\n", ret); + pm_runtime_put_noidle(csi->dev); + return ret; + } + + ret = csi->ops->csi_start_streaming(csi_chan); + if (ret < 0) + goto rpm_put; + + return 0; + } + + csi->ops->csi_stop_streaming(csi_chan); + +rpm_put: + pm_runtime_put(csi->dev); + return ret; +} + +/* + * V4L2 Subdevice Operations + */ +static const struct v4l2_subdev_video_ops tegra_csi_video_ops = { + .s_stream = tegra_csi_s_stream, + .g_frame_interval = tegra_csi_g_frame_interval, + .s_frame_interval = tegra_csi_g_frame_interval, +}; + +static const struct v4l2_subdev_pad_ops tegra_csi_pad_ops = { + .enum_mbus_code = csi_enum_bus_code, + .enum_frame_size = csi_enum_framesizes, + .enum_frame_interval = csi_enum_frameintervals, + .get_fmt = csi_get_format, + .set_fmt = csi_set_format, +}; + +static const struct v4l2_subdev_ops tegra_csi_ops = { + .video = &tegra_csi_video_ops, + .pad = &tegra_csi_pad_ops, +}; + +static int tegra_csi_tpg_channels_alloc(struct tegra_csi *csi) +{ + struct device_node *node = csi->dev->of_node; + unsigned int port_num; + struct tegra_csi_channel *chan; + unsigned int tpg_channels = csi->soc->csi_max_channels; + + /* allocate CSI channel for each CSI x2 ports */ + for (port_num = 0; port_num < tpg_channels; port_num++) { + chan = kzalloc(sizeof(*chan), GFP_KERNEL); + if (!chan) + return -ENOMEM; + + list_add_tail(&chan->list, &csi->csi_chans); + chan->csi = csi; + chan->csi_port_num = port_num; + chan->numlanes = 2; + chan->of_node = node; + chan->numpads = 1; + chan->pads[0].flags = MEDIA_PAD_FL_SOURCE; + } + + return 0; +} + +static int tegra_csi_channel_init(struct tegra_csi_channel *chan) +{ + struct tegra_csi *csi = chan->csi; + struct v4l2_subdev *subdev; + int ret; + + /* initialize the default format */ + chan->format.code = MEDIA_BUS_FMT_SRGGB10_1X10; + chan->format.field = V4L2_FIELD_NONE; + chan->format.colorspace = V4L2_COLORSPACE_SRGB; + chan->format.width = TEGRA_DEF_WIDTH; + chan->format.height = TEGRA_DEF_HEIGHT; + csi_chan_update_blank_intervals(chan, chan->format.code, + chan->format.width, + chan->format.height); + /* 
initialize V4L2 subdevice and media entity */ + subdev = &chan->subdev; + v4l2_subdev_init(subdev, &tegra_csi_ops); + subdev->dev = csi->dev; + snprintf(subdev->name, V4L2_SUBDEV_NAME_SIZE, "%s-%d", "tpg", + chan->csi_port_num); + + v4l2_set_subdevdata(subdev, chan); + subdev->fwnode = of_fwnode_handle(chan->of_node); + subdev->entity.function = MEDIA_ENT_F_VID_IF_BRIDGE; + + /* initialize media entity pads */ + ret = media_entity_pads_init(&subdev->entity, chan->numpads, + chan->pads); + if (ret < 0) { + dev_err(csi->dev, + "failed to initialize media entity: %d\n", ret); + subdev->dev = NULL; + return ret; + } + + return 0; +} + +void tegra_csi_error_recover(struct v4l2_subdev *sd) +{ + struct tegra_csi_channel *csi_chan = to_csi_chan(sd); + struct tegra_csi *csi = csi_chan->csi; + + /* stop streaming during error recovery */ + csi->ops->csi_stop_streaming(csi_chan); + csi->ops->csi_err_recover(csi_chan); + csi->ops->csi_start_streaming(csi_chan); +} + +static int tegra_csi_channels_init(struct tegra_csi *csi) +{ + struct tegra_csi_channel *chan; + int ret; + + list_for_each_entry(chan, &csi->csi_chans, list) { + ret = tegra_csi_channel_init(chan); + if (ret) { + dev_err(csi->dev, + "failed to initialize channel-%d: %d\n", + chan->csi_port_num, ret); + return ret; + } + } + + return 0; +} + +static void tegra_csi_channels_cleanup(struct tegra_csi *csi) +{ + struct v4l2_subdev *subdev; + struct tegra_csi_channel *chan, *tmp; + + list_for_each_entry_safe(chan, tmp, &csi->csi_chans, list) { + subdev = &chan->subdev; + if (subdev->dev) + media_entity_cleanup(&subdev->entity); + list_del(&chan->list); + kfree(chan); + } +} + +static int __maybe_unused csi_runtime_suspend(struct device *dev) +{ + struct tegra_csi *csi = dev_get_drvdata(dev); + + clk_bulk_disable_unprepare(csi->soc->num_clks, csi->clks); + + return 0; +} + +static int __maybe_unused csi_runtime_resume(struct device *dev) +{ + struct tegra_csi *csi = dev_get_drvdata(dev); + int ret; + + ret = clk_bulk_prepare_enable(csi->soc->num_clks, csi->clks); + if (ret < 0) { + dev_err(csi->dev, "failed to enable clocks: %d\n", ret); + return ret; + } + + return 0; +} + +static int tegra_csi_init(struct host1x_client *client) +{ + struct tegra_csi *csi = host1x_client_to_csi(client); + struct tegra_video_device *vid = dev_get_drvdata(client->host); + int ret; + + INIT_LIST_HEAD(&csi->csi_chans); + + ret = tegra_csi_tpg_channels_alloc(csi); + if (ret < 0) { + dev_err(csi->dev, + "failed to allocate tpg channels: %d\n", ret); + goto cleanup; + } + + ret = tegra_csi_channels_init(csi); + if (ret < 0) + goto cleanup; + + vid->csi = csi; + + return 0; + +cleanup: + tegra_csi_channels_cleanup(csi); + return ret; +} + +static int tegra_csi_exit(struct host1x_client *client) +{ + struct tegra_csi *csi = host1x_client_to_csi(client); + + tegra_csi_channels_cleanup(csi); + + return 0; +} + +static const struct host1x_client_ops csi_client_ops = { + .init = tegra_csi_init, + .exit = tegra_csi_exit, +}; + +static int tegra_csi_probe(struct platform_device *pdev) +{ + struct tegra_csi *csi; + unsigned int i; + int ret; + + csi = devm_kzalloc(&pdev->dev, sizeof(*csi), GFP_KERNEL); + if (!csi) + return -ENOMEM; + + csi->iomem = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(csi->iomem)) + return PTR_ERR(csi->iomem); + + csi->soc = of_device_get_match_data(&pdev->dev); + + csi->clks = devm_kcalloc(&pdev->dev, csi->soc->num_clks, + sizeof(*csi->clks), GFP_KERNEL); + if (!csi->clks) + return -ENOMEM; + + for (i = 0; i < csi->soc->num_clks; i++) + 
csi->clks[i].id = csi->soc->clk_names[i]; + + ret = devm_clk_bulk_get(&pdev->dev, csi->soc->num_clks, csi->clks); + if (ret) { + dev_err(&pdev->dev, "failed to get the clocks: %d\n", ret); + return ret; + } + + if (!pdev->dev.pm_domain) { + ret = -ENOENT; + dev_warn(&pdev->dev, "PM domain is not attached: %d\n", ret); + return ret; + } + + csi->dev = &pdev->dev; + csi->ops = csi->soc->ops; + platform_set_drvdata(pdev, csi); + pm_runtime_enable(&pdev->dev); + + /* initialize host1x interface */ + INIT_LIST_HEAD(&csi->client.list); + csi->client.ops = &csi_client_ops; + csi->client.dev = &pdev->dev; + + ret = host1x_client_register(&csi->client); + if (ret < 0) { + dev_err(&pdev->dev, + "failed to register host1x client: %d\n", ret); + goto rpm_disable; + } + + return 0; + +rpm_disable: + pm_runtime_disable(&pdev->dev); + return ret; +} + +static int tegra_csi_remove(struct platform_device *pdev) +{ + struct tegra_csi *csi = platform_get_drvdata(pdev); + int err; + + err = host1x_client_unregister(&csi->client); + if (err < 0) { + dev_err(&pdev->dev, + "failed to unregister host1x client: %d\n", err); + return err; + } + + pm_runtime_disable(&pdev->dev); + + return 0; +} + +static const struct of_device_id tegra_csi_of_id_table[] = { +#if defined(CONFIG_ARCH_TEGRA_210_SOC) + { .compatible = "nvidia,tegra210-csi", .data = &tegra210_csi_soc }, +#endif + { } +}; +MODULE_DEVICE_TABLE(of, tegra_csi_of_id_table); + +static const struct dev_pm_ops tegra_csi_pm_ops = { + SET_RUNTIME_PM_OPS(csi_runtime_suspend, csi_runtime_resume, NULL) +}; + +struct platform_driver tegra_csi_driver = { + .driver = { + .name = "tegra-csi", + .of_match_table = tegra_csi_of_id_table, + .pm = &tegra_csi_pm_ops, + }, + .probe = tegra_csi_probe, + .remove = tegra_csi_remove, +}; diff --git a/drivers/staging/media/tegra-video/csi.h b/drivers/staging/media/tegra-video/csi.h new file mode 100644 index 000000000000..93bd2a05797d --- /dev/null +++ b/drivers/staging/media/tegra-video/csi.h @@ -0,0 +1,147 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved. + */ + +#ifndef __TEGRA_CSI_H__ +#define __TEGRA_CSI_H__ + +#include <media/media-entity.h> +#include <media/v4l2-subdev.h> + +/* + * Each CSI brick supports max of 4 lanes that can be used as either + * one x4 port using both CILA and CILB partitions of a CSI brick or can + * be used as two x2 ports with one x2 from CILA and the other x2 from + * CILB. + */ +#define CSI_PORTS_PER_BRICK 2 + +/* each CSI channel can have one sink and one source pads */ +#define TEGRA_CSI_PADS_NUM 2 + +enum tegra_csi_cil_port { + PORT_A = 0, + PORT_B, +}; + +enum tegra_csi_block { + CSI_CIL_AB = 0, + CSI_CIL_CD, + CSI_CIL_EF, +}; + +struct tegra_csi; + +/** + * struct tegra_csi_channel - Tegra CSI channel + * + * @list: list head for this entry + * @subdev: V4L2 subdevice associated with this channel + * @pads: media pads for the subdevice entity + * @numpads: number of pads. 
+ * @csi: Tegra CSI device structure + * @of_node: csi device tree node + * @numlanes: number of lanes used per port/channel + * @csi_port_num: CSI channel port number + * @pg_mode: test pattern generator mode for channel + * @format: active format of the channel + * @framerate: active framerate for TPG + * @h_blank: horizontal blanking for TPG active format + * @v_blank: vertical blanking for TPG active format + */ +struct tegra_csi_channel { + struct list_head list; + struct v4l2_subdev subdev; + struct media_pad pads[TEGRA_CSI_PADS_NUM]; + unsigned int numpads; + struct tegra_csi *csi; + struct device_node *of_node; + unsigned int numlanes; + u8 csi_port_num; + u8 pg_mode; + struct v4l2_mbus_framefmt format; + unsigned int framerate; + unsigned int h_blank; + unsigned int v_blank; +}; + +/** + * struct tpg_framerate - Tegra CSI TPG framerate configuration + * + * @frmsize: frame resolution + * @code: media bus format code + * @h_blank: horizontal blanking used for TPG + * @v_blank: vertical blanking interval used for TPG + * @framerate: framerate achieved with the corresponding blanking intervals, + * format and resolution. + */ +struct tpg_framerate { + struct v4l2_frmsize_discrete frmsize; + u32 code; + unsigned int h_blank; + unsigned int v_blank; + unsigned int framerate; +}; + +/** + * struct tegra_csi_ops - Tegra CSI operations + * + * @csi_start_streaming: programs csi hardware to enable streaming. + * @csi_stop_streaming: programs csi hardware to disable streaming. + * @csi_err_recover: csi hardware block recovery in case of any capture errors + * due to missing source stream or due to improper csi input from + * the external source. + */ +struct tegra_csi_ops { + int (*csi_start_streaming)(struct tegra_csi_channel *csi_chan); + void (*csi_stop_streaming)(struct tegra_csi_channel *csi_chan); + void (*csi_err_recover)(struct tegra_csi_channel *csi_chan); +}; + +/** + * struct tegra_csi_soc - NVIDIA Tegra CSI SoC structure + * + * @ops: csi hardware operations + * @csi_max_channels: supported max streaming channels + * @clk_names: csi and cil clock names + * @num_clks: total clocks count + * @tpg_frmrate_table: csi tpg frame rate table with blanking intervals + * @tpg_frmrate_table_size: size of frame rate table + */ +struct tegra_csi_soc { + const struct tegra_csi_ops *ops; + unsigned int csi_max_channels; + const char * const *clk_names; + unsigned int num_clks; + const struct tpg_framerate *tpg_frmrate_table; + unsigned int tpg_frmrate_table_size; +}; + +/** + * struct tegra_csi - NVIDIA Tegra CSI device structure + * + * @dev: device struct + * @client: host1x_client struct + * @iomem: register base + * @clks: clock for CSI and CIL + * @soc: pointer to SoC data structure + * @ops: csi operations + * @channels: list head for CSI channels + */ +struct tegra_csi { + struct device *dev; + struct host1x_client client; + void __iomem *iomem; + struct clk_bulk_data *clks; + const struct tegra_csi_soc *soc; + const struct tegra_csi_ops *ops; + struct list_head csi_chans; +}; + +#if defined(CONFIG_ARCH_TEGRA_210_SOC) +extern const struct tegra_csi_soc tegra210_csi_soc; +#endif + +void tegra_csi_error_recover(struct v4l2_subdev *subdev); +#endif diff --git a/drivers/staging/media/tegra-video/tegra210.c b/drivers/staging/media/tegra-video/tegra210.c new file mode 100644 index 000000000000..3baa4e314203 --- /dev/null +++ b/drivers/staging/media/tegra-video/tegra210.c @@ -0,0 +1,978 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 NVIDIA CORPORATION. 
All rights reserved. + */ + +/* + * This source file contains Tegra210 supported video formats, + * VI and CSI SoC specific data, operations and registers accessors. + */ +#include <linux/clk.h> +#include <linux/clk/tegra.h> +#include <linux/delay.h> +#include <linux/host1x.h> +#include <linux/kthread.h> + +#include "csi.h" +#include "vi.h" + +#define TEGRA_VI_SYNCPT_WAIT_TIMEOUT msecs_to_jiffies(200) + +/* Tegra210 VI registers */ +#define TEGRA_VI_CFG_VI_INCR_SYNCPT 0x000 +#define VI_CFG_VI_INCR_SYNCPT_COND(x) (((x) & 0xff) << 8) +#define VI_CSI_PP_FRAME_START(port) (5 + (port) * 4) +#define VI_CSI_MW_ACK_DONE(port) (7 + (port) * 4) +#define TEGRA_VI_CFG_VI_INCR_SYNCPT_CNTRL 0x004 +#define VI_INCR_SYNCPT_NO_STALL BIT(8) +#define TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR 0x008 +#define TEGRA_VI_CFG_CG_CTRL 0x0b8 +#define VI_CG_2ND_LEVEL_EN 0x1 + +/* Tegra210 VI CSI registers */ +#define TEGRA_VI_CSI_SW_RESET 0x000 +#define TEGRA_VI_CSI_SINGLE_SHOT 0x004 +#define SINGLE_SHOT_CAPTURE 0x1 +#define TEGRA_VI_CSI_IMAGE_DEF 0x00c +#define BYPASS_PXL_TRANSFORM_OFFSET 24 +#define IMAGE_DEF_FORMAT_OFFSET 16 +#define IMAGE_DEF_DEST_MEM 0x1 +#define TEGRA_VI_CSI_IMAGE_SIZE 0x018 +#define IMAGE_SIZE_HEIGHT_OFFSET 16 +#define TEGRA_VI_CSI_IMAGE_SIZE_WC 0x01c +#define TEGRA_VI_CSI_IMAGE_DT 0x020 +#define TEGRA_VI_CSI_SURFACE0_OFFSET_MSB 0x024 +#define TEGRA_VI_CSI_SURFACE0_OFFSET_LSB 0x028 +#define TEGRA_VI_CSI_SURFACE1_OFFSET_MSB 0x02c +#define TEGRA_VI_CSI_SURFACE1_OFFSET_LSB 0x030 +#define TEGRA_VI_CSI_SURFACE2_OFFSET_MSB 0x034 +#define TEGRA_VI_CSI_SURFACE2_OFFSET_LSB 0x038 +#define TEGRA_VI_CSI_SURFACE0_STRIDE 0x054 +#define TEGRA_VI_CSI_SURFACE1_STRIDE 0x058 +#define TEGRA_VI_CSI_SURFACE2_STRIDE 0x05c +#define TEGRA_VI_CSI_SURFACE_HEIGHT0 0x060 +#define TEGRA_VI_CSI_ERROR_STATUS 0x084 + +/* Tegra210 CSI Pixel Parser registers: Starts from 0x838, offset 0x0 */ +#define TEGRA_CSI_INPUT_STREAM_CONTROL 0x000 +#define CSI_SKIP_PACKET_THRESHOLD_OFFSET 16 +#define TEGRA_CSI_PIXEL_STREAM_CONTROL0 0x004 +#define CSI_PP_PACKET_HEADER_SENT BIT(4) +#define CSI_PP_DATA_IDENTIFIER_ENABLE BIT(5) +#define CSI_PP_WORD_COUNT_SELECT_HEADER BIT(6) +#define CSI_PP_CRC_CHECK_ENABLE BIT(7) +#define CSI_PP_WC_CHECK BIT(8) +#define CSI_PP_OUTPUT_FORMAT_STORE (0x3 << 16) +#define CSI_PPA_PAD_LINE_NOPAD (0x2 << 24) +#define CSI_PP_HEADER_EC_DISABLE (0x1 << 27) +#define CSI_PPA_PAD_FRAME_NOPAD (0x2 << 28) +#define TEGRA_CSI_PIXEL_STREAM_CONTROL1 0x008 +#define CSI_PP_TOP_FIELD_FRAME_OFFSET 0 +#define CSI_PP_TOP_FIELD_FRAME_MASK_OFFSET 4 +#define TEGRA_CSI_PIXEL_STREAM_GAP 0x00c +#define PP_FRAME_MIN_GAP_OFFSET 16 +#define TEGRA_CSI_PIXEL_STREAM_PP_COMMAND 0x010 +#define CSI_PP_ENABLE 0x1 +#define CSI_PP_DISABLE 0x2 +#define CSI_PP_RST 0x3 +#define CSI_PP_SINGLE_SHOT_ENABLE (0x1 << 2) +#define CSI_PP_START_MARKER_FRAME_MAX_OFFSET 12 +#define TEGRA_CSI_PIXEL_STREAM_EXPECTED_FRAME 0x014 +#define TEGRA_CSI_PIXEL_PARSER_INTERRUPT_MASK 0x018 +#define TEGRA_CSI_PIXEL_PARSER_STATUS 0x01c + +/* Tegra210 CSI PHY registers */ +/* CSI_PHY_CIL_COMMAND_0 offset 0x0d0 from TEGRA_CSI_PIXEL_PARSER_0_BASE */ +#define TEGRA_CSI_PHY_CIL_COMMAND 0x0d0 +#define CSI_A_PHY_CIL_NOP 0x0 +#define CSI_A_PHY_CIL_ENABLE 0x1 +#define CSI_A_PHY_CIL_DISABLE 0x2 +#define CSI_A_PHY_CIL_ENABLE_MASK 0x3 +#define CSI_B_PHY_CIL_NOP (0x0 << 8) +#define CSI_B_PHY_CIL_ENABLE (0x1 << 8) +#define CSI_B_PHY_CIL_DISABLE (0x2 << 8) +#define CSI_B_PHY_CIL_ENABLE_MASK (0x3 << 8) + +#define TEGRA_CSI_CIL_PAD_CONFIG0 0x000 +#define BRICK_CLOCK_A_4X (0x1 << 16) +#define 
BRICK_CLOCK_B_4X (0x2 << 16) +#define TEGRA_CSI_CIL_PAD_CONFIG1 0x004 +#define TEGRA_CSI_CIL_PHY_CONTROL 0x008 +#define TEGRA_CSI_CIL_INTERRUPT_MASK 0x00c +#define TEGRA_CSI_CIL_STATUS 0x010 +#define TEGRA_CSI_CILX_STATUS 0x014 +#define TEGRA_CSI_CIL_SW_SENSOR_RESET 0x020 + +#define TEGRA_CSI_PATTERN_GENERATOR_CTRL 0x000 +#define PG_MODE_OFFSET 2 +#define PG_ENABLE 0x1 +#define PG_DISABLE 0x0 +#define TEGRA_CSI_PG_BLANK 0x004 +#define PG_VBLANK_OFFSET 16 +#define TEGRA_CSI_PG_PHASE 0x008 +#define TEGRA_CSI_PG_RED_FREQ 0x00c +#define PG_RED_VERT_INIT_FREQ_OFFSET 16 +#define PG_RED_HOR_INIT_FREQ_OFFSET 0 +#define TEGRA_CSI_PG_RED_FREQ_RATE 0x010 +#define TEGRA_CSI_PG_GREEN_FREQ 0x014 +#define PG_GREEN_VERT_INIT_FREQ_OFFSET 16 +#define PG_GREEN_HOR_INIT_FREQ_OFFSET 0 +#define TEGRA_CSI_PG_GREEN_FREQ_RATE 0x018 +#define TEGRA_CSI_PG_BLUE_FREQ 0x01c +#define PG_BLUE_VERT_INIT_FREQ_OFFSET 16 +#define PG_BLUE_HOR_INIT_FREQ_OFFSET 0 +#define TEGRA_CSI_PG_BLUE_FREQ_RATE 0x020 +#define TEGRA_CSI_PG_AOHDR 0x024 +#define TEGRA_CSI_CSI_SW_STATUS_RESET 0x214 +#define TEGRA_CSI_CLKEN_OVERRIDE 0x218 + +#define TEGRA210_CSI_PORT_OFFSET 0x34 +#define TEGRA210_CSI_CIL_OFFSET 0x0f4 +#define TEGRA210_CSI_TPG_OFFSET 0x18c + +#define CSI_PP_OFFSET(block) ((block) * 0x800) +#define TEGRA210_VI_CSI_BASE(x) (0x100 + (x) * 0x100) + +/* Tegra210 VI registers accessors */ +static void tegra_vi_write(struct tegra_vi_channel *chan, unsigned int addr, + u32 val) +{ + writel_relaxed(val, chan->vi->iomem + addr); +} + +static u32 tegra_vi_read(struct tegra_vi_channel *chan, unsigned int addr) +{ + return readl_relaxed(chan->vi->iomem + addr); +} + +/* Tegra210 VI_CSI registers accessors */ +static void vi_csi_write(struct tegra_vi_channel *chan, unsigned int addr, + u32 val) +{ + void __iomem *vi_csi_base; + + vi_csi_base = chan->vi->iomem + TEGRA210_VI_CSI_BASE(chan->portno); + + writel_relaxed(val, vi_csi_base + addr); +} + +static u32 vi_csi_read(struct tegra_vi_channel *chan, unsigned int addr) +{ + void __iomem *vi_csi_base; + + vi_csi_base = chan->vi->iomem + TEGRA210_VI_CSI_BASE(chan->portno); + + return readl_relaxed(vi_csi_base + addr); +} + +/* + * Tegra210 VI channel capture operations + */ +static int tegra_channel_capture_setup(struct tegra_vi_channel *chan) +{ + u32 height = chan->format.height; + u32 width = chan->format.width; + u32 format = chan->fmtinfo->img_fmt; + u32 data_type = chan->fmtinfo->img_dt; + u32 word_count = (width * chan->fmtinfo->bit_width) / 8; + + vi_csi_write(chan, TEGRA_VI_CSI_ERROR_STATUS, 0xffffffff); + vi_csi_write(chan, TEGRA_VI_CSI_IMAGE_DEF, + ((chan->pg_mode ? 0 : 1) << BYPASS_PXL_TRANSFORM_OFFSET) | + (format << IMAGE_DEF_FORMAT_OFFSET) | + IMAGE_DEF_DEST_MEM); + vi_csi_write(chan, TEGRA_VI_CSI_IMAGE_DT, data_type); + vi_csi_write(chan, TEGRA_VI_CSI_IMAGE_SIZE_WC, word_count); + vi_csi_write(chan, TEGRA_VI_CSI_IMAGE_SIZE, + (height << IMAGE_SIZE_HEIGHT_OFFSET) | width); + return 0; +} + +static void tegra_channel_vi_soft_reset(struct tegra_vi_channel *chan) +{ + /* disable clock gating to enable continuous clock */ + tegra_vi_write(chan, TEGRA_VI_CFG_CG_CTRL, 0); + /* + * Soft reset memory client interface, pixel format logic, sensor + * control logic, and a shadow copy logic to bring VI to clean state. 
+ */ + vi_csi_write(chan, TEGRA_VI_CSI_SW_RESET, 0xf); + usleep_range(100, 200); + vi_csi_write(chan, TEGRA_VI_CSI_SW_RESET, 0x0); + + /* enable back VI clock gating */ + tegra_vi_write(chan, TEGRA_VI_CFG_CG_CTRL, VI_CG_2ND_LEVEL_EN); +} + +static void tegra_channel_capture_error_recover(struct tegra_vi_channel *chan) +{ + struct v4l2_subdev *subdev; + u32 val; + + /* + * Recover VI and CSI hardware blocks in case of missing frame start + * events due to source not streaming or noisy csi inputs from the + * external source or many outstanding frame start or MW_ACK_DONE + * events which can cause CSI and VI hardware hang. + * This helps to have a clean capture for next frame. + */ + val = vi_csi_read(chan, TEGRA_VI_CSI_ERROR_STATUS); + dev_dbg(&chan->video.dev, "TEGRA_VI_CSI_ERROR_STATUS 0x%08x\n", val); + vi_csi_write(chan, TEGRA_VI_CSI_ERROR_STATUS, val); + + val = tegra_vi_read(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR); + dev_dbg(&chan->video.dev, + "TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR 0x%08x\n", val); + tegra_vi_write(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR, val); + + /* recover VI by issuing software reset and re-setup for capture */ + tegra_channel_vi_soft_reset(chan); + tegra_channel_capture_setup(chan); + + /* recover CSI block */ + subdev = tegra_channel_get_remote_subdev(chan); + tegra_csi_error_recover(subdev); +} + +static struct tegra_channel_buffer * +dequeue_buf_done(struct tegra_vi_channel *chan) +{ + struct tegra_channel_buffer *buf = NULL; + + spin_lock(&chan->done_lock); + if (list_empty(&chan->done)) { + spin_unlock(&chan->done_lock); + return NULL; + } + + buf = list_first_entry(&chan->done, + struct tegra_channel_buffer, queue); + if (buf) + list_del_init(&buf->queue); + spin_unlock(&chan->done_lock); + + return buf; +} + +static void release_buffer(struct tegra_vi_channel *chan, + struct tegra_channel_buffer *buf, + enum vb2_buffer_state state) +{ + struct vb2_v4l2_buffer *vb = &buf->buf; + + vb->sequence = chan->sequence++; + vb->field = V4L2_FIELD_NONE; + vb->vb2_buf.timestamp = ktime_get_ns(); + vb2_buffer_done(&vb->vb2_buf, state); +} + +static int tegra_channel_capture_frame(struct tegra_vi_channel *chan, + struct tegra_channel_buffer *buf) +{ + u32 thresh, value, frame_start, mw_ack_done; + int bytes_per_line = chan->format.bytesperline; + int err; + + /* program buffer address by using surface 0 */ + vi_csi_write(chan, TEGRA_VI_CSI_SURFACE0_OFFSET_MSB, + (u64)buf->addr >> 32); + vi_csi_write(chan, TEGRA_VI_CSI_SURFACE0_OFFSET_LSB, buf->addr); + vi_csi_write(chan, TEGRA_VI_CSI_SURFACE0_STRIDE, bytes_per_line); + + /* + * Tegra VI block interacts with host1x syncpt for synchronizing + * programmed condition of capture state and hardware operation. + * Frame start and Memory write acknowledge syncpts has their own + * FIFO of depth 2. + * + * Syncpoint trigger conditions set through VI_INCR_SYNCPT register + * are added to HW syncpt FIFO and when the HW triggers, syncpt + * condition is removed from the FIFO and counter at syncpoint index + * will be incremented by the hardware and software can wait for + * counter to reach threshold to synchronize capturing frame with the + * hardware capture events. 
+ */ + + /* increase channel syncpoint threshold for FRAME_START */ + thresh = host1x_syncpt_incr_max(chan->frame_start_sp, 1); + + /* Program FRAME_START trigger condition syncpt request */ + frame_start = VI_CSI_PP_FRAME_START(chan->portno); + value = VI_CFG_VI_INCR_SYNCPT_COND(frame_start) | + host1x_syncpt_id(chan->frame_start_sp); + tegra_vi_write(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT, value); + + /* increase channel syncpoint threshold for MW_ACK_DONE */ + buf->mw_ack_sp_thresh = host1x_syncpt_incr_max(chan->mw_ack_sp, 1); + + /* Program MW_ACK_DONE trigger condition syncpt request */ + mw_ack_done = VI_CSI_MW_ACK_DONE(chan->portno); + value = VI_CFG_VI_INCR_SYNCPT_COND(mw_ack_done) | + host1x_syncpt_id(chan->mw_ack_sp); + tegra_vi_write(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT, value); + + /* enable single shot capture */ + vi_csi_write(chan, TEGRA_VI_CSI_SINGLE_SHOT, SINGLE_SHOT_CAPTURE); + + /* wait for syncpt counter to reach frame start event threshold */ + err = host1x_syncpt_wait(chan->frame_start_sp, thresh, + TEGRA_VI_SYNCPT_WAIT_TIMEOUT, &value); + if (err) { + dev_err_ratelimited(&chan->video.dev, + "frame start syncpt timeout: %d\n", err); + /* increment syncpoint counter for timedout events */ + host1x_syncpt_incr(chan->frame_start_sp); + spin_lock(&chan->sp_incr_lock); + host1x_syncpt_incr(chan->mw_ack_sp); + spin_unlock(&chan->sp_incr_lock); + /* clear errors and recover */ + tegra_channel_capture_error_recover(chan); + release_buffer(chan, buf, VB2_BUF_STATE_ERROR); + return err; + } + + /* move buffer to capture done queue */ + spin_lock(&chan->done_lock); + list_add_tail(&buf->queue, &chan->done); + spin_unlock(&chan->done_lock); + + /* wait up kthread for capture done */ + wake_up_interruptible(&chan->done_wait); + + return 0; +} + +static void tegra_channel_capture_done(struct tegra_vi_channel *chan, + struct tegra_channel_buffer *buf) +{ + enum vb2_buffer_state state = VB2_BUF_STATE_DONE; + u32 value; + int ret; + + /* wait for syncpt counter to reach MW_ACK_DONE event threshold */ + ret = host1x_syncpt_wait(chan->mw_ack_sp, buf->mw_ack_sp_thresh, + TEGRA_VI_SYNCPT_WAIT_TIMEOUT, &value); + if (ret) { + dev_err_ratelimited(&chan->video.dev, + "MW_ACK_DONE syncpt timeout: %d\n", ret); + state = VB2_BUF_STATE_ERROR; + /* increment syncpoint counter for timedout event */ + spin_lock(&chan->sp_incr_lock); + host1x_syncpt_incr(chan->mw_ack_sp); + spin_unlock(&chan->sp_incr_lock); + } + + release_buffer(chan, buf, state); +} + +static int chan_capture_kthread_start(void *data) +{ + struct tegra_vi_channel *chan = data; + struct tegra_channel_buffer *buf; + int err = 0; + + while (1) { + /* + * Source is not streaming if error is non-zero. + * So, do not dequeue buffers on error and let the thread sleep + * till kthread stop signal is received. 
+ */ + wait_event_interruptible(chan->start_wait, + kthread_should_stop() || + (!list_empty(&chan->capture) && + !err)); + + if (kthread_should_stop()) + break; + + /* dequeue the buffer and start capture */ + spin_lock(&chan->start_lock); + if (list_empty(&chan->capture)) { + spin_unlock(&chan->start_lock); + continue; + } + + buf = list_first_entry(&chan->capture, + struct tegra_channel_buffer, queue); + list_del_init(&buf->queue); + spin_unlock(&chan->start_lock); + + err = tegra_channel_capture_frame(chan, buf); + if (err) + vb2_queue_error(&chan->queue); + } + + return 0; +} + +static int chan_capture_kthread_finish(void *data) +{ + struct tegra_vi_channel *chan = data; + struct tegra_channel_buffer *buf; + + while (1) { + wait_event_interruptible(chan->done_wait, + !list_empty(&chan->done) || + kthread_should_stop()); + + /* dequeue buffers and finish capture */ + buf = dequeue_buf_done(chan); + while (buf) { + tegra_channel_capture_done(chan, buf); + buf = dequeue_buf_done(chan); + } + + if (kthread_should_stop()) + break; + } + + return 0; +} + +static int tegra210_vi_start_streaming(struct vb2_queue *vq, u32 count) +{ + struct tegra_vi_channel *chan = vb2_get_drv_priv(vq); + struct media_pipeline *pipe = &chan->video.pipe; + u32 val; + int ret; + + tegra_vi_write(chan, TEGRA_VI_CFG_CG_CTRL, VI_CG_2ND_LEVEL_EN); + + /* clear errors */ + val = vi_csi_read(chan, TEGRA_VI_CSI_ERROR_STATUS); + vi_csi_write(chan, TEGRA_VI_CSI_ERROR_STATUS, val); + + val = tegra_vi_read(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR); + tegra_vi_write(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR, val); + + /* + * Sync point FIFO full stalls the host interface. + * Setting NO_STALL will drop INCR_SYNCPT methods when fifos are + * full and the corresponding condition bits in INCR_SYNCPT_ERROR + * register will be set. + * This allows SW to process error recovery. 
+ */ + tegra_vi_write(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT_CNTRL, + VI_INCR_SYNCPT_NO_STALL); + + /* start the pipeline */ + ret = media_pipeline_start(&chan->video.entity, pipe); + if (ret < 0) + goto error_pipeline_start; + + tegra_channel_capture_setup(chan); + ret = tegra_channel_set_stream(chan, true); + if (ret < 0) + goto error_set_stream; + + chan->sequence = 0; + + /* start kthreads to capture data to buffer and return them */ + chan->kthread_start_capture = kthread_run(chan_capture_kthread_start, + chan, "%s:0", + chan->video.name); + if (IS_ERR(chan->kthread_start_capture)) { + ret = PTR_ERR(chan->kthread_start_capture); + chan->kthread_start_capture = NULL; + dev_err(&chan->video.dev, + "failed to run capture start kthread: %d\n", ret); + goto error_kthread_start; + } + + chan->kthread_finish_capture = kthread_run(chan_capture_kthread_finish, + chan, "%s:1", + chan->video.name); + if (IS_ERR(chan->kthread_finish_capture)) { + ret = PTR_ERR(chan->kthread_finish_capture); + chan->kthread_finish_capture = NULL; + dev_err(&chan->video.dev, + "failed to run capture finish kthread: %d\n", ret); + goto error_kthread_done; + } + + return 0; + +error_kthread_done: + kthread_stop(chan->kthread_start_capture); +error_kthread_start: + tegra_channel_set_stream(chan, false); +error_set_stream: + media_pipeline_stop(&chan->video.entity); +error_pipeline_start: + tegra_channel_release_buffers(chan, VB2_BUF_STATE_QUEUED); + return ret; +} + +static void tegra210_vi_stop_streaming(struct vb2_queue *vq) +{ + struct tegra_vi_channel *chan = vb2_get_drv_priv(vq); + + if (chan->kthread_start_capture) { + kthread_stop(chan->kthread_start_capture); + chan->kthread_start_capture = NULL; + } + + if (chan->kthread_finish_capture) { + kthread_stop(chan->kthread_finish_capture); + chan->kthread_finish_capture = NULL; + } + + tegra_channel_release_buffers(chan, VB2_BUF_STATE_ERROR); + tegra_channel_set_stream(chan, false); + media_pipeline_stop(&chan->video.entity); +} + +/* + * Tegra210 VI Pixel memory format enum. + * These format enum value gets programmed into corresponding Tegra VI + * channel register bits. 
+ */ +enum tegra210_image_format { + TEGRA210_IMAGE_FORMAT_T_L8 = 16, + + TEGRA210_IMAGE_FORMAT_T_R16_I = 32, + TEGRA210_IMAGE_FORMAT_T_B5G6R5, + TEGRA210_IMAGE_FORMAT_T_R5G6B5, + TEGRA210_IMAGE_FORMAT_T_A1B5G5R5, + TEGRA210_IMAGE_FORMAT_T_A1R5G5B5, + TEGRA210_IMAGE_FORMAT_T_B5G5R5A1, + TEGRA210_IMAGE_FORMAT_T_R5G5B5A1, + TEGRA210_IMAGE_FORMAT_T_A4B4G4R4, + TEGRA210_IMAGE_FORMAT_T_A4R4G4B4, + TEGRA210_IMAGE_FORMAT_T_B4G4R4A4, + TEGRA210_IMAGE_FORMAT_T_R4G4B4A4, + + TEGRA210_IMAGE_FORMAT_T_A8B8G8R8 = 64, + TEGRA210_IMAGE_FORMAT_T_A8R8G8B8, + TEGRA210_IMAGE_FORMAT_T_B8G8R8A8, + TEGRA210_IMAGE_FORMAT_T_R8G8B8A8, + TEGRA210_IMAGE_FORMAT_T_A2B10G10R10, + TEGRA210_IMAGE_FORMAT_T_A2R10G10B10, + TEGRA210_IMAGE_FORMAT_T_B10G10R10A2, + TEGRA210_IMAGE_FORMAT_T_R10G10B10A2, + + TEGRA210_IMAGE_FORMAT_T_A8Y8U8V8 = 193, + TEGRA210_IMAGE_FORMAT_T_V8U8Y8A8, + + TEGRA210_IMAGE_FORMAT_T_A2Y10U10V10 = 197, + TEGRA210_IMAGE_FORMAT_T_V10U10Y10A2, + TEGRA210_IMAGE_FORMAT_T_Y8_U8__Y8_V8, + TEGRA210_IMAGE_FORMAT_T_Y8_V8__Y8_U8, + TEGRA210_IMAGE_FORMAT_T_U8_Y8__V8_Y8, + TEGRA210_IMAGE_FORMAT_T_V8_Y8__U8_Y8, + + TEGRA210_IMAGE_FORMAT_T_Y8__U8__V8_N444 = 224, + TEGRA210_IMAGE_FORMAT_T_Y8__U8V8_N444, + TEGRA210_IMAGE_FORMAT_T_Y8__V8U8_N444, + TEGRA210_IMAGE_FORMAT_T_Y8__U8__V8_N422, + TEGRA210_IMAGE_FORMAT_T_Y8__U8V8_N422, + TEGRA210_IMAGE_FORMAT_T_Y8__V8U8_N422, + TEGRA210_IMAGE_FORMAT_T_Y8__U8__V8_N420, + TEGRA210_IMAGE_FORMAT_T_Y8__U8V8_N420, + TEGRA210_IMAGE_FORMAT_T_Y8__V8U8_N420, + TEGRA210_IMAGE_FORMAT_T_X2LC10LB10LA10, + TEGRA210_IMAGE_FORMAT_T_A2R6R6R6R6R6, +}; + +#define TEGRA210_VIDEO_FMT(DATA_TYPE, BIT_WIDTH, MBUS_CODE, BPP, \ + FORMAT, FOURCC) \ +{ \ + TEGRA_IMAGE_DT_##DATA_TYPE, \ + BIT_WIDTH, \ + MEDIA_BUS_FMT_##MBUS_CODE, \ + BPP, \ + TEGRA210_IMAGE_FORMAT_##FORMAT, \ + V4L2_PIX_FMT_##FOURCC, \ +} + +/* Tegra210 supported video formats */ +static const struct tegra_video_format tegra210_video_formats[] = { + /* RAW 8 */ + TEGRA210_VIDEO_FMT(RAW8, 8, SRGGB8_1X8, 1, T_L8, SRGGB8), + TEGRA210_VIDEO_FMT(RAW8, 8, SGRBG8_1X8, 1, T_L8, SGRBG8), + TEGRA210_VIDEO_FMT(RAW8, 8, SGBRG8_1X8, 1, T_L8, SGBRG8), + TEGRA210_VIDEO_FMT(RAW8, 8, SBGGR8_1X8, 1, T_L8, SBGGR8), + /* RAW 10 */ + TEGRA210_VIDEO_FMT(RAW10, 10, SRGGB10_1X10, 2, T_R16_I, SRGGB10), + TEGRA210_VIDEO_FMT(RAW10, 10, SGRBG10_1X10, 2, T_R16_I, SGRBG10), + TEGRA210_VIDEO_FMT(RAW10, 10, SGBRG10_1X10, 2, T_R16_I, SGBRG10), + TEGRA210_VIDEO_FMT(RAW10, 10, SBGGR10_1X10, 2, T_R16_I, SBGGR10), + /* RAW 12 */ + TEGRA210_VIDEO_FMT(RAW12, 12, SRGGB12_1X12, 2, T_R16_I, SRGGB12), + TEGRA210_VIDEO_FMT(RAW12, 12, SGRBG12_1X12, 2, T_R16_I, SGRBG12), + TEGRA210_VIDEO_FMT(RAW12, 12, SGBRG12_1X12, 2, T_R16_I, SGBRG12), + TEGRA210_VIDEO_FMT(RAW12, 12, SBGGR12_1X12, 2, T_R16_I, SBGGR12), + /* RGB888 */ + TEGRA210_VIDEO_FMT(RGB888, 24, RGB888_1X24, 4, T_A8R8G8B8, RGB24), + TEGRA210_VIDEO_FMT(RGB888, 24, RGB888_1X32_PADHI, 4, T_A8B8G8R8, + XBGR32), + /* YUV422 */ + TEGRA210_VIDEO_FMT(YUV422_8, 16, UYVY8_1X16, 2, T_U8_Y8__V8_Y8, UYVY), + TEGRA210_VIDEO_FMT(YUV422_8, 16, VYUY8_1X16, 2, T_V8_Y8__U8_Y8, VYUY), + TEGRA210_VIDEO_FMT(YUV422_8, 16, YUYV8_1X16, 2, T_Y8_U8__Y8_V8, YUYV), + TEGRA210_VIDEO_FMT(YUV422_8, 16, YVYU8_1X16, 2, T_Y8_V8__Y8_U8, YVYU), + TEGRA210_VIDEO_FMT(YUV422_8, 16, UYVY8_1X16, 1, T_Y8__V8U8_N422, NV16), + TEGRA210_VIDEO_FMT(YUV422_8, 16, UYVY8_2X8, 2, T_U8_Y8__V8_Y8, UYVY), + TEGRA210_VIDEO_FMT(YUV422_8, 16, VYUY8_2X8, 2, T_V8_Y8__U8_Y8, VYUY), + TEGRA210_VIDEO_FMT(YUV422_8, 16, YUYV8_2X8, 2, T_Y8_U8__Y8_V8, YUYV), + TEGRA210_VIDEO_FMT(YUV422_8, 16, 
YVYU8_2X8, 2, T_Y8_V8__Y8_U8, YVYU), +}; + +/* Tegra210 VI operations */ +static const struct tegra_vi_ops tegra210_vi_ops = { + .vi_start_streaming = tegra210_vi_start_streaming, + .vi_stop_streaming = tegra210_vi_stop_streaming, +}; + +/* Tegra210 VI SoC data */ +const struct tegra_vi_soc tegra210_vi_soc = { + .video_formats = tegra210_video_formats, + .nformats = ARRAY_SIZE(tegra210_video_formats), + .ops = &tegra210_vi_ops, + .hw_revision = 3, + .vi_max_channels = 6, + .vi_max_clk_hz = 499200000, +}; + +/* Tegra210 CSI PHY registers accessors */ +static void csi_write(struct tegra_csi *csi, u8 portno, unsigned int addr, + u32 val) +{ + void __iomem *csi_pp_base; + + csi_pp_base = csi->iomem + CSI_PP_OFFSET(portno >> 1); + + writel_relaxed(val, csi_pp_base + addr); +} + +/* Tegra210 CSI Pixel parser registers accessors */ +static void pp_write(struct tegra_csi *csi, u8 portno, u32 addr, u32 val) +{ + void __iomem *csi_pp_base; + unsigned int offset; + + csi_pp_base = csi->iomem + CSI_PP_OFFSET(portno >> 1); + offset = (portno % CSI_PORTS_PER_BRICK) * TEGRA210_CSI_PORT_OFFSET; + + writel_relaxed(val, csi_pp_base + offset + addr); +} + +static u32 pp_read(struct tegra_csi *csi, u8 portno, u32 addr) +{ + void __iomem *csi_pp_base; + unsigned int offset; + + csi_pp_base = csi->iomem + CSI_PP_OFFSET(portno >> 1); + offset = (portno % CSI_PORTS_PER_BRICK) * TEGRA210_CSI_PORT_OFFSET; + + return readl_relaxed(csi_pp_base + offset + addr); +} + +/* Tegra210 CSI CIL A/B port registers accessors */ +static void cil_write(struct tegra_csi *csi, u8 portno, u32 addr, u32 val) +{ + void __iomem *csi_cil_base; + unsigned int offset; + + csi_cil_base = csi->iomem + CSI_PP_OFFSET(portno >> 1) + + TEGRA210_CSI_CIL_OFFSET; + offset = (portno % CSI_PORTS_PER_BRICK) * TEGRA210_CSI_PORT_OFFSET; + + writel_relaxed(val, csi_cil_base + offset + addr); +} + +static u32 cil_read(struct tegra_csi *csi, u8 portno, u32 addr) +{ + void __iomem *csi_cil_base; + unsigned int offset; + + csi_cil_base = csi->iomem + CSI_PP_OFFSET(portno >> 1) + + TEGRA210_CSI_CIL_OFFSET; + offset = (portno % CSI_PORTS_PER_BRICK) * TEGRA210_CSI_PORT_OFFSET; + + return readl_relaxed(csi_cil_base + offset + addr); +} + +/* Tegra210 CSI Test pattern generator registers accessor */ +static void tpg_write(struct tegra_csi *csi, u8 portno, unsigned int addr, + u32 val) +{ + void __iomem *csi_pp_base; + unsigned int offset; + + csi_pp_base = csi->iomem + CSI_PP_OFFSET(portno >> 1); + offset = (portno % CSI_PORTS_PER_BRICK) * TEGRA210_CSI_PORT_OFFSET + + TEGRA210_CSI_TPG_OFFSET; + + writel_relaxed(val, csi_pp_base + offset + addr); +} + +/* + * Tegra210 CSI operations + */ +static void tegra210_csi_error_recover(struct tegra_csi_channel *csi_chan) +{ + struct tegra_csi *csi = csi_chan->csi; + unsigned int portno = csi_chan->csi_port_num; + u32 val; + + /* + * Recover CSI hardware in case of capture errors by issuing + * software reset to CSICIL sensor, pixel parser, and clear errors + * to have clean capture on next streaming. 
+ */ + val = pp_read(csi, portno, TEGRA_CSI_PIXEL_PARSER_STATUS); + dev_dbg(csi->dev, "TEGRA_CSI_PIXEL_PARSER_STATUS 0x%08x\n", val); + + val = cil_read(csi, portno, TEGRA_CSI_CIL_STATUS); + dev_dbg(csi->dev, "TEGRA_CSI_CIL_STATUS 0x%08x\n", val); + + val = cil_read(csi, portno, TEGRA_CSI_CILX_STATUS); + dev_dbg(csi->dev, "TEGRA_CSI_CILX_STATUS 0x%08x\n", val); + + if (csi_chan->numlanes == 4) { + /* reset CSI CIL sensor */ + cil_write(csi, portno, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x1); + cil_write(csi, portno + 1, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x1); + /* + * SW_STATUS_RESET resets all status bits of PPA, PPB, CILA, + * CILB status registers and debug counters. + * So, SW_STATUS_RESET can be used only when CSI brick is in + * x4 mode. + */ + csi_write(csi, portno, TEGRA_CSI_CSI_SW_STATUS_RESET, 0x1); + + /* sleep for 20 clock cycles to drain the FIFO */ + usleep_range(10, 20); + + cil_write(csi, portno + 1, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x0); + cil_write(csi, portno, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x0); + csi_write(csi, portno, TEGRA_CSI_CSI_SW_STATUS_RESET, 0x0); + } else { + /* reset CSICIL sensor */ + cil_write(csi, portno, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x1); + usleep_range(10, 20); + cil_write(csi, portno, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x0); + + /* clear the errors */ + pp_write(csi, portno, TEGRA_CSI_PIXEL_PARSER_STATUS, + 0xffffffff); + cil_write(csi, portno, TEGRA_CSI_CIL_STATUS, 0xffffffff); + cil_write(csi, portno, TEGRA_CSI_CILX_STATUS, 0xffffffff); + } +} + +static int tegra210_csi_start_streaming(struct tegra_csi_channel *csi_chan) +{ + struct tegra_csi *csi = csi_chan->csi; + unsigned int portno = csi_chan->csi_port_num; + u32 val; + + csi_write(csi, portno, TEGRA_CSI_CLKEN_OVERRIDE, 0); + + /* clean up status */ + pp_write(csi, portno, TEGRA_CSI_PIXEL_PARSER_STATUS, 0xffffffff); + cil_write(csi, portno, TEGRA_CSI_CIL_STATUS, 0xffffffff); + cil_write(csi, portno, TEGRA_CSI_CILX_STATUS, 0xffffffff); + cil_write(csi, portno, TEGRA_CSI_CIL_INTERRUPT_MASK, 0x0); + + /* CIL PHY registers setup */ + cil_write(csi, portno, TEGRA_CSI_CIL_PAD_CONFIG0, 0x0); + cil_write(csi, portno, TEGRA_CSI_CIL_PHY_CONTROL, 0xa); + + /* + * The CSI unit provides for connection of up to six cameras in + * the system and is organized as three identical instances of + * two MIPI support blocks, each with a separate 4-lane + * interface that can be configured as a single camera with 4 + * lanes or as a dual camera with 2 lanes available for each + * camera. + */ + if (csi_chan->numlanes == 4) { + cil_write(csi, portno + 1, TEGRA_CSI_CIL_STATUS, 0xffffffff); + cil_write(csi, portno + 1, TEGRA_CSI_CILX_STATUS, 0xffffffff); + cil_write(csi, portno + 1, TEGRA_CSI_CIL_INTERRUPT_MASK, 0x0); + + cil_write(csi, portno, TEGRA_CSI_CIL_PAD_CONFIG0, + BRICK_CLOCK_A_4X); + cil_write(csi, portno + 1, TEGRA_CSI_CIL_PAD_CONFIG0, 0x0); + cil_write(csi, portno + 1, TEGRA_CSI_CIL_INTERRUPT_MASK, 0x0); + cil_write(csi, portno + 1, TEGRA_CSI_CIL_PHY_CONTROL, 0xa); + csi_write(csi, portno, TEGRA_CSI_PHY_CIL_COMMAND, + CSI_A_PHY_CIL_ENABLE | CSI_B_PHY_CIL_ENABLE); + } else { + val = ((portno & 1) == PORT_A) ? 
+ CSI_A_PHY_CIL_ENABLE | CSI_B_PHY_CIL_NOP : + CSI_B_PHY_CIL_ENABLE | CSI_A_PHY_CIL_NOP; + csi_write(csi, portno, TEGRA_CSI_PHY_CIL_COMMAND, val); + } + + /* CSI pixel parser registers setup */ + pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_PP_COMMAND, + (0xf << CSI_PP_START_MARKER_FRAME_MAX_OFFSET) | + CSI_PP_SINGLE_SHOT_ENABLE | CSI_PP_RST); + pp_write(csi, portno, TEGRA_CSI_PIXEL_PARSER_INTERRUPT_MASK, 0x0); + pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_CONTROL0, + CSI_PP_PACKET_HEADER_SENT | + CSI_PP_DATA_IDENTIFIER_ENABLE | + CSI_PP_WORD_COUNT_SELECT_HEADER | + CSI_PP_CRC_CHECK_ENABLE | CSI_PP_WC_CHECK | + CSI_PP_OUTPUT_FORMAT_STORE | CSI_PPA_PAD_LINE_NOPAD | + CSI_PP_HEADER_EC_DISABLE | CSI_PPA_PAD_FRAME_NOPAD | + (portno & 1)); + pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_CONTROL1, + (0x1 << CSI_PP_TOP_FIELD_FRAME_OFFSET) | + (0x1 << CSI_PP_TOP_FIELD_FRAME_MASK_OFFSET)); + pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_GAP, + 0x14 << PP_FRAME_MIN_GAP_OFFSET); + pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_EXPECTED_FRAME, 0x0); + pp_write(csi, portno, TEGRA_CSI_INPUT_STREAM_CONTROL, + (0x3f << CSI_SKIP_PACKET_THRESHOLD_OFFSET) | + (csi_chan->numlanes - 1)); + + /* TPG setup */ + if (csi_chan->pg_mode) { + tpg_write(csi, portno, TEGRA_CSI_PATTERN_GENERATOR_CTRL, + ((csi_chan->pg_mode - 1) << PG_MODE_OFFSET) | + PG_ENABLE); + tpg_write(csi, portno, TEGRA_CSI_PG_BLANK, + csi_chan->v_blank << PG_VBLANK_OFFSET | + csi_chan->h_blank); + tpg_write(csi, portno, TEGRA_CSI_PG_PHASE, 0x0); + tpg_write(csi, portno, TEGRA_CSI_PG_RED_FREQ, + (0x10 << PG_RED_VERT_INIT_FREQ_OFFSET) | + (0x10 << PG_RED_HOR_INIT_FREQ_OFFSET)); + tpg_write(csi, portno, TEGRA_CSI_PG_RED_FREQ_RATE, 0x0); + tpg_write(csi, portno, TEGRA_CSI_PG_GREEN_FREQ, + (0x10 << PG_GREEN_VERT_INIT_FREQ_OFFSET) | + (0x10 << PG_GREEN_HOR_INIT_FREQ_OFFSET)); + tpg_write(csi, portno, TEGRA_CSI_PG_GREEN_FREQ_RATE, 0x0); + tpg_write(csi, portno, TEGRA_CSI_PG_BLUE_FREQ, + (0x10 << PG_BLUE_VERT_INIT_FREQ_OFFSET) | + (0x10 << PG_BLUE_HOR_INIT_FREQ_OFFSET)); + tpg_write(csi, portno, TEGRA_CSI_PG_BLUE_FREQ_RATE, 0x0); + } + + pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_PP_COMMAND, + (0xf << CSI_PP_START_MARKER_FRAME_MAX_OFFSET) | + CSI_PP_SINGLE_SHOT_ENABLE | CSI_PP_ENABLE); + + return 0; +} + +static void tegra210_csi_stop_streaming(struct tegra_csi_channel *csi_chan) +{ + struct tegra_csi *csi = csi_chan->csi; + unsigned int portno = csi_chan->csi_port_num; + u32 val; + + val = pp_read(csi, portno, TEGRA_CSI_PIXEL_PARSER_STATUS); + + dev_dbg(csi->dev, "TEGRA_CSI_PIXEL_PARSER_STATUS 0x%08x\n", val); + pp_write(csi, portno, TEGRA_CSI_PIXEL_PARSER_STATUS, val); + + val = cil_read(csi, portno, TEGRA_CSI_CIL_STATUS); + dev_dbg(csi->dev, "TEGRA_CSI_CIL_STATUS 0x%08x\n", val); + cil_write(csi, portno, TEGRA_CSI_CIL_STATUS, val); + + val = cil_read(csi, portno, TEGRA_CSI_CILX_STATUS); + dev_dbg(csi->dev, "TEGRA_CSI_CILX_STATUS 0x%08x\n", val); + cil_write(csi, portno, TEGRA_CSI_CILX_STATUS, val); + + pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_PP_COMMAND, + (0xf << CSI_PP_START_MARKER_FRAME_MAX_OFFSET) | + CSI_PP_DISABLE); + + if (csi_chan->pg_mode) { + tpg_write(csi, portno, TEGRA_CSI_PATTERN_GENERATOR_CTRL, + PG_DISABLE); + return; + } + + if (csi_chan->numlanes == 4) { + csi_write(csi, portno, TEGRA_CSI_PHY_CIL_COMMAND, + CSI_A_PHY_CIL_DISABLE | + CSI_B_PHY_CIL_DISABLE); + } else { + val = ((portno & 1) == PORT_A) ? 
+ CSI_A_PHY_CIL_DISABLE | CSI_B_PHY_CIL_NOP : + CSI_B_PHY_CIL_DISABLE | CSI_A_PHY_CIL_NOP; + csi_write(csi, portno, TEGRA_CSI_PHY_CIL_COMMAND, val); + } +} + +/* + * Tegra210 CSI TPG frame rate table with horizontal and vertical + * blanking intervals for corresponding format and resolution. + * Blanking intervals are tuned values from design team for max TPG + * clock rate. + */ +static const struct tpg_framerate tegra210_tpg_frmrate_table[] = { + { + .frmsize = { 1280, 720 }, + .code = MEDIA_BUS_FMT_SRGGB10_1X10, + .framerate = 120, + .h_blank = 512, + .v_blank = 8, + }, + { + .frmsize = { 1920, 1080 }, + .code = MEDIA_BUS_FMT_SRGGB10_1X10, + .framerate = 60, + .h_blank = 512, + .v_blank = 8, + }, + { + .frmsize = { 3840, 2160 }, + .code = MEDIA_BUS_FMT_SRGGB10_1X10, + .framerate = 20, + .h_blank = 8, + .v_blank = 8, + }, + { + .frmsize = { 1280, 720 }, + .code = MEDIA_BUS_FMT_RGB888_1X32_PADHI, + .framerate = 60, + .h_blank = 512, + .v_blank = 8, + }, + { + .frmsize = { 1920, 1080 }, + .code = MEDIA_BUS_FMT_RGB888_1X32_PADHI, + .framerate = 30, + .h_blank = 512, + .v_blank = 8, + }, + { + .frmsize = { 3840, 2160 }, + .code = MEDIA_BUS_FMT_RGB888_1X32_PADHI, + .framerate = 8, + .h_blank = 8, + .v_blank = 8, + }, +}; + +static const char * const tegra210_csi_cil_clks[] = { + "csi", + "cilab", + "cilcd", + "cile", + "csi_tpg", +}; + +/* Tegra210 CSI operations */ +static const struct tegra_csi_ops tegra210_csi_ops = { + .csi_start_streaming = tegra210_csi_start_streaming, + .csi_stop_streaming = tegra210_csi_stop_streaming, + .csi_err_recover = tegra210_csi_error_recover, +}; + +/* Tegra210 CSI SoC data */ +const struct tegra_csi_soc tegra210_csi_soc = { + .ops = &tegra210_csi_ops, + .csi_max_channels = 6, + .clk_names = tegra210_csi_cil_clks, + .num_clks = ARRAY_SIZE(tegra210_csi_cil_clks), + .tpg_frmrate_table = tegra210_tpg_frmrate_table, + .tpg_frmrate_table_size = ARRAY_SIZE(tegra210_tpg_frmrate_table), +}; diff --git a/drivers/staging/media/tegra-video/vi.c b/drivers/staging/media/tegra-video/vi.c new file mode 100644 index 000000000000..1b5e660155f5 --- /dev/null +++ b/drivers/staging/media/tegra-video/vi.c @@ -0,0 +1,1074 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved. 
+ */ + +#include <linux/bitmap.h> +#include <linux/clk.h> +#include <linux/delay.h> +#include <linux/host1x.h> +#include <linux/lcm.h> +#include <linux/list.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/of_device.h> +#include <linux/platform_device.h> +#include <linux/regulator/consumer.h> +#include <linux/pm_runtime.h> +#include <linux/slab.h> + +#include <media/v4l2-event.h> +#include <media/v4l2-fh.h> +#include <media/v4l2-fwnode.h> +#include <media/v4l2-ioctl.h> +#include <media/videobuf2-dma-contig.h> + +#include <soc/tegra/pmc.h> + +#include "vi.h" +#include "video.h" + +#define SURFACE_ALIGN_BYTES 64 +#define MAX_CID_CONTROLS 1 + +static const struct tegra_video_format tegra_default_format = { + .img_dt = TEGRA_IMAGE_DT_RAW10, + .bit_width = 10, + .code = MEDIA_BUS_FMT_SRGGB10_1X10, + .bpp = 2, + .img_fmt = TEGRA_IMAGE_FORMAT_DEF, + .fourcc = V4L2_PIX_FMT_SRGGB10, +}; + +static inline struct tegra_vi * +host1x_client_to_vi(struct host1x_client *client) +{ + return container_of(client, struct tegra_vi, client); +} + +static inline struct tegra_channel_buffer * +to_tegra_channel_buffer(struct vb2_v4l2_buffer *vb) +{ + return container_of(vb, struct tegra_channel_buffer, buf); +} + +static int tegra_get_format_idx_by_code(struct tegra_vi *vi, + unsigned int code) +{ + unsigned int i; + + for (i = 0; i < vi->soc->nformats; ++i) { + if (vi->soc->video_formats[i].code == code) + return i; + } + + return -1; +} + +static u32 tegra_get_format_fourcc_by_idx(struct tegra_vi *vi, + unsigned int index) +{ + if (index >= vi->soc->nformats) + return -EINVAL; + + return vi->soc->video_formats[index].fourcc; +} + +static const struct tegra_video_format * +tegra_get_format_by_fourcc(struct tegra_vi *vi, u32 fourcc) +{ + unsigned int i; + + for (i = 0; i < vi->soc->nformats; ++i) { + if (vi->soc->video_formats[i].fourcc == fourcc) + return &vi->soc->video_formats[i]; + } + + return NULL; +} + +/* + * videobuf2 queue operations + */ +static int tegra_channel_queue_setup(struct vb2_queue *vq, + unsigned int *nbuffers, + unsigned int *nplanes, + unsigned int sizes[], + struct device *alloc_devs[]) +{ + struct tegra_vi_channel *chan = vb2_get_drv_priv(vq); + + if (*nplanes) + return sizes[0] < chan->format.sizeimage ? 
-EINVAL : 0; + + *nplanes = 1; + sizes[0] = chan->format.sizeimage; + alloc_devs[0] = chan->vi->dev; + + return 0; +} + +static int tegra_channel_buffer_prepare(struct vb2_buffer *vb) +{ + struct tegra_vi_channel *chan = vb2_get_drv_priv(vb->vb2_queue); + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct tegra_channel_buffer *buf = to_tegra_channel_buffer(vbuf); + unsigned long size = chan->format.sizeimage; + + if (vb2_plane_size(vb, 0) < size) { + v4l2_err(chan->video.v4l2_dev, + "buffer too small (%lu < %lu)\n", + vb2_plane_size(vb, 0), size); + return -EINVAL; + } + + vb2_set_plane_payload(vb, 0, size); + buf->chan = chan; + buf->addr = vb2_dma_contig_plane_dma_addr(vb, 0); + + return 0; +} + +static void tegra_channel_buffer_queue(struct vb2_buffer *vb) +{ + struct tegra_vi_channel *chan = vb2_get_drv_priv(vb->vb2_queue); + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct tegra_channel_buffer *buf = to_tegra_channel_buffer(vbuf); + + /* put buffer into the capture queue */ + spin_lock(&chan->start_lock); + list_add_tail(&buf->queue, &chan->capture); + spin_unlock(&chan->start_lock); + + /* wait up kthread for capture */ + wake_up_interruptible(&chan->start_wait); +} + +struct v4l2_subdev * +tegra_channel_get_remote_subdev(struct tegra_vi_channel *chan) +{ + struct media_pad *pad; + struct v4l2_subdev *subdev; + struct media_entity *entity; + + pad = media_entity_remote_pad(&chan->pad); + entity = pad->entity; + subdev = media_entity_to_v4l2_subdev(entity); + + return subdev; +} + +int tegra_channel_set_stream(struct tegra_vi_channel *chan, bool on) +{ + struct v4l2_subdev *subdev; + int ret; + + /* stream CSI */ + subdev = tegra_channel_get_remote_subdev(chan); + ret = v4l2_subdev_call(subdev, video, s_stream, on); + if (on && ret < 0 && ret != -ENOIOCTLCMD) + return ret; + + return 0; +} + +void tegra_channel_release_buffers(struct tegra_vi_channel *chan, + enum vb2_buffer_state state) +{ + struct tegra_channel_buffer *buf, *nbuf; + + spin_lock(&chan->start_lock); + list_for_each_entry_safe(buf, nbuf, &chan->capture, queue) { + vb2_buffer_done(&buf->buf.vb2_buf, state); + list_del(&buf->queue); + } + spin_unlock(&chan->start_lock); + + spin_lock(&chan->done_lock); + list_for_each_entry_safe(buf, nbuf, &chan->done, queue) { + vb2_buffer_done(&buf->buf.vb2_buf, state); + list_del(&buf->queue); + } + spin_unlock(&chan->done_lock); +} + +static int tegra_channel_start_streaming(struct vb2_queue *vq, u32 count) +{ + struct tegra_vi_channel *chan = vb2_get_drv_priv(vq); + int ret; + + ret = pm_runtime_get_sync(chan->vi->dev); + if (ret < 0) { + dev_err(chan->vi->dev, "failed to get runtime PM: %d\n", ret); + pm_runtime_put_noidle(chan->vi->dev); + return ret; + } + + ret = chan->vi->ops->vi_start_streaming(vq, count); + if (ret < 0) + pm_runtime_put(chan->vi->dev); + + return ret; +} + +static void tegra_channel_stop_streaming(struct vb2_queue *vq) +{ + struct tegra_vi_channel *chan = vb2_get_drv_priv(vq); + + chan->vi->ops->vi_stop_streaming(vq); + pm_runtime_put(chan->vi->dev); +} + +static const struct vb2_ops tegra_channel_queue_qops = { + .queue_setup = tegra_channel_queue_setup, + .buf_prepare = tegra_channel_buffer_prepare, + .buf_queue = tegra_channel_buffer_queue, + .wait_prepare = vb2_ops_wait_prepare, + .wait_finish = vb2_ops_wait_finish, + .start_streaming = tegra_channel_start_streaming, + .stop_streaming = tegra_channel_stop_streaming, +}; + +/* + * V4L2 ioctl operations + */ +static int tegra_channel_querycap(struct file *file, void *fh, + struct 
v4l2_capability *cap) +{ + struct tegra_vi_channel *chan = video_drvdata(file); + + strscpy(cap->driver, "tegra-video", sizeof(cap->driver)); + strscpy(cap->card, chan->video.name, sizeof(cap->card)); + snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:%s", + dev_name(chan->vi->dev)); + + return 0; +} + +static int tegra_channel_g_parm(struct file *file, void *fh, + struct v4l2_streamparm *a) +{ + struct tegra_vi_channel *chan = video_drvdata(file); + struct v4l2_subdev *subdev; + + subdev = tegra_channel_get_remote_subdev(chan); + return v4l2_g_parm_cap(&chan->video, subdev, a); +} + +static int tegra_channel_s_parm(struct file *file, void *fh, + struct v4l2_streamparm *a) +{ + struct tegra_vi_channel *chan = video_drvdata(file); + struct v4l2_subdev *subdev; + + subdev = tegra_channel_get_remote_subdev(chan); + return v4l2_s_parm_cap(&chan->video, subdev, a); +} + +static int tegra_channel_enum_framesizes(struct file *file, void *fh, + struct v4l2_frmsizeenum *sizes) +{ + int ret; + struct tegra_vi_channel *chan = video_drvdata(file); + struct v4l2_subdev *subdev; + const struct tegra_video_format *fmtinfo; + struct v4l2_subdev_frame_size_enum fse = { + .index = sizes->index, + .which = V4L2_SUBDEV_FORMAT_ACTIVE, + }; + + fmtinfo = tegra_get_format_by_fourcc(chan->vi, sizes->pixel_format); + if (!fmtinfo) + return -EINVAL; + + fse.code = fmtinfo->code; + + subdev = tegra_channel_get_remote_subdev(chan); + ret = v4l2_subdev_call(subdev, pad, enum_frame_size, NULL, &fse); + if (ret) + return ret; + + sizes->type = V4L2_FRMSIZE_TYPE_DISCRETE; + sizes->discrete.width = fse.max_width; + sizes->discrete.height = fse.max_height; + + return 0; +} + +static int tegra_channel_enum_frameintervals(struct file *file, void *fh, + struct v4l2_frmivalenum *ivals) +{ + int ret; + struct tegra_vi_channel *chan = video_drvdata(file); + struct v4l2_subdev *subdev; + const struct tegra_video_format *fmtinfo; + struct v4l2_subdev_frame_interval_enum fie = { + .index = ivals->index, + .width = ivals->width, + .height = ivals->height, + .which = V4L2_SUBDEV_FORMAT_ACTIVE, + }; + + fmtinfo = tegra_get_format_by_fourcc(chan->vi, ivals->pixel_format); + if (!fmtinfo) + return -EINVAL; + + fie.code = fmtinfo->code; + + subdev = tegra_channel_get_remote_subdev(chan); + ret = v4l2_subdev_call(subdev, pad, enum_frame_interval, NULL, &fie); + if (ret) + return ret; + + ivals->type = V4L2_FRMIVAL_TYPE_DISCRETE; + ivals->discrete.numerator = fie.interval.numerator; + ivals->discrete.denominator = fie.interval.denominator; + + return 0; +} + +static int tegra_channel_enum_format(struct file *file, void *fh, + struct v4l2_fmtdesc *f) +{ + struct tegra_vi_channel *chan = video_drvdata(file); + unsigned int index = 0, i; + unsigned long *fmts_bitmap = chan->tpg_fmts_bitmap; + + if (f->index >= bitmap_weight(fmts_bitmap, MAX_FORMAT_NUM)) + return -EINVAL; + + for (i = 0; i < f->index + 1; i++, index++) + index = find_next_bit(fmts_bitmap, MAX_FORMAT_NUM, index); + + f->pixelformat = tegra_get_format_fourcc_by_idx(chan->vi, index - 1); + + return 0; +} + +static int tegra_channel_get_format(struct file *file, void *fh, + struct v4l2_format *format) +{ + struct tegra_vi_channel *chan = video_drvdata(file); + + format->fmt.pix = chan->format; + + return 0; +} + +static void tegra_channel_fmt_align(struct tegra_vi_channel *chan, + struct v4l2_pix_format *pix, + unsigned int bpp) +{ + unsigned int align; + unsigned int min_width; + unsigned int max_width; + unsigned int width; + unsigned int min_bpl; + unsigned int max_bpl; 
+ unsigned int bpl; + + /* + * The transfer alignment requirements are expressed in bytes. Compute + * minimum and maximum values, clamp the requested width and convert + * it back to pixels. Use bytesperline to adjust the width. + */ + align = lcm(SURFACE_ALIGN_BYTES, bpp); + min_width = roundup(TEGRA_MIN_WIDTH, align); + max_width = rounddown(TEGRA_MAX_WIDTH, align); + width = roundup(pix->width * bpp, align); + + pix->width = clamp(width, min_width, max_width) / bpp; + pix->height = clamp(pix->height, TEGRA_MIN_HEIGHT, TEGRA_MAX_HEIGHT); + + /* Clamp the requested bytes per line value. If the maximum bytes per + * line value is zero, the module doesn't support user configurable + * line sizes. Override the requested value with the minimum in that + * case. + */ + min_bpl = pix->width * bpp; + max_bpl = rounddown(TEGRA_MAX_WIDTH, SURFACE_ALIGN_BYTES); + bpl = roundup(pix->bytesperline, SURFACE_ALIGN_BYTES); + + pix->bytesperline = clamp(bpl, min_bpl, max_bpl); + pix->sizeimage = pix->bytesperline * pix->height; +} + +static int __tegra_channel_try_format(struct tegra_vi_channel *chan, + struct v4l2_pix_format *pix) +{ + const struct tegra_video_format *fmtinfo; + struct v4l2_subdev *subdev; + struct v4l2_subdev_format fmt; + struct v4l2_subdev_pad_config *pad_cfg; + + subdev = tegra_channel_get_remote_subdev(chan); + pad_cfg = v4l2_subdev_alloc_pad_config(subdev); + if (!pad_cfg) + return -ENOMEM; + /* + * Retrieve the format information and if requested format isn't + * supported, keep the current format. + */ + fmtinfo = tegra_get_format_by_fourcc(chan->vi, pix->pixelformat); + if (!fmtinfo) { + pix->pixelformat = chan->format.pixelformat; + pix->colorspace = chan->format.colorspace; + fmtinfo = tegra_get_format_by_fourcc(chan->vi, + pix->pixelformat); + } + + pix->field = V4L2_FIELD_NONE; + fmt.which = V4L2_SUBDEV_FORMAT_TRY; + fmt.pad = 0; + v4l2_fill_mbus_format(&fmt.format, pix, fmtinfo->code); + v4l2_subdev_call(subdev, pad, set_fmt, pad_cfg, &fmt); + v4l2_fill_pix_format(pix, &fmt.format); + tegra_channel_fmt_align(chan, pix, fmtinfo->bpp); + + v4l2_subdev_free_pad_config(pad_cfg); + + return 0; +} + +static int tegra_channel_try_format(struct file *file, void *fh, + struct v4l2_format *format) +{ + struct tegra_vi_channel *chan = video_drvdata(file); + + return __tegra_channel_try_format(chan, &format->fmt.pix); +} + +static int tegra_channel_set_format(struct file *file, void *fh, + struct v4l2_format *format) +{ + struct tegra_vi_channel *chan = video_drvdata(file); + const struct tegra_video_format *fmtinfo; + struct v4l2_subdev_format fmt; + struct v4l2_subdev *subdev; + struct v4l2_pix_format *pix = &format->fmt.pix; + int ret; + + if (vb2_is_busy(&chan->queue)) + return -EBUSY; + + /* get supported format by try_fmt */ + ret = __tegra_channel_try_format(chan, pix); + if (ret) + return ret; + + fmtinfo = tegra_get_format_by_fourcc(chan->vi, pix->pixelformat); + + fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE; + fmt.pad = 0; + v4l2_fill_mbus_format(&fmt.format, pix, fmtinfo->code); + subdev = tegra_channel_get_remote_subdev(chan); + v4l2_subdev_call(subdev, pad, set_fmt, NULL, &fmt); + v4l2_fill_pix_format(pix, &fmt.format); + tegra_channel_fmt_align(chan, pix, fmtinfo->bpp); + + chan->format = *pix; + chan->fmtinfo = fmtinfo; + + return 0; +} + +static int tegra_channel_enum_input(struct file *file, void *fh, + struct v4l2_input *inp) +{ + /* currently driver supports internal TPG only */ + if (inp->index) + return -EINVAL; + + inp->type = V4L2_INPUT_TYPE_CAMERA; + 
strscpy(inp->name, "Tegra TPG", sizeof(inp->name)); + + return 0; +} + +static int tegra_channel_g_input(struct file *file, void *priv, + unsigned int *i) +{ + *i = 0; + + return 0; +} + +static int tegra_channel_s_input(struct file *file, void *priv, + unsigned int input) +{ + if (input > 0) + return -EINVAL; + + return 0; +} + +static const struct v4l2_ioctl_ops tegra_channel_ioctl_ops = { + .vidioc_querycap = tegra_channel_querycap, + .vidioc_g_parm = tegra_channel_g_parm, + .vidioc_s_parm = tegra_channel_s_parm, + .vidioc_enum_framesizes = tegra_channel_enum_framesizes, + .vidioc_enum_frameintervals = tegra_channel_enum_frameintervals, + .vidioc_enum_fmt_vid_cap = tegra_channel_enum_format, + .vidioc_g_fmt_vid_cap = tegra_channel_get_format, + .vidioc_s_fmt_vid_cap = tegra_channel_set_format, + .vidioc_try_fmt_vid_cap = tegra_channel_try_format, + .vidioc_enum_input = tegra_channel_enum_input, + .vidioc_g_input = tegra_channel_g_input, + .vidioc_s_input = tegra_channel_s_input, + .vidioc_reqbufs = vb2_ioctl_reqbufs, + .vidioc_prepare_buf = vb2_ioctl_prepare_buf, + .vidioc_querybuf = vb2_ioctl_querybuf, + .vidioc_qbuf = vb2_ioctl_qbuf, + .vidioc_dqbuf = vb2_ioctl_dqbuf, + .vidioc_create_bufs = vb2_ioctl_create_bufs, + .vidioc_expbuf = vb2_ioctl_expbuf, + .vidioc_streamon = vb2_ioctl_streamon, + .vidioc_streamoff = vb2_ioctl_streamoff, + .vidioc_subscribe_event = v4l2_ctrl_subscribe_event, + .vidioc_unsubscribe_event = v4l2_event_unsubscribe, +}; + +/* + * V4L2 file operations + */ +static const struct v4l2_file_operations tegra_channel_fops = { + .owner = THIS_MODULE, + .unlocked_ioctl = video_ioctl2, + .open = v4l2_fh_open, + .release = vb2_fop_release, + .read = vb2_fop_read, + .poll = vb2_fop_poll, + .mmap = vb2_fop_mmap, +}; + +/* + * V4L2 control operations + */ +static int vi_s_ctrl(struct v4l2_ctrl *ctrl) +{ + struct tegra_vi_channel *chan = container_of(ctrl->handler, + struct tegra_vi_channel, + ctrl_handler); + + switch (ctrl->id) { + case V4L2_CID_TEST_PATTERN: + /* pattern change takes effect on next stream */ + chan->pg_mode = ctrl->val + 1; + break; + default: + return -EINVAL; + } + + return 0; +} + +static const struct v4l2_ctrl_ops vi_ctrl_ops = { + .s_ctrl = vi_s_ctrl, +}; + +static const char *const vi_pattern_strings[] = { + "Black/White Direct Mode", + "Color Patch Mode", +}; + +static int tegra_channel_setup_ctrl_handler(struct tegra_vi_channel *chan) +{ + int ret; + + /* add test pattern control handler to v4l2 device */ + v4l2_ctrl_new_std_menu_items(&chan->ctrl_handler, &vi_ctrl_ops, + V4L2_CID_TEST_PATTERN, + ARRAY_SIZE(vi_pattern_strings) - 1, + 0, 0, vi_pattern_strings); + if (chan->ctrl_handler.error) { + dev_err(chan->vi->dev, "failed to add TPG ctrl handler: %d\n", + chan->ctrl_handler.error); + v4l2_ctrl_handler_free(&chan->ctrl_handler); + return chan->ctrl_handler.error; + } + + /* setup the controls */ + ret = v4l2_ctrl_handler_setup(&chan->ctrl_handler); + if (ret < 0) { + dev_err(chan->vi->dev, + "failed to setup v4l2 ctrl handler: %d\n", ret); + return ret; + } + + return 0; +} + +/* VI only support 2 formats in TPG mode */ +static void vi_tpg_fmts_bitmap_init(struct tegra_vi_channel *chan) +{ + int index; + + bitmap_zero(chan->tpg_fmts_bitmap, MAX_FORMAT_NUM); + + index = tegra_get_format_idx_by_code(chan->vi, + MEDIA_BUS_FMT_SRGGB10_1X10); + bitmap_set(chan->tpg_fmts_bitmap, index, 1); + + index = tegra_get_format_idx_by_code(chan->vi, + MEDIA_BUS_FMT_RGB888_1X32_PADHI); + bitmap_set(chan->tpg_fmts_bitmap, index, 1); +} + +static void 
tegra_channel_cleanup(struct tegra_vi_channel *chan) +{ + v4l2_ctrl_handler_free(&chan->ctrl_handler); + media_entity_cleanup(&chan->video.entity); + host1x_syncpt_free(chan->mw_ack_sp); + host1x_syncpt_free(chan->frame_start_sp); + mutex_destroy(&chan->video_lock); +} + +void tegra_channels_cleanup(struct tegra_vi *vi) +{ + struct tegra_vi_channel *chan, *tmp; + + if (!vi) + return; + + list_for_each_entry_safe(chan, tmp, &vi->vi_chans, list) { + tegra_channel_cleanup(chan); + list_del(&chan->list); + kfree(chan); + } +} + +static int tegra_channel_init(struct tegra_vi_channel *chan) +{ + struct tegra_vi *vi = chan->vi; + struct tegra_video_device *vid = dev_get_drvdata(vi->client.host); + unsigned long flags = HOST1X_SYNCPT_CLIENT_MANAGED; + int ret; + + mutex_init(&chan->video_lock); + INIT_LIST_HEAD(&chan->capture); + INIT_LIST_HEAD(&chan->done); + spin_lock_init(&chan->start_lock); + spin_lock_init(&chan->done_lock); + spin_lock_init(&chan->sp_incr_lock); + init_waitqueue_head(&chan->start_wait); + init_waitqueue_head(&chan->done_wait); + + /* initialize the video format */ + chan->fmtinfo = &tegra_default_format; + chan->format.pixelformat = chan->fmtinfo->fourcc; + chan->format.colorspace = V4L2_COLORSPACE_SRGB; + chan->format.field = V4L2_FIELD_NONE; + chan->format.width = TEGRA_DEF_WIDTH; + chan->format.height = TEGRA_DEF_HEIGHT; + chan->format.bytesperline = TEGRA_DEF_WIDTH * chan->fmtinfo->bpp; + chan->format.sizeimage = chan->format.bytesperline * TEGRA_DEF_HEIGHT; + tegra_channel_fmt_align(chan, &chan->format, chan->fmtinfo->bpp); + + chan->frame_start_sp = host1x_syncpt_request(&vi->client, flags); + if (!chan->frame_start_sp) { + dev_err(vi->dev, "failed to request frame start syncpoint\n"); + return -ENOMEM; + } + + chan->mw_ack_sp = host1x_syncpt_request(&vi->client, flags); + if (!chan->mw_ack_sp) { + dev_err(vi->dev, "failed to request memory ack syncpoint\n"); + ret = -ENOMEM; + goto free_fs_syncpt; + } + + /* initialize the media entity */ + chan->pad.flags = MEDIA_PAD_FL_SINK; + ret = media_entity_pads_init(&chan->video.entity, 1, &chan->pad); + if (ret < 0) { + dev_err(vi->dev, + "failed to initialize media entity: %d\n", ret); + goto free_mw_ack_syncpt; + } + + ret = v4l2_ctrl_handler_init(&chan->ctrl_handler, MAX_CID_CONTROLS); + if (chan->ctrl_handler.error) { + dev_err(vi->dev, + "failed to initialize v4l2 ctrl handler: %d\n", ret); + goto cleanup_media; + } + + /* initialize the video_device */ + chan->video.fops = &tegra_channel_fops; + chan->video.v4l2_dev = &vid->v4l2_dev; + chan->video.release = video_device_release_empty; + chan->video.queue = &chan->queue; + snprintf(chan->video.name, sizeof(chan->video.name), "%s-%s-%u", + dev_name(vi->dev), "output", chan->portno); + chan->video.vfl_type = VFL_TYPE_VIDEO; + chan->video.vfl_dir = VFL_DIR_RX; + chan->video.ioctl_ops = &tegra_channel_ioctl_ops; + chan->video.ctrl_handler = &chan->ctrl_handler; + chan->video.lock = &chan->video_lock; + chan->video.device_caps = V4L2_CAP_VIDEO_CAPTURE | + V4L2_CAP_STREAMING | + V4L2_CAP_READWRITE; + video_set_drvdata(&chan->video, chan); + + chan->queue.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; + chan->queue.io_modes = VB2_MMAP | VB2_DMABUF | VB2_READ; + chan->queue.lock = &chan->video_lock; + chan->queue.drv_priv = chan; + chan->queue.buf_struct_size = sizeof(struct tegra_channel_buffer); + chan->queue.ops = &tegra_channel_queue_qops; + chan->queue.mem_ops = &vb2_dma_contig_memops; + chan->queue.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC; + chan->queue.min_buffers_needed 
= 2; + chan->queue.dev = vi->dev; + ret = vb2_queue_init(&chan->queue); + if (ret < 0) { + dev_err(vi->dev, "failed to initialize vb2 queue: %d\n", ret); + goto free_v4l2_ctrl_hdl; + } + + return 0; + +free_v4l2_ctrl_hdl: + v4l2_ctrl_handler_free(&chan->ctrl_handler); +cleanup_media: + media_entity_cleanup(&chan->video.entity); +free_mw_ack_syncpt: + host1x_syncpt_free(chan->mw_ack_sp); +free_fs_syncpt: + host1x_syncpt_free(chan->frame_start_sp); + return ret; +} + +static int tegra_vi_tpg_channels_alloc(struct tegra_vi *vi) +{ + struct tegra_vi_channel *chan; + unsigned int port_num; + unsigned int nchannels = vi->soc->vi_max_channels; + + for (port_num = 0; port_num < nchannels; port_num++) { + /* + * Do not use devm_kzalloc as memory is freed immediately + * when device instance is unbound but application might still + * be holding the device node open. Channel memory allocated + * with kzalloc is freed during video device release callback. + */ + chan = kzalloc(sizeof(*chan), GFP_KERNEL); + if (!chan) + return -ENOMEM; + + chan->vi = vi; + chan->portno = port_num; + list_add_tail(&chan->list, &vi->vi_chans); + } + + return 0; +} + +static int tegra_vi_channels_init(struct tegra_vi *vi) +{ + struct tegra_vi_channel *chan; + int ret; + + list_for_each_entry(chan, &vi->vi_chans, list) { + ret = tegra_channel_init(chan); + if (ret < 0) { + dev_err(vi->dev, + "failed to initialize channel-%d: %d\n", + chan->portno, ret); + goto cleanup; + } + } + + return 0; + +cleanup: + list_for_each_entry_continue_reverse(chan, &vi->vi_chans, list) + tegra_channel_cleanup(chan); + + return ret; +} + +void tegra_v4l2_nodes_cleanup_tpg(struct tegra_video_device *vid) +{ + struct tegra_vi *vi = vid->vi; + struct tegra_csi *csi = vid->csi; + struct tegra_csi_channel *csi_chan; + struct tegra_vi_channel *chan; + + list_for_each_entry(chan, &vi->vi_chans, list) { + video_unregister_device(&chan->video); + mutex_lock(&chan->video_lock); + vb2_queue_release(&chan->queue); + mutex_unlock(&chan->video_lock); + } + + list_for_each_entry(csi_chan, &csi->csi_chans, list) + v4l2_device_unregister_subdev(&csi_chan->subdev); +} + +int tegra_v4l2_nodes_setup_tpg(struct tegra_video_device *vid) +{ + struct tegra_vi *vi = vid->vi; + struct tegra_csi *csi = vid->csi; + struct tegra_vi_channel *vi_chan; + struct tegra_csi_channel *csi_chan; + u32 link_flags = MEDIA_LNK_FL_ENABLED; + int ret; + + if (!vi || !csi) + return -ENODEV; + + csi_chan = list_first_entry(&csi->csi_chans, + struct tegra_csi_channel, list); + + list_for_each_entry(vi_chan, &vi->vi_chans, list) { + struct media_entity *source = &csi_chan->subdev.entity; + struct media_entity *sink = &vi_chan->video.entity; + struct media_pad *source_pad = csi_chan->pads; + struct media_pad *sink_pad = &vi_chan->pad; + + ret = v4l2_device_register_subdev(&vid->v4l2_dev, + &csi_chan->subdev); + if (ret) { + dev_err(vi->dev, + "failed to register subdev: %d\n", ret); + goto cleanup; + } + + ret = video_register_device(&vi_chan->video, + VFL_TYPE_VIDEO, -1); + if (ret < 0) { + dev_err(vi->dev, + "failed to register video device: %d\n", ret); + goto cleanup; + } + + dev_dbg(vi->dev, "creating %s:%u -> %s:%u link\n", + source->name, source_pad->index, + sink->name, sink_pad->index); + + ret = media_create_pad_link(source, source_pad->index, + sink, sink_pad->index, + link_flags); + if (ret < 0) { + dev_err(vi->dev, + "failed to create %s:%u -> %s:%u link: %d\n", + source->name, source_pad->index, + sink->name, sink_pad->index, ret); + goto cleanup; + } + + ret = 
tegra_channel_setup_ctrl_handler(vi_chan); + if (ret < 0) + goto cleanup; + + v4l2_set_subdev_hostdata(&csi_chan->subdev, vi_chan); + vi_tpg_fmts_bitmap_init(vi_chan); + csi_chan = list_next_entry(csi_chan, list); + } + + return 0; + +cleanup: + tegra_v4l2_nodes_cleanup_tpg(vid); + return ret; +} + +static int __maybe_unused vi_runtime_resume(struct device *dev) +{ + struct tegra_vi *vi = dev_get_drvdata(dev); + int ret; + + ret = regulator_enable(vi->vdd); + if (ret) { + dev_err(dev, "failed to enable VDD supply: %d\n", ret); + return ret; + } + + ret = clk_set_rate(vi->clk, vi->soc->vi_max_clk_hz); + if (ret) { + dev_err(dev, "failed to set vi clock rate: %d\n", ret); + goto disable_vdd; + } + + ret = clk_prepare_enable(vi->clk); + if (ret) { + dev_err(dev, "failed to enable vi clock: %d\n", ret); + goto disable_vdd; + } + + return 0; + +disable_vdd: + regulator_disable(vi->vdd); + return ret; +} + +static int __maybe_unused vi_runtime_suspend(struct device *dev) +{ + struct tegra_vi *vi = dev_get_drvdata(dev); + + clk_disable_unprepare(vi->clk); + + regulator_disable(vi->vdd); + + return 0; +} + +static int tegra_vi_init(struct host1x_client *client) +{ + struct tegra_video_device *vid = dev_get_drvdata(client->host); + struct tegra_vi *vi = host1x_client_to_vi(client); + struct tegra_vi_channel *chan, *tmp; + int ret; + + vid->media_dev.hw_revision = vi->soc->hw_revision; + snprintf(vid->media_dev.bus_info, sizeof(vid->media_dev.bus_info), + "platform:%s", dev_name(vi->dev)); + + INIT_LIST_HEAD(&vi->vi_chans); + + ret = tegra_vi_tpg_channels_alloc(vi); + if (ret < 0) { + dev_err(vi->dev, "failed to allocate tpg channels: %d\n", ret); + goto free_chans; + } + + ret = tegra_vi_channels_init(vi); + if (ret < 0) + goto free_chans; + + vid->vi = vi; + + return 0; + +free_chans: + list_for_each_entry_safe(chan, tmp, &vi->vi_chans, list) { + list_del(&chan->list); + kfree(chan); + } + + return ret; +} + +static int tegra_vi_exit(struct host1x_client *client) +{ + /* + * Do not cleanup the channels here as application might still be + * holding video device nodes. Channels cleanup will happen during + * v4l2_device release callback which gets called after all video + * device nodes are released. 
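+ * That release callback is tegra_v4l2_dev_release() in video.c, which + * calls tegra_channels_cleanup() once the last reference to the + * v4l2_device is dropped.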
+ */ + + return 0; +} + +static const struct host1x_client_ops vi_client_ops = { + .init = tegra_vi_init, + .exit = tegra_vi_exit, +}; + +static int tegra_vi_probe(struct platform_device *pdev) +{ + struct tegra_vi *vi; + int ret; + + vi = devm_kzalloc(&pdev->dev, sizeof(*vi), GFP_KERNEL); + if (!vi) + return -ENOMEM; + + vi->iomem = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(vi->iomem)) + return PTR_ERR(vi->iomem); + + vi->soc = of_device_get_match_data(&pdev->dev); + + vi->clk = devm_clk_get(&pdev->dev, NULL); + if (IS_ERR(vi->clk)) { + ret = PTR_ERR(vi->clk); + dev_err(&pdev->dev, "failed to get vi clock: %d\n", ret); + return ret; + } + + vi->vdd = devm_regulator_get(&pdev->dev, "avdd-dsi-csi"); + if (IS_ERR(vi->vdd)) { + ret = PTR_ERR(vi->vdd); + dev_err(&pdev->dev, "failed to get VDD supply: %d\n", ret); + return ret; + } + + if (!pdev->dev.pm_domain) { + ret = -ENOENT; + dev_warn(&pdev->dev, "PM domain is not attached: %d\n", ret); + return ret; + } + + ret = devm_of_platform_populate(&pdev->dev); + if (ret < 0) { + dev_err(&pdev->dev, + "failed to populate vi child device: %d\n", ret); + return ret; + } + + vi->dev = &pdev->dev; + vi->ops = vi->soc->ops; + platform_set_drvdata(pdev, vi); + pm_runtime_enable(&pdev->dev); + + /* initialize host1x interface */ + INIT_LIST_HEAD(&vi->client.list); + vi->client.ops = &vi_client_ops; + vi->client.dev = &pdev->dev; + + ret = host1x_client_register(&vi->client); + if (ret < 0) { + dev_err(&pdev->dev, + "failed to register host1x client: %d\n", ret); + goto rpm_disable; + } + + return 0; + +rpm_disable: + pm_runtime_disable(&pdev->dev); + return ret; +} + +static int tegra_vi_remove(struct platform_device *pdev) +{ + struct tegra_vi *vi = platform_get_drvdata(pdev); + int err; + + err = host1x_client_unregister(&vi->client); + if (err < 0) { + dev_err(&pdev->dev, + "failed to unregister host1x client: %d\n", err); + return err; + } + + pm_runtime_disable(&pdev->dev); + + return 0; +} + +static const struct of_device_id tegra_vi_of_id_table[] = { +#if defined(CONFIG_ARCH_TEGRA_210_SOC) + { .compatible = "nvidia,tegra210-vi", .data = &tegra210_vi_soc }, +#endif + { } +}; +MODULE_DEVICE_TABLE(of, tegra_vi_of_id_table); + +static const struct dev_pm_ops tegra_vi_pm_ops = { + SET_RUNTIME_PM_OPS(vi_runtime_suspend, vi_runtime_resume, NULL) +}; + +struct platform_driver tegra_vi_driver = { + .driver = { + .name = "tegra-vi", + .of_match_table = tegra_vi_of_id_table, + .pm = &tegra_vi_pm_ops, + }, + .probe = tegra_vi_probe, + .remove = tegra_vi_remove, +}; diff --git a/drivers/staging/media/tegra-video/vi.h b/drivers/staging/media/tegra-video/vi.h new file mode 100644 index 000000000000..6272c9a61809 --- /dev/null +++ b/drivers/staging/media/tegra-video/vi.h @@ -0,0 +1,257 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved. 
+ */ + +#ifndef __TEGRA_VI_H__ +#define __TEGRA_VI_H__ + +#include <linux/host1x.h> +#include <linux/list.h> + +#include <linux/mutex.h> +#include <linux/spinlock.h> +#include <linux/wait.h> + +#include <media/media-entity.h> +#include <media/v4l2-ctrls.h> +#include <media/v4l2-device.h> +#include <media/v4l2-dev.h> +#include <media/v4l2-subdev.h> +#include <media/videobuf2-v4l2.h> + +#define TEGRA_MIN_WIDTH 32U +#define TEGRA_MAX_WIDTH 32768U +#define TEGRA_MIN_HEIGHT 32U +#define TEGRA_MAX_HEIGHT 32768U + +#define TEGRA_DEF_WIDTH 1920 +#define TEGRA_DEF_HEIGHT 1080 +#define TEGRA_IMAGE_FORMAT_DEF 32 + +#define MAX_FORMAT_NUM 64 + +enum tegra_vi_pg_mode { + TEGRA_VI_PG_DISABLED = 0, + TEGRA_VI_PG_DIRECT, + TEGRA_VI_PG_PATCH, +}; + +/** + * struct tegra_vi_ops - Tegra VI operations + * @vi_start_streaming: starts media pipeline, subdevice streaming, sets up + * VI for capture and runs capture start and capture finish + * kthreads for capturing frames to buffer and returns them back. + * @vi_stop_streaming: stops media pipeline and subdevice streaming and returns + * back any queued buffers. + */ +struct tegra_vi_ops { + int (*vi_start_streaming)(struct vb2_queue *vq, u32 count); + void (*vi_stop_streaming)(struct vb2_queue *vq); +}; + +/** + * struct tegra_vi_soc - NVIDIA Tegra Video Input SoC structure + * + * @video_formats: supported video formats + * @nformats: total video formats + * @ops: vi operations + * @hw_revision: VI hw_revision + * @vi_max_channels: supported max streaming channels + * @vi_max_clk_hz: VI clock max frequency + */ +struct tegra_vi_soc { + const struct tegra_video_format *video_formats; + const unsigned int nformats; + const struct tegra_vi_ops *ops; + u32 hw_revision; + unsigned int vi_max_channels; + unsigned int vi_max_clk_hz; +}; + +/** + * struct tegra_vi - NVIDIA Tegra Video Input device structure + * + * @dev: device struct + * @client: host1x_client struct + * @iomem: register base + * @clk: main clock for VI block + * @vdd: vdd regulator for VI hardware, normally it is avdd_dsi_csi + * @soc: pointer to SoC data structure + * @ops: vi operations + * @vi_chans: list head for VI channels + */ +struct tegra_vi { + struct device *dev; + struct host1x_client client; + void __iomem *iomem; + struct clk *clk; + struct regulator *vdd; + const struct tegra_vi_soc *soc; + const struct tegra_vi_ops *ops; + struct list_head vi_chans; +}; + +/** + * struct tegra_vi_channel - Tegra video channel + * + * @list: list head for this entry + * @video: V4L2 video device associated with the video channel + * @video_lock: protects the @format and @queue fields + * @pad: media pad for the video device entity + * + * @vi: Tegra video input device structure + * @frame_start_sp: host1x syncpoint pointer to synchronize programmed capture + * start condition with hardware frame start events through host1x + * syncpoint counters. + * @mw_ack_sp: host1x syncpoint pointer to synchronize programmed memory write + * ack trigger condition with hardware memory write done at end of + * frame through host1x syncpoint counters. + * @sp_incr_lock: protects cpu syncpoint increment. + * + * @kthread_start_capture: kthread to start capture of single frame when + * vb buffer is available. This thread programs VI CSI hardware + * for single frame capture and waits for frame start event from + * the hardware. On receiving frame start event, it wakes up + * kthread_finish_capture thread to wait for finishing frame data + * write to the memory. 
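+ * The kthread is woken through @start_wait by tegra_channel_buffer_queue() + * whenever a new buffer is added to the @capture list.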
In case of missing frame start event, this + * thread returns buffer back to vb with VB2_BUF_STATE_ERROR. + * @start_wait: waitqueue for starting frame capture when buffer is available. + * @kthread_finish_capture: kthread to finish the buffer capture and return to. + * This thread is woken up by kthread_start_capture on receiving + * frame start event from the hardware and this thread waits for + * MW_ACK_DONE event which indicates completion of writing frame + * data to the memory. On receiving MW_ACK_DONE event, buffer is + * returned back to vb with VB2_BUF_STATE_DONE and in case of + * missing MW_ACK_DONE event, buffer is returned back to vb with + * VB2_BUF_STATE_ERROR. + * @done_wait: waitqueue for finishing capture data writes to memory. + * + * @format: active V4L2 pixel format + * @fmtinfo: format information corresponding to the active @format + * @queue: vb2 buffers queue + * @sequence: V4L2 buffers sequence number + * + * @capture: list of queued buffers for capture + * @start_lock: protects the capture queued list + * @done: list of capture done queued buffers + * @done_lock: protects the capture done queue list + * + * @portno: VI channel port number + * + * @ctrl_handler: V4L2 control handler of this video channel + * @tpg_fmts_bitmap: a bitmap for supported TPG formats + * @pg_mode: test pattern generator mode (disabled/direct/patch) + */ +struct tegra_vi_channel { + struct list_head list; + struct video_device video; + /* protects the @format and @queue fields */ + struct mutex video_lock; + struct media_pad pad; + + struct tegra_vi *vi; + struct host1x_syncpt *frame_start_sp; + struct host1x_syncpt *mw_ack_sp; + /* protects the cpu syncpoint increment */ + spinlock_t sp_incr_lock; + + struct task_struct *kthread_start_capture; + wait_queue_head_t start_wait; + struct task_struct *kthread_finish_capture; + wait_queue_head_t done_wait; + + struct v4l2_pix_format format; + const struct tegra_video_format *fmtinfo; + struct vb2_queue queue; + u32 sequence; + + struct list_head capture; + /* protects the capture queued list */ + spinlock_t start_lock; + struct list_head done; + /* protects the capture done queue list */ + spinlock_t done_lock; + + unsigned char portno; + + struct v4l2_ctrl_handler ctrl_handler; + DECLARE_BITMAP(tpg_fmts_bitmap, MAX_FORMAT_NUM); + enum tegra_vi_pg_mode pg_mode; +}; + +/** + * struct tegra_channel_buffer - video channel buffer + * + * @buf: vb2 buffer base object + * @queue: buffer list entry in the channel queued buffers list + * @chan: channel that uses the buffer + * @addr: Tegra IOVA buffer address for VI output + * @mw_ack_sp_thresh: MW_ACK_DONE syncpoint threshold corresponding + * to the capture buffer. + */ +struct tegra_channel_buffer { + struct vb2_v4l2_buffer buf; + struct list_head queue; + struct tegra_vi_channel *chan; + dma_addr_t addr; + u32 mw_ack_sp_thresh; +}; + +/* + * VI channel input data type enum. + * These data type enum value gets programmed into corresponding Tegra VI + * channel register bits. 
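+ * The numeric values match the MIPI CSI-2 data type codes in decimal + * (e.g. TEGRA_IMAGE_DT_RAW8 = 42 = 0x2a, TEGRA_IMAGE_DT_YUV420_8 = 24 = 0x18).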
+ */ +enum tegra_image_dt { + TEGRA_IMAGE_DT_YUV420_8 = 24, + TEGRA_IMAGE_DT_YUV420_10, + + TEGRA_IMAGE_DT_YUV420CSPS_8 = 28, + TEGRA_IMAGE_DT_YUV420CSPS_10, + TEGRA_IMAGE_DT_YUV422_8, + TEGRA_IMAGE_DT_YUV422_10, + TEGRA_IMAGE_DT_RGB444, + TEGRA_IMAGE_DT_RGB555, + TEGRA_IMAGE_DT_RGB565, + TEGRA_IMAGE_DT_RGB666, + TEGRA_IMAGE_DT_RGB888, + + TEGRA_IMAGE_DT_RAW6 = 40, + TEGRA_IMAGE_DT_RAW7, + TEGRA_IMAGE_DT_RAW8, + TEGRA_IMAGE_DT_RAW10, + TEGRA_IMAGE_DT_RAW12, + TEGRA_IMAGE_DT_RAW14, +}; + +/** + * struct tegra_video_format - Tegra video format description + * + * @img_dt: image data type + * @bit_width: format width in bits per component + * @code: media bus format code + * @bpp: bytes per pixel (when stored in memory) + * @img_fmt: image format + * @fourcc: V4L2 pixel format FCC identifier + */ +struct tegra_video_format { + enum tegra_image_dt img_dt; + unsigned int bit_width; + unsigned int code; + unsigned int bpp; + u32 img_fmt; + u32 fourcc; +}; + +#if defined(CONFIG_ARCH_TEGRA_210_SOC) +extern const struct tegra_vi_soc tegra210_vi_soc; +#endif + +struct v4l2_subdev * +tegra_channel_get_remote_subdev(struct tegra_vi_channel *chan); +int tegra_channel_set_stream(struct tegra_vi_channel *chan, bool on); +void tegra_channel_release_buffers(struct tegra_vi_channel *chan, + enum vb2_buffer_state state); +void tegra_channels_cleanup(struct tegra_vi *vi); +#endif diff --git a/drivers/staging/media/tegra-video/video.c b/drivers/staging/media/tegra-video/video.c new file mode 100644 index 000000000000..30816aa41e81 --- /dev/null +++ b/drivers/staging/media/tegra-video/video.c @@ -0,0 +1,155 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved. + */ + +#include <linux/host1x.h> +#include <linux/module.h> +#include <linux/platform_device.h> + +#include "video.h" + +static void tegra_v4l2_dev_release(struct v4l2_device *v4l2_dev) +{ + struct tegra_video_device *vid; + + vid = container_of(v4l2_dev, struct tegra_video_device, v4l2_dev); + + /* cleanup channels here as all video device nodes are released */ + tegra_channels_cleanup(vid->vi); + + v4l2_device_unregister(v4l2_dev); + media_device_unregister(&vid->media_dev); + media_device_cleanup(&vid->media_dev); + kfree(vid); +} + +static int host1x_video_probe(struct host1x_device *dev) +{ + struct tegra_video_device *vid; + int ret; + + vid = kzalloc(sizeof(*vid), GFP_KERNEL); + if (!vid) + return -ENOMEM; + + dev_set_drvdata(&dev->dev, vid); + + vid->media_dev.dev = &dev->dev; + strscpy(vid->media_dev.model, "NVIDIA Tegra Video Input Device", + sizeof(vid->media_dev.model)); + + media_device_init(&vid->media_dev); + ret = media_device_register(&vid->media_dev); + if (ret < 0) { + dev_err(&dev->dev, + "failed to register media device: %d\n", ret); + goto cleanup; + } + + vid->v4l2_dev.mdev = &vid->media_dev; + vid->v4l2_dev.release = tegra_v4l2_dev_release; + ret = v4l2_device_register(&dev->dev, &vid->v4l2_dev); + if (ret < 0) { + dev_err(&dev->dev, + "V4L2 device registration failed: %d\n", ret); + goto unregister_media; + } + + ret = host1x_device_init(dev); + if (ret < 0) + goto unregister_v4l2; + + /* + * Both vi and csi channels are available now. + * Register v4l2 nodes and create media links for TPG. 
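+ * (host1x_device_init() above has run the host1x client init callbacks, + * e.g. tegra_vi_init(), which allocate and initialize the channel lists + * that tegra_v4l2_nodes_setup_tpg() walks.)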
+ */ + ret = tegra_v4l2_nodes_setup_tpg(vid); + if (ret < 0) { + dev_err(&dev->dev, + "failed to setup tpg graph: %d\n", ret); + goto device_exit; + } + + return 0; + +device_exit: + host1x_device_exit(dev); + /* vi exit ops does not clean channels, so clean them here */ + tegra_channels_cleanup(vid->vi); +unregister_v4l2: + v4l2_device_unregister(&vid->v4l2_dev); +unregister_media: + media_device_unregister(&vid->media_dev); +cleanup: + media_device_cleanup(&vid->media_dev); + kfree(vid); + return ret; +} + +static int host1x_video_remove(struct host1x_device *dev) +{ + struct tegra_video_device *vid = dev_get_drvdata(&dev->dev); + + tegra_v4l2_nodes_cleanup_tpg(vid); + + host1x_device_exit(dev); + + /* This calls v4l2_dev release callback on last reference */ + v4l2_device_put(&vid->v4l2_dev); + + return 0; +} + +static const struct of_device_id host1x_video_subdevs[] = { +#if defined(CONFIG_ARCH_TEGRA_210_SOC) + { .compatible = "nvidia,tegra210-csi", }, + { .compatible = "nvidia,tegra210-vi", }, +#endif + { } +}; + +static struct host1x_driver host1x_video_driver = { + .driver = { + .name = "tegra-video", + }, + .probe = host1x_video_probe, + .remove = host1x_video_remove, + .subdevs = host1x_video_subdevs, +}; + +static struct platform_driver * const drivers[] = { + &tegra_csi_driver, + &tegra_vi_driver, +}; + +static int __init host1x_video_init(void) +{ + int err; + + err = host1x_driver_register(&host1x_video_driver); + if (err < 0) + return err; + + err = platform_register_drivers(drivers, ARRAY_SIZE(drivers)); + if (err < 0) + goto unregister_host1x; + + return 0; + +unregister_host1x: + host1x_driver_unregister(&host1x_video_driver); + return err; +} +module_init(host1x_video_init); + +static void __exit host1x_video_exit(void) +{ + platform_unregister_drivers(drivers, ARRAY_SIZE(drivers)); + host1x_driver_unregister(&host1x_video_driver); +} +module_exit(host1x_video_exit); + +MODULE_AUTHOR("Sowjanya Komatineni <skomatineni@nvidia.com>"); +MODULE_DESCRIPTION("NVIDIA Tegra Host1x Video driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/staging/media/tegra-video/video.h b/drivers/staging/media/tegra-video/video.h new file mode 100644 index 000000000000..fadaf2189dc9 --- /dev/null +++ b/drivers/staging/media/tegra-video/video.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved. 
+ */ + +#ifndef __TEGRA_VIDEO_H__ +#define __TEGRA_VIDEO_H__ + +#include <linux/host1x.h> + +#include <media/media-device.h> +#include <media/v4l2-device.h> + +#include "vi.h" +#include "csi.h" + +struct tegra_video_device { + struct v4l2_device v4l2_dev; + struct media_device media_dev; + struct tegra_vi *vi; + struct tegra_csi *csi; +}; + +int tegra_v4l2_nodes_setup_tpg(struct tegra_video_device *vid); +void tegra_v4l2_nodes_cleanup_tpg(struct tegra_video_device *vid); + +extern struct platform_driver tegra_vi_driver; +extern struct platform_driver tegra_csi_driver; +#endif diff --git a/drivers/tee/Kconfig b/drivers/tee/Kconfig index 8da63f38e6bd..e99d840c2511 100644 --- a/drivers/tee/Kconfig +++ b/drivers/tee/Kconfig @@ -3,6 +3,8 @@ config TEE tristate "Trusted Execution Environment support" depends on HAVE_ARM_SMCCC || COMPILE_TEST || CPU_SUP_AMD + select CRYPTO + select CRYPTO_SHA1 select DMA_SHARED_BUFFER select GENERIC_ALLOCATOR help diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c index cf2367ba08d6..dbed3f480dc0 100644 --- a/drivers/tee/optee/call.c +++ b/drivers/tee/optee/call.c @@ -233,9 +233,13 @@ int optee_open_session(struct tee_context *ctx, msg_arg->params[1].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT | OPTEE_MSG_ATTR_META; memcpy(&msg_arg->params[0].u.value, arg->uuid, sizeof(arg->uuid)); - memcpy(&msg_arg->params[1].u.value, arg->uuid, sizeof(arg->clnt_uuid)); msg_arg->params[1].u.value.c = arg->clnt_login; + rc = tee_session_calc_client_uuid((uuid_t *)&msg_arg->params[1].u.value, + arg->clnt_login, arg->clnt_uuid); + if (rc) + goto out; + rc = optee_to_msg_param(msg_arg->params + 2, arg->num_params, param); if (rc) goto out; diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c index 6aec502c495c..64637e09a095 100644 --- a/drivers/tee/tee_core.c +++ b/drivers/tee/tee_core.c @@ -6,18 +6,33 @@ #define pr_fmt(fmt) "%s: " fmt, __func__ #include <linux/cdev.h> +#include <linux/cred.h> #include <linux/fs.h> #include <linux/idr.h> #include <linux/module.h> #include <linux/slab.h> #include <linux/tee_drv.h> #include <linux/uaccess.h> +#include <crypto/hash.h> +#include <crypto/sha.h> #include "tee_private.h" #define TEE_NUM_DEVICES 32 #define TEE_IOCTL_PARAM_SIZE(x) (sizeof(struct tee_param) * (x)) +#define TEE_UUID_NS_NAME_SIZE 128 + +/* + * TEE Client UUID name space identifier (UUIDv4) + * + * Value here is random UUID that is allocated as name space identifier for + * forming Client UUID's for TEE environment using UUIDv5 scheme. + */ +static const uuid_t tee_client_uuid_ns = UUID_INIT(0x58ac9ca0, 0x2086, 0x4683, + 0xa1, 0xb8, 0xec, 0x4b, + 0xc0, 0x8e, 0x01, 0xb6); + /* * Unprivileged devices in the lower half range and privileged devices in * the upper half range. @@ -110,6 +125,143 @@ static int tee_release(struct inode *inode, struct file *filp) return 0; } +/** + * uuid_v5() - Calculate UUIDv5 + * @uuid: Resulting UUID + * @ns: Name space ID for UUIDv5 function + * @name: Name for UUIDv5 function + * @size: Size of name + * + * UUIDv5 is specific in RFC 4122. + * + * This implements section (for SHA-1): + * 4.3. 
Algorithm for Creating a Name-Based UUID + */ +static int uuid_v5(uuid_t *uuid, const uuid_t *ns, const void *name, + size_t size) +{ + unsigned char hash[SHA1_DIGEST_SIZE]; + struct crypto_shash *shash = NULL; + struct shash_desc *desc = NULL; + int rc; + + shash = crypto_alloc_shash("sha1", 0, 0); + if (IS_ERR(shash)) { + rc = PTR_ERR(shash); + pr_err("shash(sha1) allocation failed\n"); + return rc; + } + + desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(shash), + GFP_KERNEL); + if (!desc) { + rc = -ENOMEM; + goto out_free_shash; + } + + desc->tfm = shash; + + rc = crypto_shash_init(desc); + if (rc < 0) + goto out_free_desc; + + rc = crypto_shash_update(desc, (const u8 *)ns, sizeof(*ns)); + if (rc < 0) + goto out_free_desc; + + rc = crypto_shash_update(desc, (const u8 *)name, size); + if (rc < 0) + goto out_free_desc; + + rc = crypto_shash_final(desc, hash); + if (rc < 0) + goto out_free_desc; + + memcpy(uuid->b, hash, UUID_SIZE); + + /* Tag for version 5 */ + uuid->b[6] = (hash[6] & 0x0F) | 0x50; + uuid->b[8] = (hash[8] & 0x3F) | 0x80; + +out_free_desc: + kfree(desc); + +out_free_shash: + crypto_free_shash(shash); + return rc; +} + +int tee_session_calc_client_uuid(uuid_t *uuid, u32 connection_method, + const u8 connection_data[TEE_IOCTL_UUID_LEN]) +{ + gid_t ns_grp = (gid_t)-1; + kgid_t grp = INVALID_GID; + char *name = NULL; + int name_len; + int rc; + + if (connection_method == TEE_IOCTL_LOGIN_PUBLIC) { + /* Nil UUID to be passed to TEE environment */ + uuid_copy(uuid, &uuid_null); + return 0; + } + + /* + * In Linux environment client UUID is based on UUIDv5. + * + * Determine client UUID with following semantics for 'name': + * + * For TEEC_LOGIN_USER: + * uid=<uid> + * + * For TEEC_LOGIN_GROUP: + * gid=<gid> + * + */ + + name = kzalloc(TEE_UUID_NS_NAME_SIZE, GFP_KERNEL); + if (!name) + return -ENOMEM; + + switch (connection_method) { + case TEE_IOCTL_LOGIN_USER: + name_len = snprintf(name, TEE_UUID_NS_NAME_SIZE, "uid=%x", + current_euid().val); + if (name_len >= TEE_UUID_NS_NAME_SIZE) { + rc = -E2BIG; + goto out_free_name; + } + break; + + case TEE_IOCTL_LOGIN_GROUP: + memcpy(&ns_grp, connection_data, sizeof(gid_t)); + grp = make_kgid(current_user_ns(), ns_grp); + if (!gid_valid(grp) || !in_egroup_p(grp)) { + rc = -EPERM; + goto out_free_name; + } + + name_len = snprintf(name, TEE_UUID_NS_NAME_SIZE, "gid=%x", + grp.val); + if (name_len >= TEE_UUID_NS_NAME_SIZE) { + rc = -E2BIG; + goto out_free_name; + } + break; + + default: + rc = -EINVAL; + goto out_free_name; + } + + rc = uuid_v5(uuid, &tee_client_uuid_ns, name, name_len); +out_free_name: + kfree(name); + + return rc; +} +EXPORT_SYMBOL_GPL(tee_session_calc_client_uuid); + static int tee_ioctl_version(struct tee_context *ctx, struct tee_ioctl_version_data __user *uvers) { @@ -333,6 +485,13 @@ static int tee_ioctl_open_session(struct tee_context *ctx, goto out; } + if (arg.clnt_login >= TEE_IOCTL_LOGIN_REE_KERNEL_MIN && + arg.clnt_login <= TEE_IOCTL_LOGIN_REE_KERNEL_MAX) { + pr_debug("login method not allowed for user-space client\n"); + rc = -EPERM; + goto out; + } + rc = ctx->teedev->desc->ops->open_session(ctx, &arg, params); if (rc) goto out; diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c index bd679b72bd05..827ac3d0fea9 100644 --- a/drivers/tee/tee_shm.c +++ b/drivers/tee/tee_shm.c @@ -9,6 +9,7 @@ #include <linux/sched.h> #include <linux/slab.h> #include <linux/tee_drv.h> +#include <linux/uio.h> #include "tee_private.h" static void tee_shm_release(struct tee_shm *shm) @@ -161,8 +162,7 @@ struct tee_shm 
*tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags) } } - if (ctx) - teedev_ctx_get(ctx); + teedev_ctx_get(ctx); return shm; err_rem: @@ -185,14 +185,15 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr, size_t length, u32 flags) { struct tee_device *teedev = ctx->teedev; - const u32 req_flags = TEE_SHM_DMA_BUF | TEE_SHM_USER_MAPPED; + const u32 req_user_flags = TEE_SHM_DMA_BUF | TEE_SHM_USER_MAPPED; + const u32 req_kernel_flags = TEE_SHM_DMA_BUF | TEE_SHM_KERNEL_MAPPED; struct tee_shm *shm; void *ret; int rc; int num_pages; unsigned long start; - if (flags != req_flags) + if (flags != req_user_flags && flags != req_kernel_flags) return ERR_PTR(-ENOTSUPP); if (!tee_device_get(teedev)) @@ -226,7 +227,27 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr, goto err; } - rc = get_user_pages_fast(start, num_pages, FOLL_WRITE, shm->pages); + if (flags & TEE_SHM_USER_MAPPED) { + rc = get_user_pages_fast(start, num_pages, FOLL_WRITE, + shm->pages); + } else { + struct kvec *kiov; + int i; + + kiov = kcalloc(num_pages, sizeof(*kiov), GFP_KERNEL); + if (!kiov) { + ret = ERR_PTR(-ENOMEM); + goto err; + } + + for (i = 0; i < num_pages; i++) { + kiov[i].iov_base = (void *)(start + i * PAGE_SIZE); + kiov[i].iov_len = PAGE_SIZE; + } + + rc = get_kernel_pages(kiov, num_pages, 0, shm->pages); + kfree(kiov); + } if (rc > 0) shm->num_pages = rc; if (rc != num_pages) { diff --git a/drivers/thermal/imx_sc_thermal.c b/drivers/thermal/imx_sc_thermal.c index a8723b1eb8b0..8938ea81a525 100644 --- a/drivers/thermal/imx_sc_thermal.c +++ b/drivers/thermal/imx_sc_thermal.c @@ -3,9 +3,9 @@ * Copyright 2018-2020 NXP. */ +#include <dt-bindings/firmware/imx/rsrc.h> #include <linux/err.h> #include <linux/firmware/imx/sci.h> -#include <linux/firmware/imx/types.h> #include <linux/module.h> #include <linux/of.h> #include <linux/of_device.h> diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h index d39ac997dda8..3a7871130112 100644 --- a/include/asm-generic/io.h +++ b/include/asm-generic/io.h @@ -448,17 +448,15 @@ static inline void writesq(volatile void __iomem *addr, const void *buffer, #define IO_SPACE_LIMIT 0xffff #endif -#include <linux/logic_pio.h> - /* * {in,out}{b,w,l}() access little endian I/O. {in,out}{b,w,l}_p() can be * implemented on hardware that needs an additional delay for I/O accesses to * take effect. 
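* The MMIO-based helpers below are defined as _inb()/_outb() and friends; * the classic inb()/outb() names are mapped onto them further down, after * linux/logic_pio.h has had a chance to provide its own definitions.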
*/ -#ifndef inb -#define inb inb -static inline u8 inb(unsigned long addr) +#if !defined(inb) && !defined(_inb) +#define _inb _inb +static inline u16 _inb(unsigned long addr) { u8 val; @@ -469,9 +467,9 @@ static inline u8 inb(unsigned long addr) } #endif -#ifndef inw -#define inw inw -static inline u16 inw(unsigned long addr) +#if !defined(inw) && !defined(_inw) +#define _inw _inw +static inline u16 _inw(unsigned long addr) { u16 val; @@ -482,9 +480,9 @@ static inline u16 inw(unsigned long addr) } #endif -#ifndef inl -#define inl inl -static inline u32 inl(unsigned long addr) +#if !defined(inl) && !defined(_inl) +#define _inl _inl +static inline u16 _inl(unsigned long addr) { u32 val; @@ -495,9 +493,9 @@ static inline u32 inl(unsigned long addr) } #endif -#ifndef outb -#define outb outb -static inline void outb(u8 value, unsigned long addr) +#if !defined(outb) && !defined(_outb) +#define _outb _outb +static inline void _outb(u8 value, unsigned long addr) { __io_pbw(); __raw_writeb(value, PCI_IOBASE + addr); @@ -505,9 +503,9 @@ static inline void outb(u8 value, unsigned long addr) } #endif -#ifndef outw -#define outw outw -static inline void outw(u16 value, unsigned long addr) +#if !defined(outw) && !defined(_outw) +#define _outw _outw +static inline void _outw(u16 value, unsigned long addr) { __io_pbw(); __raw_writew(cpu_to_le16(value), PCI_IOBASE + addr); @@ -515,9 +513,9 @@ static inline void outw(u16 value, unsigned long addr) } #endif -#ifndef outl -#define outl outl -static inline void outl(u32 value, unsigned long addr) +#if !defined(outl) && !defined(_outl) +#define _outl _outl +static inline void _outl(u32 value, unsigned long addr) { __io_pbw(); __raw_writel(cpu_to_le32(value), PCI_IOBASE + addr); @@ -525,6 +523,32 @@ static inline void outl(u32 value, unsigned long addr) } #endif +#include <linux/logic_pio.h> + +#ifndef inb +#define inb _inb +#endif + +#ifndef inw +#define inw _inw +#endif + +#ifndef inl +#define inl _inl +#endif + +#ifndef outb +#define outb _outb +#endif + +#ifndef outw +#define outw _outw +#endif + +#ifndef outl +#define outl _outl +#endif + #ifndef inb_p #define inb_p inb_p static inline u8 inb_p(unsigned long addr) diff --git a/include/dt-bindings/clock/r8a7742-cpg-mssr.h b/include/dt-bindings/clock/r8a7742-cpg-mssr.h new file mode 100644 index 000000000000..e68191c24881 --- /dev/null +++ b/include/dt-bindings/clock/r8a7742-cpg-mssr.h @@ -0,0 +1,42 @@ +/* SPDX-License-Identifier: GPL-2.0+ + * + * Copyright (C) 2020 Renesas Electronics Corp. 
+ */ +#ifndef __DT_BINDINGS_CLOCK_R8A7742_CPG_MSSR_H__ +#define __DT_BINDINGS_CLOCK_R8A7742_CPG_MSSR_H__ + +#include <dt-bindings/clock/renesas-cpg-mssr.h> + +/* r8a7742 CPG Core Clocks */ +#define R8A7742_CLK_Z 0 +#define R8A7742_CLK_Z2 1 +#define R8A7742_CLK_ZG 2 +#define R8A7742_CLK_ZTR 3 +#define R8A7742_CLK_ZTRD2 4 +#define R8A7742_CLK_ZT 5 +#define R8A7742_CLK_ZX 6 +#define R8A7742_CLK_ZS 7 +#define R8A7742_CLK_HP 8 +#define R8A7742_CLK_B 9 +#define R8A7742_CLK_LB 10 +#define R8A7742_CLK_P 11 +#define R8A7742_CLK_CL 12 +#define R8A7742_CLK_M2 13 +#define R8A7742_CLK_ZB3 14 +#define R8A7742_CLK_ZB3D2 15 +#define R8A7742_CLK_DDR 16 +#define R8A7742_CLK_SDH 17 +#define R8A7742_CLK_SD0 18 +#define R8A7742_CLK_SD1 19 +#define R8A7742_CLK_SD2 20 +#define R8A7742_CLK_SD3 21 +#define R8A7742_CLK_MMC0 22 +#define R8A7742_CLK_MMC1 23 +#define R8A7742_CLK_MP 24 +#define R8A7742_CLK_QSPI 25 +#define R8A7742_CLK_CP 26 +#define R8A7742_CLK_RCAN 27 +#define R8A7742_CLK_R 28 +#define R8A7742_CLK_OSC 29 + +#endif /* __DT_BINDINGS_CLOCK_R8A7742_CPG_MSSR_H__ */ diff --git a/include/dt-bindings/clock/tegra114-car.h b/include/dt-bindings/clock/tegra114-car.h index df59aaf5bf34..a93426f008ac 100644 --- a/include/dt-bindings/clock/tegra114-car.h +++ b/include/dt-bindings/clock/tegra114-car.h @@ -272,10 +272,10 @@ #define TEGRA114_CLK_AUDIO3 242 #define TEGRA114_CLK_AUDIO4 243 #define TEGRA114_CLK_SPDIF 244 -#define TEGRA114_CLK_CLK_OUT_1 245 -#define TEGRA114_CLK_CLK_OUT_2 246 -#define TEGRA114_CLK_CLK_OUT_3 247 -#define TEGRA114_CLK_BLINK 248 +/* 245 */ +/* 246 */ +/* 247 */ +/* 248 */ #define TEGRA114_CLK_OSC 249 /* 250 */ /* 251 */ @@ -335,9 +335,9 @@ #define TEGRA114_CLK_AUDIO3_MUX 303 #define TEGRA114_CLK_AUDIO4_MUX 304 #define TEGRA114_CLK_SPDIF_MUX 305 -#define TEGRA114_CLK_CLK_OUT_1_MUX 306 -#define TEGRA114_CLK_CLK_OUT_2_MUX 307 -#define TEGRA114_CLK_CLK_OUT_3_MUX 308 +/* 306 */ +/* 307 */ +/* 308 */ #define TEGRA114_CLK_DSIA_MUX 309 #define TEGRA114_CLK_DSIB_MUX 310 #define TEGRA114_CLK_XUSB_SS_DIV2 311 diff --git a/include/dt-bindings/clock/tegra124-car-common.h b/include/dt-bindings/clock/tegra124-car-common.h index 2a9acd592bff..c59f9de01b4d 100644 --- a/include/dt-bindings/clock/tegra124-car-common.h +++ b/include/dt-bindings/clock/tegra124-car-common.h @@ -271,10 +271,10 @@ #define TEGRA124_CLK_AUDIO3 242 #define TEGRA124_CLK_AUDIO4 243 #define TEGRA124_CLK_SPDIF 244 -#define TEGRA124_CLK_CLK_OUT_1 245 -#define TEGRA124_CLK_CLK_OUT_2 246 -#define TEGRA124_CLK_CLK_OUT_3 247 -#define TEGRA124_CLK_BLINK 248 +/* 245 */ +/* 246 */ +/* 247 */ +/* 248 */ #define TEGRA124_CLK_OSC 249 /* 250 */ /* 251 */ @@ -334,9 +334,9 @@ #define TEGRA124_CLK_AUDIO3_MUX 303 #define TEGRA124_CLK_AUDIO4_MUX 304 #define TEGRA124_CLK_SPDIF_MUX 305 -#define TEGRA124_CLK_CLK_OUT_1_MUX 306 -#define TEGRA124_CLK_CLK_OUT_2_MUX 307 -#define TEGRA124_CLK_CLK_OUT_3_MUX 308 +/* 306 */ +/* 307 */ +/* 308 */ /* 309 */ /* 310 */ #define TEGRA124_CLK_SOR0_LVDS 311 /* deprecated */ diff --git a/include/dt-bindings/clock/tegra20-car.h b/include/dt-bindings/clock/tegra20-car.h index b21a0eb32921..fe541f627965 100644 --- a/include/dt-bindings/clock/tegra20-car.h +++ b/include/dt-bindings/clock/tegra20-car.h @@ -131,7 +131,7 @@ #define TEGRA20_CLK_CCLK 108 #define TEGRA20_CLK_HCLK 109 #define TEGRA20_CLK_PCLK 110 -#define TEGRA20_CLK_BLINK 111 +/* 111 */ #define TEGRA20_CLK_PLL_A 112 #define TEGRA20_CLK_PLL_A_OUT0 113 #define TEGRA20_CLK_PLL_C 114 diff --git a/include/dt-bindings/clock/tegra210-car.h 
b/include/dt-bindings/clock/tegra210-car.h index 7a8f10b9a66d..ae62cd72da67 100644 --- a/include/dt-bindings/clock/tegra210-car.h +++ b/include/dt-bindings/clock/tegra210-car.h @@ -306,10 +306,10 @@ #define TEGRA210_CLK_AUDIO3 274 #define TEGRA210_CLK_AUDIO4 275 #define TEGRA210_CLK_SPDIF 276 -#define TEGRA210_CLK_CLK_OUT_1 277 -#define TEGRA210_CLK_CLK_OUT_2 278 -#define TEGRA210_CLK_CLK_OUT_3 279 -#define TEGRA210_CLK_BLINK 280 +/* 277 */ +/* 278 */ +/* 279 */ +/* 280 */ #define TEGRA210_CLK_SOR0_LVDS 281 /* deprecated */ #define TEGRA210_CLK_SOR0_OUT 281 #define TEGRA210_CLK_SOR1_OUT 282 @@ -358,7 +358,7 @@ #define TEGRA210_CLK_PLL_A_OUT0_OUT_ADSP 324 /* 325 */ #define TEGRA210_CLK_OSC 326 -/* 327 */ +#define TEGRA210_CLK_CSI_TPG 327 /* 328 */ /* 329 */ /* 330 */ @@ -388,9 +388,9 @@ #define TEGRA210_CLK_AUDIO3_MUX 353 #define TEGRA210_CLK_AUDIO4_MUX 354 #define TEGRA210_CLK_SPDIF_MUX 355 -#define TEGRA210_CLK_CLK_OUT_1_MUX 356 -#define TEGRA210_CLK_CLK_OUT_2_MUX 357 -#define TEGRA210_CLK_CLK_OUT_3_MUX 358 +/* 356 */ +/* 357 */ +/* 358 */ #define TEGRA210_CLK_DSIA_MUX 359 #define TEGRA210_CLK_DSIB_MUX 360 /* 361 */ diff --git a/include/dt-bindings/clock/tegra30-car.h b/include/dt-bindings/clock/tegra30-car.h index 7b542c10fc27..f193663e6f28 100644 --- a/include/dt-bindings/clock/tegra30-car.h +++ b/include/dt-bindings/clock/tegra30-car.h @@ -232,11 +232,11 @@ #define TEGRA30_CLK_AUDIO3 204 #define TEGRA30_CLK_AUDIO4 205 #define TEGRA30_CLK_SPDIF 206 -#define TEGRA30_CLK_CLK_OUT_1 207 /* (extern1) */ -#define TEGRA30_CLK_CLK_OUT_2 208 /* (extern2) */ -#define TEGRA30_CLK_CLK_OUT_3 209 /* (extern3) */ +/* 207 */ +/* 208 */ +/* 209 */ #define TEGRA30_CLK_SCLK 210 -#define TEGRA30_CLK_BLINK 211 +/* 211 */ #define TEGRA30_CLK_CCLK_G 212 #define TEGRA30_CLK_CCLK_LP 213 #define TEGRA30_CLK_TWD 214 @@ -262,9 +262,9 @@ /* 297 */ /* 298 */ /* 299 */ -#define TEGRA30_CLK_CLK_OUT_1_MUX 300 -#define TEGRA30_CLK_CLK_OUT_2_MUX 301 -#define TEGRA30_CLK_CLK_OUT_3_MUX 302 +/* 300 */ +/* 301 */ +/* 302 */ #define TEGRA30_CLK_AUDIO0_MUX 303 #define TEGRA30_CLK_AUDIO1_MUX 304 #define TEGRA30_CLK_AUDIO2_MUX 305 diff --git a/include/dt-bindings/firmware/imx/rsrc.h b/include/dt-bindings/firmware/imx/rsrc.h index 4e61f6485097..54278d5c1856 100644 --- a/include/dt-bindings/firmware/imx/rsrc.h +++ b/include/dt-bindings/firmware/imx/rsrc.h @@ -547,4 +547,88 @@ #define IMX_SC_R_ATTESTATION 545 #define IMX_SC_R_LAST 546 +/* + * Defines for SC PM CLK + */ +#define IMX_SC_PM_CLK_SLV_BUS 0 /* Slave bus clock */ +#define IMX_SC_PM_CLK_MST_BUS 1 /* Master bus clock */ +#define IMX_SC_PM_CLK_PER 2 /* Peripheral clock */ +#define IMX_SC_PM_CLK_PHY 3 /* Phy clock */ +#define IMX_SC_PM_CLK_MISC 4 /* Misc clock */ +#define IMX_SC_PM_CLK_MISC0 0 /* Misc 0 clock */ +#define IMX_SC_PM_CLK_MISC1 1 /* Misc 1 clock */ +#define IMX_SC_PM_CLK_MISC2 2 /* Misc 2 clock */ +#define IMX_SC_PM_CLK_MISC3 3 /* Misc 3 clock */ +#define IMX_SC_PM_CLK_MISC4 4 /* Misc 4 clock */ +#define IMX_SC_PM_CLK_CPU 2 /* CPU clock */ +#define IMX_SC_PM_CLK_PLL 4 /* PLL */ +#define IMX_SC_PM_CLK_BYPASS 4 /* Bypass clock */ + +/* + * Defines for SC CONTROL + */ +#define IMX_SC_C_TEMP 0 +#define IMX_SC_C_TEMP_HI 1 +#define IMX_SC_C_TEMP_LOW 2 +#define IMX_SC_C_PXL_LINK_MST1_ADDR 3 +#define IMX_SC_C_PXL_LINK_MST2_ADDR 4 +#define IMX_SC_C_PXL_LINK_MST_ENB 5 +#define IMX_SC_C_PXL_LINK_MST1_ENB 6 +#define IMX_SC_C_PXL_LINK_MST2_ENB 7 +#define IMX_SC_C_PXL_LINK_SLV1_ADDR 8 +#define IMX_SC_C_PXL_LINK_SLV2_ADDR 9 +#define IMX_SC_C_PXL_LINK_MST_VLD 10 +#define 
IMX_SC_C_PXL_LINK_MST1_VLD 11 +#define IMX_SC_C_PXL_LINK_MST2_VLD 12 +#define IMX_SC_C_SINGLE_MODE 13 +#define IMX_SC_C_ID 14 +#define IMX_SC_C_PXL_CLK_POLARITY 15 +#define IMX_SC_C_LINESTATE 16 +#define IMX_SC_C_PCIE_G_RST 17 +#define IMX_SC_C_PCIE_BUTTON_RST 18 +#define IMX_SC_C_PCIE_PERST 19 +#define IMX_SC_C_PHY_RESET 20 +#define IMX_SC_C_PXL_LINK_RATE_CORRECTION 21 +#define IMX_SC_C_PANIC 22 +#define IMX_SC_C_PRIORITY_GROUP 23 +#define IMX_SC_C_TXCLK 24 +#define IMX_SC_C_CLKDIV 25 +#define IMX_SC_C_DISABLE_50 26 +#define IMX_SC_C_DISABLE_125 27 +#define IMX_SC_C_SEL_125 28 +#define IMX_SC_C_MODE 29 +#define IMX_SC_C_SYNC_CTRL0 30 +#define IMX_SC_C_KACHUNK_CNT 31 +#define IMX_SC_C_KACHUNK_SEL 32 +#define IMX_SC_C_SYNC_CTRL1 33 +#define IMX_SC_C_DPI_RESET 34 +#define IMX_SC_C_MIPI_RESET 35 +#define IMX_SC_C_DUAL_MODE 36 +#define IMX_SC_C_VOLTAGE 37 +#define IMX_SC_C_PXL_LINK_SEL 38 +#define IMX_SC_C_OFS_SEL 39 +#define IMX_SC_C_OFS_AUDIO 40 +#define IMX_SC_C_OFS_PERIPH 41 +#define IMX_SC_C_OFS_IRQ 42 +#define IMX_SC_C_RST0 43 +#define IMX_SC_C_RST1 44 +#define IMX_SC_C_SEL0 45 +#define IMX_SC_C_CALIB0 46 +#define IMX_SC_C_CALIB1 47 +#define IMX_SC_C_CALIB2 48 +#define IMX_SC_C_IPG_DEBUG 49 +#define IMX_SC_C_IPG_DOZE 50 +#define IMX_SC_C_IPG_WAIT 51 +#define IMX_SC_C_IPG_STOP 52 +#define IMX_SC_C_IPG_STOP_MODE 53 +#define IMX_SC_C_IPG_STOP_ACK 54 +#define IMX_SC_C_SYNC_CTRL 55 +#define IMX_SC_C_OFS_AUDIO_ALT 56 +#define IMX_SC_C_DSP_BYP 57 +#define IMX_SC_C_CLK_GEN_EN 58 +#define IMX_SC_C_INTF_SEL 59 +#define IMX_SC_C_RXC_DLY 60 +#define IMX_SC_C_TIMER_SEL 61 +#define IMX_SC_C_LAST 62 + #endif /* __DT_BINDINGS_RSCRC_IMX_H */ diff --git a/include/dt-bindings/power/meson-gxbb-power.h b/include/dt-bindings/power/meson-gxbb-power.h new file mode 100644 index 000000000000..1262dac696c0 --- /dev/null +++ b/include/dt-bindings/power/meson-gxbb-power.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: (GPL-2.0+ or MIT) */ +/* + * Copyright (c) 2019 BayLibre, SAS + * Author: Neil Armstrong <narmstrong@baylibre.com> + */ + +#ifndef _DT_BINDINGS_MESON_GXBB_POWER_H +#define _DT_BINDINGS_MESON_GXBB_POWER_H + +#define PWRC_GXBB_VPU_ID 0 +#define PWRC_GXBB_ETHERNET_MEM_ID 1 + +#endif diff --git a/include/dt-bindings/power/meson8-power.h b/include/dt-bindings/power/meson8-power.h new file mode 100644 index 000000000000..dd8b2ddb82a7 --- /dev/null +++ b/include/dt-bindings/power/meson8-power.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: (GPL-2.0+ or MIT) */ +/* + * Copyright (c) 2019 Martin Blumenstingl <martin.blumenstingl@googlemail.com> + */ + +#ifndef _DT_BINDINGS_MESON8_POWER_H +#define _DT_BINDINGS_MESON8_POWER_H + +#define PWRC_MESON8_VPU_ID 0 +#define PWRC_MESON8_ETHERNET_MEM_ID 1 +#define PWRC_MESON8_AUDIO_DSP_MEM_ID 2 + +#endif /* _DT_BINDINGS_MESON8_POWER_H */ diff --git a/include/dt-bindings/power/qcom-rpmpd.h b/include/dt-bindings/power/qcom-rpmpd.h index 3f74096d5a7c..dc146e44228b 100644 --- a/include/dt-bindings/power/qcom-rpmpd.h +++ b/include/dt-bindings/power/qcom-rpmpd.h @@ -28,6 +28,18 @@ #define SM8150_MMCX 9 #define SM8150_MMCX_AO 10 +/* SM8250 Power Domain Indexes */ +#define SM8250_CX 0 +#define SM8250_CX_AO 1 +#define SM8250_EBI 2 +#define SM8250_GFX 3 +#define SM8250_LCX 4 +#define SM8250_LMX 5 +#define SM8250_MMCX 6 +#define SM8250_MMCX_AO 7 +#define SM8250_MX 8 +#define SM8250_MX_AO 9 + /* SC7180 Power Domain Indexes */ #define SC7180_CX 0 #define SC7180_CX_AO 1 diff --git a/include/dt-bindings/power/r8a7742-sysc.h b/include/dt-bindings/power/r8a7742-sysc.h new file mode 100644 
diff --git a/include/dt-bindings/power/r8a7742-sysc.h b/include/dt-bindings/power/r8a7742-sysc.h
new file mode 100644
index 000000000000..1b1bd3cf95db
--- /dev/null
+++ b/include/dt-bindings/power/r8a7742-sysc.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ * Copyright (C) 2020 Renesas Electronics Corp.
+ */
+#ifndef __DT_BINDINGS_POWER_R8A7742_SYSC_H__
+#define __DT_BINDINGS_POWER_R8A7742_SYSC_H__
+
+/*
+ * These power domain indices match the numbers of the interrupt bits
+ * representing the power areas in the various Interrupt Registers
+ * (e.g. SYSCISR, Interrupt Status Register)
+ */
+
+#define R8A7742_PD_CA15_CPU0 0
+#define R8A7742_PD_CA15_CPU1 1
+#define R8A7742_PD_CA15_CPU2 2
+#define R8A7742_PD_CA15_CPU3 3
+#define R8A7742_PD_CA7_CPU0 5
+#define R8A7742_PD_CA7_CPU1 6
+#define R8A7742_PD_CA7_CPU2 7
+#define R8A7742_PD_CA7_CPU3 8
+#define R8A7742_PD_CA15_SCU 12
+#define R8A7742_PD_RGX 20
+#define R8A7742_PD_CA7_SCU 21
+
+/* Always-on power area */
+#define R8A7742_PD_ALWAYS_ON 32
+
+#endif /* __DT_BINDINGS_POWER_R8A7742_SYSC_H__ */
diff --git a/include/dt-bindings/reset/amlogic,meson-gxbb-reset.h b/include/dt-bindings/reset/amlogic,meson-gxbb-reset.h
index ea5058618863..883bfd3bcbad 100644
--- a/include/dt-bindings/reset/amlogic,meson-gxbb-reset.h
+++ b/include/dt-bindings/reset/amlogic,meson-gxbb-reset.h
@@ -69,7 +69,7 @@
 #define RESET_SYS_CPU_L2 58
 #define RESET_SYS_CPU_P 59
 #define RESET_SYS_CPU_MBIST 60
-/* 61 */
+#define RESET_ACODEC 61
 /* 62 */
 /* 63 */
 /* RESET2 */
diff --git a/include/dt-bindings/reset/imx8mp-reset.h b/include/dt-bindings/reset/imx8mp-reset.h
new file mode 100644
index 000000000000..2e8c9104b666
--- /dev/null
+++ b/include/dt-bindings/reset/imx8mp-reset.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright 2020 NXP
+ */
+
+#ifndef DT_BINDING_RESET_IMX8MP_H
+#define DT_BINDING_RESET_IMX8MP_H
+
+#define IMX8MP_RESET_A53_CORE_POR_RESET0 0
+#define IMX8MP_RESET_A53_CORE_POR_RESET1 1
+#define IMX8MP_RESET_A53_CORE_POR_RESET2 2
+#define IMX8MP_RESET_A53_CORE_POR_RESET3 3
+#define IMX8MP_RESET_A53_CORE_RESET0 4
+#define IMX8MP_RESET_A53_CORE_RESET1 5
+#define IMX8MP_RESET_A53_CORE_RESET2 6
+#define IMX8MP_RESET_A53_CORE_RESET3 7
+#define IMX8MP_RESET_A53_DBG_RESET0 8
+#define IMX8MP_RESET_A53_DBG_RESET1 9
+#define IMX8MP_RESET_A53_DBG_RESET2 10
+#define IMX8MP_RESET_A53_DBG_RESET3 11
+#define IMX8MP_RESET_A53_ETM_RESET0 12
+#define IMX8MP_RESET_A53_ETM_RESET1 13
+#define IMX8MP_RESET_A53_ETM_RESET2 14
+#define IMX8MP_RESET_A53_ETM_RESET3 15
+#define IMX8MP_RESET_A53_SOC_DBG_RESET 16
+#define IMX8MP_RESET_A53_L2RESET 17
+#define IMX8MP_RESET_SW_NON_SCLR_M7C_RST 18
+#define IMX8MP_RESET_OTG1_PHY_RESET 19
+#define IMX8MP_RESET_OTG2_PHY_RESET 20
+#define IMX8MP_RESET_SUPERMIX_RESET 21
+#define IMX8MP_RESET_AUDIOMIX_RESET 22
+#define IMX8MP_RESET_MLMIX_RESET 23
+#define IMX8MP_RESET_PCIEPHY 24
+#define IMX8MP_RESET_PCIEPHY_PERST 25
+#define IMX8MP_RESET_PCIE_CTRL_APPS_EN 26
+#define IMX8MP_RESET_PCIE_CTRL_APPS_TURNOFF 27
+#define IMX8MP_RESET_HDMI_PHY_APB_RESET 28
+#define IMX8MP_RESET_MEDIA_RESET 29
+#define IMX8MP_RESET_GPU2D_RESET 30
+#define IMX8MP_RESET_GPU3D_RESET 31
+#define IMX8MP_RESET_GPU_RESET 32
+#define IMX8MP_RESET_VPU_RESET 33
+#define IMX8MP_RESET_VPU_G1_RESET 34
+#define IMX8MP_RESET_VPU_G2_RESET 35
+#define IMX8MP_RESET_VPUVC8KE_RESET 36
+#define IMX8MP_RESET_NOC_RESET 37
+
+#define IMX8MP_RESET_NUM 38
+
+#endif
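The reset identifiers added above are referenced from a device node's resets property; the consuming driver only ever sees a reset_control handle obtained through the generic reset consumer API. A rough, hypothetical sketch of that driver side (DT wiring such as resets = <&src IMX8MP_RESET_PCIEPHY> is assumed, error handling trimmed):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/reset.h>

/* Hypothetical probe fragment: pulse the reset line referenced in DT. */
static int example_pulse_reset(struct device *dev)
{
	struct reset_control *rst;
	int ret;

	/* NULL id: take the first (or only) entry in the resets property. */
	rst = devm_reset_control_get_exclusive(dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	ret = reset_control_assert(rst);
	if (ret)
		return ret;

	return reset_control_deassert(rst);
}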
diff --git a/include/dt-bindings/reset/imx8mq-reset.h b/include/dt-bindings/reset/imx8mq-reset.h
index 9a301082d361..a5b570737582 100644
--- a/include/dt-bindings/reset/imx8mq-reset.h
+++ b/include/dt-bindings/reset/imx8mq-reset.h
@@ -28,36 +28,36 @@
 #define IMX8MQ_RESET_A53_L2RESET 17
 #define IMX8MQ_RESET_SW_NON_SCLR_M4C_RST 18
 #define IMX8MQ_RESET_OTG1_PHY_RESET 19
-#define IMX8MQ_RESET_OTG2_PHY_RESET 20
-#define IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N 21
-#define IMX8MQ_RESET_MIPI_DSI_RESET_N 22
-#define IMX8MQ_RESET_MIPI_DSI_DPI_RESET_N 23
-#define IMX8MQ_RESET_MIPI_DSI_ESC_RESET_N 24
-#define IMX8MQ_RESET_MIPI_DSI_PCLK_RESET_N 25
-#define IMX8MQ_RESET_PCIEPHY 26
-#define IMX8MQ_RESET_PCIEPHY_PERST 27
-#define IMX8MQ_RESET_PCIE_CTRL_APPS_EN 28
-#define IMX8MQ_RESET_PCIE_CTRL_APPS_TURNOFF 29
-#define IMX8MQ_RESET_HDMI_PHY_APB_RESET 30 /* i.MX8MM does NOT support */
+#define IMX8MQ_RESET_OTG2_PHY_RESET 20 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N 21 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_DSI_RESET_N 22 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_DSI_DPI_RESET_N 23 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_DSI_ESC_RESET_N 24 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_DSI_PCLK_RESET_N 25 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIEPHY 26 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIEPHY_PERST 27 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIE_CTRL_APPS_EN 28 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIE_CTRL_APPS_TURNOFF 29 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_HDMI_PHY_APB_RESET 30 /* i.MX8MM/i.MX8MN does NOT support */
 #define IMX8MQ_RESET_DISP_RESET 31
 #define IMX8MQ_RESET_GPU_RESET 32
-#define IMX8MQ_RESET_VPU_RESET 33
-#define IMX8MQ_RESET_PCIEPHY2 34 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_PCIEPHY2_PERST 35 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_PCIE2_CTRL_APPS_EN 36 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_PCIE2_CTRL_APPS_TURNOFF 37 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI1_CORE_RESET 38 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI1_PHY_REF_RESET 39 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI1_ESC_RESET 40 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI2_CORE_RESET 41 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI2_PHY_REF_RESET 42 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI2_ESC_RESET 43 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_DDRC1_PRST 44
-#define IMX8MQ_RESET_DDRC1_CORE_RESET 45
-#define IMX8MQ_RESET_DDRC1_PHY_RESET 46
-#define IMX8MQ_RESET_DDRC2_PRST 47 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_DDRC2_CORE_RESET 48 /* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_DDRC2_PHY_RESET 49 /* i.MX8MM does NOT support */
+#define IMX8MQ_RESET_VPU_RESET 33 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIEPHY2 34 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIEPHY2_PERST 35 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIE2_CTRL_APPS_EN 36 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIE2_CTRL_APPS_TURNOFF 37 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI1_CORE_RESET 38 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI1_PHY_REF_RESET 39 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI1_ESC_RESET 40 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI2_CORE_RESET 41 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI2_PHY_REF_RESET 42 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI2_ESC_RESET 43 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC1_PRST 44 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC1_CORE_RESET 45 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC1_PHY_RESET 46 /* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC2_PRST 47 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC2_CORE_RESET 48 /* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC2_PHY_RESET 49 /* i.MX8MM/i.MX8MN does NOT support */
 
 #define IMX8MQ_RESET_NUM 50
diff --git a/include/linux/firmware/imx/sci.h b/include/linux/firmware/imx/sci.h
index 17ba4e405129..3fa418a4ca67 100644
--- a/include/linux/firmware/imx/sci.h
+++ b/include/linux/firmware/imx/sci.h
@@ -11,7 +11,6 @@
 #define _SC_SCI_H
 
 #include <linux/firmware/imx/ipc.h>
-#include <linux/firmware/imx/types.h>
 
 #include <linux/firmware/imx/svc/misc.h>
 #include <linux/firmware/imx/svc/pm.h>
diff --git a/include/linux/firmware/imx/types.h b/include/linux/firmware/imx/types.h
deleted file mode 100644
index 80821100e85f..000000000000
--- a/include/linux/firmware/imx/types.h
+++ /dev/null
@@ -1,65 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0+ */
-/*
- * Copyright (C) 2016 Freescale Semiconductor, Inc.
- * Copyright 2017~2018 NXP
- *
- * Header file containing types used across multiple service APIs.
- */
-
-#ifndef _SC_TYPES_H
-#define _SC_TYPES_H
-
-/*
- * This type is used to indicate a control.
- */
-enum imx_sc_ctrl {
-	IMX_SC_C_TEMP = 0,
-	IMX_SC_C_TEMP_HI = 1,
-	IMX_SC_C_TEMP_LOW = 2,
-	IMX_SC_C_PXL_LINK_MST1_ADDR = 3,
-	IMX_SC_C_PXL_LINK_MST2_ADDR = 4,
-	IMX_SC_C_PXL_LINK_MST_ENB = 5,
-	IMX_SC_C_PXL_LINK_MST1_ENB = 6,
-	IMX_SC_C_PXL_LINK_MST2_ENB = 7,
-	IMX_SC_C_PXL_LINK_SLV1_ADDR = 8,
-	IMX_SC_C_PXL_LINK_SLV2_ADDR = 9,
-	IMX_SC_C_PXL_LINK_MST_VLD = 10,
-	IMX_SC_C_PXL_LINK_MST1_VLD = 11,
-	IMX_SC_C_PXL_LINK_MST2_VLD = 12,
-	IMX_SC_C_SINGLE_MODE = 13,
-	IMX_SC_C_ID = 14,
-	IMX_SC_C_PXL_CLK_POLARITY = 15,
-	IMX_SC_C_LINESTATE = 16,
-	IMX_SC_C_PCIE_G_RST = 17,
-	IMX_SC_C_PCIE_BUTTON_RST = 18,
-	IMX_SC_C_PCIE_PERST = 19,
-	IMX_SC_C_PHY_RESET = 20,
-	IMX_SC_C_PXL_LINK_RATE_CORRECTION = 21,
-	IMX_SC_C_PANIC = 22,
-	IMX_SC_C_PRIORITY_GROUP = 23,
-	IMX_SC_C_TXCLK = 24,
-	IMX_SC_C_CLKDIV = 25,
-	IMX_SC_C_DISABLE_50 = 26,
-	IMX_SC_C_DISABLE_125 = 27,
-	IMX_SC_C_SEL_125 = 28,
-	IMX_SC_C_MODE = 29,
-	IMX_SC_C_SYNC_CTRL0 = 30,
-	IMX_SC_C_KACHUNK_CNT = 31,
-	IMX_SC_C_KACHUNK_SEL = 32,
-	IMX_SC_C_SYNC_CTRL1 = 33,
-	IMX_SC_C_DPI_RESET = 34,
-	IMX_SC_C_MIPI_RESET = 35,
-	IMX_SC_C_DUAL_MODE = 36,
-	IMX_SC_C_VOLTAGE = 37,
-	IMX_SC_C_PXL_LINK_SEL = 38,
-	IMX_SC_C_OFS_SEL = 39,
-	IMX_SC_C_OFS_AUDIO = 40,
-	IMX_SC_C_OFS_PERIPH = 41,
-	IMX_SC_C_OFS_IRQ = 42,
-	IMX_SC_C_RST0 = 43,
-	IMX_SC_C_RST1 = 44,
-	IMX_SC_C_SEL0 = 45,
-	IMX_SC_C_LAST
-};
-
-#endif /* _SC_TYPES_H */
diff --git a/include/linux/fsl/bestcomm/bestcomm.h b/include/linux/fsl/bestcomm/bestcomm.h
index a0e2e6b19b57..154e541ce57e 100644
--- a/include/linux/fsl/bestcomm/bestcomm.h
+++ b/include/linux/fsl/bestcomm/bestcomm.h
@@ -27,7 +27,7 @@
  */
 struct bcom_bd {
 	u32 status;
-	u32 data[0];	/* variable payload size */
+	u32 data[];	/* variable payload size */
 };
 
 /* ======================================================================== */
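The bcom_bd hunk above swaps a zero-length array for a C99 flexible array member, the form the kernel now prefers for trailing variable-size payloads; callers then size the allocation with struct_size(). A short, hypothetical allocation sketch (the helper name is made up):

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/fsl/bestcomm/bestcomm.h>

/* Hypothetical helper: allocate a descriptor carrying n payload words. */
static struct bcom_bd *example_alloc_bd(unsigned int n)
{
	struct bcom_bd *bd;

	/* struct_size() accounts for the flexible "data[]" member and
	 * guards the size computation against overflow. */
	bd = kzalloc(struct_size(bd, data, n), GFP_KERNEL);
	return bd;
}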
diff --git a/include/linux/of_reserved_mem.h b/include/linux/of_reserved_mem.h
index 60f541912ccf..8216a4156263 100644
--- a/include/linux/of_reserved_mem.h
+++ b/include/linux/of_reserved_mem.h
@@ -3,6 +3,7 @@
 #define __OF_RESERVED_MEM_H
 
 #include <linux/device.h>
+#include <linux/of.h>
 
 struct of_phandle_args;
 struct reserved_mem_ops;
@@ -33,6 +34,9 @@ typedef int (*reservedmem_of_init_fn)(struct reserved_mem *rmem);
 int of_reserved_mem_device_init_by_idx(struct device *dev,
 				       struct device_node *np, int idx);
+int of_reserved_mem_device_init_by_name(struct device *dev,
+					struct device_node *np,
+					const char *name);
 void of_reserved_mem_device_release(struct device *dev);
 
 void fdt_init_reserved_mem(void);
@@ -45,6 +49,14 @@ static inline int of_reserved_mem_device_init_by_idx(struct device *dev,
 {
 	return -ENOSYS;
 }
+
+static inline int of_reserved_mem_device_init_by_name(struct device *dev,
+						       struct device_node *np,
+						       const char *name)
+{
+	return -ENOSYS;
+}
+
 static inline void of_reserved_mem_device_release(struct device *pdev) { }
 static inline void fdt_init_reserved_mem(void) { }
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 5c873a59b387..ce2f5c28b2df 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -4,6 +4,10 @@
  *
  * Copyright (C) 2018 ARM Ltd.
  */
+
+#ifndef _LINUX_SCMI_PROTOCOL_H
+#define _LINUX_SCMI_PROTOCOL_H
+
 #include <linux/device.h>
 #include <linux/types.h>
@@ -319,3 +323,5 @@ static inline void scmi_driver_unregister(struct scmi_driver *driver) {}
 typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
 int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
 void scmi_protocol_unregister(int protocol_id);
+
+#endif /* _LINUX_SCMI_PROTOCOL_H */
diff --git a/include/linux/scpi_protocol.h b/include/linux/scpi_protocol.h
index ecb004711acf..afbf8037d8db 100644
--- a/include/linux/scpi_protocol.h
+++ b/include/linux/scpi_protocol.h
@@ -4,6 +4,10 @@
  *
  * Copyright (C) 2014 ARM Ltd.
  */
+
+#ifndef _LINUX_SCPI_PROTOCOL_H
+#define _LINUX_SCPI_PROTOCOL_H
+
 #include <linux/types.h>
 
 struct scpi_opp {
@@ -71,3 +75,5 @@ struct scpi_ops *get_scpi_ops(void);
 #else
 static inline struct scpi_ops *get_scpi_ops(void) { return NULL; }
 #endif
+
+#endif /* _LINUX_SCPI_PROTOCOL_H */
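of_reserved_mem_device_init_by_name(), declared in the of_reserved_mem.h hunk above, lets a driver attach one of several reserved regions by its memory-region-names entry instead of by index. A hypothetical probe fragment follows; the region name "dma-pool" is made up for illustration.

#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	/* Attach the region labelled "dma-pool" in memory-region-names;
	 * the stub returns -ENOSYS when reserved-mem support is built out. */
	ret = of_reserved_mem_device_init_by_name(dev, dev->of_node,
						  "dma-pool");
	if (ret)
		return ret;

	/* ... normal probe work; release with of_reserved_mem_device_release()
	 * on the error/remove paths ... */
	return 0;
}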
+ */
+
+#ifndef __MTK_MMSYS_H
+#define __MTK_MMSYS_H
+
+enum mtk_ddp_comp_id;
+struct device;
+
+void mtk_mmsys_ddp_connect(struct device *dev,
+			   enum mtk_ddp_comp_id cur,
+			   enum mtk_ddp_comp_id next);
+
+void mtk_mmsys_ddp_disconnect(struct device *dev,
+			      enum mtk_ddp_comp_id cur,
+			      enum mtk_ddp_comp_id next);
+
+#endif /* __MTK_MMSYS_H */
diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
index 1412e9cc79ce..d074302989dd 100644
--- a/include/linux/tee_drv.h
+++ b/include/linux/tee_drv.h
@@ -26,6 +26,7 @@
 #define TEE_SHM_REGISTER BIT(3) /* Memory registered in secure world */
 #define TEE_SHM_USER_MAPPED BIT(4) /* Memory mapped in user space */
 #define TEE_SHM_POOL BIT(5) /* Memory allocated from pool */
+#define TEE_SHM_KERNEL_MAPPED BIT(6) /* Memory mapped in kernel space */
 
 struct device;
 struct tee_device;
@@ -166,6 +167,22 @@ int tee_device_register(struct tee_device *teedev);
 void tee_device_unregister(struct tee_device *teedev);
 
 /**
+ * tee_session_calc_client_uuid() - Calculates client UUID for session
+ * @uuid: Resulting UUID
+ * @connection_method: Connection method for session (TEE_IOCTL_LOGIN_*)
+ * @connection_data: Connection data for opening session
+ *
+ * Based on connection method calculates UUIDv5 based client UUID.
+ *
+ * For group based logins verifies that calling process has specified
+ * credentials.
+ *
+ * @return < 0 on failure
+ */
+int tee_session_calc_client_uuid(uuid_t *uuid, u32 connection_method,
+				 const u8 connection_data[TEE_IOCTL_UUID_LEN]);
+
+/**
  * struct tee_shm - shared memory object
  * @ctx: context using the object
  * @paddr: physical address of the shared memory
diff --git a/include/soc/fsl/qe/qe.h b/include/soc/fsl/qe/qe.h
index e282ac01ec08..3feddfec9f87 100644
--- a/include/soc/fsl/qe/qe.h
+++ b/include/soc/fsl/qe/qe.h
@@ -307,7 +307,7 @@ struct qe_firmware {
 		u8 revision;	/* The microcode version revision */
 		u8 padding;	/* Reserved, for alignment */
 		u8 reserved[4];	/* Reserved, for future expansion */
-	} __attribute__ ((packed)) microcode[1];
+	} __packed microcode[];
 	/* All microcode binaries should be located here */
 	/* CRC32 should be located here, after the microcode binaries */
 } __attribute__ ((packed));
diff --git a/include/soc/qcom/cmd-db.h b/include/soc/qcom/cmd-db.h
index af9722223925..c8bb56e6852a 100644
--- a/include/soc/qcom/cmd-db.h
+++ b/include/soc/qcom/cmd-db.h
@@ -4,6 +4,7 @@
 #ifndef __QCOM_COMMAND_DB_H__
 #define __QCOM_COMMAND_DB_H__
 
+#include <linux/err.h>
 
 enum cmd_db_hw_type {
 	CMD_DB_HW_INVALID = 0,
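mtk_mmsys_ddp_connect()/mtk_mmsys_ddp_disconnect(), declared in the mtk-mmsys.h header above, are what the MediaTek display code now calls to route a pipeline through the mmsys block instead of touching its registers directly. A rough, hypothetical sketch of wiring one path; the component IDs come from the MediaTek DRM headers and the mmsys device pointer is assumed to be supplied by the caller:

#include <linux/device.h>
#include <linux/soc/mediatek/mtk-mmsys.h>

/* Hypothetical helper: route OVL -> RDMA -> DSI through mmsys.
 * The enum mtk_ddp_comp_id values (DDP_COMPONENT_*) are defined in the
 * MediaTek DRM driver headers, not in mtk-mmsys.h itself. */
static void example_route_path(struct device *mmsys_dev,
			       enum mtk_ddp_comp_id ovl,
			       enum mtk_ddp_comp_id rdma,
			       enum mtk_ddp_comp_id dsi)
{
	mtk_mmsys_ddp_connect(mmsys_dev, ovl, rdma);
	mtk_mmsys_ddp_connect(mmsys_dev, rdma, dsi);
}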
+ */
+#define TEE_IOCTL_LOGIN_REE_KERNEL_MIN 0x80000000
+#define TEE_IOCTL_LOGIN_REE_KERNEL_MAX 0xBFFFFFFF
+/* Private login method for REE kernel clients */
+#define TEE_IOCTL_LOGIN_REE_KERNEL 0x80000000
 
 /**
  * struct tee_ioctl_param - parameter
diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
index cbca6879ab7d..44a259338e33 100644
--- a/kernel/cpu_pm.c
+++ b/kernel/cpu_pm.c
@@ -80,7 +80,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
  */
 int cpu_pm_enter(void)
 {
-	int nr_calls;
+	int nr_calls = 0;
 	int ret = 0;
 
 	ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
@@ -131,7 +131,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_exit);
  */
 int cpu_cluster_pm_enter(void)
 {
-	int nr_calls;
+	int nr_calls = 0;
 	int ret = 0;
 
 	ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls);
diff --git a/lib/logic_pio.c b/lib/logic_pio.c
index f511a99bb389..f32fe481b492 100644
--- a/lib/logic_pio.c
+++ b/lib/logic_pio.c
@@ -229,13 +229,13 @@ unsigned long logic_pio_trans_cpuaddr(resource_size_t addr)
 }
 
 #if defined(CONFIG_INDIRECT_PIO) && defined(PCI_IOBASE)
-#define BUILD_LOGIC_IO(bw, type) \
-type logic_in##bw(unsigned long addr) \
+#define BUILD_LOGIC_IO(bwl, type) \
+type logic_in##bwl(unsigned long addr) \
 { \
 	type ret = (type)~0; \
 \
 	if (addr < MMIO_UPPER_LIMIT) { \
-		ret = read##bw(PCI_IOBASE + addr); \
+		ret = _in##bwl(addr); \
 	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) { \
 		struct logic_pio_hwaddr *entry = find_io_range(addr); \
 \
@@ -248,10 +248,10 @@ type logic_in##bw(unsigned long addr) \
 	return ret; \
 } \
 \
-void logic_out##bw(type value, unsigned long addr) \
+void logic_out##bwl(type value, unsigned long addr) \
 { \
 	if (addr < MMIO_UPPER_LIMIT) { \
-		write##bw(value, PCI_IOBASE + addr); \
+		_out##bwl(value, addr); \
 	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) { \
 		struct logic_pio_hwaddr *entry = find_io_range(addr); \
 \
@@ -263,11 +263,11 @@ void logic_out##bw(type value, unsigned long addr) \
 	} \
 } \
 \
-void logic_ins##bw(unsigned long addr, void *buffer, \
-		   unsigned int count) \
+void logic_ins##bwl(unsigned long addr, void *buffer, \
+		    unsigned int count) \
 { \
 	if (addr < MMIO_UPPER_LIMIT) { \
-		reads##bw(PCI_IOBASE + addr, buffer, count); \
+		reads##bwl(PCI_IOBASE + addr, buffer, count); \
 	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) { \
 		struct logic_pio_hwaddr *entry = find_io_range(addr); \
 \
@@ -280,11 +280,11 @@ void logic_ins##bw(unsigned long addr, void *buffer, \
 \
 } \
 \
-void logic_outs##bw(unsigned long addr, const void *buffer, \
-		    unsigned int count) \
+void logic_outs##bwl(unsigned long addr, const void *buffer, \
+		     unsigned int count) \
 { \
 	if (addr < MMIO_UPPER_LIMIT) { \
-		writes##bw(PCI_IOBASE + addr, buffer, count); \
+		writes##bwl(PCI_IOBASE + addr, buffer, count); \
 	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) { \
 		struct logic_pio_hwaddr *entry = find_io_range(addr); \
 \
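TEE_IOCTL_LOGIN_REE_KERNEL, added in the tee.h hunk earlier in this section, is the login method an in-kernel client passes when opening a TEE session. A hedged sketch, assuming the TEE context is already open and the target TA's UUID bytes are supplied by the caller:

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/tee_drv.h>

/* Hypothetical helper: open a session to a TA as a kernel (REE) client. */
static int example_open_session(struct tee_context *ctx,
				const u8 ta_uuid[TEE_IOCTL_UUID_LEN],
				u32 *session)
{
	struct tee_ioctl_open_session_arg arg;
	int ret;

	memset(&arg, 0, sizeof(arg));
	memcpy(arg.uuid, ta_uuid, TEE_IOCTL_UUID_LEN);
	/* Kernel clients use the private REE-kernel login method. */
	arg.clnt_login = TEE_IOCTL_LOGIN_REE_KERNEL;

	ret = tee_client_open_session(ctx, &arg, NULL);
	if (ret)
		return ret;
	if (arg.ret)
		return -EINVAL;

	*session = arg.session;
	return 0;
}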