author		Linus Torvalds <torvalds@linux-foundation.org>	2019-05-07 22:39:22 +0200
committer	Linus Torvalds <torvalds@linux-foundation.org>	2019-05-07 22:39:22 +0200
commit		f678d6da749983791850876e3421e7c48a0a7127 (patch)
tree		553f818ef8e73bf9d6b1e53bdf623240c1279ffb
parent		Merge tag 'char-misc-5.2-rc1-part1' of git://git.kernel.org/pub/scm/linux/ker... (diff)
parent		intel_th: msu: Add current window tracking (diff)
Merge tag 'char-misc-5.2-rc1-part2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Pull char/misc update part 2 from Greg KH:
"Here is the "real" big set of char/misc driver patches for 5.2-rc1
Loads of different driver subsystem stuff in here, all over the place:
- thunderbolt driver updates
- habanalabs driver updates
- nvmem driver updates
- extcon driver updates
- intel_th driver updates
- mei driver updates
- coresight driver updates
- soundwire driver cleanups and updates
- fastrpc driver updates
- other minor driver updates
- chardev minor fixups
Feels like this tree is getting to be a dumping ground of "small
driver subsystems" these days. Which is fine with me, if it makes
things easier for those subsystem maintainers.
All of these have been in linux-next for a while with no reported
issues"
* tag 'char-misc-5.2-rc1-part2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (255 commits)
intel_th: msu: Add current window tracking
intel_th: msu: Add a sysfs attribute to trigger window switch
intel_th: msu: Correct the block wrap detection
intel_th: Add switch triggering support
intel_th: gth: Factor out trace start/stop
intel_th: msu: Factor out pipeline draining
intel_th: msu: Switch over to scatterlist
intel_th: msu: Replace open-coded list_{first,last,next}_entry variants
intel_th: Only report useful IRQs to subdevices
intel_th: msu: Start handling IRQs
intel_th: pci: Use MSI interrupt signalling
intel_th: Communicate IRQ via resource
intel_th: Add "rtit" source device
intel_th: Skip subdevices if their MMIO is missing
intel_th: Rework resource passing between glue layers and core
intel_th: SPDX-ify the documentation
intel_th: msu: Fix single mode with IOMMU
coresight: funnel: Support static funnel
dt-bindings: arm: coresight: Unify funnel DT binding
coresight: replicator: Add new device id for static replicator
...
274 files changed, 10394 insertions, 4319 deletions
diff --git a/Documentation/ABI/stable/sysfs-bus-nvmem b/Documentation/ABI/stable/sysfs-bus-nvmem
index 5923ab4620c5..9ffba8576f7b 100644
--- a/Documentation/ABI/stable/sysfs-bus-nvmem
+++ b/Documentation/ABI/stable/sysfs-bus-nvmem
@@ -6,6 +6,8 @@ Description:
 		This file allows user to read/write the raw NVMEM contents.
 		Permissions for write to this file depends on the nvmem
 		provider configuration.
+		Note: This file is only present if CONFIG_NVMEM_SYSFS
+		is enabled
 
 		ex:
 		hexdump /sys/bus/nvmem/devices/qfprom0/nvmem
diff --git a/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc b/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
index b940c5d91cf7..f54ae244f3f1 100644
--- a/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
+++ b/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
@@ -30,4 +30,12 @@ Description:	(RW) Configure MSC buffer size for "single" or "multi" modes.
 		there are no active users and tracing is not enabled) and then
 		allocates a new one.
 
+What:		/sys/bus/intel_th/devices/<intel_th_id>-msc<msc-id>/win_switch
+Date:		May 2019
+KernelVersion:	5.2
+Contact:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
+Description:	(RW) Trigger window switch for the MSC's buffer, in
+		multi-window mode. In "multi" mode, accepts writes of "1", thereby
+		triggering a window switch for the buffer. Returns an error in any
+		other operating mode or attempts to write something other than "1".
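The win_switch attribute documented above takes a single write of "1" while the MSC is in multi-window mode. As a rough illustration (not part of this patch), a userspace sketch that exercises it; the "0-msc0" device instance is a hypothetical placeholder for a real <intel_th_id>-msc<msc-id> name:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical device instance; real <intel_th_id>-msc<msc-id> names vary. */
	const char *path = "/sys/bus/intel_th/devices/0-msc0/win_switch";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Only "1" is accepted, and only in "multi" buffer mode. */
	if (write(fd, "1", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}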
diff --git a/Documentation/ABI/testing/sysfs-class-mei b/Documentation/ABI/testing/sysfs-class-mei
index 17d7444a2397..a92d844f806e 100644
--- a/Documentation/ABI/testing/sysfs-class-mei
+++ b/Documentation/ABI/testing/sysfs-class-mei
@@ -65,3 +65,18 @@ Description:	Display the ME firmware version.
 		<platform>:<major>.<minor>.<milestone>.<build_no>.
 		There can be up to three such blocks for different
 		FW components.
+
+What:		/sys/class/mei/meiN/dev_state
+Date:		Mar 2019
+KernelVersion:	5.1
+Contact:	Tomas Winkler <tomas.winkler@intel.com>
+Description:	Display the ME device state.
+
+		The device state can have following values:
+		INITIALIZING
+		INIT_CLIENTS
+		ENABLED
+		RESETTING
+		DISABLED
+		POWER_DOWN
+		POWER_UP
diff --git a/Documentation/devicetree/bindings/arm/coresight.txt b/Documentation/devicetree/bindings/arm/coresight.txt
index f8aff65ab921..8a88ddebc1a2 100644
--- a/Documentation/devicetree/bindings/arm/coresight.txt
+++ b/Documentation/devicetree/bindings/arm/coresight.txt
@@ -8,7 +8,8 @@ through the intermediate links connecting the source to the currently selected
 sink. Each CoreSight component device should use these properties to describe
 its hardware characteristcs.
 
-* Required properties for all components *except* non-configurable replicators:
+* Required properties for all components *except* non-configurable replicators
+  and non-configurable funnels:
 
 	* compatible: These have to be supplemented with "arm,primecell" as
 	  drivers are using the AMBA bus interface.  Possible values include:
@@ -24,8 +25,10 @@ its hardware characteristcs.
 		  discovered at boot time when the device is probed.
 			"arm,coresight-tmc", "arm,primecell";
 
-		- Trace Funnel:
-			"arm,coresight-funnel", "arm,primecell";
+		- Trace Programmable Funnel:
+			"arm,coresight-dynamic-funnel", "arm,primecell";
+			"arm,coresight-funnel", "arm,primecell"; (OBSOLETE. For
+			backward compatibility and will be removed)
 
 		- Embedded Trace Macrocell (version 3.x) and
 					Program Flow Trace Macrocell:
@@ -65,11 +68,17 @@ its hardware characteristcs.
 	  "stm-stimulus-base", each corresponding to the areas defined in "reg".
 
 * Required properties for devices that don't show up on the AMBA bus, such as
-  non-configurable replicators:
+  non-configurable replicators and non-configurable funnels:
 
 	* compatible: Currently supported value is (note the absence of the
 	  AMBA markee):
-		- "arm,coresight-replicator"
+		- Coresight Non-configurable Replicator:
+			"arm,coresight-static-replicator";
+			"arm,coresight-replicator"; (OBSOLETE. For backward
+			compatibility and will be removed)
+
+		- Coresight Non-configurable Funnel:
+			"arm,coresight-static-funnel";
 
 	* port or ports: see "Graph bindings for Coresight" below.
 
@@ -169,7 +178,7 @@ Example:
 		/* non-configurable replicators don't show up on the
 		 * AMBA bus.  As such no need to add "arm,primecell".
 		 */
-		compatible = "arm,coresight-replicator";
+		compatible = "arm,coresight-static-replicator";
 
 		out-ports {
 			#address-cells = <1>;
@@ -200,8 +209,45 @@ Example:
 		};
 	};
 
+	funnel {
+		/*
+		 * non-configurable funnel don't show up on the AMBA
+		 * bus. As such no need to add "arm,primecell".
+		 */
+		compatible = "arm,coresight-static-funnel";
+		clocks = <&crg_ctrl HI3660_PCLK>;
+		clock-names = "apb_pclk";
+
+		out-ports {
+			port {
+				combo_funnel_out: endpoint {
+					remote-endpoint = <&top_funnel_in>;
+				};
+			};
+		};
+
+		in-ports {
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			port@0 {
+				reg = <0>;
+				combo_funnel_in0: endpoint {
+					remote-endpoint = <&cluster0_etf_out>;
+				};
+			};
+
+			port@1 {
+				reg = <1>;
+				combo_funnel_in1: endpoint {
+					remote-endpoint = <&cluster1_etf_out>;
+				};
+			};
+		};
+	};
+
 	funnel@20040000 {
-		compatible = "arm,coresight-funnel", "arm,primecell";
+		compatible = "arm,coresight-dynamic-funnel", "arm,primecell";
 		reg = <0 0x20040000 0 0x1000>;
 
 		clocks = <&oscclk6a>;
diff --git a/Documentation/devicetree/bindings/gnss/u-blox.txt b/Documentation/devicetree/bindings/gnss/u-blox.txt
index e475659cb85f..7cdefd058fe0 100644
--- a/Documentation/devicetree/bindings/gnss/u-blox.txt
+++ b/Documentation/devicetree/bindings/gnss/u-blox.txt
@@ -9,6 +9,7 @@ Required properties:
 
 - compatible	: Must be one of
 
+		  "u-blox,neo-6m"
 		  "u-blox,neo-8"
 		  "u-blox,neo-m8"
diff --git a/Documentation/devicetree/bindings/misc/aspeed-p2a-ctrl.txt b/Documentation/devicetree/bindings/misc/aspeed-p2a-ctrl.txt
new file mode 100644
index 000000000000..854bd67ffec6
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/aspeed-p2a-ctrl.txt
@@ -0,0 +1,47 @@
+======================================================================
+Device tree bindings for Aspeed AST2400/AST2500 PCI-to-AHB Bridge Control Driver
+======================================================================
+
+The bridge is available on platforms with the VGA enabled on the Aspeed device.
+In this case, the host has access to a 64KiB window into all of the BMC's
+memory.  The BMC can disable this bridge.  If the bridge is enabled, the host
+has read access to all the regions of memory, however the host only has read
+and write access depending on a register controlled by the BMC.
+
+Required properties:
+===================
+
+ - compatible: must be one of:
+	- "aspeed,ast2400-p2a-ctrl"
+	- "aspeed,ast2500-p2a-ctrl"
+
+Optional properties:
+===================
+
+- memory-region: A phandle to a reserved_memory region to be used for the PCI
+	to AHB mapping
+
+The p2a-control node should be the child of a syscon node with the required
+property:
+
+- compatible : Should be one of the following:
+	"aspeed,ast2400-scu", "syscon", "simple-mfd"
+	"aspeed,g4-scu", "syscon", "simple-mfd"
+	"aspeed,ast2500-scu", "syscon", "simple-mfd"
+	"aspeed,g5-scu", "syscon", "simple-mfd"
+
+Example
+===================
+
+g4 Example
+----------
+
+syscon: scu@1e6e2000 {
+	compatible = "aspeed,ast2400-scu", "syscon", "simple-mfd";
+	reg = <0x1e6e2000 0x1a8>;
+
+	p2a: p2a-control {
+		compatible = "aspeed,ast2400-p2a-ctrl";
+		memory-region = <&reserved_memory>;
+	};
+};
diff --git a/Documentation/devicetree/bindings/nvmem/allwinner,sunxi-sid.txt b/Documentation/devicetree/bindings/nvmem/allwinner,sunxi-sid.txt
index 99c4ba6a3f61..cfb18b4ef8f7 100644
--- a/Documentation/devicetree/bindings/nvmem/allwinner,sunxi-sid.txt
+++ b/Documentation/devicetree/bindings/nvmem/allwinner,sunxi-sid.txt
@@ -8,11 +8,12 @@ Required properties:
   "allwinner,sun8i-h3-sid"
   "allwinner,sun50i-a64-sid"
   "allwinner,sun50i-h5-sid"
+  "allwinner,sun50i-h6-sid"
 
 - reg: Should contain registers location and length
 
 = Data cells =
-Are child nodes of qfprom, bindings of which as described in
+Are child nodes of sunxi-sid, bindings of which as described in
 bindings/nvmem/nvmem.txt
 
 Example for sun4i:
diff --git a/Documentation/devicetree/bindings/nvmem/imx-ocotp.txt b/Documentation/devicetree/bindings/nvmem/imx-ocotp.txt
index 7a999a135e56..68f7d6fdd140 100644
--- a/Documentation/devicetree/bindings/nvmem/imx-ocotp.txt
+++ b/Documentation/devicetree/bindings/nvmem/imx-ocotp.txt
@@ -1,7 +1,8 @@
 Freescale i.MX6 On-Chip OTP Controller (OCOTP) device tree bindings
 
 This binding represents the on-chip eFuse OTP controller found on
-i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL, i.MX6ULL/ULZ and i.MX6SLL SoCs.
+i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL, i.MX6ULL/ULZ, i.MX6SLL,
+i.MX7D/S, i.MX7ULP and i.MX8MQ SoCs.
 
 Required properties:
 - compatible: should be one of
@@ -13,6 +14,7 @@ Required properties:
 	"fsl,imx7d-ocotp" (i.MX7D/S),
 	"fsl,imx6sll-ocotp" (i.MX6SLL),
 	"fsl,imx7ulp-ocotp" (i.MX7ULP),
+	"fsl,imx8mq-ocotp" (i.MX8MQ),
 	followed by "syscon".
 - #address-cells : Should be 1
 - #size-cells : Should be 1
diff --git a/Documentation/devicetree/bindings/nvmem/st,stm32-romem.txt b/Documentation/devicetree/bindings/nvmem/st,stm32-romem.txt
new file mode 100644
index 000000000000..142a51d5a9be
--- /dev/null
+++ b/Documentation/devicetree/bindings/nvmem/st,stm32-romem.txt
@@ -0,0 +1,31 @@
+STMicroelectronics STM32 Factory-programmed data device tree bindings
+
+This represents STM32 Factory-programmed read only non-volatile area: locked
+flash, OTP, read-only HW regs... This contains various information such as:
+analog calibration data for temperature sensor (e.g. TS_CAL1, TS_CAL2),
+internal vref (VREFIN_CAL), unique device ID...
+
+Required properties:
+- compatible: Should be one of:
+	"st,stm32f4-otp"
+	"st,stm32mp15-bsec"
+- reg: Offset and length of factory-programmed area.
+- #address-cells: Should be '<1>'.
+- #size-cells: Should be '<1>'.
+
+Optional Data cells:
+- Must be child nodes as described in nvmem.txt.
+
+Example on stm32f4:
+	romem: nvmem@1fff7800 {
+		compatible = "st,stm32f4-otp";
+		reg = <0x1fff7800 0x400>;
+		#address-cells = <1>;
+		#size-cells = <1>;
+
+		/* Data cells: ts_cal1 at 0x1fff7a2c */
+		ts_cal1: calib@22c {
+			reg = <0x22c 0x2>;
+		};
+		...
+	};
diff --git a/Documentation/trace/intel_th.rst b/Documentation/trace/intel_th.rst
index 19e2d633f3c7..baa12eb09ef4 100644
--- a/Documentation/trace/intel_th.rst
+++ b/Documentation/trace/intel_th.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
 =======================
 Intel(R) Trace Hub (TH)
 =======================
diff --git a/MAINTAINERS b/MAINTAINERS
index 978563bcbeac..fbb6e45018f5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8068,6 +8068,7 @@ F:	drivers/gpio/gpio-intel-mid.c
 
 INTERCONNECT API
 M:	Georgi Djakov <georgi.djakov@linaro.org>
+L:	linux-pm@vger.kernel.org
 S:	Maintained
 F:	Documentation/interconnect/
 F:	Documentation/devicetree/bindings/interconnect/
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 4b9c7ca492e6..6f0712f0767c 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -3121,6 +3121,7 @@ static void binder_transaction(struct binder_proc *proc,
 
 	if (target_node && target_node->txn_security_ctx) {
 		u32 secid;
+		size_t added_size;
 
 		security_task_getsecid(proc->tsk, &secid);
 		ret = security_secid_to_secctx(secid, &secctx, &secctx_sz);
@@ -3130,7 +3131,15 @@ static void binder_transaction(struct binder_proc *proc,
 			return_error_line = __LINE__;
 			goto err_get_secctx_failed;
 		}
-		extra_buffers_size += ALIGN(secctx_sz, sizeof(u64));
+		added_size = ALIGN(secctx_sz, sizeof(u64));
+		extra_buffers_size += added_size;
+		if (extra_buffers_size < added_size) {
+			/* integer overflow of extra_buffers_size */
+			return_error = BR_FAILED_REPLY;
+			return_error_param = EINVAL;
+			return_error_line = __LINE__;
+			goto err_bad_extra_size;
+		}
 	}
 
 	trace_binder_transaction(reply, t, target_node);
@@ -3480,6 +3489,7 @@ err_copy_data_failed:
 	t->buffer->transaction = NULL;
 	binder_alloc_free_buf(&target_proc->alloc, t->buffer);
 err_binder_alloc_buf_failed:
+err_bad_extra_size:
 	if (secctx)
 		security_release_secctx(secctx, secctx_sz);
 err_get_secctx_failed:
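The binder change above defends extra_buffers_size, an unsigned quantity, against wrap-around: after the addition, a sum smaller than the addend can only mean overflow. A minimal standalone sketch of the same pattern, with hypothetical names (not binder code):

#include <stdbool.h>
#include <stddef.h>

/*
 * Accumulate 'add' into '*total', failing instead of silently wrapping.
 * For unsigned types, a wrapped sum is strictly smaller than either
 * addend, which is exactly the test the binder patch applies to
 * extra_buffers_size.
 */
static bool add_size_checked(size_t *total, size_t add)
{
	size_t sum = *total + add;

	if (sum < add)	/* wrapped past SIZE_MAX */
		return false;
	*total = sum;
	return true;
}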
diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
index d0ad85900b79..3a1e6b3ccd10 100644
--- a/drivers/char/hpet.c
+++ b/drivers/char/hpet.c
@@ -973,6 +973,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
 		if (ACPI_SUCCESS(status)) {
 			hdp->hd_phys_address = addr.address.minimum;
 			hdp->hd_address = ioremap(addr.address.minimum, addr.address.address_length);
+			if (!hdp->hd_address)
+				return AE_ERROR;
 
 			if (hpet_is_known(hdp)) {
 				iounmap(hdp->hd_address);
diff --git a/drivers/extcon/Kconfig b/drivers/extcon/Kconfig
index 540e8cd16ee6..de06fafb52ff 100644
--- a/drivers/extcon/Kconfig
+++ b/drivers/extcon/Kconfig
@@ -30,7 +30,7 @@ config EXTCON_ARIZONA
 
 config EXTCON_AXP288
 	tristate "X-Power AXP288 EXTCON support"
-	depends on MFD_AXP20X && USB_SUPPORT && X86
+	depends on MFD_AXP20X && USB_SUPPORT && X86 && ACPI
 	select USB_ROLE_SWITCH
 	help
 	  Say Y here to enable support for USB peripheral detection
@@ -60,6 +60,13 @@ config EXTCON_INTEL_CHT_WC
 	  Say Y here to enable extcon support for charger detection / control
 	  on the Intel Cherrytrail Whiskey Cove PMIC.
 
+config EXTCON_INTEL_MRFLD
+	tristate "Intel Merrifield Basin Cove PMIC extcon driver"
+	depends on INTEL_SOC_PMIC_MRFLD
+	help
+	  Say Y here to enable extcon support for charger detection / control
+	  on the Intel Merrifield Basin Cove PMIC.
+
 config EXTCON_MAX14577
 	tristate "Maxim MAX14577/77836 EXTCON Support"
 	depends on MFD_MAX14577
diff --git a/drivers/extcon/Makefile b/drivers/extcon/Makefile
index 261ce4cfe209..d3941a735df3 100644
--- a/drivers/extcon/Makefile
+++ b/drivers/extcon/Makefile
@@ -11,6 +11,7 @@ obj-$(CONFIG_EXTCON_AXP288)	+= extcon-axp288.o
 obj-$(CONFIG_EXTCON_GPIO)	+= extcon-gpio.o
 obj-$(CONFIG_EXTCON_INTEL_INT3496) += extcon-intel-int3496.o
 obj-$(CONFIG_EXTCON_INTEL_CHT_WC) += extcon-intel-cht-wc.o
+obj-$(CONFIG_EXTCON_INTEL_MRFLD) += extcon-intel-mrfld.o
 obj-$(CONFIG_EXTCON_MAX14577)	+= extcon-max14577.o
 obj-$(CONFIG_EXTCON_MAX3355)	+= extcon-max3355.o
 obj-$(CONFIG_EXTCON_MAX77693)	+= extcon-max77693.o
diff --git a/drivers/extcon/devres.c b/drivers/extcon/devres.c
index f599aeddf8e5..f487d877ab5d 100644
--- a/drivers/extcon/devres.c
+++ b/drivers/extcon/devres.c
@@ -205,7 +205,7 @@ EXPORT_SYMBOL(devm_extcon_register_notifier);
 
 /**
  * devm_extcon_unregister_notifier()
-	- Resource-managed extcon_unregister_notifier()
+ *	- Resource-managed extcon_unregister_notifier()
 * @dev:	the device owning the extcon device being created
 * @edev:	the extcon device
 * @id:		the unique id among the extcon enumeration
diff --git a/drivers/extcon/extcon-arizona.c b/drivers/extcon/extcon-arizona.c
index da0e9bc4262f..9327479c719c 100644
--- a/drivers/extcon/extcon-arizona.c
+++ b/drivers/extcon/extcon-arizona.c
@@ -1726,6 +1726,16 @@ static int arizona_extcon_remove(struct platform_device *pdev)
 	struct arizona_extcon_info *info = platform_get_drvdata(pdev);
 	struct arizona *arizona = info->arizona;
 	int jack_irq_rise, jack_irq_fall;
+	bool change;
+
+	regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
+				 ARIZONA_MICD_ENA, 0,
+				 &change);
+
+	if (change) {
+		regulator_disable(info->micvdd);
+		pm_runtime_put(info->dev);
+	}
 
 	gpiod_put(info->micd_pol_gpio);
diff --git a/drivers/extcon/extcon-intel-cht-wc.c b/drivers/extcon/extcon-intel-cht-wc.c
index 5ef215297101..9d32150e68db 100644
--- a/drivers/extcon/extcon-intel-cht-wc.c
+++ b/drivers/extcon/extcon-intel-cht-wc.c
@@ -17,6 +17,8 @@
 #include <linux/regmap.h>
 #include <linux/slab.h>
 
+#include "extcon-intel.h"
+
 #define CHT_WC_PHYCTRL			0x5e07
 
 #define CHT_WC_CHGRCTRL0		0x5e16
@@ -29,7 +31,15 @@
 #define CHT_WC_CHGRCTRL0_DBPOFF		BIT(6)
 #define CHT_WC_CHGRCTRL0_CHR_WDT_NOKICK	BIT(7)
 
-#define CHT_WC_CHGRCTRL1		0x5e17
+#define CHT_WC_CHGRCTRL1			0x5e17
+#define CHT_WC_CHGRCTRL1_FUSB_INLMT_100		BIT(0)
+#define CHT_WC_CHGRCTRL1_FUSB_INLMT_150		BIT(1)
+#define CHT_WC_CHGRCTRL1_FUSB_INLMT_500		BIT(2)
+#define CHT_WC_CHGRCTRL1_FUSB_INLMT_900		BIT(3)
+#define CHT_WC_CHGRCTRL1_FUSB_INLMT_1500	BIT(4)
+#define CHT_WC_CHGRCTRL1_FTEMP_EVENT		BIT(5)
+#define CHT_WC_CHGRCTRL1_OTGMODE		BIT(6)
+#define CHT_WC_CHGRCTRL1_DBPEN			BIT(7)
 
 #define CHT_WC_USBSRC			0x5e29
 #define CHT_WC_USBSRC_STS_MASK		GENMASK(1, 0)
@@ -48,6 +58,13 @@
 #define CHT_WC_USBSRC_TYPE_OTHER	8
 #define CHT_WC_USBSRC_TYPE_DCP_EXTPHY	9
 
+#define CHT_WC_CHGDISCTRL		0x5e2f
+#define CHT_WC_CHGDISCTRL_OUT		BIT(0)
+/* 0 - open drain, 1 - regular push-pull output */
+#define CHT_WC_CHGDISCTRL_DRV		BIT(4)
+/* 0 - pin is controlled by SW, 1 - by HW */
+#define CHT_WC_CHGDISCTRL_FN		BIT(6)
+
 #define CHT_WC_PWRSRC_IRQ		0x6e03
 #define CHT_WC_PWRSRC_IRQ_MASK		0x6e0f
 #define CHT_WC_PWRSRC_STS		0x6e1e
@@ -65,15 +82,6 @@
 #define CHT_WC_VBUS_GPIO_CTLO_DRV_OD	BIT(4)
 #define CHT_WC_VBUS_GPIO_CTLO_DIR_OUT	BIT(5)
 
-enum cht_wc_usb_id {
-	USB_ID_OTG,
-	USB_ID_GND,
-	USB_ID_FLOAT,
-	USB_RID_A,
-	USB_RID_B,
-	USB_RID_C,
-};
-
 enum cht_wc_mux_select {
 	MUX_SEL_PMIC = 0,
 	MUX_SEL_SOC,
@@ -101,9 +109,9 @@ static int cht_wc_extcon_get_id(struct cht_wc_extcon_data *ext, int pwrsrc_sts)
 {
 	switch ((pwrsrc_sts & CHT_WC_PWRSRC_USBID_MASK) >> CHT_WC_PWRSRC_USBID_SHIFT) {
 	case CHT_WC_PWRSRC_RID_GND:
-		return USB_ID_GND;
+		return INTEL_USB_ID_GND;
 	case CHT_WC_PWRSRC_RID_FLOAT:
-		return USB_ID_FLOAT;
+		return INTEL_USB_ID_FLOAT;
 	case CHT_WC_PWRSRC_RID_ACA:
 	default:
 		/*
@@ -111,7 +119,7 @@ static int cht_wc_extcon_get_id(struct cht_wc_extcon_data *ext, int pwrsrc_sts)
 		 * the USBID GPADC channel here and determine ACA role
 		 * based on that.
 		 */
-		return USB_ID_FLOAT;
+		return INTEL_USB_ID_FLOAT;
 	}
 }
 
@@ -198,6 +206,30 @@ static void cht_wc_extcon_set_5v_boost(struct cht_wc_extcon_data *ext,
 		dev_err(ext->dev, "Error writing Vbus GPIO CTLO: %d\n", ret);
 }
 
+static void cht_wc_extcon_set_otgmode(struct cht_wc_extcon_data *ext,
+				      bool enable)
+{
+	unsigned int val = enable ? CHT_WC_CHGRCTRL1_OTGMODE : 0;
+	int ret;
+
+	ret = regmap_update_bits(ext->regmap, CHT_WC_CHGRCTRL1,
+				 CHT_WC_CHGRCTRL1_OTGMODE, val);
+	if (ret)
+		dev_err(ext->dev, "Error updating CHGRCTRL1 reg: %d\n", ret);
+}
+
+static void cht_wc_extcon_enable_charging(struct cht_wc_extcon_data *ext,
+					  bool enable)
+{
+	unsigned int val = enable ? 0 : CHT_WC_CHGDISCTRL_OUT;
+	int ret;
+
+	ret = regmap_update_bits(ext->regmap, CHT_WC_CHGDISCTRL,
+				 CHT_WC_CHGDISCTRL_OUT, val);
+	if (ret)
+		dev_err(ext->dev, "Error updating CHGDISCTRL reg: %d\n", ret);
+}
+
 /* Small helper to sync EXTCON_CHG_USB_SDP and EXTCON_USB state */
 static void cht_wc_extcon_set_state(struct cht_wc_extcon_data *ext,
 				    unsigned int cable, bool state)
@@ -221,11 +253,17 @@ static void cht_wc_extcon_pwrsrc_event(struct cht_wc_extcon_data *ext)
 	}
 
 	id = cht_wc_extcon_get_id(ext, pwrsrc_sts);
-	if (id == USB_ID_GND) {
+	if (id == INTEL_USB_ID_GND) {
+		cht_wc_extcon_enable_charging(ext, false);
+		cht_wc_extcon_set_otgmode(ext, true);
+
 		/* The 5v boost causes a false VBUS / SDP detect, skip */
 		goto charger_det_done;
 	}
 
+	cht_wc_extcon_set_otgmode(ext, false);
+	cht_wc_extcon_enable_charging(ext, true);
+
 	/* Plugged into a host/charger or not connected? */
 	if (!(pwrsrc_sts & CHT_WC_PWRSRC_VBUS)) {
 		/* Route D+ and D- to PMIC for future charger detection */
@@ -248,7 +286,7 @@ set_state:
 		ext->previous_cable = cable;
 	}
 
-	ext->usb_host = ((id == USB_ID_GND) || (id == USB_RID_A));
+	ext->usb_host = ((id == INTEL_USB_ID_GND) || (id == INTEL_USB_RID_A));
 	extcon_set_state_sync(ext->edev, EXTCON_USB_HOST, ext->usb_host);
 }
 
@@ -278,6 +316,14 @@ static int cht_wc_extcon_sw_control(struct cht_wc_extcon_data *ext, bool enable)
 {
 	int ret, mask, val;
 
+	val = enable ? 0 : CHT_WC_CHGDISCTRL_FN;
+	ret = regmap_update_bits(ext->regmap, CHT_WC_CHGDISCTRL,
+				 CHT_WC_CHGDISCTRL_FN, val);
+	if (ret)
+		dev_err(ext->dev,
+			"Error setting sw control for CHGDIS pin: %d\n",
+			ret);
+
 	mask = CHT_WC_CHGRCTRL0_SWCONTROL | CHT_WC_CHGRCTRL0_CCSM_OFF;
 	val = enable ? mask : 0;
 
 	ret = regmap_update_bits(ext->regmap, CHT_WC_CHGRCTRL0, mask, val);
@@ -329,7 +375,10 @@ static int cht_wc_extcon_probe(struct platform_device *pdev)
 	/* Enable sw control */
 	ret = cht_wc_extcon_sw_control(ext, true);
 	if (ret)
-		return ret;
+		goto disable_sw_control;
+
+	/* Disable charging by external battery charger */
+	cht_wc_extcon_enable_charging(ext, false);
 
 	/* Register extcon device */
 	ret = devm_extcon_dev_register(ext->dev, ext->edev);
diff --git a/drivers/extcon/extcon-intel-mrfld.c b/drivers/extcon/extcon-intel-mrfld.c
new file mode 100644
index 000000000000..f47016fb28a8
--- /dev/null
+++ b/drivers/extcon/extcon-intel-mrfld.c
@@ -0,0 +1,284 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * extcon driver for Basin Cove PMIC
+ *
+ * Copyright (c) 2019, Intel Corporation.
+ * Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ */
+
+#include <linux/extcon-provider.h>
+#include <linux/interrupt.h>
+#include <linux/mfd/intel_soc_pmic.h>
+#include <linux/mfd/intel_soc_pmic_mrfld.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+
+#include "extcon-intel.h"
+
+#define BCOVE_USBIDCTRL			0x19
+#define BCOVE_USBIDCTRL_ID		BIT(0)
+#define BCOVE_USBIDCTRL_ACA		BIT(1)
+#define BCOVE_USBIDCTRL_ALL	(BCOVE_USBIDCTRL_ID | BCOVE_USBIDCTRL_ACA)
+
+#define BCOVE_USBIDSTS			0x1a
+#define BCOVE_USBIDSTS_GND		BIT(0)
+#define BCOVE_USBIDSTS_RARBRC_MASK	GENMASK(2, 1)
+#define BCOVE_USBIDSTS_RARBRC_SHIFT	1
+#define BCOVE_USBIDSTS_NO_ACA		0
+#define BCOVE_USBIDSTS_R_ID_A		1
+#define BCOVE_USBIDSTS_R_ID_B		2
+#define BCOVE_USBIDSTS_R_ID_C		3
+#define BCOVE_USBIDSTS_FLOAT		BIT(3)
+#define BCOVE_USBIDSTS_SHORT		BIT(4)
+
+#define BCOVE_CHGRIRQ_ALL	(BCOVE_CHGRIRQ_VBUSDET | BCOVE_CHGRIRQ_DCDET | \
+				 BCOVE_CHGRIRQ_BATTDET | BCOVE_CHGRIRQ_USBIDDET)
+
+#define BCOVE_CHGRCTRL0			0x4b
+#define BCOVE_CHGRCTRL0_CHGRRESET	BIT(0)
+#define BCOVE_CHGRCTRL0_EMRGCHREN	BIT(1)
+#define BCOVE_CHGRCTRL0_EXTCHRDIS	BIT(2)
+#define BCOVE_CHGRCTRL0_SWCONTROL	BIT(3)
+#define BCOVE_CHGRCTRL0_TTLCK		BIT(4)
+#define BCOVE_CHGRCTRL0_BIT_5		BIT(5)
+#define BCOVE_CHGRCTRL0_BIT_6		BIT(6)
+#define BCOVE_CHGRCTRL0_CHR_WDT_NOKICK	BIT(7)
+
+struct mrfld_extcon_data {
+	struct device *dev;
+	struct regmap *regmap;
+	struct extcon_dev *edev;
+	unsigned int status;
+	unsigned int id;
+};
+
+static const unsigned int mrfld_extcon_cable[] = {
+	EXTCON_USB,
+	EXTCON_USB_HOST,
+	EXTCON_CHG_USB_SDP,
+	EXTCON_CHG_USB_CDP,
+	EXTCON_CHG_USB_DCP,
+	EXTCON_CHG_USB_ACA,
+	EXTCON_NONE,
+};
+
+static int mrfld_extcon_clear(struct mrfld_extcon_data *data, unsigned int reg,
+			      unsigned int mask)
+{
+	return regmap_update_bits(data->regmap, reg, mask, 0x00);
+}
+
+static int mrfld_extcon_set(struct mrfld_extcon_data *data, unsigned int reg,
+			    unsigned int mask)
+{
+	return regmap_update_bits(data->regmap, reg, mask, 0xff);
+}
+
+static int mrfld_extcon_sw_control(struct mrfld_extcon_data *data, bool enable)
+{
+	unsigned int mask = BCOVE_CHGRCTRL0_SWCONTROL;
+	struct device *dev = data->dev;
+	int ret;
+
+	if (enable)
+		ret = mrfld_extcon_set(data, BCOVE_CHGRCTRL0, mask);
+	else
+		ret = mrfld_extcon_clear(data, BCOVE_CHGRCTRL0, mask);
+	if (ret)
+		dev_err(dev, "can't set SW control: %d\n", ret);
+	return ret;
+}
+
+static int mrfld_extcon_get_id(struct mrfld_extcon_data *data)
+{
+	struct regmap *regmap = data->regmap;
+	unsigned int id;
+	bool ground;
+	int ret;
+
+	ret = regmap_read(regmap, BCOVE_USBIDSTS, &id);
+	if (ret)
+		return ret;
+
+	if (id & BCOVE_USBIDSTS_FLOAT)
+		return INTEL_USB_ID_FLOAT;
+
+	switch ((id & BCOVE_USBIDSTS_RARBRC_MASK) >> BCOVE_USBIDSTS_RARBRC_SHIFT) {
+	case BCOVE_USBIDSTS_R_ID_A:
+		return INTEL_USB_RID_A;
+	case BCOVE_USBIDSTS_R_ID_B:
+		return INTEL_USB_RID_B;
+	case BCOVE_USBIDSTS_R_ID_C:
+		return INTEL_USB_RID_C;
+	}
+
+	/*
+	 * PMIC A0 reports USBIDSTS_GND = 1 for ID_GND,
+	 * but PMIC B0 reports USBIDSTS_GND = 0 for ID_GND.
+	 * Thus we must check this bit at last.
+	 */
+	ground = id & BCOVE_USBIDSTS_GND;
+	switch ('A' + BCOVE_MAJOR(data->id)) {
+	case 'A':
+		return ground ? INTEL_USB_ID_GND : INTEL_USB_ID_FLOAT;
+	case 'B':
+		return ground ? INTEL_USB_ID_FLOAT : INTEL_USB_ID_GND;
+	}
+
+	/* Unknown or unsupported type */
+	return INTEL_USB_ID_FLOAT;
+}
+
+static int mrfld_extcon_role_detect(struct mrfld_extcon_data *data)
+{
+	unsigned int id;
+	bool usb_host;
+	int ret;
+
+	ret = mrfld_extcon_get_id(data);
+	if (ret < 0)
+		return ret;
+
+	id = ret;
+
+	usb_host = (id == INTEL_USB_ID_GND) || (id == INTEL_USB_RID_A);
+	extcon_set_state_sync(data->edev, EXTCON_USB_HOST, usb_host);
+
+	return 0;
+}
+
+static int mrfld_extcon_cable_detect(struct mrfld_extcon_data *data)
+{
+	struct regmap *regmap = data->regmap;
+	unsigned int status, change;
+	int ret;
+
+	/*
+	 * It seems SCU firmware clears the content of BCOVE_CHGRIRQ1
+	 * and makes it useless for OS. Instead we compare a previously
+	 * stored status to the current one, provided by BCOVE_SCHGRIRQ1.
+	 */
+	ret = regmap_read(regmap, BCOVE_SCHGRIRQ1, &status);
+	if (ret)
+		return ret;
+
+	change = status ^ data->status;
+	if (!change)
+		return -ENODATA;
+
+	if (change & BCOVE_CHGRIRQ_USBIDDET) {
+		ret = mrfld_extcon_role_detect(data);
+		if (ret)
+			return ret;
+	}
+
+	data->status = status;
+
+	return 0;
+}
+
+static irqreturn_t mrfld_extcon_interrupt(int irq, void *dev_id)
+{
+	struct mrfld_extcon_data *data = dev_id;
+	int ret;
+
+	ret = mrfld_extcon_cable_detect(data);
+
+	mrfld_extcon_clear(data, BCOVE_MIRQLVL1, BCOVE_LVL1_CHGR);
+
+	return ret ? IRQ_NONE : IRQ_HANDLED;
+}
+
+static int mrfld_extcon_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct intel_soc_pmic *pmic = dev_get_drvdata(dev->parent);
+	struct regmap *regmap = pmic->regmap;
+	struct mrfld_extcon_data *data;
+	unsigned int id;
+	int irq, ret;
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0)
+		return irq;
+
+	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->dev = dev;
+	data->regmap = regmap;
+
+	data->edev = devm_extcon_dev_allocate(dev, mrfld_extcon_cable);
+	if (IS_ERR(data->edev))
+		return -ENOMEM;
+
+	ret = devm_extcon_dev_register(dev, data->edev);
+	if (ret < 0) {
+		dev_err(dev, "can't register extcon device: %d\n", ret);
+		return ret;
+	}
+
+	ret = devm_request_threaded_irq(dev, irq, NULL, mrfld_extcon_interrupt,
+					IRQF_ONESHOT | IRQF_SHARED, pdev->name,
+					data);
+	if (ret) {
+		dev_err(dev, "can't register IRQ handler: %d\n", ret);
+		return ret;
+	}
+
+	ret = regmap_read(regmap, BCOVE_ID, &id);
+	if (ret) {
+		dev_err(dev, "can't read PMIC ID: %d\n", ret);
+		return ret;
+	}
+
+	data->id = id;
+
+	ret = mrfld_extcon_sw_control(data, true);
+	if (ret)
+		return ret;
+
+	/* Get initial state */
+	mrfld_extcon_role_detect(data);
+
+	mrfld_extcon_clear(data, BCOVE_MIRQLVL1, BCOVE_LVL1_CHGR);
+	mrfld_extcon_clear(data, BCOVE_MCHGRIRQ1, BCOVE_CHGRIRQ_ALL);
+
+	mrfld_extcon_set(data, BCOVE_USBIDCTRL, BCOVE_USBIDCTRL_ALL);
+
+	platform_set_drvdata(pdev, data);
+
+	return 0;
+}
+
+static int mrfld_extcon_remove(struct platform_device *pdev)
+{
+	struct mrfld_extcon_data *data = platform_get_drvdata(pdev);
+
+	mrfld_extcon_sw_control(data, false);
+
+	return 0;
+}
+
+static const struct platform_device_id mrfld_extcon_id_table[] = {
+	{ .name = "mrfld_bcove_pwrsrc" },
+	{}
+};
+MODULE_DEVICE_TABLE(platform, mrfld_extcon_id_table);
+
+static struct platform_driver mrfld_extcon_driver = {
+	.driver = {
+		.name = "mrfld_bcove_pwrsrc",
+	},
+	.probe = mrfld_extcon_probe,
+	.remove = mrfld_extcon_remove,
+	.id_table = mrfld_extcon_id_table,
+};
+module_platform_driver(mrfld_extcon_driver);
+
+MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
+MODULE_DESCRIPTION("extcon driver for Intel Merrifield Basin Cove PMIC");
+MODULE_LICENSE("GPL v2");
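For context on how the EXTCON_USB_HOST state set above is consumed elsewhere in the kernel: extcon_get_state() is the standard in-kernel query API, though the helper below is only a hypothetical sketch, not part of this patch:

#include <linux/extcon.h>

/*
 * Hypothetical consumer-side helper: extcon_get_state() returns 1 if
 * the given cable (here EXTCON_USB_HOST, as published by the
 * extcon-intel-mrfld driver) is attached, 0 if detached, or a
 * negative errno on error.
 */
static bool example_usb_host_attached(struct extcon_dev *edev)
{
	return extcon_get_state(edev, EXTCON_USB_HOST) > 0;
}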
diff --git a/drivers/extcon/extcon-intel.h b/drivers/extcon/extcon-intel.h
new file mode 100644
index 000000000000..0ad645ec7b33
--- /dev/null
+++ b/drivers/extcon/extcon-intel.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Header file for Intel extcon hardware
+ *
+ * Copyright (C) 2019 Intel Corporation. All rights reserved.
+ */
+
+#ifndef __EXTCON_INTEL_H__
+#define __EXTCON_INTEL_H__
+
+enum extcon_intel_usb_id {
+	INTEL_USB_ID_OTG,
+	INTEL_USB_ID_GND,
+	INTEL_USB_ID_FLOAT,
+	INTEL_USB_RID_A,
+	INTEL_USB_RID_B,
+	INTEL_USB_RID_C,
+};
+
+#endif	/* __EXTCON_INTEL_H__ */
diff --git a/drivers/firmware/google/vpd.c b/drivers/firmware/google/vpd.c
index c0c0b4e4e281..f240946ed701 100644
--- a/drivers/firmware/google/vpd.c
+++ b/drivers/firmware/google/vpd.c
@@ -254,7 +254,7 @@ static int vpd_section_destroy(struct vpd_section *sec)
 
 static int vpd_sections_init(phys_addr_t physaddr)
 {
-	struct vpd_cbmem __iomem *temp;
+	struct vpd_cbmem *temp;
 	struct vpd_cbmem header;
 	int ret = 0;
 
@@ -262,7 +262,7 @@ static int vpd_sections_init(phys_addr_t physaddr)
 	if (!temp)
 		return -ENOMEM;
 
-	memcpy_fromio(&header, temp, sizeof(struct vpd_cbmem));
+	memcpy(&header, temp, sizeof(struct vpd_cbmem));
 	memunmap(temp);
 
 	if (header.magic != VPD_CBMEM_MAGIC)
diff --git a/drivers/gnss/ubx.c b/drivers/gnss/ubx.c
index 12568aebb7f6..7b05bc40532e 100644
--- a/drivers/gnss/ubx.c
+++ b/drivers/gnss/ubx.c
@@ -130,6 +130,7 @@ static void ubx_remove(struct serdev_device *serdev)
 
 #ifdef CONFIG_OF
 static const struct of_device_id ubx_of_match[] = {
+	{ .compatible = "u-blox,neo-6m" },
 	{ .compatible = "u-blox,neo-8" },
 	{ .compatible = "u-blox,neo-m8" },
 	{},
diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig
index ad34380cac49..18e8d03321d6 100644
--- a/drivers/hwtracing/coresight/Kconfig
+++ b/drivers/hwtracing/coresight/Kconfig
@@ -75,20 +75,13 @@ config CORESIGHT_SOURCE_ETM4X
 	bool "CoreSight Embedded Trace Macrocell 4.x driver"
 	depends on ARM64
 	select CORESIGHT_LINKS_AND_SINKS
+	select PID_IN_CONTEXTIDR
 	help
 	  This driver provides support for the ETM4.x tracer module, tracing the
 	  instructions that a processor is executing. This is primarily useful
 	  for instruction level tracing. Depending on the implemented version
 	  data tracing may also be available.
 
-config CORESIGHT_DYNAMIC_REPLICATOR
-	bool "CoreSight Programmable Replicator driver"
-	depends on CORESIGHT_LINKS_AND_SINKS
-	help
-	  This enables support for dynamic CoreSight replicator link driver.
-	  The programmable ATB replicator allows independent filtering of the
-	  trace data based on the traceid.
-
 config CORESIGHT_STM
 	bool "CoreSight System Trace Macrocell driver"
 	depends on (ARM && !(CPU_32v3 || CPU_32v4 || CPU_32v4T)) || ARM64
diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
index 41870ded51a3..3b435aa42af5 100644
--- a/drivers/hwtracing/coresight/Makefile
+++ b/drivers/hwtracing/coresight/Makefile
@@ -15,7 +15,6 @@ obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \
 					coresight-etm3x-sysfs.o
 obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o \
 					coresight-etm4x-sysfs.o
-obj-$(CONFIG_CORESIGHT_DYNAMIC_REPLICATOR) += coresight-dynamic-replicator.o
 obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
 obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o
 obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o
diff --git a/drivers/hwtracing/coresight/coresight-catu.c b/drivers/hwtracing/coresight/coresight-catu.c
index 170fbb66bda2..4ea68a3522e9 100644
--- a/drivers/hwtracing/coresight/coresight-catu.c
+++ b/drivers/hwtracing/coresight/coresight-catu.c
@@ -485,12 +485,12 @@ static int catu_disable(struct coresight_device *csdev, void *__unused)
 	return rc;
 }
 
-const struct coresight_ops_helper catu_helper_ops = {
+static const struct coresight_ops_helper catu_helper_ops = {
 	.enable = catu_enable,
 	.disable = catu_disable,
 };
 
-const struct coresight_ops catu_ops = {
+static const struct coresight_ops catu_ops = {
 	.helper_ops = &catu_helper_ops,
 };
 
@@ -557,8 +557,9 @@ static int catu_probe(struct amba_device *adev, const struct amba_id *id)
 	drvdata->csdev = coresight_register(&catu_desc);
 	if (IS_ERR(drvdata->csdev))
 		ret = PTR_ERR(drvdata->csdev);
+	else
+		pm_runtime_put(&adev->dev);
 out:
-	pm_runtime_put(&adev->dev);
 	return ret;
 }
 
diff --git a/drivers/hwtracing/coresight/coresight-catu.h b/drivers/hwtracing/coresight/coresight-catu.h
index 1b281f0dcccc..1d2ad183fd92 100644
--- a/drivers/hwtracing/coresight/coresight-catu.h
+++ b/drivers/hwtracing/coresight/coresight-catu.h
@@ -109,11 +109,6 @@ static inline bool coresight_is_catu_device(struct coresight_device *csdev)
 	return true;
 }
 
-#ifdef CONFIG_CORESIGHT_CATU
 extern const struct etr_buf_operations etr_catu_buf_ops;
-#else
-/* Dummy declaration for the CATU ops */
-static const struct etr_buf_operations etr_catu_buf_ops;
-#endif
 
 #endif
diff --git a/drivers/hwtracing/coresight/coresight-dynamic-replicator.c b/drivers/hwtracing/coresight/coresight-dynamic-replicator.c
deleted file mode 100644
index 299667b887fc..000000000000
--- a/drivers/hwtracing/coresight/coresight-dynamic-replicator.c
+++ /dev/null
@@ -1,255 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (c) 2011-2015, The Linux Foundation. All rights reserved.
- */
-
-#include <linux/amba/bus.h>
-#include <linux/clk.h>
-#include <linux/coresight.h>
-#include <linux/device.h>
-#include <linux/err.h>
-#include <linux/init.h>
-#include <linux/io.h>
-#include <linux/kernel.h>
-#include <linux/of.h>
-#include <linux/pm_runtime.h>
-#include <linux/slab.h>
-
-#include "coresight-priv.h"
-
-#define REPLICATOR_IDFILTER0	0x000
-#define REPLICATOR_IDFILTER1	0x004
-
-/**
- * struct replicator_state - specifics associated to a replicator component
- * @base:	memory mapped base address for this component.
- * @dev:	the device entity associated with this component
- * @atclk:	optional clock for the core parts of the replicator.
- * @csdev:	component vitals needed by the framework
- */
-struct replicator_state {
-	void __iomem	*base;
-	struct device	*dev;
-	struct clk	*atclk;
-	struct coresight_device	*csdev;
-};
-
-/*
- * replicator_reset : Reset the replicator configuration to sane values.
- */
-static void replicator_reset(struct replicator_state *drvdata)
-{
-	CS_UNLOCK(drvdata->base);
-
-	if (!coresight_claim_device_unlocked(drvdata->base)) {
-		writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0);
-		writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1);
-		coresight_disclaim_device_unlocked(drvdata->base);
-	}
-
-	CS_LOCK(drvdata->base);
-}
-
-static int replicator_enable(struct coresight_device *csdev, int inport,
-			     int outport)
-{
-	int rc = 0;
-	u32 reg;
-	struct replicator_state *drvdata = dev_get_drvdata(csdev->dev.parent);
-
-	switch (outport) {
-	case 0:
-		reg = REPLICATOR_IDFILTER0;
-		break;
-	case 1:
-		reg = REPLICATOR_IDFILTER1;
-		break;
-	default:
-		WARN_ON(1);
-		return -EINVAL;
-	}
-
-	CS_UNLOCK(drvdata->base);
-
-	if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) &&
-	    (readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff))
-		rc = coresight_claim_device_unlocked(drvdata->base);
-
-	/* Ensure that the outport is enabled. */
-	if (!rc) {
-		writel_relaxed(0x00, drvdata->base + reg);
-		dev_dbg(drvdata->dev, "REPLICATOR enabled\n");
-	}
-
-	CS_LOCK(drvdata->base);
-
-	return rc;
-}
-
-static void replicator_disable(struct coresight_device *csdev, int inport,
-			       int outport)
-{
-	u32 reg;
-	struct replicator_state *drvdata = dev_get_drvdata(csdev->dev.parent);
-
-	switch (outport) {
-	case 0:
-		reg = REPLICATOR_IDFILTER0;
-		break;
-	case 1:
-		reg = REPLICATOR_IDFILTER1;
-		break;
-	default:
-		WARN_ON(1);
-		return;
-	}
-
-	CS_UNLOCK(drvdata->base);
-
-	/* disable the flow of ATB data through port */
-	writel_relaxed(0xff, drvdata->base + reg);
-
-	if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) &&
-	    (readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff))
-		coresight_disclaim_device_unlocked(drvdata->base);
-	CS_LOCK(drvdata->base);
-
-	dev_dbg(drvdata->dev, "REPLICATOR disabled\n");
-}
-
-static const struct coresight_ops_link replicator_link_ops = {
-	.enable		= replicator_enable,
-	.disable	= replicator_disable,
-};
-
-static const struct coresight_ops replicator_cs_ops = {
-	.link_ops	= &replicator_link_ops,
-};
-
-#define coresight_replicator_reg(name, offset) \
-	coresight_simple_reg32(struct replicator_state, name, offset)
-
-coresight_replicator_reg(idfilter0, REPLICATOR_IDFILTER0);
-coresight_replicator_reg(idfilter1, REPLICATOR_IDFILTER1);
-
-static struct attribute *replicator_mgmt_attrs[] = {
-	&dev_attr_idfilter0.attr,
-	&dev_attr_idfilter1.attr,
-	NULL,
-};
-
-static const struct attribute_group replicator_mgmt_group = {
-	.attrs = replicator_mgmt_attrs,
-	.name = "mgmt",
-};
-
-static const struct attribute_group *replicator_groups[] = {
-	&replicator_mgmt_group,
-	NULL,
-};
-
-static int replicator_probe(struct amba_device *adev, const struct amba_id *id)
-{
-	int ret;
-	struct device *dev = &adev->dev;
-	struct resource *res = &adev->res;
-	struct coresight_platform_data *pdata = NULL;
-	struct replicator_state *drvdata;
-	struct coresight_desc desc = { 0 };
-	struct device_node *np = adev->dev.of_node;
-	void __iomem *base;
-
-	if (np) {
-		pdata = of_get_coresight_platform_data(dev, np);
-		if (IS_ERR(pdata))
-			return PTR_ERR(pdata);
-		adev->dev.platform_data = pdata;
-	}
-
-	drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
-	if (!drvdata)
-		return -ENOMEM;
-
-	drvdata->dev = &adev->dev;
-	drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */
-	if (!IS_ERR(drvdata->atclk)) {
-		ret = clk_prepare_enable(drvdata->atclk);
-		if (ret)
-			return ret;
-	}
-
-	/* Validity for the resource is already checked by the AMBA core */
-	base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(base))
-		return PTR_ERR(base);
-
-	drvdata->base = base;
-	dev_set_drvdata(dev, drvdata);
-	pm_runtime_put(&adev->dev);
-
-	desc.type = CORESIGHT_DEV_TYPE_LINK;
-	desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_SPLIT;
-	desc.ops = &replicator_cs_ops;
-	desc.pdata = adev->dev.platform_data;
-	desc.dev = &adev->dev;
-	desc.groups = replicator_groups;
-	drvdata->csdev = coresight_register(&desc);
-
-	if (!IS_ERR(drvdata->csdev)) {
-		replicator_reset(drvdata);
-		return 0;
-	}
-	return PTR_ERR(drvdata->csdev);
-}
-
-#ifdef CONFIG_PM
-static int replicator_runtime_suspend(struct device *dev)
-{
-	struct replicator_state *drvdata = dev_get_drvdata(dev);
-
-	if (drvdata && !IS_ERR(drvdata->atclk))
-		clk_disable_unprepare(drvdata->atclk);
-
-	return 0;
-}
-
-static int replicator_runtime_resume(struct device *dev)
-{
-	struct replicator_state *drvdata = dev_get_drvdata(dev);
-
-	if (drvdata && !IS_ERR(drvdata->atclk))
-		clk_prepare_enable(drvdata->atclk);
-
-	return 0;
-}
-#endif
-
-static const struct dev_pm_ops replicator_dev_pm_ops = {
-	SET_RUNTIME_PM_OPS(replicator_runtime_suspend,
-			   replicator_runtime_resume,
-			   NULL)
-};
-
-static const struct amba_id replicator_ids[] = {
-	{
-		.id     = 0x000bb909,
-		.mask   = 0x000fffff,
-	},
-	{
-		/* Coresight SoC-600 */
-		.id     = 0x000bb9ec,
-		.mask   = 0x000fffff,
-	},
-	{ 0, 0 },
-};
-
-static struct amba_driver replicator_driver = {
-	.drv = {
-		.name	= "coresight-dynamic-replicator",
-		.pm	= &replicator_dev_pm_ops,
-		.suppress_bind_attrs = true,
-	},
-	.probe		= replicator_probe,
-	.id_table	= replicator_ids,
-};
-builtin_amba_driver(replicator_driver);
diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
index 105782ea64c7..4ee4c80a4354 100644
--- a/drivers/hwtracing/coresight/coresight-etb10.c
+++ b/drivers/hwtracing/coresight/coresight-etb10.c
@@ -5,6 +5,7 @@
 * Description: CoreSight Embedded Trace Buffer driver
 */
 
+#include <linux/atomic.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -71,6 +72,8 @@
 * @miscdev:	specifics to handle "/dev/xyz.etb" entry.
 * @spinlock:	only one at a time pls.
 * @reading:	synchronise user space access to etb buffer.
+ * @pid:	Process ID of the process being monitored by the
+ *		session that is using this component.
 * @buf:	area of memory where ETB buffer content gets sent.
 * @mode:	this ETB is being used.
 * @buffer_depth: size of @buf.
@@ -84,6 +87,7 @@ struct etb_drvdata {
 	struct miscdevice	miscdev;
 	spinlock_t		spinlock;
 	local_t			reading;
+	pid_t			pid;
 	u8			*buf;
 	u32			mode;
 	u32			buffer_depth;
@@ -93,17 +97,9 @@ struct etb_drvdata {
 static int etb_set_buffer(struct coresight_device *csdev,
 			  struct perf_output_handle *handle);
 
-static unsigned int etb_get_buffer_depth(struct etb_drvdata *drvdata)
+static inline unsigned int etb_get_buffer_depth(struct etb_drvdata *drvdata)
 {
-	u32 depth = 0;
-
-	pm_runtime_get_sync(drvdata->dev);
-
-	/* RO registers don't need locking */
-	depth = readl_relaxed(drvdata->base + ETB_RAM_DEPTH_REG);
-
-	pm_runtime_put(drvdata->dev);
-	return depth;
+	return readl_relaxed(drvdata->base + ETB_RAM_DEPTH_REG);
 }
 
 static void __etb_enable_hw(struct etb_drvdata *drvdata)
@@ -159,14 +155,15 @@ static int etb_enable_sysfs(struct coresight_device *csdev)
 		goto out;
 	}
 
-	/* Nothing to do, the tracer is already enabled. */
-	if (drvdata->mode == CS_MODE_SYSFS)
-		goto out;
+	if (drvdata->mode == CS_MODE_DISABLED) {
+		ret = etb_enable_hw(drvdata);
+		if (ret)
+			goto out;
 
-	ret = etb_enable_hw(drvdata);
-	if (!ret)
 		drvdata->mode = CS_MODE_SYSFS;
+	}
 
+	atomic_inc(csdev->refcnt);
 out:
 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
 	return ret;
@@ -175,29 +172,52 @@ out:
 static int etb_enable_perf(struct coresight_device *csdev, void *data)
 {
 	int ret = 0;
+	pid_t pid;
 	unsigned long flags;
 	struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+	struct perf_output_handle *handle = data;
 
 	spin_lock_irqsave(&drvdata->spinlock, flags);
 
-	/* No need to continue if the component is already in use. */
-	if (drvdata->mode != CS_MODE_DISABLED) {
+	/* No need to continue if the component is already in used by sysFS. */
+	if (drvdata->mode == CS_MODE_SYSFS) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	/* Get a handle on the pid of the process to monitor */
+	pid = task_pid_nr(handle->event->owner);
+
+	if (drvdata->pid != -1 && drvdata->pid != pid) {
 		ret = -EBUSY;
 		goto out;
 	}
 
 	/*
+	 * No HW configuration is needed if the sink is already in
+	 * use for this session.
+	 */
+	if (drvdata->pid == pid) {
+		atomic_inc(csdev->refcnt);
+		goto out;
+	}
+
+	/*
 	 * We don't have an internal state to clean up if we fail to setup
 	 * the perf buffer. So we can perform the step before we turn the
 	 * ETB on and leave without cleaning up.
 	 */
-	ret = etb_set_buffer(csdev, (struct perf_output_handle *)data);
+	ret = etb_set_buffer(csdev, handle);
 	if (ret)
 		goto out;
 
 	ret = etb_enable_hw(drvdata);
-	if (!ret)
+	if (!ret) {
+		/* Associate with monitored process. */
+		drvdata->pid = pid;
 		drvdata->mode = CS_MODE_PERF;
+		atomic_inc(csdev->refcnt);
+	}
 
 out:
 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
@@ -325,27 +345,35 @@ static void etb_disable_hw(struct etb_drvdata *drvdata)
 	coresight_disclaim_device(drvdata->base);
 }
 
-static void etb_disable(struct coresight_device *csdev)
+static int etb_disable(struct coresight_device *csdev)
 {
 	struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
 	unsigned long flags;
 
 	spin_lock_irqsave(&drvdata->spinlock, flags);
 
-	/* Disable the ETB only if it needs to */
-	if (drvdata->mode != CS_MODE_DISABLED) {
-		etb_disable_hw(drvdata);
-		drvdata->mode = CS_MODE_DISABLED;
+	if (atomic_dec_return(csdev->refcnt)) {
+		spin_unlock_irqrestore(&drvdata->spinlock, flags);
+		return -EBUSY;
 	}
+
+	/* Complain if we (somehow) got out of sync */
+	WARN_ON_ONCE(drvdata->mode == CS_MODE_DISABLED);
+	etb_disable_hw(drvdata);
+	/* Dissociate from monitored process. */
+	drvdata->pid = -1;
+	drvdata->mode = CS_MODE_DISABLED;
 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
 
 	dev_dbg(drvdata->dev, "ETB disabled\n");
+	return 0;
 }
 
-static void *etb_alloc_buffer(struct coresight_device *csdev, int cpu,
-			      void **pages, int nr_pages, bool overwrite)
+static void *etb_alloc_buffer(struct coresight_device *csdev,
+			      struct perf_event *event, void **pages,
+			      int nr_pages, bool overwrite)
 {
-	int node;
+	int node, cpu = event->cpu;
 	struct cs_buffers *buf;
 
 	if (cpu == -1)
@@ -404,7 +432,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
 	const u32 *barrier;
 	u32 read_ptr, write_ptr, capacity;
 	u32 status, read_data;
-	unsigned long offset, to_read;
+	unsigned long offset, to_read = 0, flags;
 	struct cs_buffers *buf = sink_config;
 	struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
 
@@ -413,6 +441,12 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
 
 	capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS;
 
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+
+	/* Don't do anything if another tracer is using this sink */
+	if (atomic_read(csdev->refcnt) != 1)
+		goto out;
+
 	__etb_disable_hw(drvdata);
 	CS_UNLOCK(drvdata->base);
 
@@ -523,6 +557,8 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
 	}
 	__etb_enable_hw(drvdata);
 	CS_LOCK(drvdata->base);
+out:
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
 
 	return to_read;
 }
@@ -720,7 +756,6 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
 	spin_lock_init(&drvdata->spinlock);
 
 	drvdata->buffer_depth = etb_get_buffer_depth(drvdata);
-	pm_runtime_put(&adev->dev);
 
 	if (drvdata->buffer_depth & 0x80000000)
 		return -EINVAL;
@@ -730,6 +765,9 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
 	if (!drvdata->buf)
 		return -ENOMEM;
 
+	/* This device is not associated with a session */
+	drvdata->pid = -1;
+
 	desc.type = CORESIGHT_DEV_TYPE_SINK;
 	desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
 	desc.ops = &etb_cs_ops;
@@ -747,6 +785,7 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
 	if (ret)
 		goto err_misc_register;
 
+	pm_runtime_put(&adev->dev);
 	return 0;
 
 err_misc_register:
diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
index 4d5a2b9f9d6a..3c6294432748 100644
--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
+++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
@@ -29,6 +29,7 @@ static DEFINE_PER_CPU(struct coresight_device *, csdev_src);
 
 /* ETMv3.5/PTM's ETMCR is 'config' */
 PMU_FORMAT_ATTR(cycacc,		"config:" __stringify(ETM_OPT_CYCACC));
+PMU_FORMAT_ATTR(contextid,	"config:" __stringify(ETM_OPT_CTXTID));
 PMU_FORMAT_ATTR(timestamp,	"config:" __stringify(ETM_OPT_TS));
 PMU_FORMAT_ATTR(retstack,	"config:" __stringify(ETM_OPT_RETSTK));
 /* Sink ID - same for all ETMs */
@@ -36,6 +37,7 @@ PMU_FORMAT_ATTR(sinkid,		"config2:0-31");
 
 static struct attribute *etm_config_formats_attr[] = {
 	&format_attr_cycacc.attr,
+	&format_attr_contextid.attr,
 	&format_attr_timestamp.attr,
 	&format_attr_retstack.attr,
 	&format_attr_sinkid.attr,
@@ -118,23 +120,34 @@ out:
 	return ret;
 }
 
+static void free_sink_buffer(struct etm_event_data *event_data)
+{
+	int cpu;
+	cpumask_t *mask = &event_data->mask;
+	struct coresight_device *sink;
+
+	if (WARN_ON(cpumask_empty(mask)))
+		return;
+
+	if (!event_data->snk_config)
+		return;
+
+	cpu = cpumask_first(mask);
+	sink = coresight_get_sink(etm_event_cpu_path(event_data, cpu));
+	sink_ops(sink)->free_buffer(event_data->snk_config);
+}
+
 static void free_event_data(struct work_struct *work)
 {
 	int cpu;
 	cpumask_t *mask;
 	struct etm_event_data *event_data;
-	struct coresight_device *sink;
 
 	event_data = container_of(work, struct etm_event_data, work);
 	mask = &event_data->mask;
 
 	/* Free the sink buffers, if there are any */
-	if (event_data->snk_config && !WARN_ON(cpumask_empty(mask))) {
-		cpu = cpumask_first(mask);
-		sink = coresight_get_sink(etm_event_cpu_path(event_data, cpu));
-		if (sink_ops(sink)->free_buffer)
-			sink_ops(sink)->free_buffer(event_data->snk_config);
-	}
+	free_sink_buffer(event_data);
 
 	for_each_cpu(cpu, mask) {
 		struct list_head **ppath;
@@ -213,7 +226,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
 		sink = coresight_get_enabled_sink(true);
 	}
 
-	if (!sink || !sink_ops(sink)->alloc_buffer)
+	if (!sink)
 		goto err;
 
 	mask = &event_data->mask;
@@ -259,9 +272,12 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
 	if (cpu >= nr_cpu_ids)
 		goto err;
 
+	if (!sink_ops(sink)->alloc_buffer || !sink_ops(sink)->free_buffer)
+		goto err;
+
 	/* Allocate the sink buffer for this session */
 	event_data->snk_config =
-			sink_ops(sink)->alloc_buffer(sink, cpu, pages,
+			sink_ops(sink)->alloc_buffer(sink, event, pages,
 						     nr_pages, overwrite);
 	if (!event_data->snk_config)
 		goto err;
@@ -566,7 +582,8 @@ static int __init etm_perf_init(void)
 {
 	int ret;
 
-	etm_pmu.capabilities	= PERF_PMU_CAP_EXCLUSIVE;
+	etm_pmu.capabilities	= (PERF_PMU_CAP_EXCLUSIVE |
+				   PERF_PMU_CAP_ITRACE);
 
 	etm_pmu.attr_groups	= etm_pmu_attr_groups;
 	etm_pmu.task_ctx_nr	= perf_sw_context;
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
index 08ce37c9475d..8bb0092c7ec2 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x.c
@@ -138,8 +138,11 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
 			       drvdata->base + TRCCNTVRn(i));
 	}
 
-	/* Resource selector pair 0 is always implemented and reserved */
-	for (i = 0; i < drvdata->nr_resource * 2; i++)
+	/*
+	 * Resource selector pair 0 is always implemented and reserved.  As
+	 * such start at 2.
+	 */
+	for (i = 2; i < drvdata->nr_resource * 2; i++)
 		writel_relaxed(config->res_ctrl[i],
 			       drvdata->base + TRCRSCTLRn(i));
 
@@ -201,6 +204,91 @@ static void etm4_enable_hw_smp_call(void *info)
 	arg->rc = etm4_enable_hw(arg->drvdata);
 }
 
+/*
+ * The goal of function etm4_config_timestamp_event() is to configure a
+ * counter that will tell the tracer to emit a timestamp packet when it
+ * reaches zero.  This is done in order to get a more fine grained idea
+ * of when instructions are executed so that they can be correlated
+ * with execution on other CPUs.
+ *
+ * To do this the counter itself is configured to self reload and
+ * TRCRSCTLR1 (always true) used to get the counter to decrement.  From
+ * there a resource selector is configured with the counter and the
+ * timestamp control register to use the resource selector to trigger the
+ * event that will insert a timestamp packet in the stream.
+ */
+static int etm4_config_timestamp_event(struct etmv4_drvdata *drvdata)
+{
+	int ctridx, ret = -EINVAL;
+	int counter, rselector;
+	u32 val = 0;
+	struct etmv4_config *config = &drvdata->config;
+
+	/* No point in trying if we don't have at least one counter */
+	if (!drvdata->nr_cntr)
+		goto out;
+
+	/* Find a counter that hasn't been initialised */
+	for (ctridx = 0; ctridx < drvdata->nr_cntr; ctridx++)
+		if (config->cntr_val[ctridx] == 0)
+			break;
+
+	/* All the counters have been configured already, bail out */
+	if (ctridx == drvdata->nr_cntr) {
+		pr_debug("%s: no available counter found\n", __func__);
+		ret = -ENOSPC;
+		goto out;
+	}
+
+	/*
+	 * Searching for an available resource selector to use, starting at
+	 * '2' since every implementation has at least 2 resource selector.
+	 * ETMIDR4 gives the number of resource selector _pairs_,
+	 * hence multiply by 2.
+	 */
+	for (rselector = 2; rselector < drvdata->nr_resource * 2; rselector++)
+		if (!config->res_ctrl[rselector])
+			break;
+
+	if (rselector == drvdata->nr_resource * 2) {
+		pr_debug("%s: no available resource selector found\n",
+			 __func__);
+		ret = -ENOSPC;
+		goto out;
+	}
+
+	/* Remember what counter we used */
+	counter = 1 << ctridx;
+
+	/*
+	 * Initialise original and reload counter value to the smallest
+	 * possible value in order to get as much precision as we can.
+	 */
+	config->cntr_val[ctridx] = 1;
+	config->cntrldvr[ctridx] = 1;
+
+	/* Set the trace counter control register */
+	val =  0x1 << 16	|  /* Bit 16, reload counter automatically */
+	       0x0 << 7		|  /* Select single resource selector */
+	       0x1;		   /* Resource selector 1, i.e always true */
+
+	config->cntr_ctrl[ctridx] = val;
+
+	val = 0x2 << 16		| /* Group 0b0010 - Counter and sequencers */
+	      counter << 0;	  /* Counter to use */
+
+	config->res_ctrl[rselector] = val;
+
+	val = 0x0 << 7		| /* Select single resource selector */
+	      rselector;	  /* Resource selector */
+
+	config->ts_ctrl = val;
+
+	ret = 0;
+out:
+	return ret;
+}
+
 static int etm4_parse_event_config(struct etmv4_drvdata *drvdata,
 				   struct perf_event *event)
 {
@@ -236,9 +324,29 @@ static int etm4_parse_event_config(struct etmv4_drvdata *drvdata,
 		/* TRM: Must program this for cycacc to work */
 		config->ccctlr = ETM_CYC_THRESHOLD_DEFAULT;
 	}
-	if (attr->config & BIT(ETM_OPT_TS))
+	if (attr->config & BIT(ETM_OPT_TS)) {
+		/*
+		 * Configure timestamps to be emitted at regular intervals in
+		 * order to correlate instructions executed on different CPUs
+		 * (CPU-wide trace scenarios).
+		 */
+		ret = etm4_config_timestamp_event(drvdata);
+
+		/*
+		 * No need to go further if timestamp intervals can't
+		 * be configured.
+		 */
+		if (ret)
+			goto out;
+
 		/* bit[11], Global timestamp tracing bit */
 		config->cfg |= BIT(11);
+	}
+
+	if (attr->config & BIT(ETM_OPT_CTXTID))
+		/* bit[6], Context ID tracing bit */
+		config->cfg |= BIT(ETM4_CFG_BIT_CTXTID);
+
 	/* return stack - enable if selected and supported */
 	if ((attr->config & BIT(ETM_OPT_RETSTK)) && drvdata->retstack)
 		/* bit[12], Return stack enable bit */
diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
index 927925151509..16b0c0e1e43a 100644
--- a/drivers/hwtracing/coresight/coresight-funnel.c
+++ b/drivers/hwtracing/coresight/coresight-funnel.c
@@ -12,6 +12,8 @@
 #include <linux/err.h>
 #include <linux/fs.h>
 #include <linux/slab.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
 #include <linux/coresight.h>
 #include <linux/amba/bus.h>
@@ -43,7 +45,7 @@ struct funnel_drvdata {
 	unsigned long		priority;
 };
 
-static int funnel_enable_hw(struct funnel_drvdata *drvdata, int port)
+static int dynamic_funnel_enable_hw(struct funnel_drvdata *drvdata, int port)
 {
 	u32 functl;
 	int rc = 0;
@@ -71,17 +73,19 @@ done:
 static int funnel_enable(struct coresight_device *csdev, int inport,
 			 int outport)
 {
-	int rc;
+	int rc = 0;
 	struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
 
-	rc = funnel_enable_hw(drvdata, inport);
+	if (drvdata->base)
+		rc = dynamic_funnel_enable_hw(drvdata, inport);
 
 	if (!rc)
 		dev_dbg(drvdata->dev, "FUNNEL inport %d enabled\n", inport);
 	return rc;
 }
 
-static void funnel_disable_hw(struct funnel_drvdata *drvdata, int inport)
+static void dynamic_funnel_disable_hw(struct funnel_drvdata *drvdata,
+				      int inport)
 {
 	u32 functl;
 
@@ -103,7 +107,8 @@ static void funnel_disable(struct coresight_device *csdev, int inport,
 {
 	struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
 
-	funnel_disable_hw(drvdata, inport);
+	if (drvdata->base)
+		dynamic_funnel_disable_hw(drvdata, inport);
 
 	dev_dbg(drvdata->dev, "FUNNEL inport %d disabled\n", inport);
 }
@@ -177,54 +182,70 @@ static struct attribute *coresight_funnel_attrs[] = {
 };
 ATTRIBUTE_GROUPS(coresight_funnel);
 
-static int funnel_probe(struct amba_device *adev, const struct amba_id *id)
+static int funnel_probe(struct device *dev, struct resource *res)
 {
 	int ret;
 	void __iomem *base;
-	struct device *dev = &adev->dev;
 	struct coresight_platform_data *pdata = NULL;
 	struct funnel_drvdata *drvdata;
-	struct resource *res = &adev->res;
 	struct coresight_desc desc = { 0 };
-	struct device_node *np = adev->dev.of_node;
+	struct device_node *np = dev->of_node;
 
 	if (np) {
 		pdata = of_get_coresight_platform_data(dev, np);
 		if (IS_ERR(pdata))
			return PTR_ERR(pdata);
-		adev->dev.platform_data = pdata;
+		dev->platform_data = pdata;
 	}
 
+	if (of_device_is_compatible(np, "arm,coresight-funnel"))
+		pr_warn_once("Uses OBSOLETE CoreSight funnel binding\n");
+
 	drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
 	if (!drvdata)
 		return -ENOMEM;
 
-	drvdata->dev = &adev->dev;
-	drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */
+	drvdata->dev = dev;
+	drvdata->atclk = devm_clk_get(dev, "atclk"); /* optional */
 	if (!IS_ERR(drvdata->atclk)) {
 		ret = clk_prepare_enable(drvdata->atclk);
 		if (ret)
 			return ret;
 	}
 
-	dev_set_drvdata(dev, drvdata);
-
-	/* Validity for the resource is already checked by the AMBA core */
-	base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(base))
-		return PTR_ERR(base);
+	/*
+	 * Map the device base for dynamic-funnel, which has been
+	 * validated by AMBA core.
+ */ + if (res) { + base = devm_ioremap_resource(dev, res); + if (IS_ERR(base)) { + ret = PTR_ERR(base); + goto out_disable_clk; + } + drvdata->base = base; + desc.groups = coresight_funnel_groups; + } - drvdata->base = base; - pm_runtime_put(&adev->dev); + dev_set_drvdata(dev, drvdata); desc.type = CORESIGHT_DEV_TYPE_LINK; desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_MERG; desc.ops = &funnel_cs_ops; desc.pdata = pdata; desc.dev = dev; - desc.groups = coresight_funnel_groups; drvdata->csdev = coresight_register(&desc); + if (IS_ERR(drvdata->csdev)) { + ret = PTR_ERR(drvdata->csdev); + goto out_disable_clk; + } + + pm_runtime_put(dev); - return PTR_ERR_OR_ZERO(drvdata->csdev); +out_disable_clk: + if (ret && !IS_ERR_OR_NULL(drvdata->atclk)) + clk_disable_unprepare(drvdata->atclk); + return ret; } #ifdef CONFIG_PM @@ -253,7 +274,48 @@ static const struct dev_pm_ops funnel_dev_pm_ops = { SET_RUNTIME_PM_OPS(funnel_runtime_suspend, funnel_runtime_resume, NULL) }; -static const struct amba_id funnel_ids[] = { +static int static_funnel_probe(struct platform_device *pdev) +{ + int ret; + + pm_runtime_get_noresume(&pdev->dev); + pm_runtime_set_active(&pdev->dev); + pm_runtime_enable(&pdev->dev); + + /* Static funnel do not have programming base */ + ret = funnel_probe(&pdev->dev, NULL); + + if (ret) { + pm_runtime_put_noidle(&pdev->dev); + pm_runtime_disable(&pdev->dev); + } + + return ret; +} + +static const struct of_device_id static_funnel_match[] = { + {.compatible = "arm,coresight-static-funnel"}, + {} +}; + +static struct platform_driver static_funnel_driver = { + .probe = static_funnel_probe, + .driver = { + .name = "coresight-static-funnel", + .of_match_table = static_funnel_match, + .pm = &funnel_dev_pm_ops, + .suppress_bind_attrs = true, + }, +}; +builtin_platform_driver(static_funnel_driver); + +static int dynamic_funnel_probe(struct amba_device *adev, + const struct amba_id *id) +{ + return funnel_probe(&adev->dev, &adev->res); +} + +static const struct amba_id dynamic_funnel_ids[] = { { .id = 0x000bb908, .mask = 0x000fffff, @@ -266,14 +328,14 @@ static const struct amba_id funnel_ids[] = { { 0, 0}, }; -static struct amba_driver funnel_driver = { +static struct amba_driver dynamic_funnel_driver = { .drv = { - .name = "coresight-funnel", + .name = "coresight-dynamic-funnel", .owner = THIS_MODULE, .pm = &funnel_dev_pm_ops, .suppress_bind_attrs = true, }, - .probe = funnel_probe, - .id_table = funnel_ids, + .probe = dynamic_funnel_probe, + .id_table = dynamic_funnel_ids, }; -builtin_amba_driver(funnel_driver); +builtin_amba_driver(dynamic_funnel_driver); diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c index feac98315471..8c9ce74498e1 100644 --- a/drivers/hwtracing/coresight/coresight-replicator.c +++ b/drivers/hwtracing/coresight/coresight-replicator.c @@ -1,10 +1,11 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. + * Copyright (c) 2011-2015, The Linux Foundation. All rights reserved. * * Description: CoreSight Replicator driver */ +#include <linux/amba/bus.h> #include <linux/kernel.h> #include <linux/device.h> #include <linux/platform_device.h> @@ -18,25 +19,117 @@ #include "coresight-priv.h" +#define REPLICATOR_IDFILTER0 0x000 +#define REPLICATOR_IDFILTER1 0x004 + /** * struct replicator_drvdata - specifics associated to a replicator component + * @base: memory mapped base address for this component. 
Also indicates + * whether this one is programmable or not. * @dev: the device entity associated with this component * @atclk: optional clock for the core parts of the replicator. * @csdev: component vitals needed by the framework */ struct replicator_drvdata { + void __iomem *base; struct device *dev; struct clk *atclk; struct coresight_device *csdev; }; +static void dynamic_replicator_reset(struct replicator_drvdata *drvdata) +{ + CS_UNLOCK(drvdata->base); + + if (!coresight_claim_device_unlocked(drvdata->base)) { + writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0); + writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1); + coresight_disclaim_device_unlocked(drvdata->base); + } + + CS_LOCK(drvdata->base); +} + +/* + * replicator_reset : Reset the replicator configuration to sane values. + */ +static inline void replicator_reset(struct replicator_drvdata *drvdata) +{ + if (drvdata->base) + dynamic_replicator_reset(drvdata); +} + +static int dynamic_replicator_enable(struct replicator_drvdata *drvdata, + int inport, int outport) +{ + int rc = 0; + u32 reg; + + switch (outport) { + case 0: + reg = REPLICATOR_IDFILTER0; + break; + case 1: + reg = REPLICATOR_IDFILTER1; + break; + default: + WARN_ON(1); + return -EINVAL; + } + + CS_UNLOCK(drvdata->base); + + if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) && + (readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff)) + rc = coresight_claim_device_unlocked(drvdata->base); + + /* Ensure that the outport is enabled. */ + if (!rc) + writel_relaxed(0x00, drvdata->base + reg); + CS_LOCK(drvdata->base); + + return rc; +} + static int replicator_enable(struct coresight_device *csdev, int inport, int outport) { + int rc = 0; struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); - dev_dbg(drvdata->dev, "REPLICATOR enabled\n"); - return 0; + if (drvdata->base) + rc = dynamic_replicator_enable(drvdata, inport, outport); + if (!rc) + dev_dbg(drvdata->dev, "REPLICATOR enabled\n"); + return rc; +} + +static void dynamic_replicator_disable(struct replicator_drvdata *drvdata, + int inport, int outport) +{ + u32 reg; + + switch (outport) { + case 0: + reg = REPLICATOR_IDFILTER0; + break; + case 1: + reg = REPLICATOR_IDFILTER1; + break; + default: + WARN_ON(1); + return; + } + + CS_UNLOCK(drvdata->base); + + /* disable the flow of ATB data through port */ + writel_relaxed(0xff, drvdata->base + reg); + + if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) && + (readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff)) + coresight_disclaim_device_unlocked(drvdata->base); + CS_LOCK(drvdata->base); } static void replicator_disable(struct coresight_device *csdev, int inport, @@ -44,6 +137,8 @@ static void replicator_disable(struct coresight_device *csdev, int inport, { struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); + if (drvdata->base) + dynamic_replicator_disable(drvdata, inport, outport); dev_dbg(drvdata->dev, "REPLICATOR disabled\n"); } @@ -56,58 +151,110 @@ static const struct coresight_ops replicator_cs_ops = { .link_ops = &replicator_link_ops, }; -static int replicator_probe(struct platform_device *pdev) +#define coresight_replicator_reg(name, offset) \ + coresight_simple_reg32(struct replicator_drvdata, name, offset) + +coresight_replicator_reg(idfilter0, REPLICATOR_IDFILTER0); +coresight_replicator_reg(idfilter1, REPLICATOR_IDFILTER1); + +static struct attribute *replicator_mgmt_attrs[] = { + &dev_attr_idfilter0.attr, + &dev_attr_idfilter1.attr, + NULL, +}; + 
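The IDFILTER programming above reduces to a small invariant: an outport passes trace when its ID filter reads 0x00 and blocks it at 0xff, and the replicator is claimed exactly while at least one outport is passing. A minimal stand-alone model of that bookkeeping (plain C; the idfilter[] array and the claimed flag are illustrative stand-ins for the real MMIO window and the coresight_claim_device_unlocked()/coresight_disclaim_device_unlocked() calls, and the failure path of claiming is ignored here):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum { IDFILTER0, IDFILTER1 };		/* models REPLICATOR_IDFILTER0/1 */

static uint32_t idfilter[2] = { 0xff, 0xff };	/* reset: both outports blocked */
static bool claimed;				/* models the CoreSight claim tag */

/* True when no trace ID passes through either outport, i.e. the device is idle. */
static bool replicator_idle(void)
{
	return idfilter[IDFILTER0] == 0xff && idfilter[IDFILTER1] == 0xff;
}

/* Enabling clears the filter so every trace ID flows to the outport. */
static void outport_enable(int outport)
{
	if (replicator_idle())
		claimed = true;		/* first active port claims the device */
	idfilter[outport] = 0x00;
}

/* Disabling blocks the outport again; the last active port disclaims. */
static void outport_disable(int outport)
{
	idfilter[outport] = 0xff;
	if (replicator_idle())
		claimed = false;
}

int main(void)
{
	outport_enable(IDFILTER0);
	outport_enable(IDFILTER1);
	outport_disable(IDFILTER0);
	assert(claimed);		/* port 1 still forwarding */
	outport_disable(IDFILTER1);
	assert(!claimed);		/* fully idle again */
	return 0;
}

The claim/disclaim pairing is what lets a self-hosted agent and an external debugger arbitrate ownership of the component, which is why the driver only claims while both filters still read 0xff.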
+static const struct attribute_group replicator_mgmt_group = { + .attrs = replicator_mgmt_attrs, + .name = "mgmt", +}; + +static const struct attribute_group *replicator_groups[] = { + &replicator_mgmt_group, + NULL, +}; + +static int replicator_probe(struct device *dev, struct resource *res) { - int ret; - struct device *dev = &pdev->dev; + int ret = 0; struct coresight_platform_data *pdata = NULL; struct replicator_drvdata *drvdata; struct coresight_desc desc = { 0 }; - struct device_node *np = pdev->dev.of_node; + struct device_node *np = dev->of_node; + void __iomem *base; if (np) { pdata = of_get_coresight_platform_data(dev, np); if (IS_ERR(pdata)) return PTR_ERR(pdata); - pdev->dev.platform_data = pdata; + dev->platform_data = pdata; } + if (of_device_is_compatible(np, "arm,coresight-replicator")) + pr_warn_once("Uses OBSOLETE CoreSight replicator binding\n"); + drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); if (!drvdata) return -ENOMEM; - drvdata->dev = &pdev->dev; - drvdata->atclk = devm_clk_get(&pdev->dev, "atclk"); /* optional */ + drvdata->dev = dev; + drvdata->atclk = devm_clk_get(dev, "atclk"); /* optional */ if (!IS_ERR(drvdata->atclk)) { ret = clk_prepare_enable(drvdata->atclk); if (ret) return ret; } - pm_runtime_get_noresume(&pdev->dev); - pm_runtime_set_active(&pdev->dev); - pm_runtime_enable(&pdev->dev); - platform_set_drvdata(pdev, drvdata); + + /* + * Map the device base for dynamic-replicator, which has been + * validated by AMBA core + */ + if (res) { + base = devm_ioremap_resource(dev, res); + if (IS_ERR(base)) { + ret = PTR_ERR(base); + goto out_disable_clk; + } + drvdata->base = base; + desc.groups = replicator_groups; + } + + dev_set_drvdata(dev, drvdata); desc.type = CORESIGHT_DEV_TYPE_LINK; desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_SPLIT; desc.ops = &replicator_cs_ops; - desc.pdata = pdev->dev.platform_data; - desc.dev = &pdev->dev; + desc.pdata = dev->platform_data; + desc.dev = dev; drvdata->csdev = coresight_register(&desc); if (IS_ERR(drvdata->csdev)) { ret = PTR_ERR(drvdata->csdev); - goto out_disable_pm; + goto out_disable_clk; } - pm_runtime_put(&pdev->dev); - - return 0; + replicator_reset(drvdata); + pm_runtime_put(dev); -out_disable_pm: - if (!IS_ERR(drvdata->atclk)) +out_disable_clk: + if (ret && !IS_ERR_OR_NULL(drvdata->atclk)) clk_disable_unprepare(drvdata->atclk); - pm_runtime_put_noidle(&pdev->dev); - pm_runtime_disable(&pdev->dev); + return ret; +} + +static int static_replicator_probe(struct platform_device *pdev) +{ + int ret; + + pm_runtime_get_noresume(&pdev->dev); + pm_runtime_set_active(&pdev->dev); + pm_runtime_enable(&pdev->dev); + + /* Static replicators do not have programming base */ + ret = replicator_probe(&pdev->dev, NULL); + + if (ret) { + pm_runtime_put_noidle(&pdev->dev); + pm_runtime_disable(&pdev->dev); + } return ret; } @@ -139,18 +286,49 @@ static const struct dev_pm_ops replicator_dev_pm_ops = { replicator_runtime_resume, NULL) }; -static const struct of_device_id replicator_match[] = { +static const struct of_device_id static_replicator_match[] = { {.compatible = "arm,coresight-replicator"}, + {.compatible = "arm,coresight-static-replicator"}, {} }; -static struct platform_driver replicator_driver = { - .probe = replicator_probe, +static struct platform_driver static_replicator_driver = { + .probe = static_replicator_probe, .driver = { - .name = "coresight-replicator", - .of_match_table = replicator_match, + .name = "coresight-static-replicator", + .of_match_table = static_replicator_match, + .pm = 
&replicator_dev_pm_ops, + .suppress_bind_attrs = true, + }, +}; +builtin_platform_driver(static_replicator_driver); + +static int dynamic_replicator_probe(struct amba_device *adev, + const struct amba_id *id) +{ + return replicator_probe(&adev->dev, &adev->res); +} + +static const struct amba_id dynamic_replicator_ids[] = { + { + .id = 0x000bb909, + .mask = 0x000fffff, + }, + { + /* Coresight SoC-600 */ + .id = 0x000bb9ec, + .mask = 0x000fffff, + }, + { 0, 0 }, +}; + +static struct amba_driver dynamic_replicator_driver = { + .drv = { + .name = "coresight-dynamic-replicator", .pm = &replicator_dev_pm_ops, .suppress_bind_attrs = true, }, + .probe = dynamic_replicator_probe, + .id_table = dynamic_replicator_ids, }; -builtin_platform_driver(replicator_driver); +builtin_amba_driver(dynamic_replicator_driver); diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c index a5f053f2db2c..2527b5d3b65e 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-etf.c +++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c @@ -4,6 +4,7 @@ * Author: Mathieu Poirier <mathieu.poirier@linaro.org> */ +#include <linux/atomic.h> #include <linux/circ_buf.h> #include <linux/coresight.h> #include <linux/perf_event.h> @@ -180,8 +181,10 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev) * sink is already enabled no memory is needed and the HW need not be * touched. */ - if (drvdata->mode == CS_MODE_SYSFS) + if (drvdata->mode == CS_MODE_SYSFS) { + atomic_inc(csdev->refcnt); goto out; + } /* * If drvdata::buf isn't NULL, memory was allocated for a previous @@ -200,11 +203,13 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev) } ret = tmc_etb_enable_hw(drvdata); - if (!ret) + if (!ret) { drvdata->mode = CS_MODE_SYSFS; - else + atomic_inc(csdev->refcnt); + } else { /* Free up the buffer if we failed to enable */ used = false; + } out: spin_unlock_irqrestore(&drvdata->spinlock, flags); @@ -218,6 +223,7 @@ out: static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data) { int ret = 0; + pid_t pid; unsigned long flags; struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct perf_output_handle *handle = data; @@ -228,19 +234,42 @@ static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data) if (drvdata->reading) break; /* - * In Perf mode there can be only one writer per sink. There - * is also no need to continue if the ETB/ETF is already - * operated from sysFS. + * No need to continue if the ETB/ETF is already operated + * from sysFS. */ - if (drvdata->mode != CS_MODE_DISABLED) + if (drvdata->mode == CS_MODE_SYSFS) { + ret = -EBUSY; + break; + } + + /* Get a handle on the pid of the process to monitor */ + pid = task_pid_nr(handle->event->owner); + + if (drvdata->pid != -1 && drvdata->pid != pid) { + ret = -EBUSY; break; + } ret = tmc_set_etf_buffer(csdev, handle); if (ret) break; + + /* + * No HW configuration is needed if the sink is already in + * use for this session. + */ + if (drvdata->pid == pid) { + atomic_inc(csdev->refcnt); + break; + } + ret = tmc_etb_enable_hw(drvdata); - if (!ret) + if (!ret) { + /* Associate with monitored process. 
*/ + drvdata->pid = pid; drvdata->mode = CS_MODE_PERF; + atomic_inc(csdev->refcnt); + } } while (0); spin_unlock_irqrestore(&drvdata->spinlock, flags); @@ -273,26 +302,34 @@ static int tmc_enable_etf_sink(struct coresight_device *csdev, return 0; } -static void tmc_disable_etf_sink(struct coresight_device *csdev) +static int tmc_disable_etf_sink(struct coresight_device *csdev) { unsigned long flags; struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); spin_lock_irqsave(&drvdata->spinlock, flags); + if (drvdata->reading) { spin_unlock_irqrestore(&drvdata->spinlock, flags); - return; + return -EBUSY; } - /* Disable the TMC only if it needs to */ - if (drvdata->mode != CS_MODE_DISABLED) { - tmc_etb_disable_hw(drvdata); - drvdata->mode = CS_MODE_DISABLED; + if (atomic_dec_return(csdev->refcnt)) { + spin_unlock_irqrestore(&drvdata->spinlock, flags); + return -EBUSY; } + /* Complain if we (somehow) got out of sync */ + WARN_ON_ONCE(drvdata->mode == CS_MODE_DISABLED); + tmc_etb_disable_hw(drvdata); + /* Dissociate from monitored process. */ + drvdata->pid = -1; + drvdata->mode = CS_MODE_DISABLED; + spin_unlock_irqrestore(&drvdata->spinlock, flags); dev_dbg(drvdata->dev, "TMC-ETB/ETF disabled\n"); + return 0; } static int tmc_enable_etf_link(struct coresight_device *csdev, @@ -337,10 +374,11 @@ static void tmc_disable_etf_link(struct coresight_device *csdev, dev_dbg(drvdata->dev, "TMC-ETF disabled\n"); } -static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, int cpu, - void **pages, int nr_pages, bool overwrite) +static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, + struct perf_event *event, void **pages, + int nr_pages, bool overwrite) { - int node; + int node, cpu = event->cpu; struct cs_buffers *buf; if (cpu == -1) @@ -400,7 +438,7 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev, u32 *buf_ptr; u64 read_ptr, write_ptr; u32 status; - unsigned long offset, to_read; + unsigned long offset, to_read = 0, flags; struct cs_buffers *buf = sink_config; struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); @@ -411,6 +449,12 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev, if (WARN_ON_ONCE(drvdata->mode != CS_MODE_PERF)) return 0; + spin_lock_irqsave(&drvdata->spinlock, flags); + + /* Don't do anything if another tracer is using this sink */ + if (atomic_read(csdev->refcnt) != 1) + goto out; + CS_UNLOCK(drvdata->base); tmc_flush_and_stop(drvdata); @@ -504,6 +548,8 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev, to_read = buf->nr_pages << PAGE_SHIFT; } CS_LOCK(drvdata->base); +out: + spin_unlock_irqrestore(&drvdata->spinlock, flags); return to_read; } diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c index f684283890d3..df6e4b0b84e9 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-etr.c +++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c @@ -4,10 +4,15 @@ * Author: Mathieu Poirier <mathieu.poirier@linaro.org> */ +#include <linux/atomic.h> #include <linux/coresight.h> #include <linux/dma-mapping.h> #include <linux/iommu.h> +#include <linux/idr.h> +#include <linux/mutex.h> +#include <linux/refcount.h> #include <linux/slab.h> +#include <linux/types.h> #include <linux/vmalloc.h> #include "coresight-catu.h" #include "coresight-etm-perf.h" @@ -23,14 +28,18 @@ struct etr_flat_buf { /* * etr_perf_buffer - Perf buffer used for ETR + * @drvdata - The ETR drvdaga this buffer has been allocated for. 
* @etr_buf - Actual buffer used by the ETR + * @pid - The PID this etr_perf_buffer belongs to. * @snaphost - Perf session mode * @head - handle->head at the beginning of the session. * @nr_pages - Number of pages in the ring buffer. * @pages - Array of Pages in the ring buffer. */ struct etr_perf_buffer { + struct tmc_drvdata *drvdata; struct etr_buf *etr_buf; + pid_t pid; bool snapshot; unsigned long head; int nr_pages; @@ -772,7 +781,8 @@ static inline void tmc_etr_disable_catu(struct tmc_drvdata *drvdata) static const struct etr_buf_operations *etr_buf_ops[] = { [ETR_MODE_FLAT] = &etr_flat_buf_ops, [ETR_MODE_ETR_SG] = &etr_sg_buf_ops, - [ETR_MODE_CATU] = &etr_catu_buf_ops, + [ETR_MODE_CATU] = IS_ENABLED(CONFIG_CORESIGHT_CATU) + ? &etr_catu_buf_ops : NULL, }; static inline int tmc_etr_mode_alloc_buf(int mode, @@ -786,7 +796,7 @@ static inline int tmc_etr_mode_alloc_buf(int mode, case ETR_MODE_FLAT: case ETR_MODE_ETR_SG: case ETR_MODE_CATU: - if (etr_buf_ops[mode]->alloc) + if (etr_buf_ops[mode] && etr_buf_ops[mode]->alloc) rc = etr_buf_ops[mode]->alloc(drvdata, etr_buf, node, pages); if (!rc) @@ -1124,8 +1134,10 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev) * sink is already enabled no memory is needed and the HW need not be * touched, even if the buffer size has changed. */ - if (drvdata->mode == CS_MODE_SYSFS) + if (drvdata->mode == CS_MODE_SYSFS) { + atomic_inc(csdev->refcnt); goto out; + } /* * If we don't have a buffer or it doesn't match the requested size, @@ -1138,8 +1150,10 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev) } ret = tmc_etr_enable_hw(drvdata, drvdata->sysfs_buf); - if (!ret) + if (!ret) { drvdata->mode = CS_MODE_SYSFS; + atomic_inc(csdev->refcnt); + } out: spin_unlock_irqrestore(&drvdata->spinlock, flags); @@ -1154,23 +1168,23 @@ out: } /* - * tmc_etr_setup_perf_buf: Allocate ETR buffer for use by perf. + * alloc_etr_buf: Allocate ETR buffer for use by perf. * The size of the hardware buffer is dependent on the size configured * via sysfs and the perf ring buffer size. We prefer to allocate the * largest possible size, scaling down the size by half until it * reaches a minimum limit (1M), beyond which we give up. */ -static struct etr_perf_buffer * -tmc_etr_setup_perf_buf(struct tmc_drvdata *drvdata, int node, int nr_pages, - void **pages, bool snapshot) +static struct etr_buf * +alloc_etr_buf(struct tmc_drvdata *drvdata, struct perf_event *event, + int nr_pages, void **pages, bool snapshot) { + int node, cpu = event->cpu; struct etr_buf *etr_buf; - struct etr_perf_buffer *etr_perf; unsigned long size; - etr_perf = kzalloc_node(sizeof(*etr_perf), GFP_KERNEL, node); - if (!etr_perf) - return ERR_PTR(-ENOMEM); + if (cpu == -1) + cpu = smp_processor_id(); + node = cpu_to_node(cpu); /* * Try to match the perf ring buffer size if it is larger @@ -1195,32 +1209,160 @@ tmc_etr_setup_perf_buf(struct tmc_drvdata *drvdata, int node, int nr_pages, size /= 2; } while (size >= TMC_ETR_PERF_MIN_BUF_SIZE); + return ERR_PTR(-ENOMEM); + +done: + return etr_buf; +} + +static struct etr_buf * +get_perf_etr_buf_cpu_wide(struct tmc_drvdata *drvdata, + struct perf_event *event, int nr_pages, + void **pages, bool snapshot) +{ + int ret; + pid_t pid = task_pid_nr(event->owner); + struct etr_buf *etr_buf; + +retry: + /* + * An etr_perf_buffer is associated with an event and holds a reference + * to the AUX ring buffer that was created for that event. 
In CPU-wide
+	 * N:1 mode multiple events (one per CPU), each with its own AUX ring
+	 * buffer, share a sink. As such an etr_perf_buffer is created for each
+	 * event but a single etr_buf associated with the ETR is shared between
+	 * them. The last event in a trace session will copy the content of the
+	 * etr_buf to its AUX ring buffer. Ring buffers associated with other
+	 * events are simply not used and freed as events are destroyed. We
+	 * still need to allocate a ring buffer for each event since we don't
+	 * know which event will be last.
+	 */
+
+	/*
+	 * The first thing to do here is check if an etr_buf has already been
+	 * allocated for this session. If so it is shared with this event,
+	 * otherwise it is created.
+	 */
+	mutex_lock(&drvdata->idr_mutex);
+	etr_buf = idr_find(&drvdata->idr, pid);
+	if (etr_buf) {
+		refcount_inc(&etr_buf->refcount);
+		mutex_unlock(&drvdata->idr_mutex);
+		return etr_buf;
+	}
+
+	/* If we made it here no buffer has been allocated, do so now. */
+	mutex_unlock(&drvdata->idr_mutex);
+
+	etr_buf = alloc_etr_buf(drvdata, event, nr_pages, pages, snapshot);
+	if (IS_ERR(etr_buf))
+		return etr_buf;
+
+	refcount_set(&etr_buf->refcount, 1);
+
+	/* Now that we have a buffer, add it to the IDR. */
+	mutex_lock(&drvdata->idr_mutex);
+	ret = idr_alloc(&drvdata->idr, etr_buf, pid, pid + 1, GFP_KERNEL);
+	mutex_unlock(&drvdata->idr_mutex);
+
+	/* Another event with this session ID has allocated this buffer. */
+	if (ret == -ENOSPC) {
+		tmc_free_etr_buf(etr_buf);
+		goto retry;
+	}
+
+	/* The IDR can't allocate room for a new session, abandon ship. */
+	if (ret == -ENOMEM) {
+		tmc_free_etr_buf(etr_buf);
+		return ERR_PTR(ret);
+	}
+
+	return etr_buf;
+}
+
+static struct etr_buf *
+get_perf_etr_buf_per_thread(struct tmc_drvdata *drvdata,
+			    struct perf_event *event, int nr_pages,
+			    void **pages, bool snapshot)
+{
+	struct etr_buf *etr_buf;
+
+	/*
+	 * In per-thread mode the etr_buf isn't shared, so just go ahead
+	 * with memory allocation.
+	 */
+	etr_buf = alloc_etr_buf(drvdata, event, nr_pages, pages, snapshot);
+	if (IS_ERR(etr_buf))
+		goto out;
+
+	refcount_set(&etr_buf->refcount, 1);
+out:
+	return etr_buf;
+}
+
+static struct etr_buf *
+get_perf_etr_buf(struct tmc_drvdata *drvdata, struct perf_event *event,
+		 int nr_pages, void **pages, bool snapshot)
+{
+	if (event->cpu == -1)
+		return get_perf_etr_buf_per_thread(drvdata, event, nr_pages,
+						   pages, snapshot);
+
+	return get_perf_etr_buf_cpu_wide(drvdata, event, nr_pages,
+					 pages, snapshot);
+}
+
+static struct etr_perf_buffer *
+tmc_etr_setup_perf_buf(struct tmc_drvdata *drvdata, struct perf_event *event,
+		       int nr_pages, void **pages, bool snapshot)
+{
+	int node, cpu = event->cpu;
+	struct etr_buf *etr_buf;
+	struct etr_perf_buffer *etr_perf;
+
+	if (cpu == -1)
+		cpu = smp_processor_id();
+	node = cpu_to_node(cpu);
+
+	etr_perf = kzalloc_node(sizeof(*etr_perf), GFP_KERNEL, node);
+	if (!etr_perf)
+		return ERR_PTR(-ENOMEM);
+
+	etr_buf = get_perf_etr_buf(drvdata, event, nr_pages, pages, snapshot);
+	if (!IS_ERR(etr_buf))
+		goto done;
+
+	kfree(etr_perf);
+	return ERR_PTR(-ENOMEM);
+
+done:
+	/*
+	 * Keep a reference to the ETR this buffer has been allocated for
+	 * in order to have access to the IDR in tmc_free_etr_buffer().
+ */ + etr_perf->drvdata = drvdata; etr_perf->etr_buf = etr_buf; + return etr_perf; } static void *tmc_alloc_etr_buffer(struct coresight_device *csdev, - int cpu, void **pages, int nr_pages, - bool snapshot) + struct perf_event *event, void **pages, + int nr_pages, bool snapshot) { struct etr_perf_buffer *etr_perf; struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); - if (cpu == -1) - cpu = smp_processor_id(); - - etr_perf = tmc_etr_setup_perf_buf(drvdata, cpu_to_node(cpu), + etr_perf = tmc_etr_setup_perf_buf(drvdata, event, nr_pages, pages, snapshot); if (IS_ERR(etr_perf)) { dev_dbg(drvdata->dev, "Unable to allocate ETR buffer\n"); return NULL; } + etr_perf->pid = task_pid_nr(event->owner); etr_perf->snapshot = snapshot; etr_perf->nr_pages = nr_pages; etr_perf->pages = pages; @@ -1231,9 +1373,33 @@ static void *tmc_alloc_etr_buffer(struct coresight_device *csdev, static void tmc_free_etr_buffer(void *config) { struct etr_perf_buffer *etr_perf = config; + struct tmc_drvdata *drvdata = etr_perf->drvdata; + struct etr_buf *buf, *etr_buf = etr_perf->etr_buf; + + if (!etr_buf) + goto free_etr_perf_buffer; + + mutex_lock(&drvdata->idr_mutex); + /* If we are not the last one to use the buffer, don't touch it. */ + if (!refcount_dec_and_test(&etr_buf->refcount)) { + mutex_unlock(&drvdata->idr_mutex); + goto free_etr_perf_buffer; + } + + /* We are the last one, remove from the IDR and free the buffer. */ + buf = idr_remove(&drvdata->idr, etr_perf->pid); + mutex_unlock(&drvdata->idr_mutex); + + /* + * Something went very wrong if the buffer associated with this ID + * is not the same in the IDR. Leak to avoid use after free. + */ + if (buf && WARN_ON(buf != etr_buf)) + goto free_etr_perf_buffer; + + tmc_free_etr_buf(etr_perf->etr_buf); - if (etr_perf->etr_buf) - tmc_free_etr_buf(etr_perf->etr_buf); +free_etr_perf_buffer: kfree(etr_perf); } @@ -1308,6 +1474,13 @@ tmc_update_etr_buffer(struct coresight_device *csdev, struct etr_buf *etr_buf = etr_perf->etr_buf; spin_lock_irqsave(&drvdata->spinlock, flags); + + /* Don't do anything if another tracer is using this sink */ + if (atomic_read(csdev->refcnt) != 1) { + spin_unlock_irqrestore(&drvdata->spinlock, flags); + goto out; + } + if (WARN_ON(drvdata->perf_data != etr_perf)) { lost = true; spin_unlock_irqrestore(&drvdata->spinlock, flags); @@ -1347,17 +1520,15 @@ out: static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, void *data) { int rc = 0; + pid_t pid; unsigned long flags; struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); struct perf_output_handle *handle = data; struct etr_perf_buffer *etr_perf = etm_perf_sink_config(handle); spin_lock_irqsave(&drvdata->spinlock, flags); - /* - * There can be only one writer per sink in perf mode. If the sink - * is already open in SYSFS mode, we can't use it. 
- */ - if (drvdata->mode != CS_MODE_DISABLED || WARN_ON(drvdata->perf_data)) { + /* Don't use this sink if it is already claimed by sysFS */ + if (drvdata->mode == CS_MODE_SYSFS) { rc = -EBUSY; goto unlock_out; } @@ -1367,11 +1538,34 @@ static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, void *data) goto unlock_out; } + /* Get a handle on the pid of the process to monitor */ + pid = etr_perf->pid; + + /* Do not proceed if this device is associated with another session */ + if (drvdata->pid != -1 && drvdata->pid != pid) { + rc = -EBUSY; + goto unlock_out; + } + etr_perf->head = PERF_IDX2OFF(handle->head, etr_perf); drvdata->perf_data = etr_perf; + + /* + * No HW configuration is needed if the sink is already in + * use for this session. + */ + if (drvdata->pid == pid) { + atomic_inc(csdev->refcnt); + goto unlock_out; + } + rc = tmc_etr_enable_hw(drvdata, etr_perf->etr_buf); - if (!rc) + if (!rc) { + /* Associate with monitored process. */ + drvdata->pid = pid; drvdata->mode = CS_MODE_PERF; + atomic_inc(csdev->refcnt); + } unlock_out: spin_unlock_irqrestore(&drvdata->spinlock, flags); @@ -1392,26 +1586,34 @@ static int tmc_enable_etr_sink(struct coresight_device *csdev, return -EINVAL; } -static void tmc_disable_etr_sink(struct coresight_device *csdev) +static int tmc_disable_etr_sink(struct coresight_device *csdev) { unsigned long flags; struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); spin_lock_irqsave(&drvdata->spinlock, flags); + if (drvdata->reading) { spin_unlock_irqrestore(&drvdata->spinlock, flags); - return; + return -EBUSY; } - /* Disable the TMC only if it needs to */ - if (drvdata->mode != CS_MODE_DISABLED) { - tmc_etr_disable_hw(drvdata); - drvdata->mode = CS_MODE_DISABLED; + if (atomic_dec_return(csdev->refcnt)) { + spin_unlock_irqrestore(&drvdata->spinlock, flags); + return -EBUSY; } + /* Complain if we (somehow) got out of sync */ + WARN_ON_ONCE(drvdata->mode == CS_MODE_DISABLED); + tmc_etr_disable_hw(drvdata); + /* Dissociate from monitored process. 
 */
+	drvdata->pid = -1;
+	drvdata->mode = CS_MODE_DISABLED;
+
 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
 
 	dev_dbg(drvdata->dev, "TMC-ETR disabled\n");
+	return 0;
 }
 
 static const struct coresight_ops_sink tmc_etr_sink_ops = {
diff --git a/drivers/hwtracing/coresight/coresight-tmc.c b/drivers/hwtracing/coresight/coresight-tmc.c
index 2a02da3d630f..3f718729d741 100644
--- a/drivers/hwtracing/coresight/coresight-tmc.c
+++ b/drivers/hwtracing/coresight/coresight-tmc.c
@@ -8,10 +8,12 @@
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/device.h>
+#include <linux/idr.h>
 #include <linux/io.h>
 #include <linux/err.h>
 #include <linux/fs.h>
 #include <linux/miscdevice.h>
+#include <linux/mutex.h>
 #include <linux/property.h>
 #include <linux/uaccess.h>
 #include <linux/slab.h>
@@ -340,6 +342,8 @@ static inline bool tmc_etr_can_use_sg(struct tmc_drvdata *drvdata)
 static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata,
 			      u32 devid, void *dev_caps)
 {
+	int rc;
+
 	u32 dma_mask = 0;
 
 	/* Set the unadvertised capabilities */
@@ -369,7 +373,10 @@ static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata,
 		dma_mask = 40;
 	}
 
-	return dma_set_mask_and_coherent(drvdata->dev, DMA_BIT_MASK(dma_mask));
+	rc = dma_set_mask_and_coherent(drvdata->dev, DMA_BIT_MASK(dma_mask));
+	if (rc)
+		dev_err(drvdata->dev, "Failed to setup DMA mask: %d\n", rc);
+	return rc;
 }
 
 static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
@@ -415,6 +422,8 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
 	devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
 	drvdata->config_type = BMVAL(devid, 6, 7);
 	drvdata->memwidth = tmc_get_memwidth(devid);
+	/* This device is not associated with a session */
+	drvdata->pid = -1;
 
 	if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
 		if (np)
@@ -427,8 +436,6 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
 			drvdata->size = readl_relaxed(drvdata->base + TMC_RSZ) * 4;
 	}
 
-	pm_runtime_put(&adev->dev);
-
 	desc.pdata = pdata;
 	desc.dev = dev;
 	desc.groups = coresight_tmc_groups;
@@ -447,6 +454,8 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
 				     coresight_get_uci_data(id));
 		if (ret)
 			goto out;
+		idr_init(&drvdata->idr);
+		mutex_init(&drvdata->idr_mutex);
 		break;
 	case TMC_CONFIG_TYPE_ETF:
 		desc.type = CORESIGHT_DEV_TYPE_LINKSINK;
@@ -471,6 +480,8 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
 	ret = misc_register(&drvdata->miscdev);
 	if (ret)
 		coresight_unregister(drvdata->csdev);
+	else
+		pm_runtime_put(&adev->dev);
 out:
 	return ret;
 }
diff --git a/drivers/hwtracing/coresight/coresight-tmc.h b/drivers/hwtracing/coresight/coresight-tmc.h
index 487c53701e9c..503f1b3a3741 100644
--- a/drivers/hwtracing/coresight/coresight-tmc.h
+++ b/drivers/hwtracing/coresight/coresight-tmc.h
@@ -8,7 +8,10 @@
 #define _CORESIGHT_TMC_H
 
 #include <linux/dma-mapping.h>
+#include <linux/idr.h>
 #include <linux/miscdevice.h>
+#include <linux/mutex.h>
+#include <linux/refcount.h>
 
 #define TMC_RSZ		0x004
 #define TMC_STS		0x00c
@@ -133,6 +136,7 @@ struct etr_buf_operations;
 
 /**
  * struct etr_buf - Details of the buffer used by ETR
+ * @refcount	: Number of sources currently using this etr_buf.
  * @mode	: Mode of the ETR buffer, contiguous, Scatter Gather etc.
  * @full	: Trace data overflow
  * @size	: Size of the buffer.
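The new @refcount and @pid fields implement a lookup-or-create pattern: the first event of a perf session allocates the etr_buf and publishes it under the session owner's PID, later events of the same session only take a reference, and the last put frees it. A stand-alone sketch of that pattern (plain C; a fixed-size table and a plain int stand in for the kernel's IDR, mutex and refcount_t):

#include <stdio.h>

struct buf {
	int pid;	/* session key */
	int refcount;	/* 0 means the slot is free */
};

#define NR_SLOTS 8
static struct buf slots[NR_SLOTS];	/* stands in for drvdata->idr */

/* First caller for @pid creates the buffer, later callers share it. */
static struct buf *buf_get(int pid)
{
	struct buf *free_slot = NULL;
	int i;

	for (i = 0; i < NR_SLOTS; i++) {
		if (slots[i].refcount && slots[i].pid == pid) {
			slots[i].refcount++;	/* refcount_inc() */
			return &slots[i];
		}
		if (!slots[i].refcount && !free_slot)
			free_slot = &slots[i];
	}

	if (!free_slot)
		return NULL;		/* table full: -ENOMEM in the driver */

	free_slot->pid = pid;
	free_slot->refcount = 1;	/* refcount_set(..., 1) */
	return free_slot;
}

/* The last put for a session frees the slot. */
static void buf_put(struct buf *buf)
{
	if (--buf->refcount == 0)
		buf->pid = -1;
}

int main(void)
{
	struct buf *a = buf_get(42), *b = buf_get(42);

	/* prints "shared: yes, refcount: 2" */
	printf("shared: %s, refcount: %d\n",
	       a == b ? "yes" : "no", a->refcount);
	buf_put(b);
	buf_put(a);
	return 0;
}

The real driver additionally has to cope with two CPUs allocating concurrently, which is why get_perf_etr_buf_cpu_wide() above frees its own allocation and retries when idr_alloc() returns -ENOSPC.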
@@ -143,6 +147,7 @@ struct etr_buf_operations; * @private : Backend specific information for the buf */ struct etr_buf { + refcount_t refcount; enum etr_mode mode; bool full; ssize_t size; @@ -160,6 +165,8 @@ struct etr_buf { * @csdev: component vitals needed by the framework. * @miscdev: specifics to handle "/dev/xyz.tmc" entry. * @spinlock: only one at a time pls. + * @pid: Process ID of the process being monitored by the session + * that is using this component. * @buf: Snapshot of the trace data for ETF/ETB. * @etr_buf: details of buffer used in TMC-ETR * @len: size of the available trace for ETF/ETB. @@ -170,6 +177,8 @@ struct etr_buf { * @trigger_cntr: amount of words to store after a trigger. * @etr_caps: Bitmask of capabilities of the TMC ETR, inferred from the * device configuration register (DEVID) + * @idr: Holds etr_bufs allocated for this ETR. + * @idr_mutex: Access serialisation for idr. * @perf_data: PERF buffer for ETR. * @sysfs_data: SYSFS buffer for ETR. */ @@ -179,6 +188,7 @@ struct tmc_drvdata { struct coresight_device *csdev; struct miscdevice miscdev; spinlock_t spinlock; + pid_t pid; bool reading; union { char *buf; /* TMC ETB */ @@ -191,6 +201,8 @@ struct tmc_drvdata { enum tmc_mem_intf_width memwidth; u32 trigger_cntr; u32 etr_caps; + struct idr idr; + struct mutex idr_mutex; struct etr_buf *sysfs_buf; void *perf_data; }; diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c index b2f72a1fa402..63d9af31f57f 100644 --- a/drivers/hwtracing/coresight/coresight-tpiu.c +++ b/drivers/hwtracing/coresight/coresight-tpiu.c @@ -5,6 +5,7 @@ * Description: CoreSight Trace Port Interface Unit driver */ +#include <linux/atomic.h> #include <linux/kernel.h> #include <linux/init.h> #include <linux/device.h> @@ -73,7 +74,7 @@ static int tpiu_enable(struct coresight_device *csdev, u32 mode, void *__unused) struct tpiu_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); tpiu_enable_hw(drvdata); - + atomic_inc(csdev->refcnt); dev_dbg(drvdata->dev, "TPIU enabled\n"); return 0; } @@ -94,13 +95,17 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata) CS_LOCK(drvdata->base); } -static void tpiu_disable(struct coresight_device *csdev) +static int tpiu_disable(struct coresight_device *csdev) { struct tpiu_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); + if (atomic_dec_return(csdev->refcnt)) + return -EBUSY; + tpiu_disable_hw(drvdata); dev_dbg(drvdata->dev, "TPIU disabled\n"); + return 0; } static const struct coresight_ops_sink tpiu_sink_ops = { @@ -153,8 +158,6 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id) /* Disable tpiu to support older devices */ tpiu_disable_hw(drvdata); - pm_runtime_put(&adev->dev); - desc.type = CORESIGHT_DEV_TYPE_SINK; desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PORT; desc.ops = &tpiu_cs_ops; @@ -162,7 +165,12 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id) desc.dev = dev; drvdata->csdev = coresight_register(&desc); - return PTR_ERR_OR_ZERO(drvdata->csdev); + if (!IS_ERR(drvdata->csdev)) { + pm_runtime_put(&adev->dev); + return 0; + } + + return PTR_ERR(drvdata->csdev); } #ifdef CONFIG_PM diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c index 29cef898afba..4b130281236a 100644 --- a/drivers/hwtracing/coresight/coresight.c +++ b/drivers/hwtracing/coresight/coresight.c @@ -225,26 +225,28 @@ static int coresight_enable_sink(struct coresight_device *csdev, * We need to make sure the "new" 
session is compatible with the * existing "mode" of operation. */ - if (sink_ops(csdev)->enable) { - ret = sink_ops(csdev)->enable(csdev, mode, data); - if (ret) - return ret; - csdev->enable = true; - } + if (!sink_ops(csdev)->enable) + return -EINVAL; - atomic_inc(csdev->refcnt); + ret = sink_ops(csdev)->enable(csdev, mode, data); + if (ret) + return ret; + csdev->enable = true; return 0; } static void coresight_disable_sink(struct coresight_device *csdev) { - if (atomic_dec_return(csdev->refcnt) == 0) { - if (sink_ops(csdev)->disable) { - sink_ops(csdev)->disable(csdev); - csdev->enable = false; - } - } + int ret; + + if (!sink_ops(csdev)->disable) + return; + + ret = sink_ops(csdev)->disable(csdev); + if (ret) + return; + csdev->enable = false; } static int coresight_enable_link(struct coresight_device *csdev, @@ -973,7 +975,6 @@ static void coresight_device_release(struct device *dev) { struct coresight_device *csdev = to_coresight_device(dev); - kfree(csdev->conns); kfree(csdev->refcnt); kfree(csdev); } diff --git a/drivers/hwtracing/intel_th/acpi.c b/drivers/hwtracing/intel_th/acpi.c index 87bc3744755f..87f9024e4bbb 100644 --- a/drivers/hwtracing/intel_th/acpi.c +++ b/drivers/hwtracing/intel_th/acpi.c @@ -37,15 +37,21 @@ MODULE_DEVICE_TABLE(acpi, intel_th_acpi_ids); static int intel_th_acpi_probe(struct platform_device *pdev) { struct acpi_device *adev = ACPI_COMPANION(&pdev->dev); + struct resource resource[TH_MMIO_END]; const struct acpi_device_id *id; struct intel_th *th; + int i, r; id = acpi_match_device(intel_th_acpi_ids, &pdev->dev); if (!id) return -ENODEV; - th = intel_th_alloc(&pdev->dev, (void *)id->driver_data, - pdev->resource, pdev->num_resources, -1); + for (i = 0, r = 0; i < pdev->num_resources && r < TH_MMIO_END; i++) + if (pdev->resource[i].flags & + (IORESOURCE_IRQ | IORESOURCE_MEM)) + resource[r++] = pdev->resource[i]; + + th = intel_th_alloc(&pdev->dev, (void *)id->driver_data, resource, r); if (IS_ERR(th)) return PTR_ERR(th); diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c index 7c1acc2f801c..033dce563c99 100644 --- a/drivers/hwtracing/intel_th/core.c +++ b/drivers/hwtracing/intel_th/core.c @@ -430,9 +430,9 @@ static const struct intel_th_subdevice { .nres = 1, .res = { { - /* Handle TSCU from GTH driver */ + /* Handle TSCU and CTS from GTH driver */ .start = REG_GTH_OFFSET, - .end = REG_TSCU_OFFSET + REG_TSCU_LENGTH - 1, + .end = REG_CTS_OFFSET + REG_CTS_LENGTH - 1, .flags = IORESOURCE_MEM, }, }, @@ -491,7 +491,7 @@ static const struct intel_th_subdevice { .flags = IORESOURCE_MEM, }, { - .start = 1, /* use resource[1] */ + .start = TH_MMIO_SW, .end = 0, .flags = IORESOURCE_MEM, }, @@ -501,6 +501,24 @@ static const struct intel_th_subdevice { .type = INTEL_TH_SOURCE, }, { + .nres = 2, + .res = { + { + .start = REG_STH_OFFSET, + .end = REG_STH_OFFSET + REG_STH_LENGTH - 1, + .flags = IORESOURCE_MEM, + }, + { + .start = TH_MMIO_RTIT, + .end = 0, + .flags = IORESOURCE_MEM, + }, + }, + .id = -1, + .name = "rtit", + .type = INTEL_TH_SOURCE, + }, + { .nres = 1, .res = { { @@ -584,7 +602,6 @@ intel_th_subdevice_alloc(struct intel_th *th, struct intel_th_device *thdev; struct resource res[3]; unsigned int req = 0; - bool is64bit = false; int r, err; thdev = intel_th_device_alloc(th, subdev->type, subdev->name, @@ -594,18 +611,12 @@ intel_th_subdevice_alloc(struct intel_th *th, thdev->drvdata = th->drvdata; - for (r = 0; r < th->num_resources; r++) - if (th->resource[r].flags & IORESOURCE_MEM_64) { - is64bit = true; - break; - } - 
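With the resource rework above, the glue/core contract changes: a glue layer no longer passes an IRQ number, it collects MMIO and IRQ entries into a single resource array and intel_th_alloc() sorts them out, requesting IRQs itself and indexing MMIO windows by th_mmio_idx. A sketch of a hypothetical glue driver's probe under the new contract, modeled on the ACPI glue above (kernel context assumed; the example_* names are illustrative, and a real glue would pass its own intel_th_drvdata and keep the returned handle for its remove path):

static int example_glue_probe(struct platform_device *pdev)
{
	struct resource res[TH_MMIO_END];
	struct intel_th *th;
	int i, r;

	/* Keep only the MMIO and IRQ entries, in the order they appear. */
	for (i = 0, r = 0; i < pdev->num_resources && r < TH_MMIO_END; i++)
		if (pdev->resource[i].flags &
		    (IORESOURCE_IRQ | IORESOURCE_MEM))
			res[r++] = pdev->resource[i];

	/* No separate irq argument any more; NULL here means no quirks. */
	th = intel_th_alloc(&pdev->dev, NULL, res, r);

	return PTR_ERR_OR_ZERO(th);
}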
memcpy(res, subdev->res, sizeof(struct resource) * subdev->nres); for (r = 0; r < subdev->nres; r++) { struct resource *devres = th->resource; - int bar = 0; /* cut subdevices' MMIO from resource[0] */ + int bar = TH_MMIO_CONFIG; /* * Take .end == 0 to mean 'take the whole bar', @@ -614,8 +625,9 @@ intel_th_subdevice_alloc(struct intel_th *th, */ if (!res[r].end && res[r].flags == IORESOURCE_MEM) { bar = res[r].start; - if (is64bit) - bar *= 2; + err = -ENODEV; + if (bar >= th->num_resources) + goto fail_put_device; res[r].start = 0; res[r].end = resource_size(&devres[bar]) - 1; } @@ -627,7 +639,12 @@ intel_th_subdevice_alloc(struct intel_th *th, dev_dbg(th->dev, "%s:%d @ %pR\n", subdev->name, r, &res[r]); } else if (res[r].flags & IORESOURCE_IRQ) { - res[r].start = th->irq; + /* + * Only pass on the IRQ if we have useful interrupts: + * the ones that can be configured via MINTCTL. + */ + if (INTEL_TH_CAP(th, has_mintctl) && th->irq != -1) + res[r].start = th->irq; } } @@ -758,8 +775,13 @@ static int intel_th_populate(struct intel_th *th) thdev = intel_th_subdevice_alloc(th, subdev); /* note: caller should free subdevices from th::thdev[] */ - if (IS_ERR(thdev)) + if (IS_ERR(thdev)) { + /* ENODEV for individual subdevices is allowed */ + if (PTR_ERR(thdev) == -ENODEV) + continue; + return PTR_ERR(thdev); + } th->thdev[th->num_thdevs++] = thdev; } @@ -809,26 +831,40 @@ static const struct file_operations intel_th_output_fops = { .llseek = noop_llseek, }; +static irqreturn_t intel_th_irq(int irq, void *data) +{ + struct intel_th *th = data; + irqreturn_t ret = IRQ_NONE; + struct intel_th_driver *d; + int i; + + for (i = 0; i < th->num_thdevs; i++) { + if (th->thdev[i]->type != INTEL_TH_OUTPUT) + continue; + + d = to_intel_th_driver(th->thdev[i]->dev.driver); + if (d && d->irq) + ret |= d->irq(th->thdev[i]); + } + + if (ret == IRQ_NONE) + pr_warn_ratelimited("nobody cared for irq\n"); + + return ret; +} + /** * intel_th_alloc() - allocate a new Intel TH device and its subdevices * @dev: parent device - * @devres: parent's resources - * @ndevres: number of resources + * @devres: resources indexed by th_mmio_idx * @irq: irq number */ struct intel_th * intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata, - struct resource *devres, unsigned int ndevres, int irq) + struct resource *devres, unsigned int ndevres) { + int err, r, nr_mmios = 0; struct intel_th *th; - int err, r; - - if (irq == -1) - for (r = 0; r < ndevres; r++) - if (devres[r].flags & IORESOURCE_IRQ) { - irq = devres[r].start; - break; - } th = kzalloc(sizeof(*th), GFP_KERNEL); if (!th) @@ -846,12 +882,32 @@ intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata, err = th->major; goto err_ida; } + th->irq = -1; th->dev = dev; th->drvdata = drvdata; - th->resource = devres; - th->num_resources = ndevres; - th->irq = irq; + for (r = 0; r < ndevres; r++) + switch (devres[r].flags & IORESOURCE_TYPE_BITS) { + case IORESOURCE_MEM: + th->resource[nr_mmios++] = devres[r]; + break; + case IORESOURCE_IRQ: + err = devm_request_irq(dev, devres[r].start, + intel_th_irq, IRQF_SHARED, + dev_name(dev), th); + if (err) + goto err_chrdev; + + if (th->irq == -1) + th->irq = devres[r].start; + break; + default: + dev_warn(dev, "Unknown resource type %lx\n", + devres[r].flags); + break; + } + + th->num_resources = nr_mmios; dev_set_drvdata(dev, th); @@ -868,6 +924,10 @@ intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata, return th; +err_chrdev: + __unregister_chrdev(th->major, 0, TH_POSSIBLE_OUTPUTS, + 
"intel_th/output"); + err_ida: ida_simple_remove(&intel_th_ida, th->id); @@ -928,6 +988,27 @@ int intel_th_trace_enable(struct intel_th_device *thdev) EXPORT_SYMBOL_GPL(intel_th_trace_enable); /** + * intel_th_trace_switch() - execute a switch sequence + * @thdev: output device that requests tracing switch + */ +int intel_th_trace_switch(struct intel_th_device *thdev) +{ + struct intel_th_device *hub = to_intel_th_device(thdev->dev.parent); + struct intel_th_driver *hubdrv = to_intel_th_driver(hub->dev.driver); + + if (WARN_ON_ONCE(hub->type != INTEL_TH_SWITCH)) + return -EINVAL; + + if (WARN_ON_ONCE(thdev->type != INTEL_TH_OUTPUT)) + return -EINVAL; + + hubdrv->trig_switch(hub, &thdev->output); + + return 0; +} +EXPORT_SYMBOL_GPL(intel_th_trace_switch); + +/** * intel_th_trace_disable() - disable tracing for an output device * @thdev: output device that requests tracing be disabled */ diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c index edc52d75e6bd..fa9d34af87ac 100644 --- a/drivers/hwtracing/intel_th/gth.c +++ b/drivers/hwtracing/intel_th/gth.c @@ -308,6 +308,11 @@ static int intel_th_gth_reset(struct gth_device *gth) iowrite32(0, gth->base + REG_GTH_SCR); iowrite32(0xfc, gth->base + REG_GTH_SCR2); + /* setup CTS for single trigger */ + iowrite32(CTS_EVENT_ENABLE_IF_ANYTHING, gth->base + REG_CTS_C0S0_EN); + iowrite32(CTS_ACTION_CONTROL_SET_STATE(CTS_STATE_IDLE) | + CTS_ACTION_CONTROL_TRIGGER, gth->base + REG_CTS_C0S0_ACT); + return 0; } @@ -457,6 +462,68 @@ static int intel_th_output_attributes(struct gth_device *gth) } /** + * intel_th_gth_stop() - stop tracing to an output device + * @gth: GTH device + * @output: output device's descriptor + * @capture_done: set when no more traces will be captured + * + * This will stop tracing using force storeEn off signal and wait for the + * pipelines to be empty for the corresponding output port. + */ +static void intel_th_gth_stop(struct gth_device *gth, + struct intel_th_output *output, + bool capture_done) +{ + struct intel_th_device *outdev = + container_of(output, struct intel_th_device, output); + struct intel_th_driver *outdrv = + to_intel_th_driver(outdev->dev.driver); + unsigned long count; + u32 reg; + u32 scr2 = 0xfc | (capture_done ? 1 : 0); + + iowrite32(0, gth->base + REG_GTH_SCR); + iowrite32(scr2, gth->base + REG_GTH_SCR2); + + /* wait on pipeline empty for the given port */ + for (reg = 0, count = GTH_PLE_WAITLOOP_DEPTH; + count && !(reg & BIT(output->port)); count--) { + reg = ioread32(gth->base + REG_GTH_STAT); + cpu_relax(); + } + + if (!count) + dev_dbg(gth->dev, "timeout waiting for GTH[%d] PLE\n", + output->port); + + /* wait on output piepline empty */ + if (outdrv->wait_empty) + outdrv->wait_empty(outdev); + + /* clear force capture done for next captures */ + iowrite32(0xfc, gth->base + REG_GTH_SCR2); +} + +/** + * intel_th_gth_start() - start tracing to an output device + * @gth: GTH device + * @output: output device's descriptor + * + * This will start tracing using force storeEn signal. 
+ */ +static void intel_th_gth_start(struct gth_device *gth, + struct intel_th_output *output) +{ + u32 scr = 0xfc0000; + + if (output->multiblock) + scr |= 0xff; + + iowrite32(scr, gth->base + REG_GTH_SCR); + iowrite32(0, gth->base + REG_GTH_SCR2); +} + +/** * intel_th_gth_disable() - disable tracing to an output device * @thdev: GTH device * @output: output device's descriptor @@ -469,7 +536,6 @@ static void intel_th_gth_disable(struct intel_th_device *thdev, struct intel_th_output *output) { struct gth_device *gth = dev_get_drvdata(&thdev->dev); - unsigned long count; int master; u32 reg; @@ -482,22 +548,7 @@ static void intel_th_gth_disable(struct intel_th_device *thdev, } spin_unlock(>h->gth_lock); - iowrite32(0, gth->base + REG_GTH_SCR); - iowrite32(0xfd, gth->base + REG_GTH_SCR2); - - /* wait on pipeline empty for the given port */ - for (reg = 0, count = GTH_PLE_WAITLOOP_DEPTH; - count && !(reg & BIT(output->port)); count--) { - reg = ioread32(gth->base + REG_GTH_STAT); - cpu_relax(); - } - - /* clear force capture done for next captures */ - iowrite32(0xfc, gth->base + REG_GTH_SCR2); - - if (!count) - dev_dbg(&thdev->dev, "timeout waiting for GTH[%d] PLE\n", - output->port); + intel_th_gth_stop(gth, output, true); reg = ioread32(gth->base + REG_GTH_SCRPD0); reg &= ~output->scratchpad; @@ -526,8 +577,8 @@ static void intel_th_gth_enable(struct intel_th_device *thdev, { struct gth_device *gth = dev_get_drvdata(&thdev->dev); struct intel_th *th = to_intel_th(thdev); - u32 scr = 0xfc0000, scrpd; int master; + u32 scrpd; spin_lock(>h->gth_lock); for_each_set_bit(master, gth->output[output->port].master, @@ -535,9 +586,6 @@ static void intel_th_gth_enable(struct intel_th_device *thdev, gth_master_set(gth, master, output->port); } - if (output->multiblock) - scr |= 0xff; - output->active = true; spin_unlock(>h->gth_lock); @@ -548,8 +596,38 @@ static void intel_th_gth_enable(struct intel_th_device *thdev, scrpd |= output->scratchpad; iowrite32(scrpd, gth->base + REG_GTH_SCRPD0); - iowrite32(scr, gth->base + REG_GTH_SCR); - iowrite32(0, gth->base + REG_GTH_SCR2); + intel_th_gth_start(gth, output); +} + +/** + * intel_th_gth_switch() - execute a switch sequence + * @thdev: GTH device + * @output: output device's descriptor + * + * This will execute a switch sequence that will trigger a switch window + * when tracing to MSC in multi-block mode. 
+ */ +static void intel_th_gth_switch(struct intel_th_device *thdev, + struct intel_th_output *output) +{ + struct gth_device *gth = dev_get_drvdata(&thdev->dev); + unsigned long count; + u32 reg; + + /* trigger */ + iowrite32(0, gth->base + REG_CTS_CTL); + iowrite32(CTS_CTL_SEQUENCER_ENABLE, gth->base + REG_CTS_CTL); + /* wait on trigger status */ + for (reg = 0, count = CTS_TRIG_WAITLOOP_DEPTH; + count && !(reg & BIT(4)); count--) { + reg = ioread32(gth->base + REG_CTS_STAT); + cpu_relax(); + } + if (!count) + dev_dbg(&thdev->dev, "timeout waiting for CTS Trigger\n"); + + intel_th_gth_stop(gth, output, false); + intel_th_gth_start(gth, output); } /** @@ -735,6 +813,7 @@ static struct intel_th_driver intel_th_gth_driver = { .unassign = intel_th_gth_unassign, .set_output = intel_th_gth_set_output, .enable = intel_th_gth_enable, + .trig_switch = intel_th_gth_switch, .disable = intel_th_gth_disable, .driver = { .name = "gth", diff --git a/drivers/hwtracing/intel_th/gth.h b/drivers/hwtracing/intel_th/gth.h index 6f2b0b930875..bfcc0fd01177 100644 --- a/drivers/hwtracing/intel_th/gth.h +++ b/drivers/hwtracing/intel_th/gth.h @@ -49,6 +49,12 @@ enum { REG_GTH_SCRPD3 = 0xec, /* ScratchPad[3] */ REG_TSCU_TSUCTRL = 0x2000, /* TSCU control register */ REG_TSCU_TSCUSTAT = 0x2004, /* TSCU status register */ + + /* Common Capture Sequencer (CTS) registers */ + REG_CTS_C0S0_EN = 0x30c0, /* clause_event_enable_c0s0 */ + REG_CTS_C0S0_ACT = 0x3180, /* clause_action_control_c0s0 */ + REG_CTS_STAT = 0x32a0, /* cts_status */ + REG_CTS_CTL = 0x32a4, /* cts_control */ }; /* waiting for Pipeline Empty bit(s) to assert for GTH */ @@ -57,4 +63,17 @@ enum { #define TSUCTRL_CTCRESYNC BIT(0) #define TSCUSTAT_CTCSYNCING BIT(1) +/* waiting for Trigger status to assert for CTS */ +#define CTS_TRIG_WAITLOOP_DEPTH 10000 + +#define CTS_EVENT_ENABLE_IF_ANYTHING BIT(31) +#define CTS_ACTION_CONTROL_STATE_OFF 27 +#define CTS_ACTION_CONTROL_SET_STATE(x) \ + (((x) & 0x1f) << CTS_ACTION_CONTROL_STATE_OFF) +#define CTS_ACTION_CONTROL_TRIGGER BIT(4) + +#define CTS_STATE_IDLE 0x10u + +#define CTS_CTL_SEQUENCER_ENABLE BIT(0) + #endif /* __INTEL_TH_GTH_H__ */ diff --git a/drivers/hwtracing/intel_th/intel_th.h b/drivers/hwtracing/intel_th/intel_th.h index 780206dc9012..0df480072b6c 100644 --- a/drivers/hwtracing/intel_th/intel_th.h +++ b/drivers/hwtracing/intel_th/intel_th.h @@ -8,6 +8,8 @@ #ifndef __INTEL_TH_H__ #define __INTEL_TH_H__ +#include <linux/irqreturn.h> + /* intel_th_device device types */ enum { /* Devices that generate trace data */ @@ -18,6 +20,8 @@ enum { INTEL_TH_SWITCH, }; +struct intel_th_device; + /** * struct intel_th_output - descriptor INTEL_TH_OUTPUT type devices * @port: output port number, assigned by the switch @@ -25,6 +29,7 @@ enum { * @scratchpad: scratchpad bits to flag when this output is enabled * @multiblock: true for multiblock output configuration * @active: true when this output is enabled + * @wait_empty: wait for device pipeline to be empty * * Output port descriptor, used by switch driver to tell which output * port this output device corresponds to. 
Filled in at output device's @@ -42,10 +47,12 @@ struct intel_th_output { /** * struct intel_th_drvdata - describes hardware capabilities and quirks * @tscu_enable: device needs SW to enable time stamping unit + * @has_mintctl: device has interrupt control (MINTCTL) register * @host_mode_only: device can only operate in 'host debugger' mode */ struct intel_th_drvdata { unsigned int tscu_enable : 1, + has_mintctl : 1, host_mode_only : 1; }; @@ -157,10 +164,13 @@ struct intel_th_driver { struct intel_th_device *othdev); void (*enable)(struct intel_th_device *thdev, struct intel_th_output *output); + void (*trig_switch)(struct intel_th_device *thdev, + struct intel_th_output *output); void (*disable)(struct intel_th_device *thdev, struct intel_th_output *output); /* output ops */ - void (*irq)(struct intel_th_device *thdev); + irqreturn_t (*irq)(struct intel_th_device *thdev); + void (*wait_empty)(struct intel_th_device *thdev); int (*activate)(struct intel_th_device *thdev); void (*deactivate)(struct intel_th_device *thdev); /* file_operations for those who want a device node */ @@ -213,21 +223,23 @@ static inline struct intel_th *to_intel_th(struct intel_th_device *thdev) struct intel_th * intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata, - struct resource *devres, unsigned int ndevres, int irq); + struct resource *devres, unsigned int ndevres); void intel_th_free(struct intel_th *th); int intel_th_driver_register(struct intel_th_driver *thdrv); void intel_th_driver_unregister(struct intel_th_driver *thdrv); int intel_th_trace_enable(struct intel_th_device *thdev); +int intel_th_trace_switch(struct intel_th_device *thdev); int intel_th_trace_disable(struct intel_th_device *thdev); int intel_th_set_output(struct intel_th_device *thdev, unsigned int master); int intel_th_output_enable(struct intel_th *th, unsigned int otype); -enum { +enum th_mmio_idx { TH_MMIO_CONFIG = 0, - TH_MMIO_SW = 2, + TH_MMIO_SW = 1, + TH_MMIO_RTIT = 2, TH_MMIO_END, }; @@ -237,6 +249,9 @@ enum { #define TH_CONFIGURABLE_MASTERS 256 #define TH_MSC_MAX 2 +/* Maximum IRQ vectors */ +#define TH_NVEC_MAX 8 + /** * struct intel_th - Intel TH controller * @dev: driver core's device @@ -244,7 +259,7 @@ enum { * @hub: "switch" subdevice (GTH) * @resource: resources of the entire controller * @num_thdevs: number of devices in the @thdev array - * @num_resources: number or resources in the @resource array + * @num_resources: number of resources in the @resource array * @irq: irq number * @id: this Intel TH controller's device ID in the system * @major: device node major for output devices @@ -256,7 +271,7 @@ struct intel_th { struct intel_th_device *hub; struct intel_th_drvdata *drvdata; - struct resource *resource; + struct resource resource[TH_MMIO_END]; int (*activate)(struct intel_th *); void (*deactivate)(struct intel_th *); unsigned int num_thdevs; @@ -296,6 +311,9 @@ enum { REG_TSCU_OFFSET = 0x2000, REG_TSCU_LENGTH = 0x1000, + REG_CTS_OFFSET = 0x3000, + REG_CTS_LENGTH = 0x1000, + /* Software Trace Hub (STH) [0x4000..0x4fff] */ REG_STH_OFFSET = 0x4000, REG_STH_LENGTH = 0x2000, diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c index ba7aaf421f36..81bb54fa3ce8 100644 --- a/drivers/hwtracing/intel_th/msu.c +++ b/drivers/hwtracing/intel_th/msu.c @@ -29,28 +29,18 @@ #define msc_dev(x) (&(x)->thdev->dev) /** - * struct msc_block - multiblock mode block descriptor - * @bdesc: pointer to hardware descriptor (beginning of the block) - * @addr: physical address of the block - */ 
-struct msc_block { - struct msc_block_desc *bdesc; - dma_addr_t addr; -}; - -/** * struct msc_window - multiblock mode window descriptor * @entry: window list linkage (msc::win_list) * @pgoff: page offset into the buffer that this window starts at * @nr_blocks: number of blocks (pages) in this window - * @block: array of block descriptors + * @sgt: array of block descriptors */ struct msc_window { struct list_head entry; unsigned long pgoff; unsigned int nr_blocks; struct msc *msc; - struct msc_block block[0]; + struct sg_table sgt; }; /** @@ -84,6 +74,8 @@ struct msc_iter { * @reg_base: register window base address * @thdev: intel_th_device pointer * @win_list: list of windows in multiblock mode + * @single_sgt: single mode buffer + * @cur_win: current window * @nr_pages: total number of pages allocated for this buffer * @single_sz: amount of data in single mode * @single_wrap: single mode wrap occurred @@ -101,9 +93,12 @@ struct msc_iter { */ struct msc { void __iomem *reg_base; + void __iomem *msu_base; struct intel_th_device *thdev; struct list_head win_list; + struct sg_table single_sgt; + struct msc_window *cur_win; unsigned long nr_pages; unsigned long single_sz; unsigned int single_wrap : 1; @@ -120,7 +115,8 @@ struct msc { /* config */ unsigned int enabled : 1, - wrap : 1; + wrap : 1, + do_irq : 1; unsigned int mode; unsigned int burst_len; unsigned int index; @@ -139,6 +135,49 @@ static inline bool msc_block_is_empty(struct msc_block_desc *bdesc) return false; } +static inline struct msc_block_desc * +msc_win_block(struct msc_window *win, unsigned int block) +{ + return sg_virt(&win->sgt.sgl[block]); +} + +static inline dma_addr_t +msc_win_baddr(struct msc_window *win, unsigned int block) +{ + return sg_dma_address(&win->sgt.sgl[block]); +} + +static inline unsigned long +msc_win_bpfn(struct msc_window *win, unsigned int block) +{ + return msc_win_baddr(win, block) >> PAGE_SHIFT; +} + +/** + * msc_is_last_win() - check if a window is the last one for a given MSC + * @win: window + * Return: true if @win is the last window in MSC's multiblock buffer + */ +static inline bool msc_is_last_win(struct msc_window *win) +{ + return win->entry.next == &win->msc->win_list; +} + +/** + * msc_next_window() - return next window in the multiblock buffer + * @win: current window + * + * Return: window following the current one + */ +static struct msc_window *msc_next_window(struct msc_window *win) +{ + if (msc_is_last_win(win)) + return list_first_entry(&win->msc->win_list, struct msc_window, + entry); + + return list_next_entry(win, entry); +} + /** * msc_oldest_window() - locate the window with oldest data * @msc: MSC device @@ -150,9 +189,7 @@ static inline bool msc_block_is_empty(struct msc_block_desc *bdesc) */ static struct msc_window *msc_oldest_window(struct msc *msc) { - struct msc_window *win; - u32 reg = ioread32(msc->reg_base + REG_MSU_MSC0NWSA); - unsigned long win_addr = (unsigned long)reg << PAGE_SHIFT; + struct msc_window *win, *next = msc_next_window(msc->cur_win); unsigned int found = 0; if (list_empty(&msc->win_list)) @@ -164,18 +201,18 @@ static struct msc_window *msc_oldest_window(struct msc *msc) * something like 2, in which case we're good */ list_for_each_entry(win, &msc->win_list, entry) { - if (win->block[0].addr == win_addr) + if (win == next) found++; /* skip the empty ones */ - if (msc_block_is_empty(win->block[0].bdesc)) + if (msc_block_is_empty(msc_win_block(win, 0))) continue; if (found) return win; } - return list_entry(msc->win_list.next, struct msc_window, 
entry); + return list_first_entry(&msc->win_list, struct msc_window, entry); } /** @@ -187,7 +224,7 @@ static struct msc_window *msc_oldest_window(struct msc *msc) static unsigned int msc_win_oldest_block(struct msc_window *win) { unsigned int blk; - struct msc_block_desc *bdesc = win->block[0].bdesc; + struct msc_block_desc *bdesc = msc_win_block(win, 0); /* without wrapping, first block is the oldest */ if (!msc_block_wrapped(bdesc)) @@ -198,7 +235,7 @@ static unsigned int msc_win_oldest_block(struct msc_window *win) * oldest data for this window. */ for (blk = 0; blk < win->nr_blocks; blk++) { - bdesc = win->block[blk].bdesc; + bdesc = msc_win_block(win, blk); if (msc_block_last_written(bdesc)) return blk; @@ -207,34 +244,9 @@ static unsigned int msc_win_oldest_block(struct msc_window *win) return 0; } -/** - * msc_is_last_win() - check if a window is the last one for a given MSC - * @win: window - * Return: true if @win is the last window in MSC's multiblock buffer - */ -static inline bool msc_is_last_win(struct msc_window *win) -{ - return win->entry.next == &win->msc->win_list; -} - -/** - * msc_next_window() - return next window in the multiblock buffer - * @win: current window - * - * Return: window following the current one - */ -static struct msc_window *msc_next_window(struct msc_window *win) -{ - if (msc_is_last_win(win)) - return list_entry(win->msc->win_list.next, struct msc_window, - entry); - - return list_entry(win->entry.next, struct msc_window, entry); -} - static struct msc_block_desc *msc_iter_bdesc(struct msc_iter *iter) { - return iter->win->block[iter->block].bdesc; + return msc_win_block(iter->win, iter->block); } static void msc_iter_init(struct msc_iter *iter) @@ -467,13 +479,47 @@ static void msc_buffer_clear_hw_header(struct msc *msc) offsetof(struct msc_block_desc, hw_tag); for (blk = 0; blk < win->nr_blocks; blk++) { - struct msc_block_desc *bdesc = win->block[blk].bdesc; + struct msc_block_desc *bdesc = msc_win_block(win, blk); memset(&bdesc->hw_tag, 0, hw_sz); } } } +static int intel_th_msu_init(struct msc *msc) +{ + u32 mintctl, msusts; + + if (!msc->do_irq) + return 0; + + mintctl = ioread32(msc->msu_base + REG_MSU_MINTCTL); + mintctl |= msc->index ? M1BLIE : M0BLIE; + iowrite32(mintctl, msc->msu_base + REG_MSU_MINTCTL); + if (mintctl != ioread32(msc->msu_base + REG_MSU_MINTCTL)) { + dev_info(msc_dev(msc), "MINTCTL ignores writes: no usable interrupts\n"); + msc->do_irq = 0; + return 0; + } + + msusts = ioread32(msc->msu_base + REG_MSU_MSUSTS); + iowrite32(msusts, msc->msu_base + REG_MSU_MSUSTS); + + return 0; +} + +static void intel_th_msu_deinit(struct msc *msc) +{ + u32 mintctl; + + if (!msc->do_irq) + return; + + mintctl = ioread32(msc->msu_base + REG_MSU_MINTCTL); + mintctl &= msc->index ? 
~M1BLIE : ~M0BLIE; + iowrite32(mintctl, msc->msu_base + REG_MSU_MINTCTL); +} + /** * msc_configure() - set up MSC hardware * @msc: the MSC device to configure @@ -531,23 +577,14 @@ static int msc_configure(struct msc *msc) */ static void msc_disable(struct msc *msc) { - unsigned long count; u32 reg; lockdep_assert_held(&msc->buf_mutex); intel_th_trace_disable(msc->thdev); - for (reg = 0, count = MSC_PLE_WAITLOOP_DEPTH; - count && !(reg & MSCSTS_PLE); count--) { - reg = ioread32(msc->reg_base + REG_MSU_MSC0STS); - cpu_relax(); - } - - if (!count) - dev_dbg(msc_dev(msc), "timeout waiting for MSC0 PLE\n"); - if (msc->mode == MSC_MODE_SINGLE) { + reg = ioread32(msc->reg_base + REG_MSU_MSC0STS); msc->single_wrap = !!(reg & MSCSTS_WRAPSTAT); reg = ioread32(msc->reg_base + REG_MSU_MSC0MWP); @@ -617,22 +654,45 @@ static void intel_th_msc_deactivate(struct intel_th_device *thdev) */ static int msc_buffer_contig_alloc(struct msc *msc, unsigned long size) { + unsigned long nr_pages = size >> PAGE_SHIFT; unsigned int order = get_order(size); struct page *page; + int ret; if (!size) return 0; + ret = sg_alloc_table(&msc->single_sgt, 1, GFP_KERNEL); + if (ret) + goto err_out; + + ret = -ENOMEM; page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order); if (!page) - return -ENOMEM; + goto err_free_sgt; split_page(page, order); - msc->nr_pages = size >> PAGE_SHIFT; + sg_set_buf(msc->single_sgt.sgl, page_address(page), size); + + ret = dma_map_sg(msc_dev(msc)->parent->parent, msc->single_sgt.sgl, 1, + DMA_FROM_DEVICE); + if (ret < 0) + goto err_free_pages; + + msc->nr_pages = nr_pages; msc->base = page_address(page); - msc->base_addr = page_to_phys(page); + msc->base_addr = sg_dma_address(msc->single_sgt.sgl); return 0; + +err_free_pages: + __free_pages(page, order); + +err_free_sgt: + sg_free_table(&msc->single_sgt); + +err_out: + return ret; } /** @@ -643,6 +703,10 @@ static void msc_buffer_contig_free(struct msc *msc) { unsigned long off; + dma_unmap_sg(msc_dev(msc)->parent->parent, msc->single_sgt.sgl, + 1, DMA_FROM_DEVICE); + sg_free_table(&msc->single_sgt); + for (off = 0; off < msc->nr_pages << PAGE_SHIFT; off += PAGE_SIZE) { struct page *page = virt_to_page(msc->base + off); @@ -669,6 +733,40 @@ static struct page *msc_buffer_contig_get_page(struct msc *msc, return virt_to_page(msc->base + (pgoff << PAGE_SHIFT)); } +static int __msc_buffer_win_alloc(struct msc_window *win, + unsigned int nr_blocks) +{ + struct scatterlist *sg_ptr; + void *block; + int i, ret; + + ret = sg_alloc_table(&win->sgt, nr_blocks, GFP_KERNEL); + if (ret) + return -ENOMEM; + + for_each_sg(win->sgt.sgl, sg_ptr, nr_blocks, i) { + block = dma_alloc_coherent(msc_dev(win->msc)->parent->parent, + PAGE_SIZE, &sg_dma_address(sg_ptr), + GFP_KERNEL); + if (!block) + goto err_nomem; + + sg_set_buf(sg_ptr, block, PAGE_SIZE); + } + + return nr_blocks; + +err_nomem: + for (i--; i >= 0; i--) + dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE, + msc_win_block(win, i), + msc_win_baddr(win, i)); + + sg_free_table(&win->sgt); + + return -ENOMEM; +} + /** * msc_buffer_win_alloc() - alloc a window for a multiblock mode * @msc: MSC device @@ -682,44 +780,49 @@ static struct page *msc_buffer_contig_get_page(struct msc *msc, static int msc_buffer_win_alloc(struct msc *msc, unsigned int nr_blocks) { struct msc_window *win; - unsigned long size = PAGE_SIZE; - int i, ret = -ENOMEM; + int ret = -ENOMEM, i; if (!nr_blocks) return 0; - win = kzalloc(offsetof(struct msc_window, block[nr_blocks]), - GFP_KERNEL); + /* + * This limitation hold as long 
as we need random access to the + * block. When that changes, this can go away. + */ + if (nr_blocks > SG_MAX_SINGLE_ALLOC) + return -EINVAL; + + win = kzalloc(sizeof(*win), GFP_KERNEL); if (!win) return -ENOMEM; + win->msc = msc; + if (!list_empty(&msc->win_list)) { - struct msc_window *prev = list_entry(msc->win_list.prev, - struct msc_window, entry); + struct msc_window *prev = list_last_entry(&msc->win_list, + struct msc_window, + entry); + /* This works as long as blocks are page-sized */ win->pgoff = prev->pgoff + prev->nr_blocks; } - for (i = 0; i < nr_blocks; i++) { - win->block[i].bdesc = - dma_alloc_coherent(msc_dev(msc)->parent->parent, size, - &win->block[i].addr, GFP_KERNEL); - - if (!win->block[i].bdesc) - goto err_nomem; + ret = __msc_buffer_win_alloc(win, nr_blocks); + if (ret < 0) + goto err_nomem; #ifdef CONFIG_X86 + for (i = 0; i < ret; i++) /* Set the page as uncached */ - set_memory_uc((unsigned long)win->block[i].bdesc, 1); + set_memory_uc((unsigned long)msc_win_block(win, i), 1); #endif - } - win->msc = msc; - win->nr_blocks = nr_blocks; + win->nr_blocks = ret; if (list_empty(&msc->win_list)) { - msc->base = win->block[0].bdesc; - msc->base_addr = win->block[0].addr; + msc->base = msc_win_block(win, 0); + msc->base_addr = msc_win_baddr(win, 0); + msc->cur_win = win; } list_add_tail(&win->entry, &msc->win_list); @@ -728,19 +831,25 @@ static int msc_buffer_win_alloc(struct msc *msc, unsigned int nr_blocks) return 0; err_nomem: - for (i--; i >= 0; i--) { -#ifdef CONFIG_X86 - /* Reset the page to write-back before releasing */ - set_memory_wb((unsigned long)win->block[i].bdesc, 1); -#endif - dma_free_coherent(msc_dev(msc)->parent->parent, size, - win->block[i].bdesc, win->block[i].addr); - } kfree(win); return ret; } +static void __msc_buffer_win_free(struct msc *msc, struct msc_window *win) +{ + int i; + + for (i = 0; i < win->nr_blocks; i++) { + struct page *page = sg_page(&win->sgt.sgl[i]); + + page->mapping = NULL; + dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE, + msc_win_block(win, i), msc_win_baddr(win, i)); + } + sg_free_table(&win->sgt); +} + /** * msc_buffer_win_free() - free a window from MSC's window list * @msc: MSC device @@ -761,17 +870,13 @@ static void msc_buffer_win_free(struct msc *msc, struct msc_window *win) msc->base_addr = 0; } - for (i = 0; i < win->nr_blocks; i++) { - struct page *page = virt_to_page(win->block[i].bdesc); - - page->mapping = NULL; #ifdef CONFIG_X86 - /* Reset the page to write-back before releasing */ - set_memory_wb((unsigned long)win->block[i].bdesc, 1); + for (i = 0; i < win->nr_blocks; i++) + /* Reset the page to write-back */ + set_memory_wb((unsigned long)msc_win_block(win, i), 1); #endif - dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE, - win->block[i].bdesc, win->block[i].addr); - } + + __msc_buffer_win_free(msc, win); kfree(win); } @@ -798,19 +903,18 @@ static void msc_buffer_relink(struct msc *msc) */ if (msc_is_last_win(win)) { sw_tag |= MSC_SW_TAG_LASTWIN; - next_win = list_entry(msc->win_list.next, - struct msc_window, entry); + next_win = list_first_entry(&msc->win_list, + struct msc_window, entry); } else { - next_win = list_entry(win->entry.next, - struct msc_window, entry); + next_win = list_next_entry(win, entry); } for (blk = 0; blk < win->nr_blocks; blk++) { - struct msc_block_desc *bdesc = win->block[blk].bdesc; + struct msc_block_desc *bdesc = msc_win_block(win, blk); memset(bdesc, 0, sizeof(*bdesc)); - bdesc->next_win = next_win->block[0].addr >> PAGE_SHIFT; + 
bdesc->next_win = msc_win_bpfn(next_win, 0); /* * Similarly to last window, last block should point @@ -818,11 +922,9 @@ static void msc_buffer_relink(struct msc *msc) */ if (blk == win->nr_blocks - 1) { sw_tag |= MSC_SW_TAG_LASTBLK; - bdesc->next_blk = - win->block[0].addr >> PAGE_SHIFT; + bdesc->next_blk = msc_win_bpfn(win, 0); } else { - bdesc->next_blk = - win->block[blk + 1].addr >> PAGE_SHIFT; + bdesc->next_blk = msc_win_bpfn(win, blk + 1); } bdesc->sw_tag = sw_tag; @@ -997,7 +1099,7 @@ static struct page *msc_buffer_get_page(struct msc *msc, unsigned long pgoff) found: pgoff -= win->pgoff; - return virt_to_page(win->block[pgoff].bdesc); + return sg_page(&win->sgt.sgl[pgoff]); } /** @@ -1250,6 +1352,22 @@ static const struct file_operations intel_th_msc_fops = { .owner = THIS_MODULE, }; +static void intel_th_msc_wait_empty(struct intel_th_device *thdev) +{ + struct msc *msc = dev_get_drvdata(&thdev->dev); + unsigned long count; + u32 reg; + + for (reg = 0, count = MSC_PLE_WAITLOOP_DEPTH; + count && !(reg & MSCSTS_PLE); count--) { + reg = __raw_readl(msc->reg_base + REG_MSU_MSC0STS); + cpu_relax(); + } + + if (!count) + dev_dbg(msc_dev(msc), "timeout waiting for MSC0 PLE\n"); +} + static int intel_th_msc_init(struct msc *msc) { atomic_set(&msc->user_count, -1); @@ -1266,6 +1384,39 @@ static int intel_th_msc_init(struct msc *msc) return 0; } +static void msc_win_switch(struct msc *msc) +{ + struct msc_window *last, *first; + + first = list_first_entry(&msc->win_list, struct msc_window, entry); + last = list_last_entry(&msc->win_list, struct msc_window, entry); + + if (msc_is_last_win(msc->cur_win)) + msc->cur_win = first; + else + msc->cur_win = list_next_entry(msc->cur_win, entry); + + msc->base = msc_win_block(msc->cur_win, 0); + msc->base_addr = msc_win_baddr(msc->cur_win, 0); + + intel_th_trace_switch(msc->thdev); +} + +static irqreturn_t intel_th_msc_interrupt(struct intel_th_device *thdev) +{ + struct msc *msc = dev_get_drvdata(&thdev->dev); + u32 msusts = ioread32(msc->msu_base + REG_MSU_MSUSTS); + u32 mask = msc->index ? MSUSTS_MSC1BLAST : MSUSTS_MSC0BLAST; + + if (!(msusts & mask)) { + if (msc->enabled) + return IRQ_HANDLED; + return IRQ_NONE; + } + + return IRQ_HANDLED; +} + static const char * const msc_mode[] = { [MSC_MODE_SINGLE] = "single", [MSC_MODE_MULTI] = "multi", @@ -1440,10 +1591,38 @@ free_win: static DEVICE_ATTR_RW(nr_pages); +static ssize_t +win_switch_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t size) +{ + struct msc *msc = dev_get_drvdata(dev); + unsigned long val; + int ret; + + ret = kstrtoul(buf, 10, &val); + if (ret) + return ret; + + if (val != 1) + return -EINVAL; + + mutex_lock(&msc->buf_mutex); + if (msc->mode != MSC_MODE_MULTI) + ret = -ENOTSUPP; + else + msc_win_switch(msc); + mutex_unlock(&msc->buf_mutex); + + return ret ? 
ret : size; +} + +static DEVICE_ATTR_WO(win_switch); + static struct attribute *msc_output_attrs[] = { &dev_attr_wrap.attr, &dev_attr_mode.attr, &dev_attr_nr_pages.attr, + &dev_attr_win_switch.attr, NULL, }; @@ -1471,10 +1650,19 @@ static int intel_th_msc_probe(struct intel_th_device *thdev) if (!msc) return -ENOMEM; + res = intel_th_device_get_resource(thdev, IORESOURCE_IRQ, 1); + if (!res) + msc->do_irq = 1; + msc->index = thdev->id; msc->thdev = thdev; msc->reg_base = base + msc->index * 0x100; + msc->msu_base = base; + + err = intel_th_msu_init(msc); + if (err) + return err; err = intel_th_msc_init(msc); if (err) @@ -1491,6 +1679,7 @@ static void intel_th_msc_remove(struct intel_th_device *thdev) int ret; intel_th_msc_deactivate(thdev); + intel_th_msu_deinit(msc); /* * Buffers should not be used at this point except if the @@ -1504,6 +1693,8 @@ static void intel_th_msc_remove(struct intel_th_device *thdev) static struct intel_th_driver intel_th_msc_driver = { .probe = intel_th_msc_probe, .remove = intel_th_msc_remove, + .irq = intel_th_msc_interrupt, + .wait_empty = intel_th_msc_wait_empty, .activate = intel_th_msc_activate, .deactivate = intel_th_msc_deactivate, .fops = &intel_th_msc_fops, diff --git a/drivers/hwtracing/intel_th/msu.h b/drivers/hwtracing/intel_th/msu.h index 9cc8aced6116..574c16004cb2 100644 --- a/drivers/hwtracing/intel_th/msu.h +++ b/drivers/hwtracing/intel_th/msu.h @@ -11,6 +11,7 @@ enum { REG_MSU_MSUPARAMS = 0x0000, REG_MSU_MSUSTS = 0x0008, + REG_MSU_MINTCTL = 0x0004, /* MSU-global interrupt control */ REG_MSU_MSC0CTL = 0x0100, /* MSC0 control */ REG_MSU_MSC0STS = 0x0104, /* MSC0 status */ REG_MSU_MSC0BAR = 0x0108, /* MSC0 output base address */ @@ -28,6 +29,8 @@ enum { /* MSUSTS bits */ #define MSUSTS_MSU_INT BIT(0) +#define MSUSTS_MSC0BLAST BIT(16) +#define MSUSTS_MSC1BLAST BIT(24) /* MSCnCTL bits */ #define MSC_EN BIT(0) @@ -36,6 +39,11 @@ enum { #define MSC_MODE (BIT(4) | BIT(5)) #define MSC_LEN (BIT(8) | BIT(9) | BIT(10)) +/* MINTCTL bits */ +#define MICDE BIT(0) +#define M0BLIE BIT(16) +#define M1BLIE BIT(24) + /* MSC operating modes (MSC_MODE) */ enum { MSC_MODE_SINGLE = 0, @@ -87,7 +95,7 @@ static inline unsigned long msc_data_sz(struct msc_block_desc *bdesc) static inline bool msc_block_wrapped(struct msc_block_desc *bdesc) { - if (bdesc->hw_tag & MSC_HW_TAG_BLOCKWRAP) + if (bdesc->hw_tag & (MSC_HW_TAG_BLOCKWRAP | MSC_HW_TAG_WINWRAP)) return true; return false; diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c index 70f2cb90adc5..f1228708f2a2 100644 --- a/drivers/hwtracing/intel_th/pci.c +++ b/drivers/hwtracing/intel_th/pci.c @@ -17,7 +17,13 @@ #define DRIVER_NAME "intel_th_pci" -#define BAR_MASK (BIT(TH_MMIO_CONFIG) | BIT(TH_MMIO_SW)) +enum { + TH_PCI_CONFIG_BAR = 0, + TH_PCI_STH_SW_BAR = 2, + TH_PCI_RTIT_BAR = 4, +}; + +#define BAR_MASK (BIT(TH_PCI_CONFIG_BAR) | BIT(TH_PCI_STH_SW_BAR)) #define PCI_REG_NPKDSC 0x80 #define NPKDSC_TSACT BIT(5) @@ -66,8 +72,12 @@ static int intel_th_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct intel_th_drvdata *drvdata = (void *)id->driver_data; + struct resource resource[TH_MMIO_END + TH_NVEC_MAX] = { + [TH_MMIO_CONFIG] = pdev->resource[TH_PCI_CONFIG_BAR], + [TH_MMIO_SW] = pdev->resource[TH_PCI_STH_SW_BAR], + }; + int err, r = TH_MMIO_SW + 1, i; struct intel_th *th; - int err; err = pcim_enable_device(pdev); if (err) @@ -77,8 +87,19 @@ static int intel_th_pci_probe(struct pci_dev *pdev, if (err) return err; - th = intel_th_alloc(&pdev->dev, drvdata, pdev->resource, 
- DEVICE_COUNT_RESOURCE, pdev->irq); + if (pdev->resource[TH_PCI_RTIT_BAR].start) { + resource[TH_MMIO_RTIT] = pdev->resource[TH_PCI_RTIT_BAR]; + r++; + } + + err = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_ALL_TYPES); + if (err > 0) + for (i = 0; i < err; i++, r++) { + resource[r].flags = IORESOURCE_IRQ; + resource[r].start = pci_irq_vector(pdev, i); + } + + th = intel_th_alloc(&pdev->dev, drvdata, resource, r); if (IS_ERR(th)) return PTR_ERR(th); @@ -95,10 +116,13 @@ static void intel_th_pci_remove(struct pci_dev *pdev) struct intel_th *th = pci_get_drvdata(pdev); intel_th_free(th); + + pci_free_irq_vectors(pdev); } static const struct intel_th_drvdata intel_th_2x = { .tscu_enable = 1, + .has_mintctl = 1, }; static const struct pci_device_id intel_th_pci_id_table[] = { diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c index 6005a1c189f6..871eb4bc4efc 100644 --- a/drivers/interconnect/core.c +++ b/drivers/interconnect/core.c @@ -90,18 +90,7 @@ static int icc_summary_show(struct seq_file *s, void *data) return 0; } - -static int icc_summary_open(struct inode *inode, struct file *file) -{ - return single_open(file, icc_summary_show, inode->i_private); -} - -static const struct file_operations icc_summary_fops = { - .open = icc_summary_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(icc_summary); static struct icc_node *node_find(const int id) { diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig index 42ab8ec92a04..3209ee020b15 100644 --- a/drivers/misc/Kconfig +++ b/drivers/misc/Kconfig @@ -496,6 +496,14 @@ config VEXPRESS_SYSCFG bus. System Configuration interface is one of the possible means of generating transactions on this bus. +config ASPEED_P2A_CTRL + depends on (ARCH_ASPEED || COMPILE_TEST) && REGMAP && MFD_SYSCON + tristate "Aspeed ast2400/2500 HOST P2A VGA MMIO to BMC bridge control" + help + Control Aspeed ast2400/2500 HOST P2A VGA MMIO to BMC mappings through + ioctl()s, the driver also provides an interface for userspace mappings to + a pre-defined region. + config ASPEED_LPC_CTRL depends on (ARCH_ASPEED || COMPILE_TEST) && REGMAP && MFD_SYSCON tristate "Aspeed ast2400/2500 HOST LPC to BMC bridge control" diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile index d5b7d3404dc7..c36239573a5c 100644 --- a/drivers/misc/Makefile +++ b/drivers/misc/Makefile @@ -56,6 +56,7 @@ obj-$(CONFIG_VEXPRESS_SYSCFG) += vexpress-syscfg.o obj-$(CONFIG_CXL_BASE) += cxl/ obj-$(CONFIG_ASPEED_LPC_CTRL) += aspeed-lpc-ctrl.o obj-$(CONFIG_ASPEED_LPC_SNOOP) += aspeed-lpc-snoop.o +obj-$(CONFIG_ASPEED_P2A_CTRL) += aspeed-p2a-ctrl.o obj-$(CONFIG_PCI_ENDPOINT_TEST) += pci_endpoint_test.o obj-$(CONFIG_OCXL) += ocxl/ obj-y += cardreader/ diff --git a/drivers/misc/aspeed-p2a-ctrl.c b/drivers/misc/aspeed-p2a-ctrl.c new file mode 100644 index 000000000000..b60fbeaffcbd --- /dev/null +++ b/drivers/misc/aspeed-p2a-ctrl.c @@ -0,0 +1,444 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright 2019 Google Inc + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version + * 2 of the License, or (at your option) any later version. + * + * Provides a simple driver to control the ASPEED P2A interface which allows + * the host to read and write to various regions of the BMC's memory. 
+ */ + +#include <linux/fs.h> +#include <linux/io.h> +#include <linux/mfd/syscon.h> +#include <linux/miscdevice.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/mutex.h> +#include <linux/of_address.h> +#include <linux/of_device.h> +#include <linux/platform_device.h> +#include <linux/regmap.h> +#include <linux/slab.h> +#include <linux/uaccess.h> + +#include <linux/aspeed-p2a-ctrl.h> + +#define DEVICE_NAME "aspeed-p2a-ctrl" + +/* SCU2C is a Misc. Control Register. */ +#define SCU2C 0x2c +/* SCU180 is the PCIe Configuration Setting Control Register. */ +#define SCU180 0x180 +/* Bit 1 controls the P2A bridge, while bit 0 controls the entire VGA device + * on the PCI bus. + */ +#define SCU180_ENP2A BIT(1) + +/* The ast2400/2500 both have six ranges. */ +#define P2A_REGION_COUNT 6 + +struct region { + u64 min; + u64 max; + u32 bit; +}; + +struct aspeed_p2a_model_data { + /* min, max, bit */ + struct region regions[P2A_REGION_COUNT]; +}; + +struct aspeed_p2a_ctrl { + struct miscdevice miscdev; + struct regmap *regmap; + + const struct aspeed_p2a_model_data *config; + + /* Access to these needs to be locked, held via probe, mapping ioctl, + * and release, remove. + */ + struct mutex tracking; + u32 readers; + u32 readerwriters[P2A_REGION_COUNT]; + + phys_addr_t mem_base; + resource_size_t mem_size; +}; + +struct aspeed_p2a_user { + struct file *file; + struct aspeed_p2a_ctrl *parent; + + /* The entire memory space is opened for reading once the bridge is + * enabled, therefore this needs only to be tracked once per user. + * If any user has it open for read, the bridge must stay enabled. + */ + u32 read; + + /* Each entry of the array corresponds to a P2A Region. If the user + * opens for read or readwrite, the reference goes up here. On + * release, this array is walked and references adjusted accordingly. + */ + u32 readwrite[P2A_REGION_COUNT]; +}; + +static void aspeed_p2a_enable_bridge(struct aspeed_p2a_ctrl *p2a_ctrl) +{ + regmap_update_bits(p2a_ctrl->regmap, + SCU180, SCU180_ENP2A, SCU180_ENP2A); +} + +static void aspeed_p2a_disable_bridge(struct aspeed_p2a_ctrl *p2a_ctrl) +{ + regmap_update_bits(p2a_ctrl->regmap, SCU180, SCU180_ENP2A, 0); +} + +static int aspeed_p2a_mmap(struct file *file, struct vm_area_struct *vma) +{ + unsigned long vsize; + pgprot_t prot; + struct aspeed_p2a_user *priv = file->private_data; + struct aspeed_p2a_ctrl *ctrl = priv->parent; + + if (ctrl->mem_base == 0 && ctrl->mem_size == 0) + return -EINVAL; + + vsize = vma->vm_end - vma->vm_start; + prot = vma->vm_page_prot; + + if (vma->vm_pgoff + vsize > ctrl->mem_base + ctrl->mem_size) + return -EINVAL; + + /* ast2400/2500 AHB accesses are not cache coherent */ + prot = pgprot_noncached(prot); + + if (remap_pfn_range(vma, vma->vm_start, + (ctrl->mem_base >> PAGE_SHIFT) + vma->vm_pgoff, + vsize, prot)) + return -EAGAIN; + + return 0; +} + +static bool aspeed_p2a_region_acquire(struct aspeed_p2a_user *priv, + struct aspeed_p2a_ctrl *ctrl, + struct aspeed_p2a_ctrl_mapping *map) +{ + int i; + u64 base, end; + bool matched = false; + + base = map->addr; + end = map->addr + (map->length - 1); + + /* If the value is a legal u32, it will find a match. */ + for (i = 0; i < P2A_REGION_COUNT; i++) { + const struct region *curr = &ctrl->config->regions[i]; + + /* If the top of this region is lower than your base, skip it. + */ + if (curr->max < base) + continue; + + /* If the bottom of this region is higher than your end, bail. 
+ */ + if (curr->min > end) + break; + + /* Lock this and update it, therefore it someone else is + * closing their file out, this'll preserve the increment. + */ + mutex_lock(&ctrl->tracking); + ctrl->readerwriters[i] += 1; + mutex_unlock(&ctrl->tracking); + + /* Track with the user, so when they close their file, we can + * decrement properly. + */ + priv->readwrite[i] += 1; + + /* Enable the region as read-write. */ + regmap_update_bits(ctrl->regmap, SCU2C, curr->bit, 0); + matched = true; + } + + return matched; +} + +static long aspeed_p2a_ioctl(struct file *file, unsigned int cmd, + unsigned long data) +{ + struct aspeed_p2a_user *priv = file->private_data; + struct aspeed_p2a_ctrl *ctrl = priv->parent; + void __user *arg = (void __user *)data; + struct aspeed_p2a_ctrl_mapping map; + + if (copy_from_user(&map, arg, sizeof(map))) + return -EFAULT; + + switch (cmd) { + case ASPEED_P2A_CTRL_IOCTL_SET_WINDOW: + /* If they want a region to be read-only, since the entire + * region is read-only once enabled, we just need to track this + * user wants to read from the bridge, and if it's not enabled. + * Enable it. + */ + if (map.flags == ASPEED_P2A_CTRL_READ_ONLY) { + mutex_lock(&ctrl->tracking); + ctrl->readers += 1; + mutex_unlock(&ctrl->tracking); + + /* Track with the user, so when they close their file, + * we can decrement properly. + */ + priv->read += 1; + } else if (map.flags == ASPEED_P2A_CTRL_READWRITE) { + /* If we don't acquire any region return error. */ + if (!aspeed_p2a_region_acquire(priv, ctrl, &map)) { + return -EINVAL; + } + } else { + /* Invalid map flags. */ + return -EINVAL; + } + + aspeed_p2a_enable_bridge(ctrl); + return 0; + case ASPEED_P2A_CTRL_IOCTL_GET_MEMORY_CONFIG: + /* This is a request for the memory-region and corresponding + * length that is used by the driver for mmap. + */ + + map.flags = 0; + map.addr = ctrl->mem_base; + map.length = ctrl->mem_size; + + return copy_to_user(arg, &map, sizeof(map)) ? -EFAULT : 0; + } + + return -EINVAL; +} + + +/* + * When a user opens this file, we create a structure to track their mappings. + * + * A user can map a region as read-only (bridge enabled), or read-write (bit + * flipped, and bridge enabled). Either way, this tracking is used, s.t. when + * they release the device references are handled. + * + * The bridge is not enabled until a user calls an ioctl to map a region, + * simply opening the device does not enable it. + */ +static int aspeed_p2a_open(struct inode *inode, struct file *file) +{ + struct aspeed_p2a_user *priv; + + priv = kmalloc(sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->file = file; + priv->read = 0; + memset(priv->readwrite, 0, sizeof(priv->readwrite)); + + /* The file's private_data is initialized to the p2a_ctrl. */ + priv->parent = file->private_data; + + /* Set the file's private_data to the user's data. */ + file->private_data = priv; + + return 0; +} + +/* + * This will close the users mappings. It will go through what they had opened + * for readwrite, and decrement those counts. If at the end, this is the last + * user, it'll close the bridge. + */ +static int aspeed_p2a_release(struct inode *inode, struct file *file) +{ + int i; + u32 bits = 0; + bool open_regions = false; + struct aspeed_p2a_user *priv = file->private_data; + + /* Lock others from changing these values until everything is updated + * in one pass. 
+ */ + mutex_lock(&priv->parent->tracking); + + priv->parent->readers -= priv->read; + + for (i = 0; i < P2A_REGION_COUNT; i++) { + priv->parent->readerwriters[i] -= priv->readwrite[i]; + + if (priv->parent->readerwriters[i] > 0) + open_regions = true; + else + bits |= priv->parent->config->regions[i].bit; + } + + /* Setting a bit to 1 disables the region, so let's just OR with the + * above to disable any. + */ + + /* Note, if another user is trying to ioctl, they can't grab tracking, + * and therefore can't grab either register mutex. + * If another user is trying to close, they can't grab tracking either. + */ + regmap_update_bits(priv->parent->regmap, SCU2C, bits, bits); + + /* If parent->readers is zero and open windows is 0, disable the + * bridge. + */ + if (!open_regions && priv->parent->readers == 0) + aspeed_p2a_disable_bridge(priv->parent); + + mutex_unlock(&priv->parent->tracking); + + kfree(priv); + + return 0; +} + +static const struct file_operations aspeed_p2a_ctrl_fops = { + .owner = THIS_MODULE, + .mmap = aspeed_p2a_mmap, + .unlocked_ioctl = aspeed_p2a_ioctl, + .open = aspeed_p2a_open, + .release = aspeed_p2a_release, +}; + +/* The regions are controlled by SCU2C */ +static void aspeed_p2a_disable_all(struct aspeed_p2a_ctrl *p2a_ctrl) +{ + int i; + u32 value = 0; + + for (i = 0; i < P2A_REGION_COUNT; i++) + value |= p2a_ctrl->config->regions[i].bit; + + regmap_update_bits(p2a_ctrl->regmap, SCU2C, value, value); + + /* Disable the bridge. */ + aspeed_p2a_disable_bridge(p2a_ctrl); +} + +static int aspeed_p2a_ctrl_probe(struct platform_device *pdev) +{ + struct aspeed_p2a_ctrl *misc_ctrl; + struct device *dev; + struct resource resm; + struct device_node *node; + int rc = 0; + + dev = &pdev->dev; + + misc_ctrl = devm_kzalloc(dev, sizeof(*misc_ctrl), GFP_KERNEL); + if (!misc_ctrl) + return -ENOMEM; + + mutex_init(&misc_ctrl->tracking); + + /* optional. 
*/ + node = of_parse_phandle(dev->of_node, "memory-region", 0); + if (node) { + rc = of_address_to_resource(node, 0, &resm); + of_node_put(node); + if (rc) { + dev_err(dev, "Couldn't address to resource for reserved memory\n"); + return -ENODEV; + } + + misc_ctrl->mem_size = resource_size(&resm); + misc_ctrl->mem_base = resm.start; + } + + misc_ctrl->regmap = syscon_node_to_regmap(pdev->dev.parent->of_node); + if (IS_ERR(misc_ctrl->regmap)) { + dev_err(dev, "Couldn't get regmap\n"); + return -ENODEV; + } + + misc_ctrl->config = of_device_get_match_data(dev); + + dev_set_drvdata(&pdev->dev, misc_ctrl); + + aspeed_p2a_disable_all(misc_ctrl); + + misc_ctrl->miscdev.minor = MISC_DYNAMIC_MINOR; + misc_ctrl->miscdev.name = DEVICE_NAME; + misc_ctrl->miscdev.fops = &aspeed_p2a_ctrl_fops; + misc_ctrl->miscdev.parent = dev; + + rc = misc_register(&misc_ctrl->miscdev); + if (rc) + dev_err(dev, "Unable to register device\n"); + + return rc; +} + +static int aspeed_p2a_ctrl_remove(struct platform_device *pdev) +{ + struct aspeed_p2a_ctrl *p2a_ctrl = dev_get_drvdata(&pdev->dev); + + misc_deregister(&p2a_ctrl->miscdev); + + return 0; +} + +#define SCU2C_DRAM BIT(25) +#define SCU2C_SPI BIT(24) +#define SCU2C_SOC BIT(23) +#define SCU2C_FLASH BIT(22) + +static const struct aspeed_p2a_model_data ast2400_model_data = { + .regions = { + {0x00000000, 0x17FFFFFF, SCU2C_FLASH}, + {0x18000000, 0x1FFFFFFF, SCU2C_SOC}, + {0x20000000, 0x2FFFFFFF, SCU2C_FLASH}, + {0x30000000, 0x3FFFFFFF, SCU2C_SPI}, + {0x40000000, 0x5FFFFFFF, SCU2C_DRAM}, + {0x60000000, 0xFFFFFFFF, SCU2C_SOC}, + } +}; + +static const struct aspeed_p2a_model_data ast2500_model_data = { + .regions = { + {0x00000000, 0x0FFFFFFF, SCU2C_FLASH}, + {0x10000000, 0x1FFFFFFF, SCU2C_SOC}, + {0x20000000, 0x3FFFFFFF, SCU2C_FLASH}, + {0x40000000, 0x5FFFFFFF, SCU2C_SOC}, + {0x60000000, 0x7FFFFFFF, SCU2C_SPI}, + {0x80000000, 0xFFFFFFFF, SCU2C_DRAM}, + } +}; + +static const struct of_device_id aspeed_p2a_ctrl_match[] = { + { .compatible = "aspeed,ast2400-p2a-ctrl", + .data = &ast2400_model_data }, + { .compatible = "aspeed,ast2500-p2a-ctrl", + .data = &ast2500_model_data }, + { }, +}; + +static struct platform_driver aspeed_p2a_ctrl_driver = { + .driver = { + .name = DEVICE_NAME, + .of_match_table = aspeed_p2a_ctrl_match, + }, + .probe = aspeed_p2a_ctrl_probe, + .remove = aspeed_p2a_ctrl_remove, +}; + +module_platform_driver(aspeed_p2a_ctrl_driver); + +MODULE_DEVICE_TABLE(of, aspeed_p2a_ctrl_match); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Patrick Venture <venture@google.com>"); +MODULE_DESCRIPTION("Control for aspeed 2400/2500 P2A VGA HOST to BMC mappings"); diff --git a/drivers/misc/cardreader/rts5260.c b/drivers/misc/cardreader/rts5260.c index 52c95add56f0..4e285addbf2b 100644 --- a/drivers/misc/cardreader/rts5260.c +++ b/drivers/misc/cardreader/rts5260.c @@ -456,13 +456,13 @@ static void rts5260_pwr_saving_setting(struct rtsx_pcr *pcr) pcr_dbg(pcr, "Set parameters for L1.2."); rtsx_pci_write_register(pcr, PWR_GLOBAL_CTRL, 0xFF, PCIE_L1_2_EN); - rtsx_pci_write_register(pcr, RTS5260_DVCC_CTRL, + rtsx_pci_write_register(pcr, RTS5260_DVCC_CTRL, RTS5260_DVCC_OCP_EN | RTS5260_DVCC_OCP_CL_EN, RTS5260_DVCC_OCP_EN | RTS5260_DVCC_OCP_CL_EN); - rtsx_pci_write_register(pcr, PWR_FE_CTL, + rtsx_pci_write_register(pcr, PWR_FE_CTL, 0xFF, PCIE_L1_2_PD_FE_EN); } else if (lss_l1_1) { pcr_dbg(pcr, "Set parameters for L1.1."); diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 36d0d5c9cfba..98603e235cf0 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ 
-12,6 +12,7 @@ #include <linux/module.h> #include <linux/of_address.h> #include <linux/of.h> +#include <linux/sort.h> #include <linux/of_platform.h> #include <linux/rpmsg.h> #include <linux/scatterlist.h> @@ -31,7 +32,7 @@ #define FASTRPC_CTX_MAX (256) #define FASTRPC_INIT_HANDLE 1 #define FASTRPC_CTXID_MASK (0xFF0) -#define INIT_FILELEN_MAX (2 * 1024 * 1024) +#define INIT_FILELEN_MAX (64 * 1024 * 1024) #define INIT_MEMLEN_MAX (8 * 1024 * 1024) #define FASTRPC_DEVICE_NAME "fastrpc" @@ -104,6 +105,15 @@ struct fastrpc_invoke_rsp { int retval; /* invoke return value */ }; +struct fastrpc_buf_overlap { + u64 start; + u64 end; + int raix; + u64 mstart; + u64 mend; + u64 offset; +}; + struct fastrpc_buf { struct fastrpc_user *fl; struct dma_buf *dmabuf; @@ -149,12 +159,14 @@ struct fastrpc_invoke_ctx { struct kref refcount; struct list_head node; /* list of ctxs */ struct completion work; + struct work_struct put_work; struct fastrpc_msg msg; struct fastrpc_user *fl; struct fastrpc_remote_arg *rpra; struct fastrpc_map **maps; struct fastrpc_buf *buf; struct fastrpc_invoke_args *args; + struct fastrpc_buf_overlap *olaps; struct fastrpc_channel_ctx *cctx; }; @@ -282,6 +294,7 @@ static void fastrpc_context_free(struct kref *ref) { struct fastrpc_invoke_ctx *ctx; struct fastrpc_channel_ctx *cctx; + unsigned long flags; int i; ctx = container_of(ref, struct fastrpc_invoke_ctx, refcount); @@ -293,11 +306,12 @@ static void fastrpc_context_free(struct kref *ref) if (ctx->buf) fastrpc_buf_free(ctx->buf); - spin_lock(&cctx->lock); + spin_lock_irqsave(&cctx->lock, flags); idr_remove(&cctx->ctx_idr, ctx->ctxid >> 4); - spin_unlock(&cctx->lock); + spin_unlock_irqrestore(&cctx->lock, flags); kfree(ctx->maps); + kfree(ctx->olaps); kfree(ctx); } @@ -311,12 +325,70 @@ static void fastrpc_context_put(struct fastrpc_invoke_ctx *ctx) kref_put(&ctx->refcount, fastrpc_context_free); } +static void fastrpc_context_put_wq(struct work_struct *work) +{ + struct fastrpc_invoke_ctx *ctx = + container_of(work, struct fastrpc_invoke_ctx, put_work); + + fastrpc_context_put(ctx); +} + +#define CMP(aa, bb) ((aa) == (bb) ? 0 : (aa) < (bb) ? -1 : 1) +static int olaps_cmp(const void *a, const void *b) +{ + struct fastrpc_buf_overlap *pa = (struct fastrpc_buf_overlap *)a; + struct fastrpc_buf_overlap *pb = (struct fastrpc_buf_overlap *)b; + /* sort with lowest starting buffer first */ + int st = CMP(pa->start, pb->start); + /* sort with highest ending buffer first */ + int ed = CMP(pb->end, pa->end); + + return st == 0 ? 
ed : st; +} + +static void fastrpc_get_buff_overlaps(struct fastrpc_invoke_ctx *ctx) +{ + u64 max_end = 0; + int i; + + for (i = 0; i < ctx->nbufs; ++i) { + ctx->olaps[i].start = ctx->args[i].ptr; + ctx->olaps[i].end = ctx->olaps[i].start + ctx->args[i].length; + ctx->olaps[i].raix = i; + } + + sort(ctx->olaps, ctx->nbufs, sizeof(*ctx->olaps), olaps_cmp, NULL); + + for (i = 0; i < ctx->nbufs; ++i) { + /* Falling inside previous range */ + if (ctx->olaps[i].start < max_end) { + ctx->olaps[i].mstart = max_end; + ctx->olaps[i].mend = ctx->olaps[i].end; + ctx->olaps[i].offset = max_end - ctx->olaps[i].start; + + if (ctx->olaps[i].end > max_end) { + max_end = ctx->olaps[i].end; + } else { + ctx->olaps[i].mend = 0; + ctx->olaps[i].mstart = 0; + } + + } else { + ctx->olaps[i].mend = ctx->olaps[i].end; + ctx->olaps[i].mstart = ctx->olaps[i].start; + ctx->olaps[i].offset = 0; + max_end = ctx->olaps[i].end; + } + } +} + static struct fastrpc_invoke_ctx *fastrpc_context_alloc( struct fastrpc_user *user, u32 kernel, u32 sc, struct fastrpc_invoke_args *args) { struct fastrpc_channel_ctx *cctx = user->cctx; struct fastrpc_invoke_ctx *ctx = NULL; + unsigned long flags; int ret; ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); @@ -336,7 +408,15 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( kfree(ctx); return ERR_PTR(-ENOMEM); } + ctx->olaps = kcalloc(ctx->nscalars, + sizeof(*ctx->olaps), GFP_KERNEL); + if (!ctx->olaps) { + kfree(ctx->maps); + kfree(ctx); + return ERR_PTR(-ENOMEM); + } ctx->args = args; + fastrpc_get_buff_overlaps(ctx); } ctx->sc = sc; @@ -345,20 +425,21 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc( ctx->tgid = user->tgid; ctx->cctx = cctx; init_completion(&ctx->work); + INIT_WORK(&ctx->put_work, fastrpc_context_put_wq); spin_lock(&user->lock); list_add_tail(&ctx->node, &user->pending); spin_unlock(&user->lock); - spin_lock(&cctx->lock); + spin_lock_irqsave(&cctx->lock, flags); ret = idr_alloc_cyclic(&cctx->ctx_idr, ctx, 1, FASTRPC_CTX_MAX, GFP_ATOMIC); if (ret < 0) { - spin_unlock(&cctx->lock); + spin_unlock_irqrestore(&cctx->lock, flags); goto err_idr; } ctx->ctxid = ret << 4; - spin_unlock(&cctx->lock); + spin_unlock_irqrestore(&cctx->lock, flags); kref_init(&ctx->refcount); @@ -368,6 +449,7 @@ err_idr: list_del(&ctx->node); spin_unlock(&user->lock); kfree(ctx->maps); + kfree(ctx->olaps); kfree(ctx); return ERR_PTR(ret); @@ -586,8 +668,11 @@ static u64 fastrpc_get_payload_size(struct fastrpc_invoke_ctx *ctx, int metalen) size = ALIGN(metalen, FASTRPC_ALIGN); for (i = 0; i < ctx->nscalars; i++) { if (ctx->args[i].fd == 0 || ctx->args[i].fd == -1) { - size = ALIGN(size, FASTRPC_ALIGN); - size += ctx->args[i].length; + + if (ctx->olaps[i].offset == 0) + size = ALIGN(size, FASTRPC_ALIGN); + + size += (ctx->olaps[i].mend - ctx->olaps[i].mstart); } } @@ -625,12 +710,12 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) struct fastrpc_remote_arg *rpra; struct fastrpc_invoke_buf *list; struct fastrpc_phy_page *pages; - int inbufs, i, err = 0; - u64 rlen, pkt_size; + int inbufs, i, oix, err = 0; + u64 len, rlen, pkt_size; + u64 pg_start, pg_end; uintptr_t args; int metalen; - inbufs = REMOTE_SCALARS_INBUFS(ctx->sc); metalen = fastrpc_get_meta_size(ctx); pkt_size = fastrpc_get_payload_size(ctx, metalen); @@ -653,8 +738,11 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) rlen = pkt_size - metalen; ctx->rpra = rpra; - for (i = 0; i < ctx->nbufs; ++i) { - u64 len = ctx->args[i].length; + for (oix = 0; oix < ctx->nbufs; ++oix) { + 
int mlen; + + i = ctx->olaps[oix].raix; + len = ctx->args[i].length; rpra[i].pv = 0; rpra[i].len = len; @@ -664,22 +752,45 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx) if (!len) continue; - pages[i].size = roundup(len, PAGE_SIZE); - if (ctx->maps[i]) { + struct vm_area_struct *vma = NULL; + rpra[i].pv = (u64) ctx->args[i].ptr; pages[i].addr = ctx->maps[i]->phys; + + vma = find_vma(current->mm, ctx->args[i].ptr); + if (vma) + pages[i].addr += ctx->args[i].ptr - + vma->vm_start; + + pg_start = (ctx->args[i].ptr & PAGE_MASK) >> PAGE_SHIFT; + pg_end = ((ctx->args[i].ptr + len - 1) & PAGE_MASK) >> + PAGE_SHIFT; + pages[i].size = (pg_end - pg_start + 1) * PAGE_SIZE; + } else { - rlen -= ALIGN(args, FASTRPC_ALIGN) - args; - args = ALIGN(args, FASTRPC_ALIGN); - if (rlen < len) + + if (ctx->olaps[oix].offset == 0) { + rlen -= ALIGN(args, FASTRPC_ALIGN) - args; + args = ALIGN(args, FASTRPC_ALIGN); + } + + mlen = ctx->olaps[oix].mend - ctx->olaps[oix].mstart; + + if (rlen < mlen) goto bail; - rpra[i].pv = args; - pages[i].addr = ctx->buf->phys + (pkt_size - rlen); + rpra[i].pv = args - ctx->olaps[oix].offset; + pages[i].addr = ctx->buf->phys - + ctx->olaps[oix].offset + + (pkt_size - rlen); pages[i].addr = pages[i].addr & PAGE_MASK; - args = args + len; - rlen -= len; + + pg_start = (args & PAGE_MASK) >> PAGE_SHIFT; + pg_end = ((args + len - 1) & PAGE_MASK) >> PAGE_SHIFT; + pages[i].size = (pg_end - pg_start + 1) * PAGE_SIZE; + args = args + mlen; + rlen -= mlen; } if (i < inbufs && !ctx->maps[i]) { @@ -782,6 +893,9 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, if (err) goto bail; } + + /* make sure that all CPU memory writes are seen by DSP */ + dma_wmb(); /* Send invoke buffer to remote dsp */ err = fastrpc_invoke_send(fl->sctx, ctx, kernel, handle); if (err) @@ -798,6 +912,8 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel, goto bail; if (ctx->nscalars) { + /* make sure that all memory writes by DSP are seen by CPU */ + dma_rmb(); /* populate all the output buffers with results */ err = fastrpc_put_args(ctx, kernel); if (err) @@ -843,12 +959,12 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, if (copy_from_user(&init, argp, sizeof(init))) { err = -EFAULT; - goto bail; + goto err; } if (init.filelen > INIT_FILELEN_MAX) { err = -EINVAL; - goto bail; + goto err; } inbuf.pgid = fl->tgid; @@ -862,17 +978,15 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, if (init.filelen && init.filefd) { err = fastrpc_map_create(fl, init.filefd, init.filelen, &map); if (err) - goto bail; + goto err; } memlen = ALIGN(max(INIT_FILELEN_MAX, (int)init.filelen * 4), 1024 * 1024); err = fastrpc_buf_alloc(fl, fl->sctx->dev, memlen, &imem); - if (err) { - fastrpc_map_put(map); - goto bail; - } + if (err) + goto err_alloc; fl->init_mem = imem; args[0].ptr = (u64)(uintptr_t)&inbuf; @@ -908,13 +1022,24 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl, err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc, args); + if (err) + goto err_invoke; - if (err) { + kfree(args); + + return 0; + +err_invoke: + fl->init_mem = NULL; + fastrpc_buf_free(imem); +err_alloc: + if (map) { + spin_lock(&fl->lock); + list_del(&map->node); + spin_unlock(&fl->lock); fastrpc_map_put(map); - fastrpc_buf_free(imem); } - -bail: +err: kfree(args); return err; @@ -924,9 +1049,10 @@ static struct fastrpc_session_ctx *fastrpc_session_alloc( struct fastrpc_channel_ctx *cctx) { struct fastrpc_session_ctx *session = 
NULL; + unsigned long flags; int i; - spin_lock(&cctx->lock); + spin_lock_irqsave(&cctx->lock, flags); for (i = 0; i < cctx->sesscount; i++) { if (!cctx->session[i].used && cctx->session[i].valid) { cctx->session[i].used = true; @@ -934,7 +1060,7 @@ static struct fastrpc_session_ctx *fastrpc_session_alloc( break; } } - spin_unlock(&cctx->lock); + spin_unlock_irqrestore(&cctx->lock, flags); return session; } @@ -942,9 +1068,11 @@ static struct fastrpc_session_ctx *fastrpc_session_alloc( static void fastrpc_session_free(struct fastrpc_channel_ctx *cctx, struct fastrpc_session_ctx *session) { - spin_lock(&cctx->lock); + unsigned long flags; + + spin_lock_irqsave(&cctx->lock, flags); session->used = false; - spin_unlock(&cctx->lock); + spin_unlock_irqrestore(&cctx->lock, flags); } static int fastrpc_release_current_dsp_process(struct fastrpc_user *fl) @@ -970,12 +1098,13 @@ static int fastrpc_device_release(struct inode *inode, struct file *file) struct fastrpc_channel_ctx *cctx = fl->cctx; struct fastrpc_invoke_ctx *ctx, *n; struct fastrpc_map *map, *m; + unsigned long flags; fastrpc_release_current_dsp_process(fl); - spin_lock(&cctx->lock); + spin_lock_irqsave(&cctx->lock, flags); list_del(&fl->user); - spin_unlock(&cctx->lock); + spin_unlock_irqrestore(&cctx->lock, flags); if (fl->init_mem) fastrpc_buf_free(fl->init_mem); @@ -1003,6 +1132,7 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp) { struct fastrpc_channel_ctx *cctx = miscdev_to_cctx(filp->private_data); struct fastrpc_user *fl = NULL; + unsigned long flags; fl = kzalloc(sizeof(*fl), GFP_KERNEL); if (!fl) @@ -1026,9 +1156,9 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp) return -EBUSY; } - spin_lock(&cctx->lock); + spin_lock_irqsave(&cctx->lock, flags); list_add_tail(&fl->user, &cctx->users); - spin_unlock(&cctx->lock); + spin_unlock_irqrestore(&cctx->lock, flags); return 0; } @@ -1184,6 +1314,7 @@ static int fastrpc_cb_probe(struct platform_device *pdev) struct fastrpc_session_ctx *sess; struct device *dev = &pdev->dev; int i, sessions = 0; + unsigned long flags; int rc; cctx = dev_get_drvdata(dev->parent); @@ -1192,7 +1323,7 @@ static int fastrpc_cb_probe(struct platform_device *pdev) of_property_read_u32(dev->of_node, "qcom,nsessions", &sessions); - spin_lock(&cctx->lock); + spin_lock_irqsave(&cctx->lock, flags); sess = &cctx->session[cctx->sesscount]; sess->used = false; sess->valid = true; @@ -1213,7 +1344,7 @@ static int fastrpc_cb_probe(struct platform_device *pdev) } } cctx->sesscount++; - spin_unlock(&cctx->lock); + spin_unlock_irqrestore(&cctx->lock, flags); rc = dma_set_mask(dev, DMA_BIT_MASK(32)); if (rc) { dev_err(dev, "32-bit DMA enable failed\n"); @@ -1227,16 +1358,17 @@ static int fastrpc_cb_remove(struct platform_device *pdev) { struct fastrpc_channel_ctx *cctx = dev_get_drvdata(pdev->dev.parent); struct fastrpc_session_ctx *sess = dev_get_drvdata(&pdev->dev); + unsigned long flags; int i; - spin_lock(&cctx->lock); + spin_lock_irqsave(&cctx->lock, flags); for (i = 1; i < FASTRPC_MAX_SESSIONS; i++) { if (cctx->session[i].sid == sess->sid) { cctx->session[i].valid = false; cctx->sesscount--; } } - spin_unlock(&cctx->lock); + spin_unlock_irqrestore(&cctx->lock, flags); return 0; } @@ -1318,11 +1450,12 @@ static void fastrpc_rpmsg_remove(struct rpmsg_device *rpdev) { struct fastrpc_channel_ctx *cctx = dev_get_drvdata(&rpdev->dev); struct fastrpc_user *user; + unsigned long flags; - spin_lock(&cctx->lock); + spin_lock_irqsave(&cctx->lock, flags); list_for_each_entry(user, 
&cctx->users, user) fastrpc_notify_users(user); - spin_unlock(&cctx->lock); + spin_unlock_irqrestore(&cctx->lock, flags); misc_deregister(&cctx->miscdev); of_platform_depopulate(&rpdev->dev); @@ -1354,7 +1487,13 @@ static int fastrpc_rpmsg_callback(struct rpmsg_device *rpdev, void *data, ctx->retval = rsp->retval; complete(&ctx->work); - fastrpc_context_put(ctx); + + /* + * The DMA buffer associated with the context cannot be freed in + * interrupt context so schedule it through a worker thread to + * avoid a kernel BUG. + */ + schedule_work(&ctx->put_work); return 0; } diff --git a/drivers/misc/genwqe/card_debugfs.c b/drivers/misc/genwqe/card_debugfs.c index 7c713e01d198..6f7e39f07811 100644 --- a/drivers/misc/genwqe/card_debugfs.c +++ b/drivers/misc/genwqe/card_debugfs.c @@ -227,7 +227,7 @@ static int ddcb_info_show(struct seq_file *s, void *unused) seq_puts(s, "DDCB QUEUE:\n"); seq_printf(s, " ddcb_max: %d\n" " ddcb_daddr: %016llx - %016llx\n" - " ddcb_vaddr: %016llx\n" + " ddcb_vaddr: %p\n" " ddcbs_in_flight: %u\n" " ddcbs_max_in_flight: %u\n" " ddcbs_completed: %u\n" @@ -237,7 +237,7 @@ static int ddcb_info_show(struct seq_file *s, void *unused) queue->ddcb_max, (long long)queue->ddcb_daddr, (long long)queue->ddcb_daddr + (queue->ddcb_max * DDCB_LENGTH), - (long long)queue->ddcb_vaddr, queue->ddcbs_in_flight, + queue->ddcb_vaddr, queue->ddcbs_in_flight, queue->ddcbs_max_in_flight, queue->ddcbs_completed, queue->return_on_busy, queue->wait_on_busy, cd->irqs_processed); diff --git a/drivers/misc/habanalabs/Makefile b/drivers/misc/habanalabs/Makefile index c6592db59b25..f8e85243d672 100644 --- a/drivers/misc/habanalabs/Makefile +++ b/drivers/misc/habanalabs/Makefile @@ -6,7 +6,7 @@ obj-m := habanalabs.o habanalabs-y := habanalabs_drv.o device.o context.o asid.o habanalabs_ioctl.o \ command_buffer.o hw_queue.o irq.o sysfs.o hwmon.o memory.o \ - command_submission.o mmu.o + command_submission.o mmu.o firmware_if.o pci.o habanalabs-$(CONFIG_DEBUG_FS) += debugfs.o diff --git a/drivers/misc/habanalabs/command_buffer.c b/drivers/misc/habanalabs/command_buffer.c index 85f75806a9a7..e495f44064fa 100644 --- a/drivers/misc/habanalabs/command_buffer.c +++ b/drivers/misc/habanalabs/command_buffer.c @@ -13,7 +13,7 @@ static void cb_fini(struct hl_device *hdev, struct hl_cb *cb) { - hdev->asic_funcs->dma_free_coherent(hdev, cb->size, + hdev->asic_funcs->asic_dma_free_coherent(hdev, cb->size, (void *) (uintptr_t) cb->kernel_address, cb->bus_address); kfree(cb); @@ -66,10 +66,10 @@ static struct hl_cb *hl_cb_alloc(struct hl_device *hdev, u32 cb_size, return NULL; if (ctx_id == HL_KERNEL_ASID_ID) - p = hdev->asic_funcs->dma_alloc_coherent(hdev, cb_size, + p = hdev->asic_funcs->asic_dma_alloc_coherent(hdev, cb_size, &cb->bus_address, GFP_ATOMIC); else - p = hdev->asic_funcs->dma_alloc_coherent(hdev, cb_size, + p = hdev->asic_funcs->asic_dma_alloc_coherent(hdev, cb_size, &cb->bus_address, GFP_USER | __GFP_ZERO); if (!p) { @@ -214,6 +214,13 @@ int hl_cb_ioctl(struct hl_fpriv *hpriv, void *data) u64 handle; int rc; + if (hl_device_disabled_or_in_reset(hdev)) { + dev_warn_ratelimited(hdev->dev, + "Device is %s. Can't execute CB IOCTL\n", + atomic_read(&hdev->in_reset) ? 
"in_reset" : "disabled"); + return -EBUSY; + } + switch (args->in.op) { case HL_CB_OP_CREATE: rc = hl_cb_create(hdev, &hpriv->cb_mgr, args->in.cb_size, diff --git a/drivers/misc/habanalabs/command_submission.c b/drivers/misc/habanalabs/command_submission.c index 19c84214a7ea..6fe785e26859 100644 --- a/drivers/misc/habanalabs/command_submission.c +++ b/drivers/misc/habanalabs/command_submission.c @@ -93,7 +93,6 @@ static int cs_parser(struct hl_fpriv *hpriv, struct hl_cs_job *job) parser.user_cb_size = job->user_cb_size; parser.ext_queue = job->ext_queue; job->patched_cb = NULL; - parser.use_virt_addr = hdev->mmu_enable; rc = hdev->asic_funcs->cs_parser(hdev, &parser); if (job->ext_queue) { @@ -261,7 +260,8 @@ static void cs_timedout(struct work_struct *work) ctx_asid = cs->ctx->asid; /* TODO: add information about last signaled seq and last emitted seq */ - dev_err(hdev->dev, "CS %d.%llu got stuck!\n", ctx_asid, cs->sequence); + dev_err(hdev->dev, "User %d command submission %llu got stuck!\n", + ctx_asid, cs->sequence); cs_put(cs); @@ -600,20 +600,20 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data) void __user *chunks; u32 num_chunks; u64 cs_seq = ULONG_MAX; - int rc, do_restore; + int rc, do_ctx_switch; bool need_soft_reset = false; if (hl_device_disabled_or_in_reset(hdev)) { - dev_warn(hdev->dev, + dev_warn_ratelimited(hdev->dev, "Device is %s. Can't submit new CS\n", atomic_read(&hdev->in_reset) ? "in_reset" : "disabled"); rc = -EBUSY; goto out; } - do_restore = atomic_cmpxchg(&ctx->thread_restore_token, 1, 0); + do_ctx_switch = atomic_cmpxchg(&ctx->thread_ctx_switch_token, 1, 0); - if (do_restore || (args->in.cs_flags & HL_CS_FLAGS_FORCE_RESTORE)) { + if (do_ctx_switch || (args->in.cs_flags & HL_CS_FLAGS_FORCE_RESTORE)) { long ret; chunks = (void __user *)(uintptr_t)args->in.chunks_restore; @@ -621,7 +621,7 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data) mutex_lock(&hpriv->restore_phase_mutex); - if (do_restore) { + if (do_ctx_switch) { rc = hdev->asic_funcs->context_switch(hdev, ctx->asid); if (rc) { dev_err_ratelimited(hdev->dev, @@ -677,18 +677,18 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data) } } - ctx->thread_restore_wait_token = 1; - } else if (!ctx->thread_restore_wait_token) { + ctx->thread_ctx_switch_wait_token = 1; + } else if (!ctx->thread_ctx_switch_wait_token) { u32 tmp; rc = hl_poll_timeout_memory(hdev, - (u64) (uintptr_t) &ctx->thread_restore_wait_token, + (u64) (uintptr_t) &ctx->thread_ctx_switch_wait_token, jiffies_to_usecs(hdev->timeout_jiffies), &tmp); if (rc || !tmp) { dev_err(hdev->dev, - "restore phase hasn't finished in time\n"); + "context switch phase didn't finish in time\n"); rc = -ETIMEDOUT; goto out; } diff --git a/drivers/misc/habanalabs/context.c b/drivers/misc/habanalabs/context.c index 619ace1c4ef7..4804cdcf4c48 100644 --- a/drivers/misc/habanalabs/context.c +++ b/drivers/misc/habanalabs/context.c @@ -106,8 +106,8 @@ int hl_ctx_init(struct hl_device *hdev, struct hl_ctx *ctx, bool is_kernel_ctx) ctx->cs_sequence = 1; spin_lock_init(&ctx->cs_lock); - atomic_set(&ctx->thread_restore_token, 1); - ctx->thread_restore_wait_token = 0; + atomic_set(&ctx->thread_ctx_switch_token, 1); + ctx->thread_ctx_switch_wait_token = 0; if (is_kernel_ctx) { ctx->asid = HL_KERNEL_ASID_ID; /* KMD gets ASID 0 */ diff --git a/drivers/misc/habanalabs/debugfs.c b/drivers/misc/habanalabs/debugfs.c index 974a87789bd8..a4447699ff4e 100644 --- a/drivers/misc/habanalabs/debugfs.c +++ b/drivers/misc/habanalabs/debugfs.c @@ -505,22 +505,97 @@ err: return -EINVAL; } 
+static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr, + u64 *phys_addr) +{ + struct hl_ctx *ctx = hdev->user_ctx; + u64 hop_addr, hop_pte_addr, hop_pte; + int rc = 0; + + if (!ctx) { + dev_err(hdev->dev, "no ctx available\n"); + return -EINVAL; + } + + mutex_lock(&ctx->mmu_lock); + + /* hop 0 */ + hop_addr = get_hop0_addr(ctx); + hop_pte_addr = get_hop0_pte_addr(ctx, hop_addr, virt_addr); + hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr); + + /* hop 1 */ + hop_addr = get_next_hop_addr(hop_pte); + if (hop_addr == ULLONG_MAX) + goto not_mapped; + hop_pte_addr = get_hop1_pte_addr(ctx, hop_addr, virt_addr); + hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr); + + /* hop 2 */ + hop_addr = get_next_hop_addr(hop_pte); + if (hop_addr == ULLONG_MAX) + goto not_mapped; + hop_pte_addr = get_hop2_pte_addr(ctx, hop_addr, virt_addr); + hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr); + + /* hop 3 */ + hop_addr = get_next_hop_addr(hop_pte); + if (hop_addr == ULLONG_MAX) + goto not_mapped; + hop_pte_addr = get_hop3_pte_addr(ctx, hop_addr, virt_addr); + hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr); + + if (!(hop_pte & LAST_MASK)) { + /* hop 4 */ + hop_addr = get_next_hop_addr(hop_pte); + if (hop_addr == ULLONG_MAX) + goto not_mapped; + hop_pte_addr = get_hop4_pte_addr(ctx, hop_addr, virt_addr); + hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr); + } + + if (!(hop_pte & PAGE_PRESENT_MASK)) + goto not_mapped; + + *phys_addr = (hop_pte & PTE_PHYS_ADDR_MASK) | (virt_addr & OFFSET_MASK); + + goto out; + +not_mapped: + dev_err(hdev->dev, "virt addr 0x%llx is not mapped to phys addr\n", + virt_addr); + rc = -EINVAL; +out: + mutex_unlock(&ctx->mmu_lock); + return rc; +} + static ssize_t hl_data_read32(struct file *f, char __user *buf, size_t count, loff_t *ppos) { struct hl_dbg_device_entry *entry = file_inode(f)->i_private; struct hl_device *hdev = entry->hdev; + struct asic_fixed_properties *prop = &hdev->asic_prop; char tmp_buf[32]; + u64 addr = entry->addr; u32 val; ssize_t rc; if (*ppos) return 0; - rc = hdev->asic_funcs->debugfs_read32(hdev, entry->addr, &val); + if (addr >= prop->va_space_dram_start_address && + addr < prop->va_space_dram_end_address && + hdev->mmu_enable && + hdev->dram_supports_virtual_memory) { + rc = device_va_to_pa(hdev, entry->addr, &addr); + if (rc) + return rc; + } + + rc = hdev->asic_funcs->debugfs_read32(hdev, addr, &val); if (rc) { - dev_err(hdev->dev, "Failed to read from 0x%010llx\n", - entry->addr); + dev_err(hdev->dev, "Failed to read from 0x%010llx\n", addr); return rc; } @@ -536,6 +611,8 @@ static ssize_t hl_data_write32(struct file *f, const char __user *buf, { struct hl_dbg_device_entry *entry = file_inode(f)->i_private; struct hl_device *hdev = entry->hdev; + struct asic_fixed_properties *prop = &hdev->asic_prop; + u64 addr = entry->addr; u32 value; ssize_t rc; @@ -543,10 +620,19 @@ static ssize_t hl_data_write32(struct file *f, const char __user *buf, if (rc) return rc; - rc = hdev->asic_funcs->debugfs_write32(hdev, entry->addr, value); + if (addr >= prop->va_space_dram_start_address && + addr < prop->va_space_dram_end_address && + hdev->mmu_enable && + hdev->dram_supports_virtual_memory) { + rc = device_va_to_pa(hdev, entry->addr, &addr); + if (rc) + return rc; + } + + rc = hdev->asic_funcs->debugfs_write32(hdev, addr, value); if (rc) { dev_err(hdev->dev, "Failed to write 0x%08x to 0x%010llx\n", - value, entry->addr); + value, addr); return rc; } diff --git a/drivers/misc/habanalabs/device.c 
b/drivers/misc/habanalabs/device.c index 77d51be66c7e..91a9e47a3482 100644 --- a/drivers/misc/habanalabs/device.c +++ b/drivers/misc/habanalabs/device.c @@ -5,11 +5,14 @@ * All Rights Reserved. */ +#define pr_fmt(fmt) "habanalabs: " fmt + #include "habanalabs.h" #include <linux/pci.h> #include <linux/sched/signal.h> #include <linux/hwmon.h> +#include <uapi/misc/habanalabs.h> #define HL_PLDM_PENDING_RESET_PER_SEC (HL_PENDING_RESET_PER_SEC * 10) @@ -21,6 +24,20 @@ bool hl_device_disabled_or_in_reset(struct hl_device *hdev) return false; } +enum hl_device_status hl_device_status(struct hl_device *hdev) +{ + enum hl_device_status status; + + if (hdev->disabled) + status = HL_DEVICE_STATUS_MALFUNCTION; + else if (atomic_read(&hdev->in_reset)) + status = HL_DEVICE_STATUS_IN_RESET; + else + status = HL_DEVICE_STATUS_OPERATIONAL; + + return status; +}; + static void hpriv_release(struct kref *ref) { struct hl_fpriv *hpriv; @@ -498,11 +515,8 @@ disable_device: return rc; } -static void hl_device_hard_reset_pending(struct work_struct *work) +static void device_kill_open_processes(struct hl_device *hdev) { - struct hl_device_reset_work *device_reset_work = - container_of(work, struct hl_device_reset_work, reset_work); - struct hl_device *hdev = device_reset_work->hdev; u16 pending_total, pending_cnt; struct task_struct *task = NULL; @@ -537,6 +551,12 @@ static void hl_device_hard_reset_pending(struct work_struct *work) } } + /* We killed the open users, but because the driver cleans up after the + * user contexts are closed (e.g. mmu mappings), we need to wait again + * to make sure the cleaning phase is finished before continuing with + * the reset + */ + pending_cnt = pending_total; while ((atomic_read(&hdev->fd_open_cnt)) && (pending_cnt)) { @@ -552,6 +572,16 @@ static void hl_device_hard_reset_pending(struct work_struct *work) mutex_unlock(&hdev->fd_open_cnt_lock); +} + +static void device_hard_reset_pending(struct work_struct *work) +{ + struct hl_device_reset_work *device_reset_work = + container_of(work, struct hl_device_reset_work, reset_work); + struct hl_device *hdev = device_reset_work->hdev; + + device_kill_open_processes(hdev); + hl_device_reset(hdev, true, true); kfree(device_reset_work); @@ -613,6 +643,8 @@ again: if ((hard_reset) && (!from_hard_reset_thread)) { struct hl_device_reset_work *device_reset_work; + hdev->hard_reset_pending = true; + if (!hdev->pdev) { dev_err(hdev->dev, "Reset action is NOT supported in simulator\n"); @@ -620,8 +652,6 @@ again: goto out_err; } - hdev->hard_reset_pending = true; - device_reset_work = kzalloc(sizeof(*device_reset_work), GFP_ATOMIC); if (!device_reset_work) { @@ -635,7 +665,7 @@ again: * from a dedicated work */ INIT_WORK(&device_reset_work->reset_work, - hl_device_hard_reset_pending); + device_hard_reset_pending); device_reset_work->hdev = hdev; schedule_work(&device_reset_work->reset_work); @@ -663,17 +693,9 @@ again: /* Go over all the queues, release all CS and their jobs */ hl_cs_rollback_all(hdev); - if (hard_reset) { - /* Release kernel context */ - if (hl_ctx_put(hdev->kernel_ctx) != 1) { - dev_err(hdev->dev, - "kernel ctx is alive during hard reset\n"); - rc = -EBUSY; - goto out_err; - } - + /* Release kernel context */ + if ((hard_reset) && (hl_ctx_put(hdev->kernel_ctx) == 1)) hdev->kernel_ctx = NULL; - } /* Reset the H/W. 
It will be in idle state after this returns */ hdev->asic_funcs->hw_fini(hdev, hard_reset); @@ -688,16 +710,24 @@ again: for (i = 0 ; i < hdev->asic_prop.completion_queues_count ; i++) hl_cq_reset(hdev, &hdev->completion_queue[i]); - /* Make sure the setup phase for the user context will run again */ + /* Make sure the context switch phase will run again */ if (hdev->user_ctx) { - atomic_set(&hdev->user_ctx->thread_restore_token, 1); - hdev->user_ctx->thread_restore_wait_token = 0; + atomic_set(&hdev->user_ctx->thread_ctx_switch_token, 1); + hdev->user_ctx->thread_ctx_switch_wait_token = 0; } /* Finished tear-down, starting to re-initialize */ if (hard_reset) { hdev->device_cpu_disabled = false; + hdev->hard_reset_pending = false; + + if (hdev->kernel_ctx) { + dev_crit(hdev->dev, + "kernel ctx was alive during hard reset, something is terribly wrong\n"); + rc = -EBUSY; + goto out_err; + } /* Allocate the kernel context */ hdev->kernel_ctx = kzalloc(sizeof(*hdev->kernel_ctx), @@ -752,8 +782,6 @@ again: } hl_set_max_power(hdev, hdev->max_power); - - hdev->hard_reset_pending = false; } else { rc = hdev->asic_funcs->soft_reset_late_init(hdev); if (rc) { @@ -1030,11 +1058,22 @@ void hl_device_fini(struct hl_device *hdev) WARN(1, "Failed to remove device because reset function did not finish\n"); return; } - }; + } /* Mark device as disabled */ hdev->disabled = true; + /* + * Flush anyone that is inside the critical section of enqueue + * jobs to the H/W + */ + hdev->asic_funcs->hw_queues_lock(hdev); + hdev->asic_funcs->hw_queues_unlock(hdev); + + hdev->hard_reset_pending = true; + + device_kill_open_processes(hdev); + hl_hwmon_fini(hdev); device_late_fini(hdev); @@ -1108,7 +1147,13 @@ int hl_poll_timeout_memory(struct hl_device *hdev, u64 addr, * either by the direct access of the device or by another core */ u32 *paddr = (u32 *) (uintptr_t) addr; - ktime_t timeout = ktime_add_us(ktime_get(), timeout_us); + ktime_t timeout; + + /* timeout should be longer when working with simulator */ + if (!hdev->pdev) + timeout_us *= 10; + + timeout = ktime_add_us(ktime_get(), timeout_us); might_sleep(); diff --git a/drivers/misc/habanalabs/firmware_if.c b/drivers/misc/habanalabs/firmware_if.c new file mode 100644 index 000000000000..eda5d7fcb79f --- /dev/null +++ b/drivers/misc/habanalabs/firmware_if.c @@ -0,0 +1,322 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright 2016-2019 HabanaLabs, Ltd. + * All Rights Reserved. + */ + +#include "habanalabs.h" + +#include <linux/firmware.h> +#include <linux/genalloc.h> +#include <linux/io-64-nonatomic-lo-hi.h> + +/** + * hl_fw_push_fw_to_device() - Push FW code to device. + * @hdev: pointer to hl_device structure. + * + * Copy fw code from firmware file to device memory. + * + * Return: 0 on success, non-zero for failure. 
+ */ +int hl_fw_push_fw_to_device(struct hl_device *hdev, const char *fw_name, + void __iomem *dst) +{ + const struct firmware *fw; + const u64 *fw_data; + size_t fw_size, i; + int rc; + + rc = request_firmware(&fw, fw_name, hdev->dev); + if (rc) { + dev_err(hdev->dev, "Failed to request %s\n", fw_name); + goto out; + } + + fw_size = fw->size; + if ((fw_size % 4) != 0) { + dev_err(hdev->dev, "illegal %s firmware size %zu\n", + fw_name, fw_size); + rc = -EINVAL; + goto out; + } + + dev_dbg(hdev->dev, "%s firmware size == %zu\n", fw_name, fw_size); + + fw_data = (const u64 *) fw->data; + + if ((fw->size % 8) != 0) + fw_size -= 8; + + for (i = 0 ; i < fw_size ; i += 8, fw_data++, dst += 8) { + if (!(i & (0x80000 - 1))) { + dev_dbg(hdev->dev, + "copied so far %zu out of %zu for %s firmware", + i, fw_size, fw_name); + usleep_range(20, 100); + } + + writeq(*fw_data, dst); + } + + if ((fw->size % 8) != 0) + writel(*(const u32 *) fw_data, dst); + +out: + release_firmware(fw); + return rc; +} + +int hl_fw_send_pci_access_msg(struct hl_device *hdev, u32 opcode) +{ + struct armcp_packet pkt = {}; + + pkt.ctl = cpu_to_le32(opcode << ARMCP_PKT_CTL_OPCODE_SHIFT); + + return hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, + sizeof(pkt), HL_DEVICE_TIMEOUT_USEC, NULL); +} + +int hl_fw_send_cpu_message(struct hl_device *hdev, u32 hw_queue_id, u32 *msg, + u16 len, u32 timeout, long *result) +{ + struct armcp_packet *pkt; + dma_addr_t pkt_dma_addr; + u32 tmp; + int rc = 0; + + if (len > HL_CPU_CB_SIZE) { + dev_err(hdev->dev, "Invalid CPU message size of %d bytes\n", + len); + return -ENOMEM; + } + + pkt = hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, len, + &pkt_dma_addr); + if (!pkt) { + dev_err(hdev->dev, + "Failed to allocate DMA memory for packet to CPU\n"); + return -ENOMEM; + } + + memcpy(pkt, msg, len); + + mutex_lock(&hdev->send_cpu_message_lock); + + if (hdev->disabled) + goto out; + + if (hdev->device_cpu_disabled) { + rc = -EIO; + goto out; + } + + rc = hl_hw_queue_send_cb_no_cmpl(hdev, hw_queue_id, len, pkt_dma_addr); + if (rc) { + dev_err(hdev->dev, "Failed to send CB on CPU PQ (%d)\n", rc); + goto out; + } + + rc = hl_poll_timeout_memory(hdev, (u64) (uintptr_t) &pkt->fence, + timeout, &tmp); + + hl_hw_queue_inc_ci_kernel(hdev, hw_queue_id); + + if (rc == -ETIMEDOUT) { + dev_err(hdev->dev, "Timeout while waiting for device CPU\n"); + hdev->device_cpu_disabled = true; + goto out; + } + + if (tmp == ARMCP_PACKET_FENCE_VAL) { + u32 ctl = le32_to_cpu(pkt->ctl); + + rc = (ctl & ARMCP_PKT_CTL_RC_MASK) >> ARMCP_PKT_CTL_RC_SHIFT; + if (rc) { + dev_err(hdev->dev, + "F/W ERROR %d for CPU packet %d\n", + rc, (ctl & ARMCP_PKT_CTL_OPCODE_MASK) + >> ARMCP_PKT_CTL_OPCODE_SHIFT); + rc = -EINVAL; + } else if (result) { + *result = (long) le64_to_cpu(pkt->result); + } + } else { + dev_err(hdev->dev, "CPU packet wrong fence value\n"); + rc = -EINVAL; + } + +out: + mutex_unlock(&hdev->send_cpu_message_lock); + + hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, len, pkt); + + return rc; +} + +int hl_fw_test_cpu_queue(struct hl_device *hdev) +{ + struct armcp_packet test_pkt = {}; + long result; + int rc; + + test_pkt.ctl = cpu_to_le32(ARMCP_PACKET_TEST << + ARMCP_PKT_CTL_OPCODE_SHIFT); + test_pkt.value = cpu_to_le64(ARMCP_PACKET_FENCE_VAL); + + rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &test_pkt, + sizeof(test_pkt), HL_DEVICE_TIMEOUT_USEC, &result); + + if (!rc) { + if (result == ARMCP_PACKET_FENCE_VAL) + dev_info(hdev->dev, + "queue test on CPU queue succeeded\n"); + else + 
dev_err(hdev->dev, + "CPU queue test failed (0x%08lX)\n", result); + } else { + dev_err(hdev->dev, "CPU queue test failed, error %d\n", rc); + } + + return rc; +} + +void *hl_fw_cpu_accessible_dma_pool_alloc(struct hl_device *hdev, size_t size, + dma_addr_t *dma_handle) +{ + u64 kernel_addr; + + /* roundup to HL_CPU_PKT_SIZE */ + size = (size + (HL_CPU_PKT_SIZE - 1)) & HL_CPU_PKT_MASK; + + kernel_addr = gen_pool_alloc(hdev->cpu_accessible_dma_pool, size); + + *dma_handle = hdev->cpu_accessible_dma_address + + (kernel_addr - (u64) (uintptr_t) hdev->cpu_accessible_dma_mem); + + return (void *) (uintptr_t) kernel_addr; +} + +void hl_fw_cpu_accessible_dma_pool_free(struct hl_device *hdev, size_t size, + void *vaddr) +{ + /* roundup to HL_CPU_PKT_SIZE */ + size = (size + (HL_CPU_PKT_SIZE - 1)) & HL_CPU_PKT_MASK; + + gen_pool_free(hdev->cpu_accessible_dma_pool, (u64) (uintptr_t) vaddr, + size); +} + +int hl_fw_send_heartbeat(struct hl_device *hdev) +{ + struct armcp_packet hb_pkt = {}; + long result; + int rc; + + hb_pkt.ctl = cpu_to_le32(ARMCP_PACKET_TEST << + ARMCP_PKT_CTL_OPCODE_SHIFT); + hb_pkt.value = cpu_to_le64(ARMCP_PACKET_FENCE_VAL); + + rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &hb_pkt, + sizeof(hb_pkt), HL_DEVICE_TIMEOUT_USEC, &result); + + if ((rc) || (result != ARMCP_PACKET_FENCE_VAL)) + rc = -EIO; + + return rc; +} + +int hl_fw_armcp_info_get(struct hl_device *hdev) +{ + struct asic_fixed_properties *prop = &hdev->asic_prop; + struct armcp_packet pkt = {}; + void *armcp_info_cpu_addr; + dma_addr_t armcp_info_dma_addr; + long result; + int rc; + + armcp_info_cpu_addr = + hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, + sizeof(struct armcp_info), + &armcp_info_dma_addr); + if (!armcp_info_cpu_addr) { + dev_err(hdev->dev, + "Failed to allocate DMA memory for ArmCP info packet\n"); + return -ENOMEM; + } + + memset(armcp_info_cpu_addr, 0, sizeof(struct armcp_info)); + + pkt.ctl = cpu_to_le32(ARMCP_PACKET_INFO_GET << + ARMCP_PKT_CTL_OPCODE_SHIFT); + pkt.addr = cpu_to_le64(armcp_info_dma_addr); + pkt.data_max_size = cpu_to_le32(sizeof(struct armcp_info)); + + rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), + HL_ARMCP_INFO_TIMEOUT_USEC, &result); + if (rc) { + dev_err(hdev->dev, + "Failed to send armcp info pkt, error %d\n", rc); + goto out; + } + + memcpy(&prop->armcp_info, armcp_info_cpu_addr, + sizeof(prop->armcp_info)); + + rc = hl_build_hwmon_channel_info(hdev, prop->armcp_info.sensors); + if (rc) { + dev_err(hdev->dev, + "Failed to build hwmon channel info, error %d\n", rc); + rc = -EFAULT; + goto out; + } + +out: + hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, + sizeof(struct armcp_info), armcp_info_cpu_addr); + + return rc; +} + +int hl_fw_get_eeprom_data(struct hl_device *hdev, void *data, size_t max_size) +{ + struct armcp_packet pkt = {}; + void *eeprom_info_cpu_addr; + dma_addr_t eeprom_info_dma_addr; + long result; + int rc; + + eeprom_info_cpu_addr = + hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, + max_size, &eeprom_info_dma_addr); + if (!eeprom_info_cpu_addr) { + dev_err(hdev->dev, + "Failed to allocate DMA memory for EEPROM info packet\n"); + return -ENOMEM; + } + + memset(eeprom_info_cpu_addr, 0, max_size); + + pkt.ctl = cpu_to_le32(ARMCP_PACKET_EEPROM_DATA_GET << + ARMCP_PKT_CTL_OPCODE_SHIFT); + pkt.addr = cpu_to_le64(eeprom_info_dma_addr); + pkt.data_max_size = cpu_to_le32(max_size); + + rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), + HL_ARMCP_EEPROM_TIMEOUT_USEC, &result); + + if (rc) 
{ + dev_err(hdev->dev, + "Failed to send armcp EEPROM pkt, error %d\n", rc); + goto out; + } + + /* result contains the actual size */ + memcpy(data, eeprom_info_cpu_addr, min((size_t)result, max_size)); + +out: + hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, max_size, + eeprom_info_cpu_addr); + + return rc; +} diff --git a/drivers/misc/habanalabs/goya/Makefile b/drivers/misc/habanalabs/goya/Makefile index e458e5ba500b..131432f677e2 100644 --- a/drivers/misc/habanalabs/goya/Makefile +++ b/drivers/misc/habanalabs/goya/Makefile @@ -1,3 +1,4 @@ subdir-ccflags-y += -I$(src) -HL_GOYA_FILES := goya/goya.o goya/goya_security.o goya/goya_hwmgr.o +HL_GOYA_FILES := goya/goya.o goya/goya_security.o goya/goya_hwmgr.o \ + goya/goya_coresight.o diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c index 3c509e19d69d..a582e29c1ee4 100644 --- a/drivers/misc/habanalabs/goya/goya.c +++ b/drivers/misc/habanalabs/goya/goya.c @@ -12,10 +12,8 @@ #include <linux/pci.h> #include <linux/genalloc.h> -#include <linux/firmware.h> #include <linux/hwmon.h> #include <linux/io-64-nonatomic-lo-hi.h> -#include <linux/io-64-nonatomic-hi-lo.h> /* * GOYA security scheme: @@ -71,7 +69,7 @@ * */ -#define GOYA_MMU_REGS_NUM 61 +#define GOYA_MMU_REGS_NUM 63 #define GOYA_DMA_POOL_BLK_SIZE 0x100 /* 256 bytes */ @@ -80,15 +78,12 @@ #define GOYA_RESET_WAIT_MSEC 1 /* 1ms */ #define GOYA_CPU_RESET_WAIT_MSEC 100 /* 100ms */ #define GOYA_PLDM_RESET_WAIT_MSEC 1000 /* 1s */ -#define GOYA_CPU_TIMEOUT_USEC 10000000 /* 10s */ #define GOYA_TEST_QUEUE_WAIT_USEC 100000 /* 100ms */ #define GOYA_PLDM_MMU_TIMEOUT_USEC (MMU_CONFIG_TIMEOUT_USEC * 100) #define GOYA_PLDM_QMAN0_TIMEOUT_USEC (HL_DEVICE_TIMEOUT_USEC * 30) #define GOYA_QMAN0_FENCE_VAL 0xD169B243 -#define GOYA_MAX_INITIATORS 20 - #define GOYA_MAX_STRING_LEN 20 #define GOYA_CB_POOL_CB_CNT 512 @@ -173,12 +168,12 @@ static u64 goya_mmu_regs[GOYA_MMU_REGS_NUM] = { mmMME_SBA_CONTROL_DATA, mmMME_SBB_CONTROL_DATA, mmMME_SBC_CONTROL_DATA, - mmMME_WBC_CONTROL_DATA + mmMME_WBC_CONTROL_DATA, + mmPCIE_WRAP_PSOC_ARUSER, + mmPCIE_WRAP_PSOC_AWUSER }; -#define GOYA_ASYC_EVENT_GROUP_NON_FATAL_SIZE 121 - -static u32 goya_non_fatal_events[GOYA_ASYC_EVENT_GROUP_NON_FATAL_SIZE] = { +static u32 goya_all_events[] = { GOYA_ASYNC_EVENT_ID_PCIE_IF, GOYA_ASYNC_EVENT_ID_TPC0_ECC, GOYA_ASYNC_EVENT_ID_TPC1_ECC, @@ -302,14 +297,7 @@ static u32 goya_non_fatal_events[GOYA_ASYC_EVENT_GROUP_NON_FATAL_SIZE] = { GOYA_ASYNC_EVENT_ID_DMA_BM_CH4 }; -static int goya_armcp_info_get(struct hl_device *hdev); -static void goya_mmu_prepare(struct hl_device *hdev, u32 asid); -static int goya_mmu_clear_pgt_range(struct hl_device *hdev); -static int goya_mmu_set_dram_default_page(struct hl_device *hdev); -static int goya_mmu_update_asid_hop0_addr(struct hl_device *hdev, u32 asid, - u64 phys_addr); - -static void goya_get_fixed_properties(struct hl_device *hdev) +void goya_get_fixed_properties(struct hl_device *hdev) { struct asic_fixed_properties *prop = &hdev->asic_prop; int i; @@ -357,7 +345,6 @@ static void goya_get_fixed_properties(struct hl_device *hdev) prop->mmu_hop0_tables_total_size = HOP0_TABLES_TOTAL_SIZE; prop->dram_page_size = PAGE_SIZE_2MB; - prop->host_phys_base_address = HOST_PHYS_BASE; prop->va_space_host_start_address = VA_HOST_SPACE_START; prop->va_space_host_end_address = VA_HOST_SPACE_END; prop->va_space_dram_start_address = VA_DDR_SPACE_START; @@ -367,24 +354,13 @@ static void goya_get_fixed_properties(struct hl_device *hdev) prop->cfg_size = CFG_SIZE; prop->max_asid = MAX_ASID; 
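/*
 * Aside: hl_fw_cpu_accessible_dma_pool_alloc()/free() earlier in this
 * patch round every request up to a multiple of HL_CPU_PKT_SIZE with
 * "(size + (SZ - 1)) & MASK". A minimal sketch of that idiom, assuming
 * the packet size is a power of two and that HL_CPU_PKT_MASK is simply
 * ~(HL_CPU_PKT_SIZE - 1):
 */
#include <stdio.h>
#include <stddef.h>

#define PKT_SIZE 64				/* assumed stand-in for HL_CPU_PKT_SIZE */
#define PKT_MASK (~((size_t)PKT_SIZE - 1))	/* assumed HL_CPU_PKT_MASK */

static size_t roundup_pkt(size_t size)
{
	/* adding SZ-1 then masking clears the low bits; never rounds down */
	return (size + (PKT_SIZE - 1)) & PKT_MASK;
}

int main(void)
{
	printf("%zu -> %zu\n", (size_t)1,  roundup_pkt(1));   /* 1  -> 64  */
	printf("%zu -> %zu\n", (size_t)64, roundup_pkt(64));  /* 64 -> 64  */
	printf("%zu -> %zu\n", (size_t)65, roundup_pkt(65));  /* 65 -> 128 */
	return 0;
}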
prop->num_of_events = GOYA_ASYNC_EVENT_ID_SIZE; + prop->high_pll = PLL_HIGH_DEFAULT; prop->cb_pool_cb_cnt = GOYA_CB_POOL_CB_CNT; prop->cb_pool_cb_size = GOYA_CB_POOL_CB_SIZE; prop->max_power_default = MAX_POWER_DEFAULT; prop->tpc_enabled_mask = TPC_ENABLED_MASK; - - prop->high_pll = PLL_HIGH_DEFAULT; -} - -int goya_send_pci_access_msg(struct hl_device *hdev, u32 opcode) -{ - struct armcp_packet pkt; - - memset(&pkt, 0, sizeof(pkt)); - - pkt.ctl = cpu_to_le32(opcode << ARMCP_PKT_CTL_OPCODE_SHIFT); - - return hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, - sizeof(pkt), HL_DEVICE_TIMEOUT_USEC, NULL); + prop->pcie_dbi_base_address = mmPCIE_DBI_BASE; + prop->pcie_aux_dbi_reg_addr = CFG_BASE + mmPCIE_AUX_DBI; } /* @@ -398,199 +374,40 @@ int goya_send_pci_access_msg(struct hl_device *hdev, u32 opcode) */ static int goya_pci_bars_map(struct hl_device *hdev) { - struct pci_dev *pdev = hdev->pdev; + static const char * const name[] = {"SRAM_CFG", "MSIX", "DDR"}; + bool is_wc[3] = {false, false, true}; int rc; - rc = pci_request_regions(pdev, HL_NAME); - if (rc) { - dev_err(hdev->dev, "Cannot obtain PCI resources\n"); + rc = hl_pci_bars_map(hdev, name, is_wc); + if (rc) return rc; - } - - hdev->pcie_bar[SRAM_CFG_BAR_ID] = - pci_ioremap_bar(pdev, SRAM_CFG_BAR_ID); - if (!hdev->pcie_bar[SRAM_CFG_BAR_ID]) { - dev_err(hdev->dev, "pci_ioremap_bar failed for CFG\n"); - rc = -ENODEV; - goto err_release_regions; - } - - hdev->pcie_bar[MSIX_BAR_ID] = pci_ioremap_bar(pdev, MSIX_BAR_ID); - if (!hdev->pcie_bar[MSIX_BAR_ID]) { - dev_err(hdev->dev, "pci_ioremap_bar failed for MSIX\n"); - rc = -ENODEV; - goto err_unmap_sram_cfg; - } - - hdev->pcie_bar[DDR_BAR_ID] = pci_ioremap_wc_bar(pdev, DDR_BAR_ID); - if (!hdev->pcie_bar[DDR_BAR_ID]) { - dev_err(hdev->dev, "pci_ioremap_bar failed for DDR\n"); - rc = -ENODEV; - goto err_unmap_msix; - } hdev->rmmio = hdev->pcie_bar[SRAM_CFG_BAR_ID] + - (CFG_BASE - SRAM_BASE_ADDR); - - return 0; - -err_unmap_msix: - iounmap(hdev->pcie_bar[MSIX_BAR_ID]); -err_unmap_sram_cfg: - iounmap(hdev->pcie_bar[SRAM_CFG_BAR_ID]); -err_release_regions: - pci_release_regions(pdev); - - return rc; -} - -/* - * goya_pci_bars_unmap - Unmap PCI BARS of Goya device - * - * @hdev: pointer to hl_device structure - * - * Release all PCI BARS and unmap their virtual addresses - * - */ -static void goya_pci_bars_unmap(struct hl_device *hdev) -{ - struct pci_dev *pdev = hdev->pdev; - - iounmap(hdev->pcie_bar[DDR_BAR_ID]); - iounmap(hdev->pcie_bar[MSIX_BAR_ID]); - iounmap(hdev->pcie_bar[SRAM_CFG_BAR_ID]); - pci_release_regions(pdev); -} - -/* - * goya_elbi_write - Write through the ELBI interface - * - * @hdev: pointer to hl_device structure - * - * return 0 on success, -1 on failure - * - */ -static int goya_elbi_write(struct hl_device *hdev, u64 addr, u32 data) -{ - struct pci_dev *pdev = hdev->pdev; - ktime_t timeout; - u32 val; - - /* Clear previous status */ - pci_write_config_dword(pdev, mmPCI_CONFIG_ELBI_STS, 0); - - pci_write_config_dword(pdev, mmPCI_CONFIG_ELBI_ADDR, (u32) addr); - pci_write_config_dword(pdev, mmPCI_CONFIG_ELBI_DATA, data); - pci_write_config_dword(pdev, mmPCI_CONFIG_ELBI_CTRL, - PCI_CONFIG_ELBI_CTRL_WRITE); - - timeout = ktime_add_ms(ktime_get(), 10); - for (;;) { - pci_read_config_dword(pdev, mmPCI_CONFIG_ELBI_STS, &val); - if (val & PCI_CONFIG_ELBI_STS_MASK) - break; - if (ktime_compare(ktime_get(), timeout) > 0) { - pci_read_config_dword(pdev, mmPCI_CONFIG_ELBI_STS, - &val); - break; - } - usleep_range(300, 500); - } - - if ((val & PCI_CONFIG_ELBI_STS_MASK) == 
PCI_CONFIG_ELBI_STS_DONE) - return 0; - - if (val & PCI_CONFIG_ELBI_STS_ERR) { - dev_err(hdev->dev, "Error writing to ELBI\n"); - return -EIO; - } - - if (!(val & PCI_CONFIG_ELBI_STS_MASK)) { - dev_err(hdev->dev, "ELBI write didn't finish in time\n"); - return -EIO; - } - - dev_err(hdev->dev, "ELBI write has undefined bits in status\n"); - return -EIO; -} - -/* - * goya_iatu_write - iatu write routine - * - * @hdev: pointer to hl_device structure - * - */ -static int goya_iatu_write(struct hl_device *hdev, u32 addr, u32 data) -{ - u32 dbi_offset; - int rc; - - dbi_offset = addr & 0xFFF; - - rc = goya_elbi_write(hdev, CFG_BASE + mmPCIE_AUX_DBI, 0x00300000); - rc |= goya_elbi_write(hdev, mmPCIE_DBI_BASE + dbi_offset, data); - - if (rc) - return -EIO; + (CFG_BASE - SRAM_BASE_ADDR); return 0; } -static void goya_reset_link_through_bridge(struct hl_device *hdev) -{ - struct pci_dev *pdev = hdev->pdev; - struct pci_dev *parent_port; - u16 val; - - parent_port = pdev->bus->self; - pci_read_config_word(parent_port, PCI_BRIDGE_CONTROL, &val); - val |= PCI_BRIDGE_CTL_BUS_RESET; - pci_write_config_word(parent_port, PCI_BRIDGE_CONTROL, val); - ssleep(1); - - val &= ~(PCI_BRIDGE_CTL_BUS_RESET); - pci_write_config_word(parent_port, PCI_BRIDGE_CONTROL, val); - ssleep(3); -} - -/* - * goya_set_ddr_bar_base - set DDR bar to map specific device address - * - * @hdev: pointer to hl_device structure - * @addr: address in DDR. Must be aligned to DDR bar size - * - * This function configures the iATU so that the DDR bar will start at the - * specified addr. - * - */ -static int goya_set_ddr_bar_base(struct hl_device *hdev, u64 addr) +static u64 goya_set_ddr_bar_base(struct hl_device *hdev, u64 addr) { struct goya_device *goya = hdev->asic_specific; + u64 old_addr = addr; int rc; if ((goya) && (goya->ddr_bar_cur_addr == addr)) - return 0; + return old_addr; /* Inbound Region 1 - Bar 4 - Point to DDR */ - rc = goya_iatu_write(hdev, 0x314, lower_32_bits(addr)); - rc |= goya_iatu_write(hdev, 0x318, upper_32_bits(addr)); - rc |= goya_iatu_write(hdev, 0x300, 0); - /* Enable + Bar match + match enable + Bar 4 */ - rc |= goya_iatu_write(hdev, 0x304, 0xC0080400); - - /* Return the DBI window to the default location */ - rc |= goya_elbi_write(hdev, CFG_BASE + mmPCIE_AUX_DBI, 0); - rc |= goya_elbi_write(hdev, CFG_BASE + mmPCIE_AUX_DBI_32, 0); - - if (rc) { - dev_err(hdev->dev, "failed to map DDR bar to 0x%08llx\n", addr); - return -EIO; - } + rc = hl_pci_set_dram_bar_base(hdev, 1, 4, addr); + if (rc) + return U64_MAX; - if (goya) + if (goya) { + old_addr = goya->ddr_bar_cur_addr; goya->ddr_bar_cur_addr = addr; + } - return 0; + return old_addr; } /* @@ -603,40 +420,8 @@ static int goya_set_ddr_bar_base(struct hl_device *hdev, u64 addr) */ static int goya_init_iatu(struct hl_device *hdev) { - int rc; - - /* Inbound Region 0 - Bar 0 - Point to SRAM_BASE_ADDR */ - rc = goya_iatu_write(hdev, 0x114, lower_32_bits(SRAM_BASE_ADDR)); - rc |= goya_iatu_write(hdev, 0x118, upper_32_bits(SRAM_BASE_ADDR)); - rc |= goya_iatu_write(hdev, 0x100, 0); - /* Enable + Bar match + match enable */ - rc |= goya_iatu_write(hdev, 0x104, 0xC0080000); - - /* Inbound Region 1 - Bar 4 - Point to DDR */ - rc |= goya_set_ddr_bar_base(hdev, DRAM_PHYS_BASE); - - /* Outbound Region 0 - Point to Host */ - rc |= goya_iatu_write(hdev, 0x008, lower_32_bits(HOST_PHYS_BASE)); - rc |= goya_iatu_write(hdev, 0x00C, upper_32_bits(HOST_PHYS_BASE)); - rc |= goya_iatu_write(hdev, 0x010, - lower_32_bits(HOST_PHYS_BASE + HOST_PHYS_SIZE - 1)); - rc |= 
goya_iatu_write(hdev, 0x014, 0); - rc |= goya_iatu_write(hdev, 0x018, 0); - rc |= goya_iatu_write(hdev, 0x020, - upper_32_bits(HOST_PHYS_BASE + HOST_PHYS_SIZE - 1)); - /* Increase region size */ - rc |= goya_iatu_write(hdev, 0x000, 0x00002000); - /* Enable */ - rc |= goya_iatu_write(hdev, 0x004, 0x80000000); - - /* Return the DBI window to the default location */ - rc |= goya_elbi_write(hdev, CFG_BASE + mmPCIE_AUX_DBI, 0); - rc |= goya_elbi_write(hdev, CFG_BASE + mmPCIE_AUX_DBI_32, 0); - - if (rc) - return -EIO; - - return 0; + return hl_pci_init_iatu(hdev, SRAM_BASE_ADDR, DRAM_PHYS_BASE, + HOST_PHYS_BASE, HOST_PHYS_SIZE); } /* @@ -682,52 +467,9 @@ static int goya_early_init(struct hl_device *hdev) prop->dram_pci_bar_size = pci_resource_len(pdev, DDR_BAR_ID); - /* set DMA mask for GOYA */ - rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(39)); - if (rc) { - dev_warn(hdev->dev, "Unable to set pci dma mask to 39 bits\n"); - rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); - if (rc) { - dev_err(hdev->dev, - "Unable to set pci dma mask to 32 bits\n"); - return rc; - } - } - - rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(39)); - if (rc) { - dev_warn(hdev->dev, - "Unable to set pci consistent dma mask to 39 bits\n"); - rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); - if (rc) { - dev_err(hdev->dev, - "Unable to set pci consistent dma mask to 32 bits\n"); - return rc; - } - } - - if (hdev->reset_pcilink) - goya_reset_link_through_bridge(hdev); - - rc = pci_enable_device_mem(pdev); - if (rc) { - dev_err(hdev->dev, "can't enable PCI device\n"); + rc = hl_pci_init(hdev, 39); + if (rc) return rc; - } - - pci_set_master(pdev); - - rc = goya_init_iatu(hdev); - if (rc) { - dev_err(hdev->dev, "Failed to initialize iATU\n"); - goto disable_device; - } - - rc = goya_pci_bars_map(hdev); - if (rc) { - dev_err(hdev->dev, "Failed to initialize PCI BARS\n"); - goto disable_device; - } if (!hdev->pldm) { val = RREG32(mmPSOC_GLOBAL_CONF_BOOT_STRAP_PINS); @@ -737,12 +479,6 @@ static int goya_early_init(struct hl_device *hdev) } return 0; - -disable_device: - pci_clear_master(pdev); - pci_disable_device(pdev); - - return rc; } /* @@ -755,14 +491,33 @@ disable_device: */ static int goya_early_fini(struct hl_device *hdev) { - goya_pci_bars_unmap(hdev); - - pci_clear_master(hdev->pdev); - pci_disable_device(hdev->pdev); + hl_pci_fini(hdev); return 0; } +static void goya_mmu_prepare_reg(struct hl_device *hdev, u64 reg, u32 asid) +{ + /* mask to zero the MMBP and ASID bits */ + WREG32_AND(reg, ~0x7FF); + WREG32_OR(reg, asid); +} + +static void goya_qman0_set_security(struct hl_device *hdev, bool secure) +{ + struct goya_device *goya = hdev->asic_specific; + + if (!(goya->hw_cap_initialized & HW_CAP_MMU)) + return; + + if (secure) + WREG32(mmDMA_QM_0_GLBL_PROT, QMAN_DMA_FULLY_TRUSTED); + else + WREG32(mmDMA_QM_0_GLBL_PROT, QMAN_DMA_PARTLY_TRUSTED); + + RREG32(mmDMA_QM_0_GLBL_PROT); +} + /* * goya_fetch_psoc_frequency - Fetch PSOC frequency values * @@ -779,20 +534,12 @@ static void goya_fetch_psoc_frequency(struct hl_device *hdev) prop->psoc_pci_pll_div_factor = RREG32(mmPSOC_PCI_PLL_DIV_FACTOR_1); } -/* - * goya_late_init - GOYA late initialization code - * - * @hdev: pointer to hl_device structure - * - * Get ArmCP info and send message to CPU to enable PCI access - */ -static int goya_late_init(struct hl_device *hdev) +int goya_late_init(struct hl_device *hdev) { struct asic_fixed_properties *prop = &hdev->asic_prop; - struct goya_device *goya = hdev->asic_specific; int rc; - rc = goya->armcp_info_get(hdev); + 
rc = goya_armcp_info_get(hdev); if (rc) { dev_err(hdev->dev, "Failed to get armcp info\n"); return rc; @@ -804,7 +551,7 @@ static int goya_late_init(struct hl_device *hdev) */ WREG32(mmMMU_LOG2_DDR_SIZE, ilog2(prop->dram_size)); - rc = goya_send_pci_access_msg(hdev, ARMCP_PACKET_ENABLE_PCI_ACCESS); + rc = hl_fw_send_pci_access_msg(hdev, ARMCP_PACKET_ENABLE_PCI_ACCESS); if (rc) { dev_err(hdev->dev, "Failed to enable PCI access from CPU\n"); return rc; @@ -830,7 +577,7 @@ static int goya_late_init(struct hl_device *hdev) return 0; disable_pci_access: - goya_send_pci_access_msg(hdev, ARMCP_PACKET_DISABLE_PCI_ACCESS); + hl_fw_send_pci_access_msg(hdev, ARMCP_PACKET_DISABLE_PCI_ACCESS); return rc; } @@ -879,9 +626,6 @@ static int goya_sw_init(struct hl_device *hdev) if (!goya) return -ENOMEM; - goya->test_cpu_queue = goya_test_cpu_queue; - goya->armcp_info_get = goya_armcp_info_get; - /* according to goya_init_iatu */ goya->ddr_bar_cur_addr = DRAM_PHYS_BASE; @@ -901,45 +645,43 @@ static int goya_sw_init(struct hl_device *hdev) } hdev->cpu_accessible_dma_mem = - hdev->asic_funcs->dma_alloc_coherent(hdev, - CPU_ACCESSIBLE_MEM_SIZE, + hdev->asic_funcs->asic_dma_alloc_coherent(hdev, + HL_CPU_ACCESSIBLE_MEM_SIZE, &hdev->cpu_accessible_dma_address, GFP_KERNEL | __GFP_ZERO); if (!hdev->cpu_accessible_dma_mem) { - dev_err(hdev->dev, - "failed to allocate %d of dma memory for CPU accessible memory space\n", - CPU_ACCESSIBLE_MEM_SIZE); rc = -ENOMEM; goto free_dma_pool; } - hdev->cpu_accessible_dma_pool = gen_pool_create(CPU_PKT_SHIFT, -1); + hdev->cpu_accessible_dma_pool = gen_pool_create(HL_CPU_PKT_SHIFT, -1); if (!hdev->cpu_accessible_dma_pool) { dev_err(hdev->dev, "Failed to create CPU accessible DMA pool\n"); rc = -ENOMEM; - goto free_cpu_pq_dma_mem; + goto free_cpu_dma_mem; } rc = gen_pool_add(hdev->cpu_accessible_dma_pool, (uintptr_t) hdev->cpu_accessible_dma_mem, - CPU_ACCESSIBLE_MEM_SIZE, -1); + HL_CPU_ACCESSIBLE_MEM_SIZE, -1); if (rc) { dev_err(hdev->dev, "Failed to add memory to CPU accessible DMA pool\n"); rc = -EFAULT; - goto free_cpu_pq_pool; + goto free_cpu_accessible_dma_pool; } spin_lock_init(&goya->hw_queues_lock); return 0; -free_cpu_pq_pool: +free_cpu_accessible_dma_pool: gen_pool_destroy(hdev->cpu_accessible_dma_pool); -free_cpu_pq_dma_mem: - hdev->asic_funcs->dma_free_coherent(hdev, CPU_ACCESSIBLE_MEM_SIZE, +free_cpu_dma_mem: + hdev->asic_funcs->asic_dma_free_coherent(hdev, + HL_CPU_ACCESSIBLE_MEM_SIZE, hdev->cpu_accessible_dma_mem, hdev->cpu_accessible_dma_address); free_dma_pool: @@ -962,7 +704,8 @@ static int goya_sw_fini(struct hl_device *hdev) gen_pool_destroy(hdev->cpu_accessible_dma_pool); - hdev->asic_funcs->dma_free_coherent(hdev, CPU_ACCESSIBLE_MEM_SIZE, + hdev->asic_funcs->asic_dma_free_coherent(hdev, + HL_CPU_ACCESSIBLE_MEM_SIZE, hdev->cpu_accessible_dma_mem, hdev->cpu_accessible_dma_address); @@ -1056,11 +799,10 @@ static void goya_init_dma_ch(struct hl_device *hdev, int dma_id) * Initialize the H/W registers of the QMAN DMA channels * */ -static void goya_init_dma_qmans(struct hl_device *hdev) +void goya_init_dma_qmans(struct hl_device *hdev) { struct goya_device *goya = hdev->asic_specific; struct hl_hw_queue *q; - dma_addr_t bus_address; int i; if (goya->hw_cap_initialized & HW_CAP_DMA) @@ -1069,10 +811,7 @@ static void goya_init_dma_qmans(struct hl_device *hdev) q = &hdev->kernel_queues[0]; for (i = 0 ; i < NUMBER_OF_EXT_HW_QUEUES ; i++, q++) { - bus_address = q->bus_address + - hdev->asic_prop.host_phys_base_address; - - goya_init_dma_qman(hdev, i, bus_address); + 
goya_init_dma_qman(hdev, i, q->bus_address); goya_init_dma_ch(hdev, i); } @@ -1209,11 +948,10 @@ static int goya_stop_external_queues(struct hl_device *hdev) * Returns 0 on success * */ -static int goya_init_cpu_queues(struct hl_device *hdev) +int goya_init_cpu_queues(struct hl_device *hdev) { struct goya_device *goya = hdev->asic_specific; struct hl_eq *eq; - dma_addr_t bus_address; u32 status; struct hl_hw_queue *cpu_pq = &hdev->kernel_queues[GOYA_QUEUE_ID_CPU_PQ]; int err; @@ -1226,23 +964,22 @@ static int goya_init_cpu_queues(struct hl_device *hdev) eq = &hdev->event_queue; - bus_address = cpu_pq->bus_address + - hdev->asic_prop.host_phys_base_address; - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_0, lower_32_bits(bus_address)); - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_1, upper_32_bits(bus_address)); + WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_0, + lower_32_bits(cpu_pq->bus_address)); + WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_1, + upper_32_bits(cpu_pq->bus_address)); - bus_address = eq->bus_address + hdev->asic_prop.host_phys_base_address; - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_2, lower_32_bits(bus_address)); - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_3, upper_32_bits(bus_address)); + WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_2, lower_32_bits(eq->bus_address)); + WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_3, upper_32_bits(eq->bus_address)); - bus_address = hdev->cpu_accessible_dma_address + - hdev->asic_prop.host_phys_base_address; - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_8, lower_32_bits(bus_address)); - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_9, upper_32_bits(bus_address)); + WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_8, + lower_32_bits(hdev->cpu_accessible_dma_address)); + WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_9, + upper_32_bits(hdev->cpu_accessible_dma_address)); WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_5, HL_QUEUE_SIZE_IN_BYTES); WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_4, HL_EQ_SIZE_IN_BYTES); - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_10, CPU_ACCESSIBLE_MEM_SIZE); + WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_10, HL_CPU_ACCESSIBLE_MEM_SIZE); /* Used for EQ CI */ WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_6, 0); @@ -1695,6 +1432,8 @@ static void goya_init_golden_registers(struct hl_device *hdev) */ WREG32(mmDMA_CH_1_CFG0, 0x0fff00F0); + WREG32(mmTPC_PLL_CLK_RLX_0, 0x200020); + goya->hw_cap_initialized |= HW_CAP_GOLDEN; } @@ -1788,7 +1527,7 @@ static void goya_init_mme_cmdq(struct hl_device *hdev) WREG32(mmMME_CMDQ_GLBL_CFG0, CMDQ_MME_ENABLE); } -static void goya_init_mme_qmans(struct hl_device *hdev) +void goya_init_mme_qmans(struct hl_device *hdev) { struct goya_device *goya = hdev->asic_specific; u32 so_base_lo, so_base_hi; @@ -1895,7 +1634,7 @@ static void goya_init_tpc_cmdq(struct hl_device *hdev, int tpc_id) WREG32(mmTPC0_CMDQ_GLBL_CFG0 + reg_off, CMDQ_TPC_ENABLE); } -static void goya_init_tpc_qmans(struct hl_device *hdev) +void goya_init_tpc_qmans(struct hl_device *hdev) { struct goya_device *goya = hdev->asic_specific; u32 so_base_lo, so_base_hi; @@ -2222,10 +1961,10 @@ static int goya_enable_msix(struct hl_device *hdev) } } - irq = pci_irq_vector(hdev->pdev, EVENT_QUEUE_MSIX_IDX); + irq = pci_irq_vector(hdev->pdev, GOYA_EVENT_QUEUE_MSIX_IDX); rc = request_irq(irq, hl_irq_handler_eq, 0, - goya_irq_name[EVENT_QUEUE_MSIX_IDX], + goya_irq_name[GOYA_EVENT_QUEUE_MSIX_IDX], &hdev->event_queue); if (rc) { dev_err(hdev->dev, "Failed to request IRQ %d", irq); @@ -2256,7 +1995,7 @@ static void goya_sync_irqs(struct hl_device *hdev) for (i = 0 ; i < hdev->asic_prop.completion_queues_count ; i++) synchronize_irq(pci_irq_vector(hdev->pdev, i)); - 
synchronize_irq(pci_irq_vector(hdev->pdev, EVENT_QUEUE_MSIX_IDX)); + synchronize_irq(pci_irq_vector(hdev->pdev, GOYA_EVENT_QUEUE_MSIX_IDX)); } static void goya_disable_msix(struct hl_device *hdev) @@ -2269,7 +2008,7 @@ static void goya_disable_msix(struct hl_device *hdev) goya_sync_irqs(hdev); - irq = pci_irq_vector(hdev->pdev, EVENT_QUEUE_MSIX_IDX); + irq = pci_irq_vector(hdev->pdev, GOYA_EVENT_QUEUE_MSIX_IDX); free_irq(irq, &hdev->event_queue); for (i = 0 ; i < hdev->asic_prop.completion_queues_count ; i++) { @@ -2329,67 +2068,45 @@ static void goya_halt_engines(struct hl_device *hdev, bool hard_reset) } /* - * goya_push_fw_to_device - Push FW code to device - * - * @hdev: pointer to hl_device structure + * goya_push_uboot_to_device() - Push u-boot FW code to device. + * @hdev: Pointer to hl_device structure. * - * Copy fw code from firmware file to device memory. - * Returns 0 on success + * Copy u-boot fw code from firmware file to SRAM BAR. * + * Return: 0 on success, non-zero for failure. */ -static int goya_push_fw_to_device(struct hl_device *hdev, const char *fw_name, - void __iomem *dst) +static int goya_push_uboot_to_device(struct hl_device *hdev) { - const struct firmware *fw; - const u64 *fw_data; - size_t fw_size, i; - int rc; - - rc = request_firmware(&fw, fw_name, hdev->dev); - - if (rc) { - dev_err(hdev->dev, "Failed to request %s\n", fw_name); - goto out; - } - - fw_size = fw->size; - if ((fw_size % 4) != 0) { - dev_err(hdev->dev, "illegal %s firmware size %zu\n", - fw_name, fw_size); - rc = -EINVAL; - goto out; - } - - dev_dbg(hdev->dev, "%s firmware size == %zu\n", fw_name, fw_size); - - fw_data = (const u64 *) fw->data; + char fw_name[200]; + void __iomem *dst; - if ((fw->size % 8) != 0) - fw_size -= 8; + snprintf(fw_name, sizeof(fw_name), "habanalabs/goya/goya-u-boot.bin"); + dst = hdev->pcie_bar[SRAM_CFG_BAR_ID] + UBOOT_FW_OFFSET; - for (i = 0 ; i < fw_size ; i += 8, fw_data++, dst += 8) { - if (!(i & (0x80000 - 1))) { - dev_dbg(hdev->dev, - "copied so far %zu out of %zu for %s firmware", - i, fw_size, fw_name); - usleep_range(20, 100); - } + return hl_fw_push_fw_to_device(hdev, fw_name, dst); +} - writeq(*fw_data, dst); - } +/* + * goya_push_linux_to_device() - Push LINUX FW code to device. + * @hdev: Pointer to hl_device structure. + * + * Copy LINUX fw code from firmware file to HBM BAR. + * + * Return: 0 on success, non-zero for failure. 
+ */ +static int goya_push_linux_to_device(struct hl_device *hdev) +{ + char fw_name[200]; + void __iomem *dst; - if ((fw->size % 8) != 0) - writel(*(const u32 *) fw_data, dst); + snprintf(fw_name, sizeof(fw_name), "habanalabs/goya/goya-fit.itb"); + dst = hdev->pcie_bar[DDR_BAR_ID] + LINUX_FW_OFFSET; -out: - release_firmware(fw); - return rc; + return hl_fw_push_fw_to_device(hdev, fw_name, dst); } static int goya_pldm_init_cpu(struct hl_device *hdev) { - char fw_name[200]; - void __iomem *dst; u32 val, unit_rst_val; int rc; @@ -2407,15 +2124,11 @@ static int goya_pldm_init_cpu(struct hl_device *hdev) WREG32(mmPSOC_GLOBAL_CONF_UNIT_RST_N, unit_rst_val); val = RREG32(mmPSOC_GLOBAL_CONF_UNIT_RST_N); - snprintf(fw_name, sizeof(fw_name), "habanalabs/goya/goya-u-boot.bin"); - dst = hdev->pcie_bar[SRAM_CFG_BAR_ID] + UBOOT_FW_OFFSET; - rc = goya_push_fw_to_device(hdev, fw_name, dst); + rc = goya_push_uboot_to_device(hdev); if (rc) return rc; - snprintf(fw_name, sizeof(fw_name), "habanalabs/goya/goya-fit.itb"); - dst = hdev->pcie_bar[DDR_BAR_ID] + LINUX_FW_OFFSET; - rc = goya_push_fw_to_device(hdev, fw_name, dst); + rc = goya_push_linux_to_device(hdev); if (rc) return rc; @@ -2477,8 +2190,6 @@ static void goya_read_device_fw_version(struct hl_device *hdev, static int goya_init_cpu(struct hl_device *hdev, u32 cpu_timeout) { struct goya_device *goya = hdev->asic_specific; - char fw_name[200]; - void __iomem *dst; u32 status; int rc; @@ -2492,11 +2203,10 @@ static int goya_init_cpu(struct hl_device *hdev, u32 cpu_timeout) * Before pushing u-boot/linux to device, need to set the ddr bar to * base address of dram */ - rc = goya_set_ddr_bar_base(hdev, DRAM_PHYS_BASE); - if (rc) { + if (goya_set_ddr_bar_base(hdev, DRAM_PHYS_BASE) == U64_MAX) { dev_err(hdev->dev, "failed to map DDR bar to DRAM base address\n"); - return rc; + return -EIO; } if (hdev->pldm) { @@ -2549,6 +2259,11 @@ static int goya_init_cpu(struct hl_device *hdev, u32 cpu_timeout) "ARM status %d - DDR initialization failed\n", status); break; + case CPU_BOOT_STATUS_UBOOT_NOT_READY: + dev_err(hdev->dev, + "ARM status %d - u-boot stopped by user\n", + status); + break; default: dev_err(hdev->dev, "ARM status %d - Invalid status code\n", @@ -2570,9 +2285,7 @@ static int goya_init_cpu(struct hl_device *hdev, u32 cpu_timeout) goto out; } - snprintf(fw_name, sizeof(fw_name), "habanalabs/goya/goya-fit.itb"); - dst = hdev->pcie_bar[DDR_BAR_ID] + LINUX_FW_OFFSET; - rc = goya_push_fw_to_device(hdev, fw_name, dst); + rc = goya_push_linux_to_device(hdev); if (rc) return rc; @@ -2605,7 +2318,39 @@ out: return 0; } -static int goya_mmu_init(struct hl_device *hdev) +static int goya_mmu_update_asid_hop0_addr(struct hl_device *hdev, u32 asid, + u64 phys_addr) +{ + u32 status, timeout_usec; + int rc; + + if (hdev->pldm) + timeout_usec = GOYA_PLDM_MMU_TIMEOUT_USEC; + else + timeout_usec = MMU_CONFIG_TIMEOUT_USEC; + + WREG32(MMU_HOP0_PA43_12, phys_addr >> MMU_HOP0_PA43_12_SHIFT); + WREG32(MMU_HOP0_PA49_44, phys_addr >> MMU_HOP0_PA49_44_SHIFT); + WREG32(MMU_ASID_BUSY, 0x80000000 | asid); + + rc = hl_poll_timeout( + hdev, + MMU_ASID_BUSY, + status, + !(status & 0x80000000), + 1000, + timeout_usec); + + if (rc) { + dev_err(hdev->dev, + "Timeout during MMU hop0 config of asid %d\n", asid); + return rc; + } + + return 0; +} + +int goya_mmu_init(struct hl_device *hdev) { struct asic_fixed_properties *prop = &hdev->asic_prop; struct goya_device *goya = hdev->asic_specific; @@ -2696,12 +2441,12 @@ static int goya_hw_init(struct hl_device *hdev) * After CPU 
initialization is finished, change DDR bar mapping inside
	 * iATU to point to the start address of the MMU page tables
	 */
-	rc = goya_set_ddr_bar_base(hdev, DRAM_PHYS_BASE +
-		(MMU_PAGE_TABLES_ADDR & ~(prop->dram_pci_bar_size - 0x1ull)));
-	if (rc) {
+	if (goya_set_ddr_bar_base(hdev, DRAM_PHYS_BASE +
+			(MMU_PAGE_TABLES_ADDR &
+			~(prop->dram_pci_bar_size - 0x1ull))) == U64_MAX) {
 		dev_err(hdev->dev,
 			"failed to map DDR bar to MMU page tables\n");
-		return rc;
+		return -EIO;
 	}
 
 	rc = goya_mmu_init(hdev);
@@ -2728,28 +2473,16 @@ static int goya_hw_init(struct hl_device *hdev)
 		goto disable_msix;
 	}
 
-	/* CPU initialization is finished, we can now move to 48 bit DMA mask */
-	rc = pci_set_dma_mask(hdev->pdev, DMA_BIT_MASK(48));
-	if (rc) {
-		dev_warn(hdev->dev, "Unable to set pci dma mask to 48 bits\n");
-		rc = pci_set_dma_mask(hdev->pdev, DMA_BIT_MASK(32));
-		if (rc) {
-			dev_err(hdev->dev,
-				"Unable to set pci dma mask to 32 bits\n");
-			goto disable_pci_access;
-		}
-	}
-
-	rc = pci_set_consistent_dma_mask(hdev->pdev, DMA_BIT_MASK(48));
-	if (rc) {
-		dev_warn(hdev->dev,
-			"Unable to set pci consistent dma mask to 48 bits\n");
-		rc = pci_set_consistent_dma_mask(hdev->pdev, DMA_BIT_MASK(32));
-		if (rc) {
-			dev_err(hdev->dev,
-				"Unable to set pci consistent dma mask to 32 bits\n");
+	/*
+	 * Check if we managed to set the DMA mask to more than 32 bits. If so,
+	 * let's try to increase it again because in Goya we set the initial
+	 * dma mask to less than 39 bits so that the allocation of the memory
+	 * area for the device's cpu will be under 39 bits
+	 */
+	if (hdev->dma_mask > 32) {
+		rc = hl_pci_set_dma_mask(hdev, 48);
+		if (rc)
 			goto disable_pci_access;
-		}
 	}
 
 	/* Perform read from the device to flush all MSI-X configuration */
@@ -2758,7 +2491,7 @@ static int goya_hw_init(struct hl_device *hdev)
 	return 0;
 
 disable_pci_access:
-	goya_send_pci_access_msg(hdev, ARMCP_PACKET_DISABLE_PCI_ACCESS);
+	hl_fw_send_pci_access_msg(hdev, ARMCP_PACKET_DISABLE_PCI_ACCESS);
 disable_msix:
 	goya_disable_msix(hdev);
 disable_queues:
@@ -2865,7 +2598,7 @@ int goya_suspend(struct hl_device *hdev)
 {
 	int rc;
 
-	rc = goya_send_pci_access_msg(hdev, ARMCP_PACKET_DISABLE_PCI_ACCESS);
+	rc = hl_fw_send_pci_access_msg(hdev, ARMCP_PACKET_DISABLE_PCI_ACCESS);
 	if (rc)
 		dev_err(hdev->dev, "Failed to disable PCI access from CPU\n");
 
@@ -2893,7 +2626,7 @@ static int goya_cb_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
 	return rc;
 }
 
-static void goya_ring_doorbell(struct hl_device *hdev, u32 hw_queue_id, u32 pi)
+void goya_ring_doorbell(struct hl_device *hdev, u32 hw_queue_id, u32 pi)
 {
 	u32 db_reg_offset, db_value;
 	bool invalid_queue = false;
@@ -2991,13 +2724,23 @@ void goya_flush_pq_write(struct hl_device *hdev, u64 *pq, u64 exp_val)
 static void *goya_dma_alloc_coherent(struct hl_device *hdev, size_t size,
					dma_addr_t *dma_handle, gfp_t flags)
 {
-	return dma_alloc_coherent(&hdev->pdev->dev, size, dma_handle, flags);
+	void *kernel_addr = dma_alloc_coherent(&hdev->pdev->dev, size,
						dma_handle, flags);
+
+	/* Shift to the device's base physical address of host memory */
+	if (kernel_addr)
+		*dma_handle += HOST_PHYS_BASE;
+
+	return kernel_addr;
 }
 
 static void goya_dma_free_coherent(struct hl_device *hdev, size_t size,
					void *cpu_addr, dma_addr_t dma_handle)
 {
-	dma_free_coherent(&hdev->pdev->dev, size, cpu_addr, dma_handle);
+	/* Cancel the device's base physical address of host memory */
+	dma_addr_t fixed_dma_handle = dma_handle - HOST_PHYS_BASE;
+
+	dma_free_coherent(&hdev->pdev->dev, size, cpu_addr, fixed_dma_handle);
 }
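/*
 * Aside: a minimal model of the address convention the two wrappers
 * above establish. Host memory appears in the device's address map at
 * a fixed base, so every dma_addr_t handed to the hardware gets
 * HOST_PHYS_BASE added once at allocation time and subtracted again
 * before the kernel DMA API sees the handle. The base value below is
 * an illustrative assumption.
 */
#include <stdint.h>
#include <stdio.h>

#define HOST_PHYS_BASE	0x8000000000ULL	/* assumed device-view base */

static uint64_t host_to_device_addr(uint64_t dma_handle)
{
	return dma_handle + HOST_PHYS_BASE;	/* applied once, at alloc */
}

static uint64_t device_to_host_addr(uint64_t dma_handle)
{
	return dma_handle - HOST_PHYS_BASE;	/* undone once, at free */
}

int main(void)
{
	uint64_t cpu_view = 0x1234000;
	uint64_t dev_view = host_to_device_addr(cpu_view);

	printf("cpu 0x%llx <-> device 0x%llx, round trip ok: %d\n",
	       (unsigned long long)cpu_view,
	       (unsigned long long)dev_view,
	       device_to_host_addr(dev_view) == cpu_view);
	return 0;
}

void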
*goya_get_int_queue_base(struct hl_device *hdev, u32 queue_id, @@ -3060,12 +2803,12 @@ void *goya_get_int_queue_base(struct hl_device *hdev, u32 queue_id, static int goya_send_job_on_qman0(struct hl_device *hdev, struct hl_cs_job *job) { - struct goya_device *goya = hdev->asic_specific; struct packet_msg_prot *fence_pkt; u32 *fence_ptr; dma_addr_t fence_dma_addr; struct hl_cb *cb; u32 tmp, timeout; + char buf[16] = {}; int rc; if (hdev->pldm) @@ -3073,13 +2816,14 @@ static int goya_send_job_on_qman0(struct hl_device *hdev, struct hl_cs_job *job) else timeout = HL_DEVICE_TIMEOUT_USEC; - if (!hdev->asic_funcs->is_device_idle(hdev)) { + if (!hdev->asic_funcs->is_device_idle(hdev, buf, sizeof(buf))) { dev_err_ratelimited(hdev->dev, - "Can't send KMD job on QMAN0 if device is not idle\n"); + "Can't send KMD job on QMAN0 because %s is busy\n", + buf); return -EBUSY; } - fence_ptr = hdev->asic_funcs->dma_pool_zalloc(hdev, 4, GFP_KERNEL, + fence_ptr = hdev->asic_funcs->asic_dma_pool_zalloc(hdev, 4, GFP_KERNEL, &fence_dma_addr); if (!fence_ptr) { dev_err(hdev->dev, @@ -3089,10 +2833,7 @@ static int goya_send_job_on_qman0(struct hl_device *hdev, struct hl_cs_job *job) *fence_ptr = 0; - if (goya->hw_cap_initialized & HW_CAP_MMU) { - WREG32(mmDMA_QM_0_GLBL_PROT, QMAN_DMA_FULLY_TRUSTED); - RREG32(mmDMA_QM_0_GLBL_PROT); - } + goya_qman0_set_security(hdev, true); /* * goya cs parser saves space for 2xpacket_msg_prot at end of CB. For @@ -3110,8 +2851,7 @@ static int goya_send_job_on_qman0(struct hl_device *hdev, struct hl_cs_job *job) (1 << GOYA_PKT_CTL_MB_SHIFT); fence_pkt->ctl = cpu_to_le32(tmp); fence_pkt->value = cpu_to_le32(GOYA_QMAN0_FENCE_VAL); - fence_pkt->addr = cpu_to_le64(fence_dma_addr + - hdev->asic_prop.host_phys_base_address); + fence_pkt->addr = cpu_to_le64(fence_dma_addr); rc = hl_hw_queue_send_cb_no_cmpl(hdev, GOYA_QUEUE_ID_DMA_0, job->job_cb_size, cb->bus_address); @@ -3131,13 +2871,10 @@ static int goya_send_job_on_qman0(struct hl_device *hdev, struct hl_cs_job *job) } free_fence_ptr: - hdev->asic_funcs->dma_pool_free(hdev, (void *) fence_ptr, + hdev->asic_funcs->asic_dma_pool_free(hdev, (void *) fence_ptr, fence_dma_addr); - if (goya->hw_cap_initialized & HW_CAP_MMU) { - WREG32(mmDMA_QM_0_GLBL_PROT, QMAN_DMA_PARTLY_TRUSTED); - RREG32(mmDMA_QM_0_GLBL_PROT); - } + goya_qman0_set_security(hdev, false); return rc; } @@ -3146,10 +2883,6 @@ int goya_send_cpu_message(struct hl_device *hdev, u32 *msg, u16 len, u32 timeout, long *result) { struct goya_device *goya = hdev->asic_specific; - struct armcp_packet *pkt; - dma_addr_t pkt_dma_addr; - u32 tmp; - int rc = 0; if (!(goya->hw_cap_initialized & HW_CAP_CPU_Q)) { if (result) @@ -3157,74 +2890,8 @@ int goya_send_cpu_message(struct hl_device *hdev, u32 *msg, u16 len, return 0; } - if (len > CPU_CB_SIZE) { - dev_err(hdev->dev, "Invalid CPU message size of %d bytes\n", - len); - return -ENOMEM; - } - - pkt = hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, len, - &pkt_dma_addr); - if (!pkt) { - dev_err(hdev->dev, - "Failed to allocate DMA memory for packet to CPU\n"); - return -ENOMEM; - } - - memcpy(pkt, msg, len); - - mutex_lock(&hdev->send_cpu_message_lock); - - if (hdev->disabled) - goto out; - - if (hdev->device_cpu_disabled) { - rc = -EIO; - goto out; - } - - rc = hl_hw_queue_send_cb_no_cmpl(hdev, GOYA_QUEUE_ID_CPU_PQ, len, - pkt_dma_addr); - if (rc) { - dev_err(hdev->dev, "Failed to send CB on CPU PQ (%d)\n", rc); - goto out; - } - - rc = hl_poll_timeout_memory(hdev, (u64) (uintptr_t) &pkt->fence, - timeout, &tmp); - - 
hl_hw_queue_inc_ci_kernel(hdev, GOYA_QUEUE_ID_CPU_PQ); - - if (rc == -ETIMEDOUT) { - dev_err(hdev->dev, "Timeout while waiting for device CPU\n"); - hdev->device_cpu_disabled = true; - goto out; - } - - if (tmp == ARMCP_PACKET_FENCE_VAL) { - u32 ctl = le32_to_cpu(pkt->ctl); - - rc = (ctl & ARMCP_PKT_CTL_RC_MASK) >> ARMCP_PKT_CTL_RC_SHIFT; - if (rc) { - dev_err(hdev->dev, - "F/W ERROR %d for CPU packet %d\n", - rc, (ctl & ARMCP_PKT_CTL_OPCODE_MASK) - >> ARMCP_PKT_CTL_OPCODE_SHIFT); - rc = -EINVAL; - } else if (result) { - *result = (long) le64_to_cpu(pkt->result); - } - } else { - dev_err(hdev->dev, "CPU packet wrong fence value\n"); - rc = -EINVAL; - } - -out: - mutex_unlock(&hdev->send_cpu_message_lock); - - hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, len, pkt); - - return rc; + return hl_fw_send_cpu_message(hdev, GOYA_QUEUE_ID_CPU_PQ, msg, len, + timeout, result); } int goya_test_queue(struct hl_device *hdev, u32 hw_queue_id) @@ -3238,7 +2905,7 @@ int goya_test_queue(struct hl_device *hdev, u32 hw_queue_id) fence_val = GOYA_QMAN0_FENCE_VAL; - fence_ptr = hdev->asic_funcs->dma_pool_zalloc(hdev, 4, GFP_KERNEL, + fence_ptr = hdev->asic_funcs->asic_dma_pool_zalloc(hdev, 4, GFP_KERNEL, &fence_dma_addr); if (!fence_ptr) { dev_err(hdev->dev, @@ -3248,7 +2915,7 @@ int goya_test_queue(struct hl_device *hdev, u32 hw_queue_id) *fence_ptr = 0; - fence_pkt = hdev->asic_funcs->dma_pool_zalloc(hdev, + fence_pkt = hdev->asic_funcs->asic_dma_pool_zalloc(hdev, sizeof(struct packet_msg_prot), GFP_KERNEL, &pkt_dma_addr); if (!fence_pkt) { @@ -3263,8 +2930,7 @@ int goya_test_queue(struct hl_device *hdev, u32 hw_queue_id) (1 << GOYA_PKT_CTL_MB_SHIFT); fence_pkt->ctl = cpu_to_le32(tmp); fence_pkt->value = cpu_to_le32(fence_val); - fence_pkt->addr = cpu_to_le64(fence_dma_addr + - hdev->asic_prop.host_phys_base_address); + fence_pkt->addr = cpu_to_le64(fence_dma_addr); rc = hl_hw_queue_send_cb_no_cmpl(hdev, hw_queue_id, sizeof(struct packet_msg_prot), @@ -3292,48 +2958,30 @@ int goya_test_queue(struct hl_device *hdev, u32 hw_queue_id) } free_pkt: - hdev->asic_funcs->dma_pool_free(hdev, (void *) fence_pkt, + hdev->asic_funcs->asic_dma_pool_free(hdev, (void *) fence_pkt, pkt_dma_addr); free_fence_ptr: - hdev->asic_funcs->dma_pool_free(hdev, (void *) fence_ptr, + hdev->asic_funcs->asic_dma_pool_free(hdev, (void *) fence_ptr, fence_dma_addr); return rc; } int goya_test_cpu_queue(struct hl_device *hdev) { - struct armcp_packet test_pkt; - long result; - int rc; - - /* cpu_queues_enable flag is always checked in send cpu message */ - - memset(&test_pkt, 0, sizeof(test_pkt)); - - test_pkt.ctl = cpu_to_le32(ARMCP_PACKET_TEST << - ARMCP_PKT_CTL_OPCODE_SHIFT); - test_pkt.value = cpu_to_le64(ARMCP_PACKET_FENCE_VAL); - - rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &test_pkt, - sizeof(test_pkt), HL_DEVICE_TIMEOUT_USEC, &result); + struct goya_device *goya = hdev->asic_specific; - if (!rc) { - if (result == ARMCP_PACKET_FENCE_VAL) - dev_info(hdev->dev, - "queue test on CPU queue succeeded\n"); - else - dev_err(hdev->dev, - "CPU queue test failed (0x%08lX)\n", result); - } else { - dev_err(hdev->dev, "CPU queue test failed, error %d\n", rc); - } + /* + * check capability here as send_cpu_message() won't update the result + * value if no capability + */ + if (!(goya->hw_cap_initialized & HW_CAP_CPU_Q)) + return 0; - return rc; + return hl_fw_test_cpu_queue(hdev); } -static int goya_test_queues(struct hl_device *hdev) +int goya_test_queues(struct hl_device *hdev) { - struct goya_device *goya = 
hdev->asic_specific; int i, rc, ret_val = 0; for (i = 0 ; i < NUMBER_OF_EXT_HW_QUEUES ; i++) { @@ -3343,7 +2991,7 @@ static int goya_test_queues(struct hl_device *hdev) } if (hdev->cpu_queues_enable) { - rc = goya->test_cpu_queue(hdev); + rc = goya_test_cpu_queue(hdev); if (rc) ret_val = -EINVAL; } @@ -3354,57 +3002,68 @@ static int goya_test_queues(struct hl_device *hdev) static void *goya_dma_pool_zalloc(struct hl_device *hdev, size_t size, gfp_t mem_flags, dma_addr_t *dma_handle) { + void *kernel_addr; + if (size > GOYA_DMA_POOL_BLK_SIZE) return NULL; - return dma_pool_zalloc(hdev->dma_pool, mem_flags, dma_handle); + kernel_addr = dma_pool_zalloc(hdev->dma_pool, mem_flags, dma_handle); + + /* Shift to the device's base physical address of host memory */ + if (kernel_addr) + *dma_handle += HOST_PHYS_BASE; + + return kernel_addr; } static void goya_dma_pool_free(struct hl_device *hdev, void *vaddr, dma_addr_t dma_addr) { - dma_pool_free(hdev->dma_pool, vaddr, dma_addr); + /* Cancel the device's base physical address of host memory */ + dma_addr_t fixed_dma_addr = dma_addr - HOST_PHYS_BASE; + + dma_pool_free(hdev->dma_pool, vaddr, fixed_dma_addr); } -static void *goya_cpu_accessible_dma_pool_alloc(struct hl_device *hdev, - size_t size, dma_addr_t *dma_handle) +void *goya_cpu_accessible_dma_pool_alloc(struct hl_device *hdev, size_t size, + dma_addr_t *dma_handle) { - u64 kernel_addr; - - /* roundup to CPU_PKT_SIZE */ - size = (size + (CPU_PKT_SIZE - 1)) & CPU_PKT_MASK; - - kernel_addr = gen_pool_alloc(hdev->cpu_accessible_dma_pool, size); - - *dma_handle = hdev->cpu_accessible_dma_address + - (kernel_addr - (u64) (uintptr_t) hdev->cpu_accessible_dma_mem); - - return (void *) (uintptr_t) kernel_addr; + return hl_fw_cpu_accessible_dma_pool_alloc(hdev, size, dma_handle); } -static void goya_cpu_accessible_dma_pool_free(struct hl_device *hdev, - size_t size, void *vaddr) +void goya_cpu_accessible_dma_pool_free(struct hl_device *hdev, size_t size, + void *vaddr) { - /* roundup to CPU_PKT_SIZE */ - size = (size + (CPU_PKT_SIZE - 1)) & CPU_PKT_MASK; - - gen_pool_free(hdev->cpu_accessible_dma_pool, (u64) (uintptr_t) vaddr, - size); + hl_fw_cpu_accessible_dma_pool_free(hdev, size, vaddr); } -static int goya_dma_map_sg(struct hl_device *hdev, struct scatterlist *sg, +static int goya_dma_map_sg(struct hl_device *hdev, struct scatterlist *sgl, int nents, enum dma_data_direction dir) { - if (!dma_map_sg(&hdev->pdev->dev, sg, nents, dir)) + struct scatterlist *sg; + int i; + + if (!dma_map_sg(&hdev->pdev->dev, sgl, nents, dir)) return -ENOMEM; + /* Shift to the device's base physical address of host memory */ + for_each_sg(sgl, sg, nents, i) + sg->dma_address += HOST_PHYS_BASE; + return 0; } -static void goya_dma_unmap_sg(struct hl_device *hdev, struct scatterlist *sg, +static void goya_dma_unmap_sg(struct hl_device *hdev, struct scatterlist *sgl, int nents, enum dma_data_direction dir) { - dma_unmap_sg(&hdev->pdev->dev, sg, nents, dir); + struct scatterlist *sg; + int i; + + /* Cancel the device's base physical address of host memory */ + for_each_sg(sgl, sg, nents, i) + sg->dma_address -= HOST_PHYS_BASE; + + dma_unmap_sg(&hdev->pdev->dev, sgl, nents, dir); } u32 goya_get_dma_desc_list_size(struct hl_device *hdev, struct sg_table *sgt) @@ -3554,31 +3213,29 @@ static int goya_validate_dma_pkt_host(struct hl_device *hdev, return -EFAULT; } - if (parser->ctx_id != HL_KERNEL_ASID_ID) { - if (sram_addr) { - if (!hl_mem_area_inside_range(device_memory_addr, - le32_to_cpu(user_dma_pkt->tsize), - 
hdev->asic_prop.sram_user_base_address, - hdev->asic_prop.sram_end_address)) { + if (sram_addr) { + if (!hl_mem_area_inside_range(device_memory_addr, + le32_to_cpu(user_dma_pkt->tsize), + hdev->asic_prop.sram_user_base_address, + hdev->asic_prop.sram_end_address)) { + + dev_err(hdev->dev, + "SRAM address 0x%llx + 0x%x is invalid\n", + device_memory_addr, + user_dma_pkt->tsize); + return -EFAULT; + } + } else { + if (!hl_mem_area_inside_range(device_memory_addr, + le32_to_cpu(user_dma_pkt->tsize), + hdev->asic_prop.dram_user_base_address, + hdev->asic_prop.dram_end_address)) { - dev_err(hdev->dev, - "SRAM address 0x%llx + 0x%x is invalid\n", - device_memory_addr, - user_dma_pkt->tsize); - return -EFAULT; - } - } else { - if (!hl_mem_area_inside_range(device_memory_addr, - le32_to_cpu(user_dma_pkt->tsize), - hdev->asic_prop.dram_user_base_address, - hdev->asic_prop.dram_end_address)) { - - dev_err(hdev->dev, - "DRAM address 0x%llx + 0x%x is invalid\n", - device_memory_addr, - user_dma_pkt->tsize); - return -EFAULT; - } + dev_err(hdev->dev, + "DRAM address 0x%llx + 0x%x is invalid\n", + device_memory_addr, + user_dma_pkt->tsize); + return -EFAULT; } } @@ -3956,8 +3613,6 @@ static int goya_patch_dma_packet(struct hl_device *hdev, new_dma_pkt->ctl = cpu_to_le32(ctl); new_dma_pkt->tsize = cpu_to_le32((u32) len); - dma_addr += hdev->asic_prop.host_phys_base_address; - if (dir == DMA_TO_DEVICE) { new_dma_pkt->src_addr = cpu_to_le64(dma_addr); new_dma_pkt->dst_addr = cpu_to_le64(device_memory_addr); @@ -4208,36 +3863,35 @@ free_userptr: return rc; } -static int goya_parse_cb_no_ext_quque(struct hl_device *hdev, +static int goya_parse_cb_no_ext_queue(struct hl_device *hdev, struct hl_cs_parser *parser) { struct asic_fixed_properties *asic_prop = &hdev->asic_prop; struct goya_device *goya = hdev->asic_specific; - if (!(goya->hw_cap_initialized & HW_CAP_MMU)) { - /* For internal queue jobs, just check if cb address is valid */ - if (hl_mem_area_inside_range( - (u64) (uintptr_t) parser->user_cb, - parser->user_cb_size, - asic_prop->sram_user_base_address, - asic_prop->sram_end_address)) - return 0; + if (goya->hw_cap_initialized & HW_CAP_MMU) + return 0; - if (hl_mem_area_inside_range( - (u64) (uintptr_t) parser->user_cb, - parser->user_cb_size, - asic_prop->dram_user_base_address, - asic_prop->dram_end_address)) - return 0; + /* For internal queue jobs, just check if CB address is valid */ + if (hl_mem_area_inside_range( + (u64) (uintptr_t) parser->user_cb, + parser->user_cb_size, + asic_prop->sram_user_base_address, + asic_prop->sram_end_address)) + return 0; - dev_err(hdev->dev, - "Internal CB address %px + 0x%x is not in SRAM nor in DRAM\n", - parser->user_cb, parser->user_cb_size); + if (hl_mem_area_inside_range( + (u64) (uintptr_t) parser->user_cb, + parser->user_cb_size, + asic_prop->dram_user_base_address, + asic_prop->dram_end_address)) + return 0; - return -EFAULT; - } + dev_err(hdev->dev, + "Internal CB address %px + 0x%x is not in SRAM nor in DRAM\n", + parser->user_cb, parser->user_cb_size); - return 0; + return -EFAULT; } int goya_cs_parser(struct hl_device *hdev, struct hl_cs_parser *parser) @@ -4245,9 +3899,9 @@ int goya_cs_parser(struct hl_device *hdev, struct hl_cs_parser *parser) struct goya_device *goya = hdev->asic_specific; if (!parser->ext_queue) - return goya_parse_cb_no_ext_quque(hdev, parser); + return goya_parse_cb_no_ext_queue(hdev, parser); - if ((goya->hw_cap_initialized & HW_CAP_MMU) && parser->use_virt_addr) + if (goya->hw_cap_initialized & HW_CAP_MMU) return 
goya_parse_cb_mmu(hdev, parser); else return goya_parse_cb_no_mmu(hdev, parser); @@ -4278,12 +3932,12 @@ void goya_add_end_of_cb_packets(u64 kernel_address, u32 len, u64 cq_addr, cq_pkt->addr = cpu_to_le64(CFG_BASE + mmPCIE_DBI_MSIX_DOORBELL_OFF); } -static void goya_update_eq_ci(struct hl_device *hdev, u32 val) +void goya_update_eq_ci(struct hl_device *hdev, u32 val) { WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_6, val); } -static void goya_restore_phase_topology(struct hl_device *hdev) +void goya_restore_phase_topology(struct hl_device *hdev) { int i, num_of_sob_in_longs, num_of_mon_in_longs; @@ -4320,6 +3974,7 @@ static void goya_restore_phase_topology(struct hl_device *hdev) static int goya_debugfs_read32(struct hl_device *hdev, u64 addr, u32 *val) { struct asic_fixed_properties *prop = &hdev->asic_prop; + u64 ddr_bar_addr; int rc = 0; if ((addr >= CFG_BASE) && (addr < CFG_BASE + CFG_SIZE)) { @@ -4337,15 +3992,16 @@ static int goya_debugfs_read32(struct hl_device *hdev, u64 addr, u32 *val) u64 bar_base_addr = DRAM_PHYS_BASE + (addr & ~(prop->dram_pci_bar_size - 0x1ull)); - rc = goya_set_ddr_bar_base(hdev, bar_base_addr); - if (!rc) { + ddr_bar_addr = goya_set_ddr_bar_base(hdev, bar_base_addr); + if (ddr_bar_addr != U64_MAX) { *val = readl(hdev->pcie_bar[DDR_BAR_ID] + (addr - bar_base_addr)); - rc = goya_set_ddr_bar_base(hdev, DRAM_PHYS_BASE + - (MMU_PAGE_TABLES_ADDR & - ~(prop->dram_pci_bar_size - 0x1ull))); + ddr_bar_addr = goya_set_ddr_bar_base(hdev, + ddr_bar_addr); } + if (ddr_bar_addr == U64_MAX) + rc = -EIO; } else { rc = -EFAULT; } @@ -4370,6 +4026,7 @@ static int goya_debugfs_read32(struct hl_device *hdev, u64 addr, u32 *val) static int goya_debugfs_write32(struct hl_device *hdev, u64 addr, u32 val) { struct asic_fixed_properties *prop = &hdev->asic_prop; + u64 ddr_bar_addr; int rc = 0; if ((addr >= CFG_BASE) && (addr < CFG_BASE + CFG_SIZE)) { @@ -4387,15 +4044,16 @@ static int goya_debugfs_write32(struct hl_device *hdev, u64 addr, u32 val) u64 bar_base_addr = DRAM_PHYS_BASE + (addr & ~(prop->dram_pci_bar_size - 0x1ull)); - rc = goya_set_ddr_bar_base(hdev, bar_base_addr); - if (!rc) { + ddr_bar_addr = goya_set_ddr_bar_base(hdev, bar_base_addr); + if (ddr_bar_addr != U64_MAX) { writel(val, hdev->pcie_bar[DDR_BAR_ID] + (addr - bar_base_addr)); - rc = goya_set_ddr_bar_base(hdev, DRAM_PHYS_BASE + - (MMU_PAGE_TABLES_ADDR & - ~(prop->dram_pci_bar_size - 0x1ull))); + ddr_bar_addr = goya_set_ddr_bar_base(hdev, + ddr_bar_addr); } + if (ddr_bar_addr == U64_MAX) + rc = -EIO; } else { rc = -EFAULT; } @@ -4407,6 +4065,9 @@ static u64 goya_read_pte(struct hl_device *hdev, u64 addr) { struct goya_device *goya = hdev->asic_specific; + if (hdev->hard_reset_pending) + return U64_MAX; + return readq(hdev->pcie_bar[DDR_BAR_ID] + (addr - goya->ddr_bar_cur_addr)); } @@ -4415,6 +4076,9 @@ static void goya_write_pte(struct hl_device *hdev, u64 addr, u64 val) { struct goya_device *goya = hdev->asic_specific; + if (hdev->hard_reset_pending) + return; + writeq(val, hdev->pcie_bar[DDR_BAR_ID] + (addr - goya->ddr_bar_cur_addr)); } @@ -4604,8 +4268,8 @@ static int goya_unmask_irq_arr(struct hl_device *hdev, u32 *irq_arr, pkt->armcp_pkt.ctl = cpu_to_le32(ARMCP_PACKET_UNMASK_RAZWI_IRQ_ARRAY << ARMCP_PKT_CTL_OPCODE_SHIFT); - rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) pkt, - total_pkt_size, HL_DEVICE_TIMEOUT_USEC, &result); + rc = goya_send_cpu_message(hdev, (u32 *) pkt, total_pkt_size, + HL_DEVICE_TIMEOUT_USEC, &result); if (rc) dev_err(hdev->dev, "failed to unmask IRQ array\n"); @@ -4621,8 +4285,8 @@ 
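/*
 * Aside: the ArmCP packets sent above (goya_unmask_irq_arr(),
 * goya_unmask_irq()) all encode their opcode into the 32-bit "ctl"
 * word before the little-endian conversion, and the firmware hands a
 * return code back in the same word. A sketch of that pack/unpack
 * arithmetic; the shift and mask values are illustrative assumptions,
 * the real ones live in the driver's ArmCP interface header.
 */
#include <stdint.h>
#include <stdio.h>

#define PKT_CTL_OPCODE_SHIFT	16		/* assumed */
#define PKT_CTL_OPCODE_MASK	0xFFFF0000u	/* assumed */
#define PKT_CTL_RC_MASK		0x0000FFFFu	/* assumed */

int main(void)
{
	uint32_t ctl = 0x2Au << PKT_CTL_OPCODE_SHIFT; /* hypothetical opcode 42 */

	ctl |= 5;	/* firmware writes its return code into the low bits */

	printf("opcode %u, rc %u\n",
	       (ctl & PKT_CTL_OPCODE_MASK) >> PKT_CTL_OPCODE_SHIFT,
	       ctl & PKT_CTL_RC_MASK);		/* prints: opcode 42, rc 5 */
	return 0;
}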
static int goya_soft_reset_late_init(struct hl_device *hdev) * Unmask all IRQs since some could have been received * during the soft reset */ - return goya_unmask_irq_arr(hdev, goya_non_fatal_events, - sizeof(goya_non_fatal_events)); + return goya_unmask_irq_arr(hdev, goya_all_events, + sizeof(goya_all_events)); } static int goya_unmask_irq(struct hl_device *hdev, u16 event_type) @@ -4637,7 +4301,7 @@ static int goya_unmask_irq(struct hl_device *hdev, u16 event_type) ARMCP_PKT_CTL_OPCODE_SHIFT); pkt.value = cpu_to_le64(event_type); - rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), + rc = goya_send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), HL_DEVICE_TIMEOUT_USEC, &result); if (rc) @@ -4758,7 +4422,6 @@ static int goya_memset_device_memory(struct hl_device *hdev, u64 addr, u32 size, u64 val, bool is_dram) { struct packet_lin_dma *lin_dma_pkt; - struct hl_cs_parser parser; struct hl_cs_job *job; u32 cb_size, ctl; struct hl_cb *cb; @@ -4798,36 +4461,16 @@ static int goya_memset_device_memory(struct hl_device *hdev, u64 addr, u32 size, job->user_cb->cs_cnt++; job->user_cb_size = cb_size; job->hw_queue_id = GOYA_QUEUE_ID_DMA_0; + job->patched_cb = job->user_cb; + job->job_cb_size = job->user_cb_size + + sizeof(struct packet_msg_prot) * 2; hl_debugfs_add_job(hdev, job); - parser.ctx_id = HL_KERNEL_ASID_ID; - parser.cs_sequence = 0; - parser.job_id = job->id; - parser.hw_queue_id = job->hw_queue_id; - parser.job_userptr_list = &job->userptr_list; - parser.user_cb = job->user_cb; - parser.user_cb_size = job->user_cb_size; - parser.ext_queue = job->ext_queue; - parser.use_virt_addr = hdev->mmu_enable; - - rc = hdev->asic_funcs->cs_parser(hdev, &parser); - if (rc) { - dev_err(hdev->dev, "Failed to parse kernel CB\n"); - goto free_job; - } - - job->patched_cb = parser.patched_cb; - job->job_cb_size = parser.patched_cb_size; - job->patched_cb->cs_cnt++; - rc = goya_send_job_on_qman0(hdev, job); - job->patched_cb->cs_cnt--; hl_cb_put(job->patched_cb); -free_job: - hl_userptr_delete_list(hdev, &job->userptr_list); hl_debugfs_remove_job(hdev, job); kfree(job); cb->cs_cnt--; @@ -4839,7 +4482,7 @@ release_cb: return rc; } -static int goya_context_switch(struct hl_device *hdev, u32 asid) +int goya_context_switch(struct hl_device *hdev, u32 asid) { struct asic_fixed_properties *prop = &hdev->asic_prop; u64 addr = prop->sram_base_address; @@ -4853,12 +4496,13 @@ static int goya_context_switch(struct hl_device *hdev, u32 asid) return rc; } + WREG32(mmTPC_PLL_CLK_RLX_0, 0x200020); goya_mmu_prepare(hdev, asid); return 0; } -static int goya_mmu_clear_pgt_range(struct hl_device *hdev) +int goya_mmu_clear_pgt_range(struct hl_device *hdev) { struct asic_fixed_properties *prop = &hdev->asic_prop; struct goya_device *goya = hdev->asic_specific; @@ -4872,7 +4516,7 @@ static int goya_mmu_clear_pgt_range(struct hl_device *hdev) return goya_memset_device_memory(hdev, addr, size, 0, true); } -static int goya_mmu_set_dram_default_page(struct hl_device *hdev) +int goya_mmu_set_dram_default_page(struct hl_device *hdev) { struct goya_device *goya = hdev->asic_specific; u64 addr = hdev->asic_prop.mmu_dram_default_page_addr; @@ -4885,7 +4529,7 @@ static int goya_mmu_set_dram_default_page(struct hl_device *hdev) return goya_memset_device_memory(hdev, addr, size, val, true); } -static void goya_mmu_prepare(struct hl_device *hdev, u32 asid) +void goya_mmu_prepare(struct hl_device *hdev, u32 asid) { struct goya_device *goya = hdev->asic_specific; int i; @@ -4899,10 +4543,8 @@ static void 
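Because the CB in goya_memset_device_memory() is built by the driver itself, the patch drops the whole hl_cs_parser round-trip and marks the buffer as already patched; the job is merely sized to leave room for the two MSG_PROT completion packets the submission path appends. Reduced to the arithmetic (the packet size below is a stand-in for sizeof(struct packet_msg_prot), not the real value):

#include <stdint.h>

#define MSG_PROT_PKT_SIZE 16u   /* illustrative stand-in */

static uint32_t patched_job_size(uint32_t user_cb_size)
{
        return user_cb_size + 2u * MSG_PROT_PKT_SIZE;
}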
goya_mmu_prepare(struct hl_device *hdev, u32 asid) } /* zero the MMBP and ASID bits and then set the ASID */ - for (i = 0 ; i < GOYA_MMU_REGS_NUM ; i++) { - WREG32_AND(goya_mmu_regs[i], ~0x7FF); - WREG32_OR(goya_mmu_regs[i], asid); - } + for (i = 0 ; i < GOYA_MMU_REGS_NUM ; i++) + goya_mmu_prepare_reg(hdev, goya_mmu_regs[i], asid); } static void goya_mmu_invalidate_cache(struct hl_device *hdev, bool is_hard) @@ -4993,107 +4635,29 @@ static void goya_mmu_invalidate_cache_range(struct hl_device *hdev, "Timeout when waiting for MMU cache invalidation\n"); } -static int goya_mmu_update_asid_hop0_addr(struct hl_device *hdev, u32 asid, - u64 phys_addr) -{ - u32 status, timeout_usec; - int rc; - - if (hdev->pldm) - timeout_usec = GOYA_PLDM_MMU_TIMEOUT_USEC; - else - timeout_usec = MMU_CONFIG_TIMEOUT_USEC; - - WREG32(MMU_HOP0_PA43_12, phys_addr >> MMU_HOP0_PA43_12_SHIFT); - WREG32(MMU_HOP0_PA49_44, phys_addr >> MMU_HOP0_PA49_44_SHIFT); - WREG32(MMU_ASID_BUSY, 0x80000000 | asid); - - rc = hl_poll_timeout( - hdev, - MMU_ASID_BUSY, - status, - !(status & 0x80000000), - 1000, - timeout_usec); - - if (rc) { - dev_err(hdev->dev, - "Timeout during MMU hop0 config of asid %d\n", asid); - return rc; - } - - return 0; -} - int goya_send_heartbeat(struct hl_device *hdev) { struct goya_device *goya = hdev->asic_specific; - struct armcp_packet hb_pkt; - long result; - int rc; if (!(goya->hw_cap_initialized & HW_CAP_CPU_Q)) return 0; - memset(&hb_pkt, 0, sizeof(hb_pkt)); - - hb_pkt.ctl = cpu_to_le32(ARMCP_PACKET_TEST << - ARMCP_PKT_CTL_OPCODE_SHIFT); - hb_pkt.value = cpu_to_le64(ARMCP_PACKET_FENCE_VAL); - - rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &hb_pkt, - sizeof(hb_pkt), HL_DEVICE_TIMEOUT_USEC, &result); - - if ((rc) || (result != ARMCP_PACKET_FENCE_VAL)) - rc = -EIO; - - return rc; + return hl_fw_send_heartbeat(hdev); } -static int goya_armcp_info_get(struct hl_device *hdev) +int goya_armcp_info_get(struct hl_device *hdev) { struct goya_device *goya = hdev->asic_specific; struct asic_fixed_properties *prop = &hdev->asic_prop; - struct armcp_packet pkt; - void *armcp_info_cpu_addr; - dma_addr_t armcp_info_dma_addr; u64 dram_size; - long result; int rc; if (!(goya->hw_cap_initialized & HW_CAP_CPU_Q)) return 0; - armcp_info_cpu_addr = - hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, - sizeof(struct armcp_info), &armcp_info_dma_addr); - if (!armcp_info_cpu_addr) { - dev_err(hdev->dev, - "Failed to allocate DMA memory for ArmCP info packet\n"); - return -ENOMEM; - } - - memset(armcp_info_cpu_addr, 0, sizeof(struct armcp_info)); - - memset(&pkt, 0, sizeof(pkt)); - - pkt.ctl = cpu_to_le32(ARMCP_PACKET_INFO_GET << - ARMCP_PKT_CTL_OPCODE_SHIFT); - pkt.addr = cpu_to_le64(armcp_info_dma_addr + - prop->host_phys_base_address); - pkt.data_max_size = cpu_to_le32(sizeof(struct armcp_info)); - - rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), - GOYA_ARMCP_INFO_TIMEOUT, &result); - - if (rc) { - dev_err(hdev->dev, - "Failed to send armcp info pkt, error %d\n", rc); - goto out; - } - - memcpy(&prop->armcp_info, armcp_info_cpu_addr, - sizeof(prop->armcp_info)); + rc = hl_fw_armcp_info_get(hdev); + if (rc) + return rc; dram_size = le64_to_cpu(prop->armcp_info.dram_size); if (dram_size) { @@ -5109,32 +4673,10 @@ static int goya_armcp_info_get(struct hl_device *hdev) prop->dram_end_address = prop->dram_base_address + dram_size; } - rc = hl_build_hwmon_channel_info(hdev, prop->armcp_info.sensors); - if (rc) { - dev_err(hdev->dev, - "Failed to build hwmon channel info, error %d\n", rc); 
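The open-coded WREG32_AND/WREG32_OR pair above becomes a goya_mmu_prepare_reg() helper, but the operation is the same read-modify-write: clear the MMBP and ASID bits, then install the new ASID. A standalone model — the 0x7FF mask is taken from the removed code, while the exact split between the MMBP bit and the ASID field is an assumption:

#include <stdint.h>

static void mmu_prepare_reg(volatile uint32_t *reg, uint32_t asid)
{
        uint32_t v = *reg;

        v &= ~0x7FFu;   /* zero the MMBP + ASID field */
        v |= asid;      /* assumes asid fits in the cleared bits */
        *reg = v;
}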
- rc = -EFAULT; - goto out; - } - -out: - hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, - sizeof(struct armcp_info), armcp_info_cpu_addr); - - return rc; -} - -static void goya_init_clock_gating(struct hl_device *hdev) -{ - -} - -static void goya_disable_clock_gating(struct hl_device *hdev) -{ - + return 0; } -static bool goya_is_device_idle(struct hl_device *hdev) +static bool goya_is_device_idle(struct hl_device *hdev, char *buf, size_t size) { u64 offset, dma_qm_reg, tpc_qm_reg, tpc_cmdq_reg, tpc_cfg_reg; int i; @@ -5146,7 +4688,7 @@ static bool goya_is_device_idle(struct hl_device *hdev) if ((RREG32(dma_qm_reg) & DMA_QM_IDLE_MASK) != DMA_QM_IDLE_MASK) - return false; + return HL_ENG_BUSY(buf, size, "DMA%d_QM", i); } offset = mmTPC1_QM_GLBL_STS0 - mmTPC0_QM_GLBL_STS0; @@ -5158,31 +4700,31 @@ static bool goya_is_device_idle(struct hl_device *hdev) if ((RREG32(tpc_qm_reg) & TPC_QM_IDLE_MASK) != TPC_QM_IDLE_MASK) - return false; + return HL_ENG_BUSY(buf, size, "TPC%d_QM", i); if ((RREG32(tpc_cmdq_reg) & TPC_CMDQ_IDLE_MASK) != TPC_CMDQ_IDLE_MASK) - return false; + return HL_ENG_BUSY(buf, size, "TPC%d_CMDQ", i); if ((RREG32(tpc_cfg_reg) & TPC_CFG_IDLE_MASK) != TPC_CFG_IDLE_MASK) - return false; + return HL_ENG_BUSY(buf, size, "TPC%d_CFG", i); } if ((RREG32(mmMME_QM_GLBL_STS0) & MME_QM_IDLE_MASK) != MME_QM_IDLE_MASK) - return false; + return HL_ENG_BUSY(buf, size, "MME_QM"); if ((RREG32(mmMME_CMDQ_GLBL_STS0) & MME_CMDQ_IDLE_MASK) != MME_CMDQ_IDLE_MASK) - return false; + return HL_ENG_BUSY(buf, size, "MME_CMDQ"); if ((RREG32(mmMME_ARCH_STATUS) & MME_ARCH_IDLE_MASK) != MME_ARCH_IDLE_MASK) - return false; + return HL_ENG_BUSY(buf, size, "MME_ARCH"); if (RREG32(mmMME_SHADOW_0_STATUS) & MME_SHADOW_IDLE_MASK) - return false; + return HL_ENG_BUSY(buf, size, "MME"); return true; } @@ -5210,52 +4752,11 @@ static int goya_get_eeprom_data(struct hl_device *hdev, void *data, size_t max_size) { struct goya_device *goya = hdev->asic_specific; - struct asic_fixed_properties *prop = &hdev->asic_prop; - struct armcp_packet pkt; - void *eeprom_info_cpu_addr; - dma_addr_t eeprom_info_dma_addr; - long result; - int rc; if (!(goya->hw_cap_initialized & HW_CAP_CPU_Q)) return 0; - eeprom_info_cpu_addr = - hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, - max_size, &eeprom_info_dma_addr); - if (!eeprom_info_cpu_addr) { - dev_err(hdev->dev, - "Failed to allocate DMA memory for EEPROM info packet\n"); - return -ENOMEM; - } - - memset(eeprom_info_cpu_addr, 0, max_size); - - memset(&pkt, 0, sizeof(pkt)); - - pkt.ctl = cpu_to_le32(ARMCP_PACKET_EEPROM_DATA_GET << - ARMCP_PKT_CTL_OPCODE_SHIFT); - pkt.addr = cpu_to_le64(eeprom_info_dma_addr + - prop->host_phys_base_address); - pkt.data_max_size = cpu_to_le32(max_size); - - rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), - GOYA_ARMCP_EEPROM_TIMEOUT, &result); - - if (rc) { - dev_err(hdev->dev, - "Failed to send armcp EEPROM pkt, error %d\n", rc); - goto out; - } - - /* result contains the actual size */ - memcpy(data, eeprom_info_cpu_addr, min((size_t)result, max_size)); - -out: - hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, max_size, - eeprom_info_cpu_addr); - - return rc; + return hl_fw_get_eeprom_data(hdev, data, max_size); } static enum hl_device_hw_state goya_get_hw_state(struct hl_device *hdev) @@ -5278,12 +4779,12 @@ static const struct hl_asic_funcs goya_funcs = { .cb_mmap = goya_cb_mmap, .ring_doorbell = goya_ring_doorbell, .flush_pq_write = goya_flush_pq_write, - .dma_alloc_coherent = goya_dma_alloc_coherent, - 
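HL_ENG_BUSY, used throughout the reworked goya_is_device_idle() above and defined later in habanalabs.h in this same diff, is a GNU C statement expression: it optionally records which engine failed the idle check into the caller's buffer and then evaluates to false, so each check can return it directly. A compilable model (GNU extensions: statement expressions and ##__VA_ARGS__):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ENG_BUSY(buf, size, fmt, ...) ({                        \
        if (buf)                                                \
                snprintf(buf, size, fmt, ##__VA_ARGS__);        \
        false;                                                  \
})

static bool device_idle(uint32_t qm_status, char *buf, size_t size)
{
        if (qm_status)          /* engine still has work queued */
                return ENG_BUSY(buf, size, "DMA%d_QM", 0);

        return true;
}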
.dma_free_coherent = goya_dma_free_coherent, + .asic_dma_alloc_coherent = goya_dma_alloc_coherent, + .asic_dma_free_coherent = goya_dma_free_coherent, .get_int_queue_base = goya_get_int_queue_base, .test_queues = goya_test_queues, - .dma_pool_zalloc = goya_dma_pool_zalloc, - .dma_pool_free = goya_dma_pool_free, + .asic_dma_pool_zalloc = goya_dma_pool_zalloc, + .asic_dma_pool_free = goya_dma_pool_free, .cpu_accessible_dma_pool_alloc = goya_cpu_accessible_dma_pool_alloc, .cpu_accessible_dma_pool_free = goya_cpu_accessible_dma_pool_free, .hl_dma_unmap_sg = goya_dma_unmap_sg, @@ -5305,8 +4806,7 @@ static const struct hl_asic_funcs goya_funcs = { .mmu_invalidate_cache = goya_mmu_invalidate_cache, .mmu_invalidate_cache_range = goya_mmu_invalidate_cache_range, .send_heartbeat = goya_send_heartbeat, - .enable_clock_gating = goya_init_clock_gating, - .disable_clock_gating = goya_disable_clock_gating, + .debug_coresight = goya_debug_coresight, .is_device_idle = goya_is_device_idle, .soft_reset_late_init = goya_soft_reset_late_init, .hw_queues_lock = goya_hw_queues_lock, @@ -5314,7 +4814,12 @@ static const struct hl_asic_funcs goya_funcs = { .get_pci_id = goya_get_pci_id, .get_eeprom_data = goya_get_eeprom_data, .send_cpu_message = goya_send_cpu_message, - .get_hw_state = goya_get_hw_state + .get_hw_state = goya_get_hw_state, + .pci_bars_map = goya_pci_bars_map, + .set_dram_bar_base = goya_set_ddr_bar_base, + .init_iatu = goya_init_iatu, + .rreg = hl_rreg, + .wreg = hl_wreg }; /* diff --git a/drivers/misc/habanalabs/goya/goyaP.h b/drivers/misc/habanalabs/goya/goyaP.h index 830551b6b062..14e216cb3668 100644 --- a/drivers/misc/habanalabs/goya/goyaP.h +++ b/drivers/misc/habanalabs/goya/goyaP.h @@ -39,9 +39,13 @@ #error "Number of MSIX interrupts must be smaller or equal to GOYA_MSIX_ENTRIES" #endif -#define QMAN_FENCE_TIMEOUT_USEC 10000 /* 10 ms */ +#define QMAN_FENCE_TIMEOUT_USEC 10000 /* 10 ms */ -#define QMAN_STOP_TIMEOUT_USEC 100000 /* 100 ms */ +#define QMAN_STOP_TIMEOUT_USEC 100000 /* 100 ms */ + +#define CORESIGHT_TIMEOUT_USEC 100000 /* 100 ms */ + +#define GOYA_CPU_TIMEOUT_USEC 10000000 /* 10s */ #define TPC_ENABLED_MASK 0xFF @@ -49,19 +53,14 @@ #define MAX_POWER_DEFAULT 200000 /* 200W */ -#define GOYA_ARMCP_INFO_TIMEOUT 10000000 /* 10s */ -#define GOYA_ARMCP_EEPROM_TIMEOUT 10000000 /* 10s */ - #define DRAM_PHYS_DEFAULT_SIZE 0x100000000ull /* 4GB */ /* DRAM Memory Map */ #define CPU_FW_IMAGE_SIZE 0x10000000 /* 256MB */ -#define MMU_PAGE_TABLES_SIZE 0x0DE00000 /* 222MB */ +#define MMU_PAGE_TABLES_SIZE 0x0FC00000 /* 252MB */ #define MMU_DRAM_DEFAULT_PAGE_SIZE 0x00200000 /* 2MB */ #define MMU_CACHE_MNG_SIZE 0x00001000 /* 4KB */ -#define CPU_PQ_PKT_SIZE 0x00001000 /* 4KB */ -#define CPU_PQ_DATA_SIZE 0x01FFE000 /* 32MB - 8KB */ #define CPU_FW_IMAGE_ADDR DRAM_PHYS_BASE #define MMU_PAGE_TABLES_ADDR (CPU_FW_IMAGE_ADDR + CPU_FW_IMAGE_SIZE) @@ -69,13 +68,13 @@ MMU_PAGE_TABLES_SIZE) #define MMU_CACHE_MNG_ADDR (MMU_DRAM_DEFAULT_PAGE_ADDR + \ MMU_DRAM_DEFAULT_PAGE_SIZE) -#define CPU_PQ_PKT_ADDR (MMU_CACHE_MNG_ADDR + \ +#define DRAM_KMD_END_ADDR (MMU_CACHE_MNG_ADDR + \ MMU_CACHE_MNG_SIZE) -#define CPU_PQ_DATA_ADDR (CPU_PQ_PKT_ADDR + CPU_PQ_PKT_SIZE) -#define DRAM_BASE_ADDR_USER (CPU_PQ_DATA_ADDR + CPU_PQ_DATA_SIZE) -#if (DRAM_BASE_ADDR_USER != 0x20000000) -#error "KMD must reserve 512MB" +#define DRAM_BASE_ADDR_USER 0x20000000 + +#if (DRAM_KMD_END_ADDR > DRAM_BASE_ADDR_USER) +#error "KMD must reserve no more than 512MB" #endif /* @@ -142,22 +141,12 @@ #define HW_CAP_GOLDEN 0x00000400 #define HW_CAP_TPC 
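The reworked DRAM map in goyaP.h checks itself at build time: instead of pinning the user base to an exactly computed value, it lets the KMD-reserved region float and only fails the build if that region spills past the fixed 512MB boundary. The same pattern in isolation, with the sizes from the hunk above:

#define FW_IMAGE_SIZE   0x10000000      /* 256MB */
#define PGT_SIZE        0x0FC00000      /* 252MB */
#define KMD_END         (FW_IMAGE_SIZE + PGT_SIZE)
#define USER_BASE       0x20000000      /* 512MB */

#if KMD_END > USER_BASE
#error "reserved KMD region must not spill into user DRAM"
#endif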
0x00000800 -#define CPU_PKT_SHIFT 5 -#define CPU_PKT_SIZE (1 << CPU_PKT_SHIFT) -#define CPU_PKT_MASK (~((1 << CPU_PKT_SHIFT) - 1)) -#define CPU_MAX_PKTS_IN_CB 32 -#define CPU_CB_SIZE (CPU_PKT_SIZE * CPU_MAX_PKTS_IN_CB) -#define CPU_ACCESSIBLE_MEM_SIZE (HL_QUEUE_LENGTH * CPU_CB_SIZE) - enum goya_fw_component { FW_COMP_UBOOT, FW_COMP_PREBOOT }; struct goya_device { - int (*test_cpu_queue)(struct hl_device *hdev); - int (*armcp_info_get)(struct hl_device *hdev); - /* TODO: remove hw_queues_lock after moving to scheduler code */ spinlock_t hw_queues_lock; @@ -170,13 +159,34 @@ struct goya_device { u32 hw_cap_initialized; }; +void goya_get_fixed_properties(struct hl_device *hdev); +int goya_mmu_init(struct hl_device *hdev); +void goya_init_dma_qmans(struct hl_device *hdev); +void goya_init_mme_qmans(struct hl_device *hdev); +void goya_init_tpc_qmans(struct hl_device *hdev); +int goya_init_cpu_queues(struct hl_device *hdev); +void goya_init_security(struct hl_device *hdev); +int goya_late_init(struct hl_device *hdev); +void goya_late_fini(struct hl_device *hdev); + +void goya_ring_doorbell(struct hl_device *hdev, u32 hw_queue_id, u32 pi); +void goya_flush_pq_write(struct hl_device *hdev, u64 *pq, u64 exp_val); +void goya_update_eq_ci(struct hl_device *hdev, u32 val); +void goya_restore_phase_topology(struct hl_device *hdev); +int goya_context_switch(struct hl_device *hdev, u32 asid); + int goya_debugfs_i2c_read(struct hl_device *hdev, u8 i2c_bus, u8 i2c_addr, u8 i2c_reg, u32 *val); int goya_debugfs_i2c_write(struct hl_device *hdev, u8 i2c_bus, u8 i2c_addr, u8 i2c_reg, u32 val); +void goya_debugfs_led_set(struct hl_device *hdev, u8 led, u8 state); + +int goya_test_queue(struct hl_device *hdev, u32 hw_queue_id); +int goya_test_queues(struct hl_device *hdev); int goya_test_cpu_queue(struct hl_device *hdev); int goya_send_cpu_message(struct hl_device *hdev, u32 *msg, u16 len, u32 timeout, long *result); + long goya_get_temperature(struct hl_device *hdev, int sensor_index, u32 attr); long goya_get_voltage(struct hl_device *hdev, int sensor_index, u32 attr); long goya_get_current(struct hl_device *hdev, int sensor_index, u32 attr); @@ -184,28 +194,35 @@ long goya_get_fan_speed(struct hl_device *hdev, int sensor_index, u32 attr); long goya_get_pwm_info(struct hl_device *hdev, int sensor_index, u32 attr); void goya_set_pwm_info(struct hl_device *hdev, int sensor_index, u32 attr, long value); -void goya_debugfs_led_set(struct hl_device *hdev, u8 led, u8 state); +u64 goya_get_max_power(struct hl_device *hdev); +void goya_set_max_power(struct hl_device *hdev, u64 value); + void goya_set_pll_profile(struct hl_device *hdev, enum hl_pll_frequency freq); void goya_add_device_attr(struct hl_device *hdev, struct attribute_group *dev_attr_grp); -void goya_init_security(struct hl_device *hdev); -u64 goya_get_max_power(struct hl_device *hdev); -void goya_set_max_power(struct hl_device *hdev, u64 value); +int goya_armcp_info_get(struct hl_device *hdev); +int goya_debug_coresight(struct hl_device *hdev, void *data); + +void goya_mmu_prepare(struct hl_device *hdev, u32 asid); +int goya_mmu_clear_pgt_range(struct hl_device *hdev); +int goya_mmu_set_dram_default_page(struct hl_device *hdev); -int goya_send_pci_access_msg(struct hl_device *hdev, u32 opcode); -void goya_late_fini(struct hl_device *hdev); int goya_suspend(struct hl_device *hdev); int goya_resume(struct hl_device *hdev); -void goya_flush_pq_write(struct hl_device *hdev, u64 *pq, u64 exp_val); + void goya_handle_eqe(struct hl_device *hdev, struct hl_eq_entry 
*eq_entry); void *goya_get_events_stat(struct hl_device *hdev, u32 *size); + void goya_add_end_of_cb_packets(u64 kernel_address, u32 len, u64 cq_addr, u32 cq_val, u32 msix_vec); int goya_cs_parser(struct hl_device *hdev, struct hl_cs_parser *parser); void *goya_get_int_queue_base(struct hl_device *hdev, u32 queue_id, - dma_addr_t *dma_handle, u16 *queue_len); + dma_addr_t *dma_handle, u16 *queue_len); u32 goya_get_dma_desc_list_size(struct hl_device *hdev, struct sg_table *sgt); -int goya_test_queue(struct hl_device *hdev, u32 hw_queue_id); int goya_send_heartbeat(struct hl_device *hdev); +void *goya_cpu_accessible_dma_pool_alloc(struct hl_device *hdev, size_t size, + dma_addr_t *dma_handle); +void goya_cpu_accessible_dma_pool_free(struct hl_device *hdev, size_t size, + void *vaddr); #endif /* GOYAP_H_ */ diff --git a/drivers/misc/habanalabs/goya/goya_coresight.c b/drivers/misc/habanalabs/goya/goya_coresight.c new file mode 100644 index 000000000000..1ac951f52d1e --- /dev/null +++ b/drivers/misc/habanalabs/goya/goya_coresight.c @@ -0,0 +1,628 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright 2016-2019 HabanaLabs, Ltd. + * All Rights Reserved. + */ + +#include "goyaP.h" +#include "include/goya/goya_coresight.h" +#include "include/goya/asic_reg/goya_regs.h" + +#include <uapi/misc/habanalabs.h> + +#include <linux/coresight.h> + +#define GOYA_PLDM_CORESIGHT_TIMEOUT_USEC (CORESIGHT_TIMEOUT_USEC * 100) + +static u64 debug_stm_regs[GOYA_STM_LAST + 1] = { + [GOYA_STM_CPU] = mmCPU_STM_BASE, + [GOYA_STM_DMA_CH_0_CS] = mmDMA_CH_0_CS_STM_BASE, + [GOYA_STM_DMA_CH_1_CS] = mmDMA_CH_1_CS_STM_BASE, + [GOYA_STM_DMA_CH_2_CS] = mmDMA_CH_2_CS_STM_BASE, + [GOYA_STM_DMA_CH_3_CS] = mmDMA_CH_3_CS_STM_BASE, + [GOYA_STM_DMA_CH_4_CS] = mmDMA_CH_4_CS_STM_BASE, + [GOYA_STM_DMA_MACRO_CS] = mmDMA_MACRO_CS_STM_BASE, + [GOYA_STM_MME1_SBA] = mmMME1_SBA_STM_BASE, + [GOYA_STM_MME3_SBB] = mmMME3_SBB_STM_BASE, + [GOYA_STM_MME4_WACS2] = mmMME4_WACS2_STM_BASE, + [GOYA_STM_MME4_WACS] = mmMME4_WACS_STM_BASE, + [GOYA_STM_MMU_CS] = mmMMU_CS_STM_BASE, + [GOYA_STM_PCIE] = mmPCIE_STM_BASE, + [GOYA_STM_PSOC] = mmPSOC_STM_BASE, + [GOYA_STM_TPC0_EML] = mmTPC0_EML_STM_BASE, + [GOYA_STM_TPC1_EML] = mmTPC1_EML_STM_BASE, + [GOYA_STM_TPC2_EML] = mmTPC2_EML_STM_BASE, + [GOYA_STM_TPC3_EML] = mmTPC3_EML_STM_BASE, + [GOYA_STM_TPC4_EML] = mmTPC4_EML_STM_BASE, + [GOYA_STM_TPC5_EML] = mmTPC5_EML_STM_BASE, + [GOYA_STM_TPC6_EML] = mmTPC6_EML_STM_BASE, + [GOYA_STM_TPC7_EML] = mmTPC7_EML_STM_BASE +}; + +static u64 debug_etf_regs[GOYA_ETF_LAST + 1] = { + [GOYA_ETF_CPU_0] = mmCPU_ETF_0_BASE, + [GOYA_ETF_CPU_1] = mmCPU_ETF_1_BASE, + [GOYA_ETF_CPU_TRACE] = mmCPU_ETF_TRACE_BASE, + [GOYA_ETF_DMA_CH_0_CS] = mmDMA_CH_0_CS_ETF_BASE, + [GOYA_ETF_DMA_CH_1_CS] = mmDMA_CH_1_CS_ETF_BASE, + [GOYA_ETF_DMA_CH_2_CS] = mmDMA_CH_2_CS_ETF_BASE, + [GOYA_ETF_DMA_CH_3_CS] = mmDMA_CH_3_CS_ETF_BASE, + [GOYA_ETF_DMA_CH_4_CS] = mmDMA_CH_4_CS_ETF_BASE, + [GOYA_ETF_DMA_MACRO_CS] = mmDMA_MACRO_CS_ETF_BASE, + [GOYA_ETF_MME1_SBA] = mmMME1_SBA_ETF_BASE, + [GOYA_ETF_MME3_SBB] = mmMME3_SBB_ETF_BASE, + [GOYA_ETF_MME4_WACS2] = mmMME4_WACS2_ETF_BASE, + [GOYA_ETF_MME4_WACS] = mmMME4_WACS_ETF_BASE, + [GOYA_ETF_MMU_CS] = mmMMU_CS_ETF_BASE, + [GOYA_ETF_PCIE] = mmPCIE_ETF_BASE, + [GOYA_ETF_PSOC] = mmPSOC_ETF_BASE, + [GOYA_ETF_TPC0_EML] = mmTPC0_EML_ETF_BASE, + [GOYA_ETF_TPC1_EML] = mmTPC1_EML_ETF_BASE, + [GOYA_ETF_TPC2_EML] = mmTPC2_EML_ETF_BASE, + [GOYA_ETF_TPC3_EML] = mmTPC3_EML_ETF_BASE, + [GOYA_ETF_TPC4_EML] = mmTPC4_EML_ETF_BASE, + [GOYA_ETF_TPC5_EML] = mmTPC5_EML_ETF_BASE, + 
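The lookup tables that open the new goya_coresight.c (continuing below) use C99 designated initializers keyed by the debug-component enums, so entries may be listed in any order and any index left out reads as zero, which the config code can treat as "no such component". A reduced model with placeholder enum names and addresses:

#include <stdint.h>

enum dbg_stm { STM_CPU, STM_PCIE, STM_LAST = STM_PCIE };

static const uint64_t stm_base[STM_LAST + 1] = {
        [STM_CPU]  = 0x7FFC000000ull,   /* placeholder address */
        [STM_PCIE] = 0x7FFC400000ull,   /* placeholder address */
};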
[GOYA_ETF_TPC6_EML] = mmTPC6_EML_ETF_BASE, + [GOYA_ETF_TPC7_EML] = mmTPC7_EML_ETF_BASE +}; + +static u64 debug_funnel_regs[GOYA_FUNNEL_LAST + 1] = { + [GOYA_FUNNEL_CPU] = mmCPU_FUNNEL_BASE, + [GOYA_FUNNEL_DMA_CH_6_1] = mmDMA_CH_FUNNEL_6_1_BASE, + [GOYA_FUNNEL_DMA_MACRO_3_1] = mmDMA_MACRO_FUNNEL_3_1_BASE, + [GOYA_FUNNEL_MME0_RTR] = mmMME0_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_MME1_RTR] = mmMME1_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_MME2_RTR] = mmMME2_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_MME3_RTR] = mmMME3_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_MME4_RTR] = mmMME4_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_MME5_RTR] = mmMME5_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_PCIE] = mmPCIE_FUNNEL_BASE, + [GOYA_FUNNEL_PSOC] = mmPSOC_FUNNEL_BASE, + [GOYA_FUNNEL_TPC0_EML] = mmTPC0_EML_FUNNEL_BASE, + [GOYA_FUNNEL_TPC1_EML] = mmTPC1_EML_FUNNEL_BASE, + [GOYA_FUNNEL_TPC1_RTR] = mmTPC1_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_TPC2_EML] = mmTPC2_EML_FUNNEL_BASE, + [GOYA_FUNNEL_TPC2_RTR] = mmTPC2_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_TPC3_EML] = mmTPC3_EML_FUNNEL_BASE, + [GOYA_FUNNEL_TPC3_RTR] = mmTPC3_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_TPC4_EML] = mmTPC4_EML_FUNNEL_BASE, + [GOYA_FUNNEL_TPC4_RTR] = mmTPC4_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_TPC5_EML] = mmTPC5_EML_FUNNEL_BASE, + [GOYA_FUNNEL_TPC5_RTR] = mmTPC5_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_TPC6_EML] = mmTPC6_EML_FUNNEL_BASE, + [GOYA_FUNNEL_TPC6_RTR] = mmTPC6_RTR_FUNNEL_BASE, + [GOYA_FUNNEL_TPC7_EML] = mmTPC7_EML_FUNNEL_BASE +}; + +static u64 debug_bmon_regs[GOYA_BMON_LAST + 1] = { + [GOYA_BMON_CPU_RD] = mmCPU_RD_BMON_BASE, + [GOYA_BMON_CPU_WR] = mmCPU_WR_BMON_BASE, + [GOYA_BMON_DMA_CH_0_0] = mmDMA_CH_0_BMON_0_BASE, + [GOYA_BMON_DMA_CH_0_1] = mmDMA_CH_0_BMON_1_BASE, + [GOYA_BMON_DMA_CH_1_0] = mmDMA_CH_1_BMON_0_BASE, + [GOYA_BMON_DMA_CH_1_1] = mmDMA_CH_1_BMON_1_BASE, + [GOYA_BMON_DMA_CH_2_0] = mmDMA_CH_2_BMON_0_BASE, + [GOYA_BMON_DMA_CH_2_1] = mmDMA_CH_2_BMON_1_BASE, + [GOYA_BMON_DMA_CH_3_0] = mmDMA_CH_3_BMON_0_BASE, + [GOYA_BMON_DMA_CH_3_1] = mmDMA_CH_3_BMON_1_BASE, + [GOYA_BMON_DMA_CH_4_0] = mmDMA_CH_4_BMON_0_BASE, + [GOYA_BMON_DMA_CH_4_1] = mmDMA_CH_4_BMON_1_BASE, + [GOYA_BMON_DMA_MACRO_0] = mmDMA_MACRO_BMON_0_BASE, + [GOYA_BMON_DMA_MACRO_1] = mmDMA_MACRO_BMON_1_BASE, + [GOYA_BMON_DMA_MACRO_2] = mmDMA_MACRO_BMON_2_BASE, + [GOYA_BMON_DMA_MACRO_3] = mmDMA_MACRO_BMON_3_BASE, + [GOYA_BMON_DMA_MACRO_4] = mmDMA_MACRO_BMON_4_BASE, + [GOYA_BMON_DMA_MACRO_5] = mmDMA_MACRO_BMON_5_BASE, + [GOYA_BMON_DMA_MACRO_6] = mmDMA_MACRO_BMON_6_BASE, + [GOYA_BMON_DMA_MACRO_7] = mmDMA_MACRO_BMON_7_BASE, + [GOYA_BMON_MME1_SBA_0] = mmMME1_SBA_BMON0_BASE, + [GOYA_BMON_MME1_SBA_1] = mmMME1_SBA_BMON1_BASE, + [GOYA_BMON_MME3_SBB_0] = mmMME3_SBB_BMON0_BASE, + [GOYA_BMON_MME3_SBB_1] = mmMME3_SBB_BMON1_BASE, + [GOYA_BMON_MME4_WACS2_0] = mmMME4_WACS2_BMON0_BASE, + [GOYA_BMON_MME4_WACS2_1] = mmMME4_WACS2_BMON1_BASE, + [GOYA_BMON_MME4_WACS2_2] = mmMME4_WACS2_BMON2_BASE, + [GOYA_BMON_MME4_WACS_0] = mmMME4_WACS_BMON0_BASE, + [GOYA_BMON_MME4_WACS_1] = mmMME4_WACS_BMON1_BASE, + [GOYA_BMON_MME4_WACS_2] = mmMME4_WACS_BMON2_BASE, + [GOYA_BMON_MME4_WACS_3] = mmMME4_WACS_BMON3_BASE, + [GOYA_BMON_MME4_WACS_4] = mmMME4_WACS_BMON4_BASE, + [GOYA_BMON_MME4_WACS_5] = mmMME4_WACS_BMON5_BASE, + [GOYA_BMON_MME4_WACS_6] = mmMME4_WACS_BMON6_BASE, + [GOYA_BMON_MMU_0] = mmMMU_BMON_0_BASE, + [GOYA_BMON_MMU_1] = mmMMU_BMON_1_BASE, + [GOYA_BMON_PCIE_MSTR_RD] = mmPCIE_BMON_MSTR_RD_BASE, + [GOYA_BMON_PCIE_MSTR_WR] = mmPCIE_BMON_MSTR_WR_BASE, + [GOYA_BMON_PCIE_SLV_RD] = mmPCIE_BMON_SLV_RD_BASE, + [GOYA_BMON_PCIE_SLV_WR] = mmPCIE_BMON_SLV_WR_BASE, + [GOYA_BMON_TPC0_EML_0] = 
mmTPC0_EML_BUSMON_0_BASE, + [GOYA_BMON_TPC0_EML_1] = mmTPC0_EML_BUSMON_1_BASE, + [GOYA_BMON_TPC0_EML_2] = mmTPC0_EML_BUSMON_2_BASE, + [GOYA_BMON_TPC0_EML_3] = mmTPC0_EML_BUSMON_3_BASE, + [GOYA_BMON_TPC1_EML_0] = mmTPC1_EML_BUSMON_0_BASE, + [GOYA_BMON_TPC1_EML_1] = mmTPC1_EML_BUSMON_1_BASE, + [GOYA_BMON_TPC1_EML_2] = mmTPC1_EML_BUSMON_2_BASE, + [GOYA_BMON_TPC1_EML_3] = mmTPC1_EML_BUSMON_3_BASE, + [GOYA_BMON_TPC2_EML_0] = mmTPC2_EML_BUSMON_0_BASE, + [GOYA_BMON_TPC2_EML_1] = mmTPC2_EML_BUSMON_1_BASE, + [GOYA_BMON_TPC2_EML_2] = mmTPC2_EML_BUSMON_2_BASE, + [GOYA_BMON_TPC2_EML_3] = mmTPC2_EML_BUSMON_3_BASE, + [GOYA_BMON_TPC3_EML_0] = mmTPC3_EML_BUSMON_0_BASE, + [GOYA_BMON_TPC3_EML_1] = mmTPC3_EML_BUSMON_1_BASE, + [GOYA_BMON_TPC3_EML_2] = mmTPC3_EML_BUSMON_2_BASE, + [GOYA_BMON_TPC3_EML_3] = mmTPC3_EML_BUSMON_3_BASE, + [GOYA_BMON_TPC4_EML_0] = mmTPC4_EML_BUSMON_0_BASE, + [GOYA_BMON_TPC4_EML_1] = mmTPC4_EML_BUSMON_1_BASE, + [GOYA_BMON_TPC4_EML_2] = mmTPC4_EML_BUSMON_2_BASE, + [GOYA_BMON_TPC4_EML_3] = mmTPC4_EML_BUSMON_3_BASE, + [GOYA_BMON_TPC5_EML_0] = mmTPC5_EML_BUSMON_0_BASE, + [GOYA_BMON_TPC5_EML_1] = mmTPC5_EML_BUSMON_1_BASE, + [GOYA_BMON_TPC5_EML_2] = mmTPC5_EML_BUSMON_2_BASE, + [GOYA_BMON_TPC5_EML_3] = mmTPC5_EML_BUSMON_3_BASE, + [GOYA_BMON_TPC6_EML_0] = mmTPC6_EML_BUSMON_0_BASE, + [GOYA_BMON_TPC6_EML_1] = mmTPC6_EML_BUSMON_1_BASE, + [GOYA_BMON_TPC6_EML_2] = mmTPC6_EML_BUSMON_2_BASE, + [GOYA_BMON_TPC6_EML_3] = mmTPC6_EML_BUSMON_3_BASE, + [GOYA_BMON_TPC7_EML_0] = mmTPC7_EML_BUSMON_0_BASE, + [GOYA_BMON_TPC7_EML_1] = mmTPC7_EML_BUSMON_1_BASE, + [GOYA_BMON_TPC7_EML_2] = mmTPC7_EML_BUSMON_2_BASE, + [GOYA_BMON_TPC7_EML_3] = mmTPC7_EML_BUSMON_3_BASE +}; + +static u64 debug_spmu_regs[GOYA_SPMU_LAST + 1] = { + [GOYA_SPMU_DMA_CH_0_CS] = mmDMA_CH_0_CS_SPMU_BASE, + [GOYA_SPMU_DMA_CH_1_CS] = mmDMA_CH_1_CS_SPMU_BASE, + [GOYA_SPMU_DMA_CH_2_CS] = mmDMA_CH_2_CS_SPMU_BASE, + [GOYA_SPMU_DMA_CH_3_CS] = mmDMA_CH_3_CS_SPMU_BASE, + [GOYA_SPMU_DMA_CH_4_CS] = mmDMA_CH_4_CS_SPMU_BASE, + [GOYA_SPMU_DMA_MACRO_CS] = mmDMA_MACRO_CS_SPMU_BASE, + [GOYA_SPMU_MME1_SBA] = mmMME1_SBA_SPMU_BASE, + [GOYA_SPMU_MME3_SBB] = mmMME3_SBB_SPMU_BASE, + [GOYA_SPMU_MME4_WACS2] = mmMME4_WACS2_SPMU_BASE, + [GOYA_SPMU_MME4_WACS] = mmMME4_WACS_SPMU_BASE, + [GOYA_SPMU_MMU_CS] = mmMMU_CS_SPMU_BASE, + [GOYA_SPMU_PCIE] = mmPCIE_SPMU_BASE, + [GOYA_SPMU_TPC0_EML] = mmTPC0_EML_SPMU_BASE, + [GOYA_SPMU_TPC1_EML] = mmTPC1_EML_SPMU_BASE, + [GOYA_SPMU_TPC2_EML] = mmTPC2_EML_SPMU_BASE, + [GOYA_SPMU_TPC3_EML] = mmTPC3_EML_SPMU_BASE, + [GOYA_SPMU_TPC4_EML] = mmTPC4_EML_SPMU_BASE, + [GOYA_SPMU_TPC5_EML] = mmTPC5_EML_SPMU_BASE, + [GOYA_SPMU_TPC6_EML] = mmTPC6_EML_SPMU_BASE, + [GOYA_SPMU_TPC7_EML] = mmTPC7_EML_SPMU_BASE +}; + +static int goya_coresight_timeout(struct hl_device *hdev, u64 addr, + int position, bool up) +{ + int rc; + u32 val, timeout_usec; + + if (hdev->pldm) + timeout_usec = GOYA_PLDM_CORESIGHT_TIMEOUT_USEC; + else + timeout_usec = CORESIGHT_TIMEOUT_USEC; + + rc = hl_poll_timeout( + hdev, + addr, + val, + up ? 
val & BIT(position) : !(val & BIT(position)), + 1000, + timeout_usec); + + if (rc) { + dev_err(hdev->dev, + "Timeout while waiting for coresight, addr: 0x%llx, position: %d, up: %d\n", + addr, position, up); + return -EFAULT; + } + + return 0; +} + +static int goya_config_stm(struct hl_device *hdev, + struct hl_debug_params *params) +{ + struct hl_debug_params_stm *input; + u64 base_reg = debug_stm_regs[params->reg_idx] - CFG_BASE; + int rc; + + WREG32(base_reg + 0xFB0, CORESIGHT_UNLOCK); + + if (params->enable) { + input = params->input; + + if (!input) + return -EINVAL; + + WREG32(base_reg + 0xE80, 0x80004); + WREG32(base_reg + 0xD64, 7); + WREG32(base_reg + 0xD60, 0); + WREG32(base_reg + 0xD00, lower_32_bits(input->he_mask)); + WREG32(base_reg + 0xD20, lower_32_bits(input->sp_mask)); + WREG32(base_reg + 0xD60, 1); + WREG32(base_reg + 0xD00, upper_32_bits(input->he_mask)); + WREG32(base_reg + 0xD20, upper_32_bits(input->sp_mask)); + WREG32(base_reg + 0xE70, 0x10); + WREG32(base_reg + 0xE60, 0); + WREG32(base_reg + 0xE64, 0x420000); + WREG32(base_reg + 0xE00, 0xFFFFFFFF); + WREG32(base_reg + 0xE20, 0xFFFFFFFF); + WREG32(base_reg + 0xEF4, input->id); + WREG32(base_reg + 0xDF4, 0x80); + WREG32(base_reg + 0xE8C, input->frequency); + WREG32(base_reg + 0xE90, 0x7FF); + WREG32(base_reg + 0xE80, 0x7 | (input->id << 16)); + } else { + WREG32(base_reg + 0xE80, 4); + WREG32(base_reg + 0xD64, 0); + WREG32(base_reg + 0xD60, 1); + WREG32(base_reg + 0xD00, 0); + WREG32(base_reg + 0xD20, 0); + WREG32(base_reg + 0xD60, 0); + WREG32(base_reg + 0xE20, 0); + WREG32(base_reg + 0xE00, 0); + WREG32(base_reg + 0xDF4, 0x80); + WREG32(base_reg + 0xE70, 0); + WREG32(base_reg + 0xE60, 0); + WREG32(base_reg + 0xE64, 0); + WREG32(base_reg + 0xE8C, 0); + + rc = goya_coresight_timeout(hdev, base_reg + 0xE80, 23, false); + if (rc) { + dev_err(hdev->dev, + "Failed to disable STM on timeout, error %d\n", + rc); + return rc; + } + + WREG32(base_reg + 0xE80, 4); + } + + return 0; +} + +static int goya_config_etf(struct hl_device *hdev, + struct hl_debug_params *params) +{ + struct hl_debug_params_etf *input; + u64 base_reg = debug_etf_regs[params->reg_idx] - CFG_BASE; + u32 val; + int rc; + + WREG32(base_reg + 0xFB0, CORESIGHT_UNLOCK); + + val = RREG32(base_reg + 0x304); + val |= 0x1000; + WREG32(base_reg + 0x304, val); + val |= 0x40; + WREG32(base_reg + 0x304, val); + + rc = goya_coresight_timeout(hdev, base_reg + 0x304, 6, false); + if (rc) { + dev_err(hdev->dev, + "Failed to %s ETF on timeout, error %d\n", + params->enable ? "enable" : "disable", rc); + return rc; + } + + rc = goya_coresight_timeout(hdev, base_reg + 0xC, 2, true); + if (rc) { + dev_err(hdev->dev, + "Failed to %s ETF on timeout, error %d\n", + params->enable ? 
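goya_coresight_timeout() waits for a single bit to go either up or down, folding the direction into the poll condition with "up ? val & BIT(position) : !(val & BIT(position))". That condition on its own, as a standalone helper:

#include <stdbool.h>
#include <stdint.h>

static bool bit_reached(uint32_t val, unsigned int position, bool up)
{
        bool set = val & (1u << position);

        return up ? set : !set;
}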
"enable" : "disable", rc); + return rc; + } + + WREG32(base_reg + 0x20, 0); + + if (params->enable) { + input = params->input; + + if (!input) + return -EINVAL; + + WREG32(base_reg + 0x34, 0x3FFC); + WREG32(base_reg + 0x28, input->sink_mode); + WREG32(base_reg + 0x304, 0x4001); + WREG32(base_reg + 0x308, 0xA); + WREG32(base_reg + 0x20, 1); + } else { + WREG32(base_reg + 0x34, 0); + WREG32(base_reg + 0x28, 0); + WREG32(base_reg + 0x304, 0); + } + + return 0; +} + +static int goya_etr_validate_address(struct hl_device *hdev, u64 addr, + u32 size) +{ + struct asic_fixed_properties *prop = &hdev->asic_prop; + u64 range_start, range_end; + + if (hdev->mmu_enable) { + range_start = prop->va_space_dram_start_address; + range_end = prop->va_space_dram_end_address; + } else { + range_start = prop->dram_user_base_address; + range_end = prop->dram_end_address; + } + + return hl_mem_area_inside_range(addr, size, range_start, range_end); +} + +static int goya_config_etr(struct hl_device *hdev, + struct hl_debug_params *params) +{ + struct hl_debug_params_etr *input; + u64 base_reg = mmPSOC_ETR_BASE - CFG_BASE; + u32 val; + int rc; + + WREG32(base_reg + 0xFB0, CORESIGHT_UNLOCK); + + val = RREG32(base_reg + 0x304); + val |= 0x1000; + WREG32(base_reg + 0x304, val); + val |= 0x40; + WREG32(base_reg + 0x304, val); + + rc = goya_coresight_timeout(hdev, base_reg + 0x304, 6, false); + if (rc) { + dev_err(hdev->dev, "Failed to %s ETR on timeout, error %d\n", + params->enable ? "enable" : "disable", rc); + return rc; + } + + rc = goya_coresight_timeout(hdev, base_reg + 0xC, 2, true); + if (rc) { + dev_err(hdev->dev, "Failed to %s ETR on timeout, error %d\n", + params->enable ? "enable" : "disable", rc); + return rc; + } + + WREG32(base_reg + 0x20, 0); + + if (params->enable) { + input = params->input; + + if (!input) + return -EINVAL; + + if (input->buffer_size == 0) { + dev_err(hdev->dev, + "ETR buffer size should be bigger than 0\n"); + return -EINVAL; + } + + if (!goya_etr_validate_address(hdev, + input->buffer_address, input->buffer_size)) { + dev_err(hdev->dev, "buffer address is not valid\n"); + return -EINVAL; + } + + WREG32(base_reg + 0x34, 0x3FFC); + WREG32(base_reg + 0x4, input->buffer_size); + WREG32(base_reg + 0x28, input->sink_mode); + WREG32(base_reg + 0x110, 0x700); + WREG32(base_reg + 0x118, + lower_32_bits(input->buffer_address)); + WREG32(base_reg + 0x11C, + upper_32_bits(input->buffer_address)); + WREG32(base_reg + 0x304, 3); + WREG32(base_reg + 0x308, 0xA); + WREG32(base_reg + 0x20, 1); + } else { + WREG32(base_reg + 0x34, 0); + WREG32(base_reg + 0x4, 0x400); + WREG32(base_reg + 0x118, 0); + WREG32(base_reg + 0x11C, 0); + WREG32(base_reg + 0x308, 0); + WREG32(base_reg + 0x28, 0); + WREG32(base_reg + 0x304, 0); + + if (params->output_size >= sizeof(u32)) + *(u32 *) params->output = RREG32(base_reg + 0x18); + } + + return 0; +} + +static int goya_config_funnel(struct hl_device *hdev, + struct hl_debug_params *params) +{ + WREG32(debug_funnel_regs[params->reg_idx] - CFG_BASE + 0xFB0, + CORESIGHT_UNLOCK); + + WREG32(debug_funnel_regs[params->reg_idx] - CFG_BASE, + params->enable ? 
0x33F : 0); + + return 0; +} + +static int goya_config_bmon(struct hl_device *hdev, + struct hl_debug_params *params) +{ + struct hl_debug_params_bmon *input; + u64 base_reg = debug_bmon_regs[params->reg_idx] - CFG_BASE; + u32 pcie_base = 0; + + WREG32(base_reg + 0x104, 1); + + if (params->enable) { + input = params->input; + + if (!input) + return -EINVAL; + + WREG32(base_reg + 0x200, lower_32_bits(input->start_addr0)); + WREG32(base_reg + 0x204, upper_32_bits(input->start_addr0)); + WREG32(base_reg + 0x208, lower_32_bits(input->addr_mask0)); + WREG32(base_reg + 0x20C, upper_32_bits(input->addr_mask0)); + WREG32(base_reg + 0x240, lower_32_bits(input->start_addr1)); + WREG32(base_reg + 0x244, upper_32_bits(input->start_addr1)); + WREG32(base_reg + 0x248, lower_32_bits(input->addr_mask1)); + WREG32(base_reg + 0x24C, upper_32_bits(input->addr_mask1)); + WREG32(base_reg + 0x224, 0); + WREG32(base_reg + 0x234, 0); + WREG32(base_reg + 0x30C, input->bw_win); + WREG32(base_reg + 0x308, input->win_capture); + + /* PCIE IF BMON bug WA */ + if (params->reg_idx != GOYA_BMON_PCIE_MSTR_RD && + params->reg_idx != GOYA_BMON_PCIE_MSTR_WR && + params->reg_idx != GOYA_BMON_PCIE_SLV_RD && + params->reg_idx != GOYA_BMON_PCIE_SLV_WR) + pcie_base = 0xA000000; + + WREG32(base_reg + 0x700, pcie_base | 0xB00 | (input->id << 12)); + WREG32(base_reg + 0x708, pcie_base | 0xA00 | (input->id << 12)); + WREG32(base_reg + 0x70C, pcie_base | 0xC00 | (input->id << 12)); + + WREG32(base_reg + 0x100, 0x11); + WREG32(base_reg + 0x304, 0x1); + } else { + WREG32(base_reg + 0x200, 0); + WREG32(base_reg + 0x204, 0); + WREG32(base_reg + 0x208, 0xFFFFFFFF); + WREG32(base_reg + 0x20C, 0xFFFFFFFF); + WREG32(base_reg + 0x240, 0); + WREG32(base_reg + 0x244, 0); + WREG32(base_reg + 0x248, 0xFFFFFFFF); + WREG32(base_reg + 0x24C, 0xFFFFFFFF); + WREG32(base_reg + 0x224, 0xFFFFFFFF); + WREG32(base_reg + 0x234, 0x1070F); + WREG32(base_reg + 0x30C, 0); + WREG32(base_reg + 0x308, 0xFFFF); + WREG32(base_reg + 0x700, 0xA000B00); + WREG32(base_reg + 0x708, 0xA000A00); + WREG32(base_reg + 0x70C, 0xA000C00); + WREG32(base_reg + 0x100, 1); + WREG32(base_reg + 0x304, 0); + WREG32(base_reg + 0x104, 0); + } + + return 0; +} + +static int goya_config_spmu(struct hl_device *hdev, + struct hl_debug_params *params) +{ + u64 base_reg = debug_spmu_regs[params->reg_idx] - CFG_BASE; + struct hl_debug_params_spmu *input = params->input; + u64 *output; + u32 output_arr_len; + u32 events_num; + u32 overflow_idx; + u32 cycle_cnt_idx; + int i; + + if (params->enable) { + input = params->input; + + if (!input) + return -EINVAL; + + if (input->event_types_num < 3) { + dev_err(hdev->dev, + "not enough values for SPMU enable\n"); + return -EINVAL; + } + + WREG32(base_reg + 0xE04, 0x41013046); + WREG32(base_reg + 0xE04, 0x41013040); + + for (i = 0 ; i < input->event_types_num ; i++) + WREG32(base_reg + 0x400 + i * 4, input->event_types[i]); + + WREG32(base_reg + 0xE04, 0x41013041); + WREG32(base_reg + 0xC00, 0x8000003F); + } else { + output = params->output; + output_arr_len = params->output_size / 8; + events_num = output_arr_len - 2; + overflow_idx = output_arr_len - 2; + cycle_cnt_idx = output_arr_len - 1; + + if (!output) + return -EINVAL; + + if (output_arr_len < 3) { + dev_err(hdev->dev, + "not enough values for SPMU disable\n"); + return -EINVAL; + } + + WREG32(base_reg + 0xE04, 0x41013040); + + for (i = 0 ; i < events_num ; i++) + output[i] = RREG32(base_reg + i * 8); + + output[overflow_idx] = RREG32(base_reg + 0xCC0); + + output[cycle_cnt_idx] = 
RREG32(base_reg + 0xFC); + output[cycle_cnt_idx] <<= 32; + output[cycle_cnt_idx] |= RREG32(base_reg + 0xF8); + + WREG32(base_reg + 0xCC0, 0); + } + + return 0; +} + +static int goya_config_timestamp(struct hl_device *hdev, + struct hl_debug_params *params) +{ + WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 0); + if (params->enable) { + WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE + 0xC, 0); + WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE + 0x8, 0); + WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 1); + } + + return 0; +} + +int goya_debug_coresight(struct hl_device *hdev, void *data) +{ + struct hl_debug_params *params = data; + u32 val; + int rc; + + switch (params->op) { + case HL_DEBUG_OP_STM: + rc = goya_config_stm(hdev, params); + break; + case HL_DEBUG_OP_ETF: + rc = goya_config_etf(hdev, params); + break; + case HL_DEBUG_OP_ETR: + rc = goya_config_etr(hdev, params); + break; + case HL_DEBUG_OP_FUNNEL: + rc = goya_config_funnel(hdev, params); + break; + case HL_DEBUG_OP_BMON: + rc = goya_config_bmon(hdev, params); + break; + case HL_DEBUG_OP_SPMU: + rc = goya_config_spmu(hdev, params); + break; + case HL_DEBUG_OP_TIMESTAMP: + rc = goya_config_timestamp(hdev, params); + break; + + default: + dev_err(hdev->dev, "Unknown coresight id %d\n", params->op); + return -EINVAL; + } + + /* Perform read from the device to flush all configuration */ + val = RREG32(mmPCIE_DBI_DEVICE_ID_VENDOR_ID_REG); + + return rc; +} diff --git a/drivers/misc/habanalabs/goya/goya_security.c b/drivers/misc/habanalabs/goya/goya_security.c index 575003238401..d95d1b2f860d 100644 --- a/drivers/misc/habanalabs/goya/goya_security.c +++ b/drivers/misc/habanalabs/goya/goya_security.c @@ -6,6 +6,7 @@ */ #include "goyaP.h" +#include "include/goya/asic_reg/goya_regs.h" /* * goya_set_block_as_protected - set the given block as protected @@ -2159,6 +2160,8 @@ static void goya_init_protection_bits(struct hl_device *hdev) * Bits 7-11 represents the word offset inside the 128 bytes. * Bits 2-6 represents the bit location inside the word. 
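The SPMU cycle counter above is 64 bits wide but read as two 32-bit halves, high word first, and stitched together; the preceding 0x41013040 write presumably stops the PMU first, so the two reads cannot tear (an inference, not stated in the patch). The stitch itself:

#include <stdint.h>

static uint64_t stitch_u64(uint32_t hi, uint32_t lo)
{
        return ((uint64_t)hi << 32) | lo;
}

Note also the trailing RREG32 of the PCIe vendor-ID register in goya_debug_coresight(): per the patch's own comment, the read is there to flush the preceding posted configuration writes to the device before the ioctl returns.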
*/ + u32 pb_addr, mask; + u8 word_offset; goya_pb_set_block(hdev, mmPCI_NRTR_BASE); goya_pb_set_block(hdev, mmPCI_RD_REGULATOR_BASE); @@ -2237,6 +2240,14 @@ static void goya_init_protection_bits(struct hl_device *hdev) goya_pb_set_block(hdev, mmPCIE_AUX_BASE); goya_pb_set_block(hdev, mmPCIE_DB_RSV_BASE); goya_pb_set_block(hdev, mmPCIE_PHY_BASE); + goya_pb_set_block(hdev, mmTPC0_NRTR_BASE); + goya_pb_set_block(hdev, mmTPC_PLL_BASE); + + pb_addr = (mmTPC_PLL_CLK_RLX_0 & ~0xFFF) + PROT_BITS_OFFS; + word_offset = ((mmTPC_PLL_CLK_RLX_0 & PROT_BITS_OFFS) >> 7) << 2; + mask = 1 << ((mmTPC_PLL_CLK_RLX_0 & 0x7C) >> 2); + + WREG32(pb_addr + word_offset, mask); goya_init_mme_protection_bits(hdev); @@ -2294,8 +2305,8 @@ void goya_init_security(struct hl_device *hdev) u32 lbw_rng10_base = 0xFCC60000 & DMA_MACRO_LBW_RANGE_BASE_R_MASK; u32 lbw_rng10_mask = 0xFFFE0000 & DMA_MACRO_LBW_RANGE_BASE_R_MASK; - u32 lbw_rng11_base = 0xFCE00000 & DMA_MACRO_LBW_RANGE_BASE_R_MASK; - u32 lbw_rng11_mask = 0xFFFFC000 & DMA_MACRO_LBW_RANGE_BASE_R_MASK; + u32 lbw_rng11_base = 0xFCE02000 & DMA_MACRO_LBW_RANGE_BASE_R_MASK; + u32 lbw_rng11_mask = 0xFFFFE000 & DMA_MACRO_LBW_RANGE_BASE_R_MASK; u32 lbw_rng12_base = 0xFE484000 & DMA_MACRO_LBW_RANGE_BASE_R_MASK; u32 lbw_rng12_mask = 0xFFFFF000 & DMA_MACRO_LBW_RANGE_BASE_R_MASK; diff --git a/drivers/misc/habanalabs/habanalabs.h b/drivers/misc/habanalabs/habanalabs.h index a8ee52c880cd..71243b319920 100644 --- a/drivers/misc/habanalabs/habanalabs.h +++ b/drivers/misc/habanalabs/habanalabs.h @@ -11,8 +11,6 @@ #include "include/armcp_if.h" #include "include/qman_if.h" -#define pr_fmt(fmt) "habanalabs: " fmt - #include <linux/cdev.h> #include <linux/iopoll.h> #include <linux/irqreturn.h> @@ -33,6 +31,9 @@ #define HL_PLL_LOW_JOB_FREQ_USEC 5000000 /* 5 s */ +#define HL_ARMCP_INFO_TIMEOUT_USEC 10000000 /* 10s */ +#define HL_ARMCP_EEPROM_TIMEOUT_USEC 10000000 /* 10s */ + #define HL_MAX_QUEUES 128 #define HL_MAX_JOBS_PER_CS 64 @@ -48,8 +49,9 @@ /** * struct pgt_info - MMU hop page info. - * @node: hash linked-list node for the pgts hash of pgts. - * @addr: physical address of the pgt. + * @node: hash linked-list node for the pgts shadow hash of pgts. + * @phys_addr: physical address of the pgt. + * @shadow_addr: shadow hop in the host. * @ctx: pointer to the owner ctx. * @num_of_ptes: indicates how many ptes are used in the pgt. * @@ -59,10 +61,11 @@ * page, it is freed with its pgt_info structure. */ struct pgt_info { - struct hlist_node node; - u64 addr; - struct hl_ctx *ctx; - int num_of_ptes; + struct hlist_node node; + u64 phys_addr; + u64 shadow_addr; + struct hl_ctx *ctx; + int num_of_ptes; }; struct hl_device; @@ -132,8 +135,6 @@ enum hl_device_hw_state { * @dram_user_base_address: DRAM physical start address for user access. * @dram_size: DRAM total size. * @dram_pci_bar_size: size of PCI bar towards DRAM. - * @host_phys_base_address: base physical address of host memory for - * transactions that the device generates. * @max_power_default: max power of the device after reset * @va_space_host_start_address: base address of virtual memory range for * mapping host memory. @@ -145,6 +146,8 @@ enum hl_device_hw_state { * mapping DRAM memory. * @dram_size_for_default_page_mapping: DRAM size needed to map to avoid page * fault. + * @pcie_dbi_base_address: Base address of the PCIE_DBI block. + * @pcie_aux_dbi_reg_addr: Address of the PCIE_AUX DBI register. * @mmu_pgt_addr: base physical address in DRAM of MMU page tables. * @mmu_dram_default_page_addr: DRAM default page physical address. 
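The comment above spells out the encoding the new protection hunk relies on: bits 7-11 of a register address select the protection word inside its 128-byte group, and bits 2-6 select the bit inside that word. Packaged as a helper — PROT_OFFS stands in for PROT_BITS_OFFS, assumed here to be the bits-7-11 mask 0xF80:

#include <stdint.h>

#define PROT_OFFS 0xF80u        /* assumed value of PROT_BITS_OFFS */

static void prot_bit_for_reg(uint32_t reg_addr, uint32_t *pb_addr,
                             uint32_t *word_offset, uint32_t *mask)
{
        *pb_addr = (reg_addr & ~0xFFFu) + PROT_OFFS;       /* block's PB area */
        *word_offset = ((reg_addr & PROT_OFFS) >> 7) << 2; /* word index * 4 */
        *mask = 1u << ((reg_addr & 0x7Cu) >> 2);           /* bit in the word */
}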
* @mmu_pgt_size: MMU page tables total size. @@ -179,13 +182,14 @@ struct asic_fixed_properties { u64 dram_user_base_address; u64 dram_size; u64 dram_pci_bar_size; - u64 host_phys_base_address; u64 max_power_default; u64 va_space_host_start_address; u64 va_space_host_end_address; u64 va_space_dram_start_address; u64 va_space_dram_end_address; u64 dram_size_for_default_page_mapping; + u64 pcie_dbi_base_address; + u64 pcie_aux_dbi_reg_addr; u64 mmu_pgt_addr; u64 mmu_dram_default_page_addr; u32 mmu_pgt_size; @@ -314,6 +318,18 @@ struct hl_cs_job; #define HL_EQ_LENGTH 64 #define HL_EQ_SIZE_IN_BYTES (HL_EQ_LENGTH * HL_EQ_ENTRY_SIZE) +#define HL_CPU_PKT_SHIFT 5 +#define HL_CPU_PKT_SIZE (1 << HL_CPU_PKT_SHIFT) +#define HL_CPU_PKT_MASK (~((1 << HL_CPU_PKT_SHIFT) - 1)) +#define HL_CPU_MAX_PKTS_IN_CB 32 +#define HL_CPU_CB_SIZE (HL_CPU_PKT_SIZE * \ + HL_CPU_MAX_PKTS_IN_CB) +#define HL_CPU_CB_QUEUE_SIZE (HL_QUEUE_LENGTH * HL_CPU_CB_SIZE) + +/* KMD <-> ArmCP shared memory size (EQ + PQ + CPU CB queue) */ +#define HL_CPU_ACCESSIBLE_MEM_SIZE (HL_EQ_SIZE_IN_BYTES + \ + HL_QUEUE_SIZE_IN_BYTES + \ + HL_CPU_CB_QUEUE_SIZE) /** * struct hl_hw_queue - describes a H/W transport queue. @@ -381,14 +397,12 @@ struct hl_eq { /** * enum hl_asic_type - supported ASIC types. - * @ASIC_AUTO_DETECT: ASIC type will be automatically set. - * @ASIC_GOYA: Goya device. * @ASIC_INVALID: Invalid ASIC type. + * @ASIC_GOYA: Goya device. */ enum hl_asic_type { - ASIC_AUTO_DETECT, - ASIC_GOYA, - ASIC_INVALID + ASIC_INVALID, + ASIC_GOYA }; struct hl_cs_parser; @@ -436,19 +450,19 @@ enum hl_pll_frequency { * @cb_mmap: maps a CB. * @ring_doorbell: increment PI on a given QMAN. * @flush_pq_write: flush PQ entry write if necessary, WARN if flushing failed. - * @dma_alloc_coherent: Allocate coherent DMA memory by calling - * dma_alloc_coherent(). This is ASIC function because its - * implementation is not trivial when the driver is loaded - * in simulation mode (not upstreamed). - * @dma_free_coherent: Free coherent DMA memory by calling dma_free_coherent(). - * This is ASIC function because its implementation is not - * trivial when the driver is loaded in simulation mode - * (not upstreamed). + * @asic_dma_alloc_coherent: Allocate coherent DMA memory by calling + * dma_alloc_coherent(). This is ASIC function because + * its implementation is not trivial when the driver + * is loaded in simulation mode (not upstreamed). + * @asic_dma_free_coherent: Free coherent DMA memory by calling + * dma_free_coherent(). This is ASIC function because + * its implementation is not trivial when the driver + * is loaded in simulation mode (not upstreamed). * @get_int_queue_base: get the internal queue base address. * @test_queues: run simple test on all queues for sanity check. - * @dma_pool_zalloc: small DMA allocation of coherent memory from DMA pool. - * size of allocation is HL_DMA_POOL_BLK_SIZE. - * @dma_pool_free: free small DMA allocation from pool. + * @asic_dma_pool_zalloc: small DMA allocation of coherent memory from DMA pool. + * size of allocation is HL_DMA_POOL_BLK_SIZE. + * @asic_dma_pool_free: free small DMA allocation from pool. * @cpu_accessible_dma_pool_alloc: allocate CPU PQ packet from DMA pool. * @cpu_accessible_dma_pool_free: free CPU PQ packet from DMA pool. * @hl_dma_unmap_sg: DMA unmap scatter-gather list. @@ -472,8 +486,7 @@ enum hl_pll_frequency { * @mmu_invalidate_cache_range: flush specific MMU STLB cache lines with * ASID-VA-size mask. * @send_heartbeat: send is-alive packet to ArmCP and verify response. 
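The hl_asic_type reorder above has a practical effect beyond dropping ASIC_AUTO_DETECT: with the invalid value now 0, zero-initialized memory reads as "no ASIC" rather than as auto-detect, and detection is keyed off the presence of a pdev instead (see the create_hdev() hunk further down):

enum asic_type {
        ASIC_INVALID,   /* 0: what a zeroed struct reports */
        ASIC_GOYA,
};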
- * @enable_clock_gating: enable clock gating for reducing power consumption. - * @disable_clock_gating: disable clock for accessing registers on HBW. + * @debug_coresight: perform certain actions on Coresight for debugging. * @is_device_idle: return true if device is idle, false otherwise. * @soft_reset_late_init: perform certain actions needed after soft reset. * @hw_queues_lock: acquire H/W queues lock. @@ -482,6 +495,12 @@ enum hl_pll_frequency { * @get_eeprom_data: retrieve EEPROM data from F/W. * @send_cpu_message: send buffer to ArmCP. * @get_hw_state: retrieve the H/W state + * @pci_bars_map: Map PCI BARs. + * @set_dram_bar_base: Set DRAM BAR to map specific device address. Returns + * old address the bar pointed to or U64_MAX for failure + * @init_iatu: Initialize the iATU unit inside the PCI controller. + * @rreg: Read a register. Needed for simulator support. + * @wreg: Write a register. Needed for simulator support. */ struct hl_asic_funcs { int (*early_init)(struct hl_device *hdev); @@ -499,27 +518,27 @@ struct hl_asic_funcs { u64 kaddress, phys_addr_t paddress, u32 size); void (*ring_doorbell)(struct hl_device *hdev, u32 hw_queue_id, u32 pi); void (*flush_pq_write)(struct hl_device *hdev, u64 *pq, u64 exp_val); - void* (*dma_alloc_coherent)(struct hl_device *hdev, size_t size, + void* (*asic_dma_alloc_coherent)(struct hl_device *hdev, size_t size, dma_addr_t *dma_handle, gfp_t flag); - void (*dma_free_coherent)(struct hl_device *hdev, size_t size, + void (*asic_dma_free_coherent)(struct hl_device *hdev, size_t size, void *cpu_addr, dma_addr_t dma_handle); void* (*get_int_queue_base)(struct hl_device *hdev, u32 queue_id, dma_addr_t *dma_handle, u16 *queue_len); int (*test_queues)(struct hl_device *hdev); - void* (*dma_pool_zalloc)(struct hl_device *hdev, size_t size, + void* (*asic_dma_pool_zalloc)(struct hl_device *hdev, size_t size, gfp_t mem_flags, dma_addr_t *dma_handle); - void (*dma_pool_free)(struct hl_device *hdev, void *vaddr, + void (*asic_dma_pool_free)(struct hl_device *hdev, void *vaddr, dma_addr_t dma_addr); void* (*cpu_accessible_dma_pool_alloc)(struct hl_device *hdev, size_t size, dma_addr_t *dma_handle); void (*cpu_accessible_dma_pool_free)(struct hl_device *hdev, size_t size, void *vaddr); void (*hl_dma_unmap_sg)(struct hl_device *hdev, - struct scatterlist *sg, int nents, + struct scatterlist *sgl, int nents, enum dma_data_direction dir); int (*cs_parser)(struct hl_device *hdev, struct hl_cs_parser *parser); int (*asic_dma_map_sg)(struct hl_device *hdev, - struct scatterlist *sg, int nents, + struct scatterlist *sgl, int nents, enum dma_data_direction dir); u32 (*get_dma_desc_list_size)(struct hl_device *hdev, struct sg_table *sgt); @@ -543,9 +562,8 @@ struct hl_asic_funcs { void (*mmu_invalidate_cache_range)(struct hl_device *hdev, bool is_hard, u32 asid, u64 va, u64 size); int (*send_heartbeat)(struct hl_device *hdev); - void (*enable_clock_gating)(struct hl_device *hdev); - void (*disable_clock_gating)(struct hl_device *hdev); - bool (*is_device_idle)(struct hl_device *hdev); + int (*debug_coresight)(struct hl_device *hdev, void *data); + bool (*is_device_idle)(struct hl_device *hdev, char *buf, size_t size); int (*soft_reset_late_init)(struct hl_device *hdev); void (*hw_queues_lock)(struct hl_device *hdev); void (*hw_queues_unlock)(struct hl_device *hdev); @@ -555,6 +573,11 @@ struct hl_asic_funcs { int (*send_cpu_message)(struct hl_device *hdev, u32 *msg, u16 len, u32 timeout, long *result); enum hl_device_hw_state (*get_hw_state)(struct hl_device 
*hdev); + int (*pci_bars_map)(struct hl_device *hdev); + u64 (*set_dram_bar_base)(struct hl_device *hdev, u64 addr); + int (*init_iatu)(struct hl_device *hdev); + u32 (*rreg)(struct hl_device *hdev, u32 reg); + void (*wreg)(struct hl_device *hdev, u32 reg, u32 val); }; @@ -582,7 +605,8 @@ struct hl_va_range { * struct hl_ctx - user/kernel context. * @mem_hash: holds mapping from virtual address to virtual memory area * descriptor (hl_vm_phys_pg_list or hl_userptr). - * @mmu_hash: holds a mapping from virtual address to pgt_info structure. + * @mmu_phys_hash: holds a mapping from physical address to pgt_info structure. + * @mmu_shadow_hash: holds a mapping from shadow address to pgt_info structure. * @hpriv: pointer to the private (KMD) data of the process (fd). * @hdev: pointer to the device structure. * @refcount: reference counter for the context. Context is released only when @@ -601,17 +625,19 @@ struct hl_va_range { * DRAM mapping. * @cs_lock: spinlock to protect cs_sequence. * @dram_phys_mem: amount of used physical DRAM memory by this context. - * @thread_restore_token: token to prevent multiple threads of the same context - * from running the restore phase. Only one thread - * should run it. - * @thread_restore_wait_token: token to prevent the threads that didn't run - * the restore phase from moving to their execution - * phase before the restore phase has finished. + * @thread_ctx_switch_token: token to prevent multiple threads of the same + * context from running the context switch phase. + * Only a single thread should run it. + * @thread_ctx_switch_wait_token: token to prevent the threads that didn't run + * the context switch phase from moving to their + * execution phase before the context switch phase + * has finished. * @asid: context's unique address space ID in the device's MMU. */ struct hl_ctx { DECLARE_HASHTABLE(mem_hash, MEM_HASH_TABLE_BITS); - DECLARE_HASHTABLE(mmu_hash, MMU_HASH_TABLE_BITS); + DECLARE_HASHTABLE(mmu_phys_hash, MMU_HASH_TABLE_BITS); + DECLARE_HASHTABLE(mmu_shadow_hash, MMU_HASH_TABLE_BITS); struct hl_fpriv *hpriv; struct hl_device *hdev; struct kref refcount; @@ -625,8 +651,8 @@ struct hl_ctx { u64 *dram_default_hops; spinlock_t cs_lock; atomic64_t dram_phys_mem; - atomic_t thread_restore_token; - u32 thread_restore_wait_token; + atomic_t thread_ctx_switch_token; + u32 thread_ctx_switch_wait_token; u32 asid; }; @@ -753,8 +779,6 @@ struct hl_cs_job { * @patched_cb_size: the size of the CB after parsing. * @ext_queue: whether the job is for external queue or internal queue. * @job_id: the id of the related job inside the related CS. - * @use_virt_addr: whether to treat the addresses in the CB as virtual during - * parsing. */ struct hl_cs_parser { struct hl_cb *user_cb; @@ -767,7 +791,6 @@ struct hl_cs_parser { u32 patched_cb_size; u8 ext_queue; u8 job_id; - u8 use_virt_addr; }; @@ -850,6 +873,29 @@ struct hl_vm { u8 init_done; }; + +/* + * DEBUG, PROFILING STRUCTURE + */ + +/** + * struct hl_debug_params - Coresight debug parameters. + * @input: pointer to component specific input parameters. + * @output: pointer to component specific output parameters. + * @output_size: size of output buffer. + * @reg_idx: relevant register ID. + * @op: component operation to execute. + * @enable: true if to enable component debugging, false otherwise. 
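Routing rreg/wreg through the ASIC function table, as the new members above do, is what lets the (not upstreamed) simulator substitute its own register accessors while real hardware keeps hl_rreg/hl_wreg. The indirection, reduced to a compilable model:

#include <stdint.h>

struct dev;

struct dev_ops {
        uint32_t (*rreg)(struct dev *d, uint32_t reg);
        void (*wreg)(struct dev *d, uint32_t reg, uint32_t val);
};

struct dev {
        const struct dev_ops *ops;
};

#define RD32(d, r)      ((d)->ops->rreg((d), (r)))
#define WR32(d, r, v)   ((d)->ops->wreg((d), (r), (v)))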
+ */ +struct hl_debug_params { + void *input; + void *output; + u32 output_size; + u32 reg_idx; + u32 op; + bool enable; +}; + /* * FILE PRIVATE STRUCTURE */ @@ -973,13 +1019,10 @@ struct hl_dbg_device_entry { u32 hl_rreg(struct hl_device *hdev, u32 reg); void hl_wreg(struct hl_device *hdev, u32 reg, u32 val); -#define hl_poll_timeout(hdev, addr, val, cond, sleep_us, timeout_us) \ - readl_poll_timeout(hdev->rmmio + addr, val, cond, sleep_us, timeout_us) - -#define RREG32(reg) hl_rreg(hdev, (reg)) -#define WREG32(reg, v) hl_wreg(hdev, (reg), (v)) +#define RREG32(reg) hdev->asic_funcs->rreg(hdev, (reg)) +#define WREG32(reg, v) hdev->asic_funcs->wreg(hdev, (reg), (v)) #define DREG32(reg) pr_info("REGISTER: " #reg " : 0x%08X\n", \ - hl_rreg(hdev, (reg))) + hdev->asic_funcs->rreg(hdev, (reg))) #define WREG32_P(reg, val, mask) \ do { \ @@ -997,6 +1040,36 @@ void hl_wreg(struct hl_device *hdev, u32 reg, u32 val); WREG32(mm##reg, (RREG32(mm##reg) & ~REG_FIELD_MASK(reg, field)) | \ (val) << REG_FIELD_SHIFT(reg, field)) +#define hl_poll_timeout(hdev, addr, val, cond, sleep_us, timeout_us) \ +({ \ + ktime_t __timeout; \ + /* timeout should be longer when working with simulator */ \ + if (hdev->pdev) \ + __timeout = ktime_add_us(ktime_get(), timeout_us); \ + else \ + __timeout = ktime_add_us(ktime_get(), (timeout_us * 10)); \ + might_sleep_if(sleep_us); \ + for (;;) { \ + (val) = RREG32(addr); \ + if (cond) \ + break; \ + if (timeout_us && ktime_compare(ktime_get(), __timeout) > 0) { \ + (val) = RREG32(addr); \ + break; \ + } \ + if (sleep_us) \ + usleep_range((sleep_us >> 2) + 1, sleep_us); \ + } \ + (cond) ? 0 : -ETIMEDOUT; \ +}) + + +#define HL_ENG_BUSY(buf, size, fmt, ...) ({ \ + if (buf) \ + snprintf(buf, size, fmt, ##__VA_ARGS__); \ + false; \ + }) + struct hwmon_chip_info; /** @@ -1047,7 +1120,8 @@ struct hl_device_reset_work { * @asic_specific: ASIC specific information to use only from ASIC files. * @mmu_pgt_pool: pool of available MMU hops. * @vm: virtual memory manager for MMU. - * @mmu_cache_lock: protects MMU cache invalidation as it can serve one context + * @mmu_cache_lock: protects MMU cache invalidation as it can serve one context. + * @mmu_shadow_hop0: shadow mapping of the MMU hop 0 zone. * @hwmon_dev: H/W monitor device. * @pm_mng_profile: current power management profile. * @hl_chip_info: ASIC's sensors information. @@ -1082,6 +1156,7 @@ struct hl_device_reset_work { * @init_done: is the initialization of the device done. * @mmu_enable: is MMU enabled. 
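The new hl_poll_timeout() above is a GNU statement expression rather than a wrapper around readl_poll_timeout(), so it can dispatch reads through the ops table and stretch the deadline tenfold on the simulator. Two details worth copying: the value is re-read once after the deadline, and the final "(cond) ? 0 : -ETIMEDOUT" re-tests it, so success at the last instant is never misreported as a timeout. A self-contained model of that loop shape (iteration-bounded for the sketch; the real macro is time-bounded):

#include <errno.h>

#define poll_timeout(read_expr, val, cond, max_iters) ({        \
        long __left = (max_iters);                              \
        for (;;) {                                              \
                (val) = (read_expr);                            \
                if (cond)                                       \
                        break;                                  \
                if (__left-- <= 0) {                            \
                        (val) = (read_expr); /* one last look */\
                        break;                                  \
                }                                               \
        }                                                       \
        (cond) ? 0 : -ETIMEDOUT;                                \
})

Typical use, mirroring the removed MMU busy-bit wait: rc = poll_timeout(*busy_reg, v, !(v & 0x80000000u), 1000);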
* @device_cpu_disabled: is the device CPU disabled (due to timeouts) + * @dma_mask: the dma mask that was set for this device */ struct hl_device { struct pci_dev *pdev; @@ -1117,6 +1192,7 @@ struct hl_device { struct gen_pool *mmu_pgt_pool; struct hl_vm vm; struct mutex mmu_cache_lock; + void *mmu_shadow_hop0; struct device *hwmon_dev; enum hl_pm_mng_profile pm_mng_profile; struct hwmon_chip_info *hl_chip_info; @@ -1151,6 +1227,7 @@ struct hl_device { u8 dram_default_page_mapping; u8 init_done; u8 device_cpu_disabled; + u8 dma_mask; /* Parameters for bring-up */ u8 mmu_enable; @@ -1245,6 +1322,7 @@ static inline bool hl_mem_area_crosses_range(u64 address, u32 size, int hl_device_open(struct inode *inode, struct file *filp); bool hl_device_disabled_or_in_reset(struct hl_device *hdev); +enum hl_device_status hl_device_status(struct hl_device *hdev); int create_hdev(struct hl_device **dev, struct pci_dev *pdev, enum hl_asic_type asic_type, int minor); void destroy_hdev(struct hl_device *hdev); @@ -1351,6 +1429,32 @@ int hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr, u32 page_size); void hl_mmu_swap_out(struct hl_ctx *ctx); void hl_mmu_swap_in(struct hl_ctx *ctx); +int hl_fw_push_fw_to_device(struct hl_device *hdev, const char *fw_name, + void __iomem *dst); +int hl_fw_send_pci_access_msg(struct hl_device *hdev, u32 opcode); +int hl_fw_send_cpu_message(struct hl_device *hdev, u32 hw_queue_id, u32 *msg, + u16 len, u32 timeout, long *result); +int hl_fw_test_cpu_queue(struct hl_device *hdev); +void *hl_fw_cpu_accessible_dma_pool_alloc(struct hl_device *hdev, size_t size, + dma_addr_t *dma_handle); +void hl_fw_cpu_accessible_dma_pool_free(struct hl_device *hdev, size_t size, + void *vaddr); +int hl_fw_send_heartbeat(struct hl_device *hdev); +int hl_fw_armcp_info_get(struct hl_device *hdev); +int hl_fw_get_eeprom_data(struct hl_device *hdev, void *data, size_t max_size); + +int hl_pci_bars_map(struct hl_device *hdev, const char * const name[3], + bool is_wc[3]); +int hl_pci_iatu_write(struct hl_device *hdev, u32 addr, u32 data); +int hl_pci_set_dram_bar_base(struct hl_device *hdev, u8 inbound_region, u8 bar, + u64 addr); +int hl_pci_init_iatu(struct hl_device *hdev, u64 sram_base_address, + u64 dram_base_address, u64 host_phys_base_address, + u64 host_phys_size); +int hl_pci_init(struct hl_device *hdev, u8 dma_mask); +void hl_pci_fini(struct hl_device *hdev); +int hl_pci_set_dma_mask(struct hl_device *hdev, u8 dma_mask); + long hl_get_frequency(struct hl_device *hdev, u32 pll_index, bool curr); void hl_set_frequency(struct hl_device *hdev, u32 pll_index, u64 freq); long hl_get_temperature(struct hl_device *hdev, int sensor_index, u32 attr); diff --git a/drivers/misc/habanalabs/habanalabs_drv.c b/drivers/misc/habanalabs/habanalabs_drv.c index 748601463f11..5f4d155be767 100644 --- a/drivers/misc/habanalabs/habanalabs_drv.c +++ b/drivers/misc/habanalabs/habanalabs_drv.c @@ -6,6 +6,8 @@ * */ +#define pr_fmt(fmt) "habanalabs: " fmt + #include "habanalabs.h" #include <linux/pci.h> @@ -218,7 +220,7 @@ int create_hdev(struct hl_device **dev, struct pci_dev *pdev, hdev->disabled = true; hdev->pdev = pdev; /* can be NULL in case of simulator device */ - if (asic_type == ASIC_AUTO_DETECT) { + if (pdev) { hdev->asic_type = get_asic_type(pdev->device); if (hdev->asic_type == ASIC_INVALID) { dev_err(&pdev->dev, "Unsupported ASIC\n"); @@ -229,6 +231,9 @@ int create_hdev(struct hl_device **dev, struct pci_dev *pdev, hdev->asic_type = asic_type; } + /* Set default DMA mask to 32 bits */ + hdev->dma_mask = 32; + 
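Moving pr_fmt() out of habanalabs.h and to the top of habanalabs_drv.c is not cosmetic: printk.h installs a default pr_fmt if none is defined yet, so the override has to appear before the header block is pulled in, exactly as the hunk above places it:

#define pr_fmt(fmt) "habanalabs: " fmt

#include "habanalabs.h"
#include <linux/pci.h>  /* pr_info() etc. now pick up the prefix */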
mutex_lock(&hl_devs_idr_lock); if (minor == -1) { @@ -334,7 +339,7 @@ static int hl_pci_probe(struct pci_dev *pdev, " device found [%04x:%04x] (rev %x)\n", (int)pdev->vendor, (int)pdev->device, (int)pdev->revision); - rc = create_hdev(&hdev, pdev, ASIC_AUTO_DETECT, -1); + rc = create_hdev(&hdev, pdev, ASIC_INVALID, -1); if (rc) return rc; diff --git a/drivers/misc/habanalabs/habanalabs_ioctl.c b/drivers/misc/habanalabs/habanalabs_ioctl.c index 2c2739a3c5ec..eeefb22023e9 100644 --- a/drivers/misc/habanalabs/habanalabs_ioctl.c +++ b/drivers/misc/habanalabs/habanalabs_ioctl.c @@ -12,6 +12,32 @@ #include <linux/uaccess.h> #include <linux/slab.h> +static u32 hl_debug_struct_size[HL_DEBUG_OP_TIMESTAMP + 1] = { + [HL_DEBUG_OP_ETR] = sizeof(struct hl_debug_params_etr), + [HL_DEBUG_OP_ETF] = sizeof(struct hl_debug_params_etf), + [HL_DEBUG_OP_STM] = sizeof(struct hl_debug_params_stm), + [HL_DEBUG_OP_FUNNEL] = 0, + [HL_DEBUG_OP_BMON] = sizeof(struct hl_debug_params_bmon), + [HL_DEBUG_OP_SPMU] = sizeof(struct hl_debug_params_spmu), + [HL_DEBUG_OP_TIMESTAMP] = 0 + +}; + +static int device_status_info(struct hl_device *hdev, struct hl_info_args *args) +{ + struct hl_info_device_status dev_stat = {0}; + u32 size = args->return_size; + void __user *out = (void __user *) (uintptr_t) args->return_pointer; + + if ((!size) || (!out)) + return -EINVAL; + + dev_stat.status = hl_device_status(hdev); + + return copy_to_user(out, &dev_stat, + min((size_t)size, sizeof(dev_stat))) ? -EFAULT : 0; +} + static int hw_ip_info(struct hl_device *hdev, struct hl_info_args *args) { struct hl_info_hw_ip_info hw_ip = {0}; @@ -93,21 +119,91 @@ static int hw_idle(struct hl_device *hdev, struct hl_info_args *args) if ((!max_size) || (!out)) return -EINVAL; - hw_idle.is_idle = hdev->asic_funcs->is_device_idle(hdev); + hw_idle.is_idle = hdev->asic_funcs->is_device_idle(hdev, NULL, 0); return copy_to_user(out, &hw_idle, min((size_t) max_size, sizeof(hw_idle))) ? 
-EFAULT : 0; } +static int debug_coresight(struct hl_device *hdev, struct hl_debug_args *args) +{ + struct hl_debug_params *params; + void *input = NULL, *output = NULL; + int rc; + + params = kzalloc(sizeof(*params), GFP_KERNEL); + if (!params) + return -ENOMEM; + + params->reg_idx = args->reg_idx; + params->enable = args->enable; + params->op = args->op; + + if (args->input_ptr && args->input_size) { + input = memdup_user((const void __user *) args->input_ptr, + args->input_size); + if (IS_ERR(input)) { + rc = PTR_ERR(input); + input = NULL; + dev_err(hdev->dev, + "error %d when copying input debug data\n", rc); + goto out; + } + + params->input = input; + } + + if (args->output_ptr && args->output_size) { + output = kzalloc(args->output_size, GFP_KERNEL); + if (!output) { + rc = -ENOMEM; + goto out; + } + + params->output = output; + params->output_size = args->output_size; + } + + rc = hdev->asic_funcs->debug_coresight(hdev, params); + if (rc) { + dev_err(hdev->dev, + "debug coresight operation failed %d\n", rc); + goto out; + } + + if (output) { + if (copy_to_user((void __user *) (uintptr_t) args->output_ptr, + output, + args->output_size)) { + dev_err(hdev->dev, + "copy to user failed in debug ioctl\n"); + rc = -EFAULT; + goto out; + } + } + +out: + kfree(params); + kfree(output); + kfree(input); + + return rc; +} + static int hl_info_ioctl(struct hl_fpriv *hpriv, void *data) { struct hl_info_args *args = data; struct hl_device *hdev = hpriv->hdev; int rc; + /* We want to return device status even if it is disabled or in reset */ + if (args->op == HL_INFO_DEVICE_STATUS) + return device_status_info(hdev, args); + if (hl_device_disabled_or_in_reset(hdev)) { - dev_err(hdev->dev, - "Device is disabled or in reset. Can't execute INFO IOCTL\n"); + dev_warn_ratelimited(hdev->dev, + "Device is %s. Can't execute INFO IOCTL\n", + atomic_read(&hdev->in_reset) ? "in_reset" : "disabled"); return -EBUSY; } @@ -137,6 +233,40 @@ static int hl_info_ioctl(struct hl_fpriv *hpriv, void *data) return rc; } +static int hl_debug_ioctl(struct hl_fpriv *hpriv, void *data) +{ + struct hl_debug_args *args = data; + struct hl_device *hdev = hpriv->hdev; + int rc = 0; + + if (hl_device_disabled_or_in_reset(hdev)) { + dev_warn_ratelimited(hdev->dev, + "Device is %s. Can't execute DEBUG IOCTL\n", + atomic_read(&hdev->in_reset) ?
"in_reset" : "disabled"); + return -EBUSY; + } + + switch (args->op) { + case HL_DEBUG_OP_ETR: + case HL_DEBUG_OP_ETF: + case HL_DEBUG_OP_STM: + case HL_DEBUG_OP_FUNNEL: + case HL_DEBUG_OP_BMON: + case HL_DEBUG_OP_SPMU: + case HL_DEBUG_OP_TIMESTAMP: + args->input_size = + min(args->input_size, hl_debug_struct_size[args->op]); + rc = debug_coresight(hdev, args); + break; + default: + dev_err(hdev->dev, "Invalid request %d\n", args->op); + rc = -ENOTTY; + break; + } + + return rc; +} + #define HL_IOCTL_DEF(ioctl, _func) \ [_IOC_NR(ioctl)] = {.cmd = ioctl, .func = _func} @@ -145,7 +275,8 @@ static const struct hl_ioctl_desc hl_ioctls[] = { HL_IOCTL_DEF(HL_IOCTL_CB, hl_cb_ioctl), HL_IOCTL_DEF(HL_IOCTL_CS, hl_cs_ioctl), HL_IOCTL_DEF(HL_IOCTL_WAIT_CS, hl_cs_wait_ioctl), - HL_IOCTL_DEF(HL_IOCTL_MEMORY, hl_mem_ioctl) + HL_IOCTL_DEF(HL_IOCTL_MEMORY, hl_mem_ioctl), + HL_IOCTL_DEF(HL_IOCTL_DEBUG, hl_debug_ioctl) }; #define HL_CORE_IOCTL_COUNT ARRAY_SIZE(hl_ioctls) diff --git a/drivers/misc/habanalabs/hw_queue.c b/drivers/misc/habanalabs/hw_queue.c index ef3bb6951360..2894d8975933 100644 --- a/drivers/misc/habanalabs/hw_queue.c +++ b/drivers/misc/habanalabs/hw_queue.c @@ -82,7 +82,7 @@ static void ext_queue_submit_bd(struct hl_device *hdev, struct hl_hw_queue *q, bd += hl_pi_2_offset(q->pi); bd->ctl = __cpu_to_le32(ctl); bd->len = __cpu_to_le32(len); - bd->ptr = __cpu_to_le64(ptr + hdev->asic_prop.host_phys_base_address); + bd->ptr = __cpu_to_le64(ptr); q->pi = hl_queue_inc_ptr(q->pi); hdev->asic_funcs->ring_doorbell(hdev, q->hw_queue_id, q->pi); @@ -263,9 +263,7 @@ static void ext_hw_queue_schedule_job(struct hl_cs_job *job) * checked in hl_queue_sanity_checks */ cq = &hdev->completion_queue[q->hw_queue_id]; - cq_addr = cq->bus_address + - hdev->asic_prop.host_phys_base_address; - cq_addr += cq->pi * sizeof(struct hl_cq_entry); + cq_addr = cq->bus_address + cq->pi * sizeof(struct hl_cq_entry); hdev->asic_funcs->add_end_of_cb_packets(cb->kernel_address, len, cq_addr, @@ -415,14 +413,20 @@ void hl_hw_queue_inc_ci_kernel(struct hl_device *hdev, u32 hw_queue_id) } static int ext_and_cpu_hw_queue_init(struct hl_device *hdev, - struct hl_hw_queue *q) + struct hl_hw_queue *q, bool is_cpu_queue) { void *p; int rc; - p = hdev->asic_funcs->dma_alloc_coherent(hdev, - HL_QUEUE_SIZE_IN_BYTES, - &q->bus_address, GFP_KERNEL | __GFP_ZERO); + if (is_cpu_queue) + p = hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, + HL_QUEUE_SIZE_IN_BYTES, + &q->bus_address); + else + p = hdev->asic_funcs->asic_dma_alloc_coherent(hdev, + HL_QUEUE_SIZE_IN_BYTES, + &q->bus_address, + GFP_KERNEL | __GFP_ZERO); if (!p) return -ENOMEM; @@ -446,8 +450,15 @@ static int ext_and_cpu_hw_queue_init(struct hl_device *hdev, return 0; free_queue: - hdev->asic_funcs->dma_free_coherent(hdev, HL_QUEUE_SIZE_IN_BYTES, - (void *) (uintptr_t) q->kernel_address, q->bus_address); + if (is_cpu_queue) + hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, + HL_QUEUE_SIZE_IN_BYTES, + (void *) (uintptr_t) q->kernel_address); + else + hdev->asic_funcs->asic_dma_free_coherent(hdev, + HL_QUEUE_SIZE_IN_BYTES, + (void *) (uintptr_t) q->kernel_address, + q->bus_address); return rc; } @@ -474,12 +485,12 @@ static int int_hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q) static int cpu_hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q) { - return ext_and_cpu_hw_queue_init(hdev, q); + return ext_and_cpu_hw_queue_init(hdev, q, true); } static int ext_hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q) { - return 
ext_and_cpu_hw_queue_init(hdev, q); + return ext_and_cpu_hw_queue_init(hdev, q, false); } /* @@ -569,8 +580,15 @@ static void hw_queue_fini(struct hl_device *hdev, struct hl_hw_queue *q) kfree(q->shadow_queue); - hdev->asic_funcs->dma_free_coherent(hdev, HL_QUEUE_SIZE_IN_BYTES, - (void *) (uintptr_t) q->kernel_address, q->bus_address); + if (q->queue_type == QUEUE_TYPE_CPU) + hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, + HL_QUEUE_SIZE_IN_BYTES, + (void *) (uintptr_t) q->kernel_address); + else + hdev->asic_funcs->asic_dma_free_coherent(hdev, + HL_QUEUE_SIZE_IN_BYTES, + (void *) (uintptr_t) q->kernel_address, + q->bus_address); } int hl_hw_queues_create(struct hl_device *hdev) diff --git a/drivers/misc/habanalabs/include/armcp_if.h b/drivers/misc/habanalabs/include/armcp_if.h index 9dddb917e72c..1f1e35e86d84 100644 --- a/drivers/misc/habanalabs/include/armcp_if.h +++ b/drivers/misc/habanalabs/include/armcp_if.h @@ -32,8 +32,6 @@ struct hl_eq_entry { #define EQ_CTL_EVENT_TYPE_SHIFT 16 #define EQ_CTL_EVENT_TYPE_MASK 0x03FF0000 -#define EVENT_QUEUE_MSIX_IDX 5 - enum pq_init_status { PQ_INIT_STATUS_NA = 0, PQ_INIT_STATUS_READY_FOR_CP, diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/cpu_ca53_cfg_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/cpu_ca53_cfg_masks.h index 2cf5c46b6e8e..4e0dbbbbde20 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/cpu_ca53_cfg_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/cpu_ca53_cfg_masks.h @@ -188,4 +188,3 @@ #define CPU_CA53_CFG_ARM_PMU_EVENT_MASK 0x3FFFFFFF #endif /* ASIC_REG_CPU_CA53_CFG_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/cpu_ca53_cfg_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/cpu_ca53_cfg_regs.h index 840ccffa1081..f3faf1aad91a 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/cpu_ca53_cfg_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/cpu_ca53_cfg_regs.h @@ -58,4 +58,3 @@ #define mmCPU_CA53_CFG_ARM_PMU_1 0x441214 #endif /* ASIC_REG_CPU_CA53_CFG_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/cpu_if_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/cpu_if_regs.h index f23cb3e41c30..cf657918962a 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/cpu_if_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/cpu_if_regs.h @@ -46,4 +46,3 @@ #define mmCPU_IF_AXI_SPLIT_INTR 0x442130 #endif /* ASIC_REG_CPU_IF_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/cpu_pll_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/cpu_pll_regs.h index 8fc97f838ada..8c8f9726d4b9 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/cpu_pll_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/cpu_pll_regs.h @@ -102,4 +102,3 @@ #define mmCPU_PLL_FREQ_CALC_EN 0x4A2440 #endif /* ASIC_REG_CPU_PLL_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_0_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_0_regs.h index 61c8cd9ce58b..0b246fe6ad04 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_0_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_0_regs.h @@ -206,4 +206,3 @@ #define mmDMA_CH_0_MEM_INIT_BUSY 0x4011FC #endif /* ASIC_REG_DMA_CH_0_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_1_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_1_regs.h index 92960ef5e308..5449031722f2 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_1_regs.h +++ 
b/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_1_regs.h @@ -206,4 +206,3 @@ #define mmDMA_CH_1_MEM_INIT_BUSY 0x4091FC #endif /* ASIC_REG_DMA_CH_1_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_2_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_2_regs.h index 4e37871a51bb..a4768521d18a 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_2_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_2_regs.h @@ -206,4 +206,3 @@ #define mmDMA_CH_2_MEM_INIT_BUSY 0x4111FC #endif /* ASIC_REG_DMA_CH_2_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_3_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_3_regs.h index a2d6aeb32a18..619d01897ff8 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_3_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_3_regs.h @@ -206,4 +206,3 @@ #define mmDMA_CH_3_MEM_INIT_BUSY 0x4191FC #endif /* ASIC_REG_DMA_CH_3_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_4_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_4_regs.h index 400d6fd3acf5..038617e163f1 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_4_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_4_regs.h @@ -206,4 +206,3 @@ #define mmDMA_CH_4_MEM_INIT_BUSY 0x4211FC #endif /* ASIC_REG_DMA_CH_4_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_macro_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_macro_masks.h index 8d965443c51e..f43b564af1be 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_macro_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_macro_masks.h @@ -102,4 +102,3 @@ #define DMA_MACRO_RAZWI_HBW_RD_ID_R_MASK 0x1FFFFFFF #endif /* ASIC_REG_DMA_MACRO_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_macro_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_macro_regs.h index 8bfcb001189d..c3bfc1b8e3fd 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_macro_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_macro_regs.h @@ -178,4 +178,3 @@ #define mmDMA_MACRO_RAZWI_HBW_RD_ID 0x4B0158 #endif /* ASIC_REG_DMA_MACRO_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_nrtr_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_nrtr_masks.h index 9f33f351a3c1..bc977488c072 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_nrtr_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_nrtr_masks.h @@ -206,4 +206,3 @@ #define DMA_NRTR_NON_LIN_SCRAMB_EN_MASK 0x1 #endif /* ASIC_REG_DMA_NRTR_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_nrtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_nrtr_regs.h index d8293745a02b..c4abc7ff1fc6 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_nrtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_nrtr_regs.h @@ -224,4 +224,3 @@ #define mmDMA_NRTR_NON_LIN_SCRAMB 0x1C0604 #endif /* ASIC_REG_DMA_NRTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_0_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_0_masks.h index 10619dbb9b17..b17f72c31ab6 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_0_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_0_masks.h @@ -462,4 +462,3 @@ #define DMA_QM_0_CQ_BUF_RDATA_VAL_MASK 0xFFFFFFFF #endif /* ASIC_REG_DMA_QM_0_MASKS_H_ 
*/ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_0_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_0_regs.h index c693bc5dcb22..bf360b301154 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_0_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_0_regs.h @@ -176,4 +176,3 @@ #define mmDMA_QM_0_CQ_BUF_RDATA 0x40030C #endif /* ASIC_REG_DMA_QM_0_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_1_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_1_regs.h index da928390f89c..51d432d05ac4 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_1_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_1_regs.h @@ -176,4 +176,3 @@ #define mmDMA_QM_1_CQ_BUF_RDATA 0x40830C #endif /* ASIC_REG_DMA_QM_1_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_2_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_2_regs.h index b4f06e9b71d6..18fc0c2b6cc2 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_2_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_2_regs.h @@ -176,4 +176,3 @@ #define mmDMA_QM_2_CQ_BUF_RDATA 0x41030C #endif /* ASIC_REG_DMA_QM_2_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_3_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_3_regs.h index 53e3cd78a06b..6cf7204bf5cc 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_3_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_3_regs.h @@ -176,4 +176,3 @@ #define mmDMA_QM_3_CQ_BUF_RDATA 0x41830C #endif /* ASIC_REG_DMA_QM_3_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_4_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_4_regs.h index e0eb5f260201..36fef2682875 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_4_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/dma_qm_4_regs.h @@ -176,4 +176,3 @@ #define mmDMA_QM_4_CQ_BUF_RDATA 0x42030C #endif /* ASIC_REG_DMA_QM_4_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/goya_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/goya_masks.h index a161ecfe74de..8618891d5afa 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/goya_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/goya_masks.h @@ -189,18 +189,6 @@ 1 << CPU_CA53_CFG_ARM_RST_CONTROL_NL2RESET_SHIFT |\ 1 << CPU_CA53_CFG_ARM_RST_CONTROL_NMBISTRESET_SHIFT) -/* PCI CONFIGURATION SPACE */ -#define mmPCI_CONFIG_ELBI_ADDR 0xFF0 -#define mmPCI_CONFIG_ELBI_DATA 0xFF4 -#define mmPCI_CONFIG_ELBI_CTRL 0xFF8 -#define PCI_CONFIG_ELBI_CTRL_WRITE (1 << 31) - -#define mmPCI_CONFIG_ELBI_STS 0xFFC -#define PCI_CONFIG_ELBI_STS_ERR (1 << 30) -#define PCI_CONFIG_ELBI_STS_DONE (1 << 31) -#define PCI_CONFIG_ELBI_STS_MASK (PCI_CONFIG_ELBI_STS_ERR | \ - PCI_CONFIG_ELBI_STS_DONE) - #define GOYA_IRQ_HBW_ID_MASK 0x1FFF #define GOYA_IRQ_HBW_ID_SHIFT 0 #define GOYA_IRQ_HBW_INTERNAL_ID_MASK 0xE000 diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/goya_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/goya_regs.h index 6cb0b6e54d41..506e71e201e1 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/goya_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/goya_regs.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0 * - * Copyright 2016-2018 HabanaLabs, Ltd. + * Copyright 2016-2019 HabanaLabs, Ltd. * All Rights Reserved. 
* */ @@ -12,6 +12,7 @@ #include "stlb_regs.h" #include "mmu_regs.h" #include "pcie_aux_regs.h" +#include "pcie_wrap_regs.h" #include "psoc_global_conf_regs.h" #include "psoc_spi_regs.h" #include "psoc_mme_pll_regs.h" diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/ic_pll_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/ic_pll_regs.h index 0a743817aad7..4ae7fed8b18c 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/ic_pll_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/ic_pll_regs.h @@ -102,4 +102,3 @@ #define mmIC_PLL_FREQ_CALC_EN 0x4A3440 #endif /* ASIC_REG_IC_PLL_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mc_pll_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mc_pll_regs.h index 4408188aa067..6d35d852798b 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mc_pll_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mc_pll_regs.h @@ -102,4 +102,3 @@ #define mmMC_PLL_FREQ_CALC_EN 0x4A1440 #endif /* ASIC_REG_MC_PLL_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme1_rtr_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme1_rtr_masks.h index 687bca5c5fe3..6c23f8b96e7e 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme1_rtr_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme1_rtr_masks.h @@ -650,4 +650,3 @@ #define MME1_RTR_NON_LIN_SCRAMB_EN_MASK 0x1 #endif /* ASIC_REG_MME1_RTR_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme1_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme1_rtr_regs.h index c248339a1cbe..122e9d529939 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme1_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme1_rtr_regs.h @@ -328,4 +328,3 @@ #define mmMME1_RTR_NON_LIN_SCRAMB 0x40604 #endif /* ASIC_REG_MME1_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme2_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme2_rtr_regs.h index 7a2b777bdc4f..00ce2252bbfb 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme2_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme2_rtr_regs.h @@ -328,4 +328,3 @@ #define mmMME2_RTR_NON_LIN_SCRAMB 0x80604 #endif /* ASIC_REG_MME2_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme3_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme3_rtr_regs.h index b78f8bc387fc..8e3eb7fd2070 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme3_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme3_rtr_regs.h @@ -328,4 +328,3 @@ #define mmMME3_RTR_NON_LIN_SCRAMB 0xC0604 #endif /* ASIC_REG_MME3_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme4_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme4_rtr_regs.h index d9a4a02cefa3..79b67bbc8567 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme4_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme4_rtr_regs.h @@ -328,4 +328,3 @@ #define mmMME4_RTR_NON_LIN_SCRAMB 0x100604 #endif /* ASIC_REG_MME4_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme5_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme5_rtr_regs.h index 205adc988407..0ac3c37ce47f 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme5_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme5_rtr_regs.h @@ -328,4 +328,3 @@ #define mmMME5_RTR_NON_LIN_SCRAMB 0x140604 #endif /* ASIC_REG_MME5_RTR_REGS_H_ */ - diff --git 
a/drivers/misc/habanalabs/include/goya/asic_reg/mme6_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme6_rtr_regs.h index fcec68388278..50c49cce72a6 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme6_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme6_rtr_regs.h @@ -328,4 +328,3 @@ #define mmMME6_RTR_NON_LIN_SCRAMB 0x180604 #endif /* ASIC_REG_MME6_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme_cmdq_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme_cmdq_masks.h index a0d4382fbbd0..fe7d95bdcef9 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme_cmdq_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme_cmdq_masks.h @@ -370,4 +370,3 @@ #define MME_CMDQ_CQ_BUF_RDATA_VAL_MASK 0xFFFFFFFF #endif /* ASIC_REG_MME_CMDQ_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme_cmdq_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme_cmdq_regs.h index 5c2f6b870a58..5f8b85d2b4b1 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme_cmdq_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme_cmdq_regs.h @@ -136,4 +136,3 @@ #define mmMME_CMDQ_CQ_BUF_RDATA 0xD930C #endif /* ASIC_REG_MME_CMDQ_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme_masks.h index c7b1b0bb3384..1882c413cbe0 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme_masks.h @@ -1534,4 +1534,3 @@ #define MME_SHADOW_3_E_BUBBLES_PER_SPLIT_ID_MASK 0xFF000000 #endif /* ASIC_REG_MME_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme_qm_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme_qm_masks.h index d4bfa58dce19..e464e381555c 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme_qm_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme_qm_masks.h @@ -462,4 +462,3 @@ #define MME_QM_CQ_BUF_RDATA_VAL_MASK 0xFFFFFFFF #endif /* ASIC_REG_MME_QM_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme_qm_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme_qm_regs.h index b5b1c776f6c3..538708beffc9 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme_qm_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme_qm_regs.h @@ -176,4 +176,3 @@ #define mmMME_QM_CQ_BUF_RDATA 0xD830C #endif /* ASIC_REG_MME_QM_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mme_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mme_regs.h index 9436b1e2705a..0396cbfd5c89 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mme_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mme_regs.h @@ -1150,4 +1150,3 @@ #define mmMME_SHADOW_3_E_BUBBLES_PER_SPLIT 0xD0BAC #endif /* ASIC_REG_MME_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mmu_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/mmu_masks.h index 3a78078d3c4c..c3e69062b135 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mmu_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/mmu_masks.h @@ -140,4 +140,3 @@ #define MMU_ACCESS_ERROR_CAPTURE_VA_VA_31_0_MASK 0xFFFFFFFF #endif /* ASIC_REG_MMU_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/mmu_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/mmu_regs.h index bec6c014135c..7ec81f12031e 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/mmu_regs.h +++ 
b/drivers/misc/habanalabs/include/goya/asic_reg/mmu_regs.h @@ -50,4 +50,3 @@ #define mmMMU_ACCESS_ERROR_CAPTURE_VA 0x480040 #endif /* ASIC_REG_MMU_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/pci_nrtr_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/pci_nrtr_masks.h index 209e41402a11..ceb59f2e28b3 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/pci_nrtr_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/pci_nrtr_masks.h @@ -206,4 +206,3 @@ #define PCI_NRTR_NON_LIN_SCRAMB_EN_MASK 0x1 #endif /* ASIC_REG_PCI_NRTR_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/pci_nrtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/pci_nrtr_regs.h index 447e5d4e7dc8..dd067f301ac2 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/pci_nrtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/pci_nrtr_regs.h @@ -224,4 +224,3 @@ #define mmPCI_NRTR_NON_LIN_SCRAMB 0x604 #endif /* ASIC_REG_PCI_NRTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/pcie_aux_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/pcie_aux_regs.h index daaf5d9079dc..35b1d8ac6f63 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/pcie_aux_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/pcie_aux_regs.h @@ -240,4 +240,3 @@ #define mmPCIE_AUX_PERST 0xC079B8 #endif /* ASIC_REG_PCIE_AUX_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/pcie_wrap_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/pcie_wrap_regs.h new file mode 100644 index 000000000000..d1e55aace4a0 --- /dev/null +++ b/drivers/misc/habanalabs/include/goya/asic_reg/pcie_wrap_regs.h @@ -0,0 +1,306 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * Copyright 2016-2018 HabanaLabs, Ltd. + * All Rights Reserved. 
+ * + */ + +/************************************ + ** This is an auto-generated file ** + ** DO NOT EDIT BELOW ** + ************************************/ + +#ifndef ASIC_REG_PCIE_WRAP_REGS_H_ +#define ASIC_REG_PCIE_WRAP_REGS_H_ + +/* + ***************************************** + * PCIE_WRAP (Prototype: PCIE_WRAP) + ***************************************** + */ + +#define mmPCIE_WRAP_PHY_RST_N 0xC01300 + +#define mmPCIE_WRAP_OUTSTAND_TRANS 0xC01400 + +#define mmPCIE_WRAP_MASK_REQ 0xC01404 + +#define mmPCIE_WRAP_IND_AWADDR_L 0xC01500 + +#define mmPCIE_WRAP_IND_AWADDR_H 0xC01504 + +#define mmPCIE_WRAP_IND_AWLEN 0xC01508 + +#define mmPCIE_WRAP_IND_AWSIZE 0xC0150C + +#define mmPCIE_WRAP_IND_AWBURST 0xC01510 + +#define mmPCIE_WRAP_IND_AWLOCK 0xC01514 + +#define mmPCIE_WRAP_IND_AWCACHE 0xC01518 + +#define mmPCIE_WRAP_IND_AWPROT 0xC0151C + +#define mmPCIE_WRAP_IND_AWVALID 0xC01520 + +#define mmPCIE_WRAP_IND_WDATA_0 0xC01524 + +#define mmPCIE_WRAP_IND_WDATA_1 0xC01528 + +#define mmPCIE_WRAP_IND_WDATA_2 0xC0152C + +#define mmPCIE_WRAP_IND_WDATA_3 0xC01530 + +#define mmPCIE_WRAP_IND_WSTRB 0xC01544 + +#define mmPCIE_WRAP_IND_WLAST 0xC01548 + +#define mmPCIE_WRAP_IND_WVALID 0xC0154C + +#define mmPCIE_WRAP_IND_BRESP 0xC01550 + +#define mmPCIE_WRAP_IND_BVALID 0xC01554 + +#define mmPCIE_WRAP_IND_ARADDR_0 0xC01558 + +#define mmPCIE_WRAP_IND_ARADDR_1 0xC0155C + +#define mmPCIE_WRAP_IND_ARLEN 0xC01560 + +#define mmPCIE_WRAP_IND_ARSIZE 0xC01564 + +#define mmPCIE_WRAP_IND_ARBURST 0xC01568 + +#define mmPCIE_WRAP_IND_ARLOCK 0xC0156C + +#define mmPCIE_WRAP_IND_ARCACHE 0xC01570 + +#define mmPCIE_WRAP_IND_ARPROT 0xC01574 + +#define mmPCIE_WRAP_IND_ARVALID 0xC01578 + +#define mmPCIE_WRAP_IND_RDATA_0 0xC0157C + +#define mmPCIE_WRAP_IND_RDATA_1 0xC01580 + +#define mmPCIE_WRAP_IND_RDATA_2 0xC01584 + +#define mmPCIE_WRAP_IND_RDATA_3 0xC01588 + +#define mmPCIE_WRAP_IND_RLAST 0xC0159C + +#define mmPCIE_WRAP_IND_RRESP 0xC015A0 + +#define mmPCIE_WRAP_IND_RVALID 0xC015A4 + +#define mmPCIE_WRAP_IND_AWMISC_INFO 0xC015A8 + +#define mmPCIE_WRAP_IND_AWMISC_INFO_HDR_34DW_0 0xC015AC + +#define mmPCIE_WRAP_IND_AWMISC_INFO_HDR_34DW_1 0xC015B0 + +#define mmPCIE_WRAP_IND_AWMISC_INFO_P_TAG 0xC015B4 + +#define mmPCIE_WRAP_IND_AWMISC_INFO_ATU_BYPAS 0xC015B8 + +#define mmPCIE_WRAP_IND_AWMISC_INFO_FUNC_NUM 0xC015BC + +#define mmPCIE_WRAP_IND_AWMISC_INFO_VFUNC_ACT 0xC015C0 + +#define mmPCIE_WRAP_IND_AWMISC_INFO_VFUNC_NUM 0xC015C4 + +#define mmPCIE_WRAP_IND_AWMISC_INFO_TLPPRFX 0xC015C8 + +#define mmPCIE_WRAP_IND_ARMISC_INFO 0xC015CC + +#define mmPCIE_WRAP_IND_ARMISC_INFO_TLPPRFX 0xC015D0 + +#define mmPCIE_WRAP_IND_ARMISC_INFO_ATU_BYP 0xC015D4 + +#define mmPCIE_WRAP_IND_ARMISC_INFO_FUNC_NUM 0xC015D8 + +#define mmPCIE_WRAP_IND_ARMISC_INFO_VFUNC_ACT 0xC015DC + +#define mmPCIE_WRAP_IND_ARMISC_INFO_VFUNC_NUM 0xC015E0 + +#define mmPCIE_WRAP_SLV_AWMISC_INFO 0xC01800 + +#define mmPCIE_WRAP_SLV_AWMISC_INFO_HDR_34DW_0 0xC01804 + +#define mmPCIE_WRAP_SLV_AWMISC_INFO_HDR_34DW_1 0xC01808 + +#define mmPCIE_WRAP_SLV_AWMISC_INFO_P_TAG 0xC0180C + +#define mmPCIE_WRAP_SLV_AWMISC_INFO_ATU_BYPAS 0xC01810 + +#define mmPCIE_WRAP_SLV_AWMISC_INFO_FUNC_NUM 0xC01814 + +#define mmPCIE_WRAP_SLV_AWMISC_INFO_VFUNC_ACT 0xC01818 + +#define mmPCIE_WRAP_SLV_AWMISC_INFO_VFUNC_NUM 0xC0181C + +#define mmPCIE_WRAP_SLV_AWMISC_INFO_TLPPRFX 0xC01820 + +#define mmPCIE_WRAP_SLV_ARMISC_INFO 0xC01824 + +#define mmPCIE_WRAP_SLV_ARMISC_INFO_TLPPRFX 0xC01828 + +#define mmPCIE_WRAP_SLV_ARMISC_INFO_ATU_BYP 0xC0182C + +#define mmPCIE_WRAP_SLV_ARMISC_INFO_FUNC_NUM 0xC01830 + +#define 
mmPCIE_WRAP_SLV_ARMISC_INFO_VFUNC_ACT 0xC01834 + +#define mmPCIE_WRAP_SLV_ARMISC_INFO_VFUNC_NUM 0xC01838 + +#define mmPCIE_WRAP_MAX_QID 0xC01900 + +#define mmPCIE_WRAP_DB_BASE_ADDR_L_0 0xC01910 + +#define mmPCIE_WRAP_DB_BASE_ADDR_L_1 0xC01914 + +#define mmPCIE_WRAP_DB_BASE_ADDR_L_2 0xC01918 + +#define mmPCIE_WRAP_DB_BASE_ADDR_L_3 0xC0191C + +#define mmPCIE_WRAP_DB_BASE_ADDR_H_0 0xC01920 + +#define mmPCIE_WRAP_DB_BASE_ADDR_H_1 0xC01924 + +#define mmPCIE_WRAP_DB_BASE_ADDR_H_2 0xC01928 + +#define mmPCIE_WRAP_DB_BASE_ADDR_H_3 0xC0192C + +#define mmPCIE_WRAP_DB_MASK 0xC01940 + +#define mmPCIE_WRAP_SQ_BASE_ADDR_H 0xC01A00 + +#define mmPCIE_WRAP_SQ_BASE_ADDR_L 0xC01A04 + +#define mmPCIE_WRAP_SQ_STRIDE_ACCRESS 0xC01A08 + +#define mmPCIE_WRAP_SQ_POP_CMD 0xC01A10 + +#define mmPCIE_WRAP_SQ_POP_DATA 0xC01A14 + +#define mmPCIE_WRAP_DB_INTR_0 0xC01A20 + +#define mmPCIE_WRAP_DB_INTR_1 0xC01A24 + +#define mmPCIE_WRAP_DB_INTR_2 0xC01A28 + +#define mmPCIE_WRAP_DB_INTR_3 0xC01A2C + +#define mmPCIE_WRAP_DB_INTR_4 0xC01A30 + +#define mmPCIE_WRAP_DB_INTR_5 0xC01A34 + +#define mmPCIE_WRAP_DB_INTR_6 0xC01A38 + +#define mmPCIE_WRAP_DB_INTR_7 0xC01A3C + +#define mmPCIE_WRAP_MMU_BYPASS_DMA 0xC01A80 + +#define mmPCIE_WRAP_MMU_BYPASS_NON_DMA 0xC01A84 + +#define mmPCIE_WRAP_ASID_NON_DMA 0xC01A90 + +#define mmPCIE_WRAP_ASID_DMA_0 0xC01AA0 + +#define mmPCIE_WRAP_ASID_DMA_1 0xC01AA4 + +#define mmPCIE_WRAP_ASID_DMA_2 0xC01AA8 + +#define mmPCIE_WRAP_ASID_DMA_3 0xC01AAC + +#define mmPCIE_WRAP_ASID_DMA_4 0xC01AB0 + +#define mmPCIE_WRAP_ASID_DMA_5 0xC01AB4 + +#define mmPCIE_WRAP_ASID_DMA_6 0xC01AB8 + +#define mmPCIE_WRAP_ASID_DMA_7 0xC01ABC + +#define mmPCIE_WRAP_CPU_HOT_RST 0xC01AE0 + +#define mmPCIE_WRAP_AXI_PROT_OVR 0xC01AE4 + +#define mmPCIE_WRAP_CACHE_OVR 0xC01B00 + +#define mmPCIE_WRAP_LOCK_OVR 0xC01B04 + +#define mmPCIE_WRAP_PROT_OVR 0xC01B08 + +#define mmPCIE_WRAP_ARUSER_OVR 0xC01B0C + +#define mmPCIE_WRAP_AWUSER_OVR 0xC01B10 + +#define mmPCIE_WRAP_ARUSER_OVR_EN 0xC01B14 + +#define mmPCIE_WRAP_AWUSER_OVR_EN 0xC01B18 + +#define mmPCIE_WRAP_MAX_OUTSTAND 0xC01B20 + +#define mmPCIE_WRAP_MST_IN 0xC01B24 + +#define mmPCIE_WRAP_RSP_OK 0xC01B28 + +#define mmPCIE_WRAP_LBW_CACHE_OVR 0xC01B40 + +#define mmPCIE_WRAP_LBW_LOCK_OVR 0xC01B44 + +#define mmPCIE_WRAP_LBW_PROT_OVR 0xC01B48 + +#define mmPCIE_WRAP_LBW_ARUSER_OVR 0xC01B4C + +#define mmPCIE_WRAP_LBW_AWUSER_OVR 0xC01B50 + +#define mmPCIE_WRAP_LBW_ARUSER_OVR_EN 0xC01B58 + +#define mmPCIE_WRAP_LBW_AWUSER_OVR_EN 0xC01B5C + +#define mmPCIE_WRAP_LBW_MAX_OUTSTAND 0xC01B60 + +#define mmPCIE_WRAP_LBW_MST_IN 0xC01B64 + +#define mmPCIE_WRAP_LBW_RSP_OK 0xC01B68 + +#define mmPCIE_WRAP_QUEUE_INIT 0xC01C00 + +#define mmPCIE_WRAP_AXI_SPLIT_INTR_0 0xC01C10 + +#define mmPCIE_WRAP_AXI_SPLIT_INTR_1 0xC01C14 + +#define mmPCIE_WRAP_DB_AWUSER 0xC01D00 + +#define mmPCIE_WRAP_DB_ARUSER 0xC01D04 + +#define mmPCIE_WRAP_PCIE_AWUSER 0xC01D08 + +#define mmPCIE_WRAP_PCIE_ARUSER 0xC01D0C + +#define mmPCIE_WRAP_PSOC_AWUSER 0xC01D10 + +#define mmPCIE_WRAP_PSOC_ARUSER 0xC01D14 + +#define mmPCIE_WRAP_SCH_Q_AWUSER 0xC01D18 + +#define mmPCIE_WRAP_SCH_Q_ARUSER 0xC01D1C + +#define mmPCIE_WRAP_PSOC2PCI_AWUSER 0xC01D40 + +#define mmPCIE_WRAP_PSOC2PCI_ARUSER 0xC01D44 + +#define mmPCIE_WRAP_DRAIN_TIMEOUT 0xC01D50 + +#define mmPCIE_WRAP_DRAIN_CFG 0xC01D54 + +#define mmPCIE_WRAP_DB_AXI_ERR 0xC01DE0 + +#define mmPCIE_WRAP_SPMU_INTR 0xC01DE4 + +#define mmPCIE_WRAP_AXI_INTR 0xC01DE8 + +#define mmPCIE_WRAP_E2E_CTRL 0xC01DF0 + +#endif /* ASIC_REG_PCIE_WRAP_REGS_H_ */ diff --git 
a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_emmc_pll_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_emmc_pll_regs.h index 8eda4de58788..9271ea95ebe9 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_emmc_pll_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_emmc_pll_regs.h @@ -102,4 +102,3 @@ #define mmPSOC_EMMC_PLL_FREQ_CALC_EN 0xC70440 #endif /* ASIC_REG_PSOC_EMMC_PLL_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_global_conf_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_global_conf_masks.h index d4bf0e1db4df..324266653c9a 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_global_conf_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_global_conf_masks.h @@ -444,4 +444,3 @@ #define PSOC_GLOBAL_CONF_PAD_SEL_VAL_MASK 0x3 #endif /* ASIC_REG_PSOC_GLOBAL_CONF_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_global_conf_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_global_conf_regs.h index cfbdd2c9c5c7..8141f422e712 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_global_conf_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_global_conf_regs.h @@ -742,4 +742,3 @@ #define mmPSOC_GLOBAL_CONF_PAD_SEL_81 0xC4BA44 #endif /* ASIC_REG_PSOC_GLOBAL_CONF_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_mme_pll_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_mme_pll_regs.h index 6723d8f76f30..4789ebb9c337 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_mme_pll_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_mme_pll_regs.h @@ -102,4 +102,3 @@ #define mmPSOC_MME_PLL_FREQ_CALC_EN 0xC71440 #endif /* ASIC_REG_PSOC_MME_PLL_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_pci_pll_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_pci_pll_regs.h index abcded0531c9..27a296ea6c3d 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_pci_pll_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_pci_pll_regs.h @@ -102,4 +102,3 @@ #define mmPSOC_PCI_PLL_FREQ_CALC_EN 0xC72440 #endif /* ASIC_REG_PSOC_PCI_PLL_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_spi_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_spi_regs.h index 5925c7477c25..66aee7fa6b1e 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/psoc_spi_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/psoc_spi_regs.h @@ -140,4 +140,3 @@ #define mmPSOC_SPI_RSVD_2 0xC430FC #endif /* ASIC_REG_PSOC_SPI_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x0_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x0_rtr_regs.h index d56c9fa0e7ba..2ea1770b078f 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x0_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x0_rtr_regs.h @@ -80,4 +80,3 @@ #define mmSRAM_Y0_X0_RTR_DBG_L_ARB_MAX 0x201330 #endif /* ASIC_REG_SRAM_Y0_X0_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x1_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x1_rtr_regs.h index 5624544303ca..37e0713efa73 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x1_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x1_rtr_regs.h @@ -80,4 +80,3 @@ #define mmSRAM_Y0_X1_RTR_DBG_L_ARB_MAX 0x205330 #endif /* 
ASIC_REG_SRAM_Y0_X1_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x2_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x2_rtr_regs.h index 3322bc0bd1df..d2572279a2b9 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x2_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x2_rtr_regs.h @@ -80,4 +80,3 @@ #define mmSRAM_Y0_X2_RTR_DBG_L_ARB_MAX 0x209330 #endif /* ASIC_REG_SRAM_Y0_X2_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x3_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x3_rtr_regs.h index 81e393db2027..68c5b402c506 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x3_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x3_rtr_regs.h @@ -80,4 +80,3 @@ #define mmSRAM_Y0_X3_RTR_DBG_L_ARB_MAX 0x20D330 #endif /* ASIC_REG_SRAM_Y0_X3_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x4_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x4_rtr_regs.h index b2e11b1de385..a42f1ba06d28 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x4_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/sram_y0_x4_rtr_regs.h @@ -80,4 +80,3 @@ #define mmSRAM_Y0_X4_RTR_DBG_L_ARB_MAX 0x211330 #endif /* ASIC_REG_SRAM_Y0_X4_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/stlb_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/stlb_masks.h index b4ea8cae2757..94f2ed4a36bd 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/stlb_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/stlb_masks.h @@ -114,4 +114,3 @@ #define STLB_SRAM_INIT_BUSY_DATA_MASK 0x10 #endif /* ASIC_REG_STLB_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/stlb_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/stlb_regs.h index 0f5281d3e65b..35013f65acd2 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/stlb_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/stlb_regs.h @@ -52,4 +52,3 @@ #define mmSTLB_SRAM_INIT 0x49004C #endif /* ASIC_REG_STLB_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cfg_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cfg_masks.h index e5587b49eecd..89c9507a512f 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cfg_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cfg_masks.h @@ -1604,4 +1604,3 @@ #define TPC0_CFG_FUNC_MBIST_MEM_LAST_FAILED_PATTERN_MASK 0x70000000 #endif /* ASIC_REG_TPC0_CFG_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cfg_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cfg_regs.h index 2be28a63c50a..7d71c4b73a5e 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cfg_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cfg_regs.h @@ -884,4 +884,3 @@ #define mmTPC0_CFG_FUNC_MBIST_MEM_9 0xE06E2C #endif /* ASIC_REG_TPC0_CFG_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cmdq_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cmdq_masks.h index 9aa2d8b53207..9395f2458771 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cmdq_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cmdq_masks.h @@ -370,4 +370,3 @@ #define TPC0_CMDQ_CQ_BUF_RDATA_VAL_MASK 0xFFFFFFFF #endif /* ASIC_REG_TPC0_CMDQ_MASKS_H_ */ - diff --git 
a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cmdq_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cmdq_regs.h index 3572752ba66e..bc51df573bf0 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cmdq_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_cmdq_regs.h @@ -136,4 +136,3 @@ #define mmTPC0_CMDQ_CQ_BUF_RDATA 0xE0930C #endif /* ASIC_REG_TPC0_CMDQ_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_eml_cfg_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_eml_cfg_masks.h index ed866d93c440..553c6b6bd5ec 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_eml_cfg_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_eml_cfg_masks.h @@ -344,4 +344,3 @@ #define TPC0_EML_CFG_DBG_INST_INSERT_CTL_INSERT_MASK 0x1 #endif /* ASIC_REG_TPC0_EML_CFG_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_eml_cfg_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_eml_cfg_regs.h index f1a1b4fa4841..8495479c3659 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_eml_cfg_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_eml_cfg_regs.h @@ -310,4 +310,3 @@ #define mmTPC0_EML_CFG_DBG_INST_INSERT_CTL 0x3040334 #endif /* ASIC_REG_TPC0_EML_CFG_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_nrtr_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_nrtr_masks.h index 7f86621179a5..43fafcf01041 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_nrtr_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_nrtr_masks.h @@ -206,4 +206,3 @@ #define TPC0_NRTR_NON_LIN_SCRAMB_EN_MASK 0x1 #endif /* ASIC_REG_TPC0_NRTR_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_nrtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_nrtr_regs.h index dc280f4e6608..ce3346dd2042 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_nrtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_nrtr_regs.h @@ -224,4 +224,3 @@ #define mmTPC0_NRTR_NON_LIN_SCRAMB 0xE00604 #endif /* ASIC_REG_TPC0_NRTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_qm_masks.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_qm_masks.h index 80d97ee3d8d6..2e4b45947944 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_qm_masks.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_qm_masks.h @@ -462,4 +462,3 @@ #define TPC0_QM_CQ_BUF_RDATA_VAL_MASK 0xFFFFFFFF #endif /* ASIC_REG_TPC0_QM_MASKS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_qm_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_qm_regs.h index 7552d4ba61fe..4fa09eb88878 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_qm_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc0_qm_regs.h @@ -176,4 +176,3 @@ #define mmTPC0_QM_CQ_BUF_RDATA 0xE0830C #endif /* ASIC_REG_TPC0_QM_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_cfg_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_cfg_regs.h index 19894413474a..928eef1808ae 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_cfg_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_cfg_regs.h @@ -884,4 +884,3 @@ #define mmTPC1_CFG_FUNC_MBIST_MEM_9 0xE46E2C #endif /* ASIC_REG_TPC1_CFG_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_cmdq_regs.h 
b/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_cmdq_regs.h index 9099ebd7ab23..30ae0f307328 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_cmdq_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_cmdq_regs.h @@ -136,4 +136,3 @@ #define mmTPC1_CMDQ_CQ_BUF_RDATA 0xE4930C #endif /* ASIC_REG_TPC1_CMDQ_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_qm_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_qm_regs.h index bc8b9a10391f..b95de4f95ba9 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_qm_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_qm_regs.h @@ -176,4 +176,3 @@ #define mmTPC1_QM_CQ_BUF_RDATA 0xE4830C #endif /* ASIC_REG_TPC1_QM_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_rtr_regs.h index ae267f8f457e..0f91e307879e 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc1_rtr_regs.h @@ -320,4 +320,3 @@ #define mmTPC1_RTR_NON_LIN_SCRAMB 0xE40604 #endif /* ASIC_REG_TPC1_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_cfg_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_cfg_regs.h index 9c33fc039036..73421227f35b 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_cfg_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_cfg_regs.h @@ -884,4 +884,3 @@ #define mmTPC2_CFG_FUNC_MBIST_MEM_9 0xE86E2C #endif /* ASIC_REG_TPC2_CFG_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_cmdq_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_cmdq_regs.h index 7a643887d6e1..27b66bf2da9f 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_cmdq_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_cmdq_regs.h @@ -136,4 +136,3 @@ #define mmTPC2_CMDQ_CQ_BUF_RDATA 0xE8930C #endif /* ASIC_REG_TPC2_CMDQ_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_qm_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_qm_regs.h index f3e32c018064..31e5b2f53905 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_qm_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_qm_regs.h @@ -176,4 +176,3 @@ #define mmTPC2_QM_CQ_BUF_RDATA 0xE8830C #endif /* ASIC_REG_TPC2_QM_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_rtr_regs.h index 0eb0cd1fbd19..4eddeaa15d94 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc2_rtr_regs.h @@ -320,4 +320,3 @@ #define mmTPC2_RTR_NON_LIN_SCRAMB 0xE80604 #endif /* ASIC_REG_TPC2_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_cfg_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_cfg_regs.h index 0baf63c69b25..ce573a1a8361 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_cfg_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_cfg_regs.h @@ -884,4 +884,3 @@ #define mmTPC3_CFG_FUNC_MBIST_MEM_9 0xEC6E2C #endif /* ASIC_REG_TPC3_CFG_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_cmdq_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_cmdq_regs.h index 82a5261e852f..11d81fca0a0f 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_cmdq_regs.h +++ 
b/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_cmdq_regs.h @@ -136,4 +136,3 @@ #define mmTPC3_CMDQ_CQ_BUF_RDATA 0xEC930C #endif /* ASIC_REG_TPC3_CMDQ_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_qm_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_qm_regs.h index b05b1e18e664..e41595a19e69 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_qm_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_qm_regs.h @@ -176,4 +176,3 @@ #define mmTPC3_QM_CQ_BUF_RDATA 0xEC830C #endif /* ASIC_REG_TPC3_QM_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_rtr_regs.h index 5a2fd7652650..34a438b1efe5 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc3_rtr_regs.h @@ -320,4 +320,3 @@ #define mmTPC3_RTR_NON_LIN_SCRAMB 0xEC0604 #endif /* ASIC_REG_TPC3_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_cfg_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_cfg_regs.h index d64a100075f2..d44caf0fc1bb 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_cfg_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_cfg_regs.h @@ -884,4 +884,3 @@ #define mmTPC4_CFG_FUNC_MBIST_MEM_9 0xF06E2C #endif /* ASIC_REG_TPC4_CFG_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_cmdq_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_cmdq_regs.h index 565b42885b0d..f13a6532961f 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_cmdq_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_cmdq_regs.h @@ -136,4 +136,3 @@ #define mmTPC4_CMDQ_CQ_BUF_RDATA 0xF0930C #endif /* ASIC_REG_TPC4_CMDQ_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_qm_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_qm_regs.h index 196da3f12710..db081fc17cfc 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_qm_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_qm_regs.h @@ -176,4 +176,3 @@ #define mmTPC4_QM_CQ_BUF_RDATA 0xF0830C #endif /* ASIC_REG_TPC4_QM_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_rtr_regs.h index 8b54041d144a..8c5372303b28 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc4_rtr_regs.h @@ -320,4 +320,3 @@ #define mmTPC4_RTR_NON_LIN_SCRAMB 0xF00604 #endif /* ASIC_REG_TPC4_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_cfg_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_cfg_regs.h index 3f00954fcdba..5139fde71011 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_cfg_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_cfg_regs.h @@ -884,4 +884,3 @@ #define mmTPC5_CFG_FUNC_MBIST_MEM_9 0xF46E2C #endif /* ASIC_REG_TPC5_CFG_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_cmdq_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_cmdq_regs.h index d8e72a8e18d7..1e7cd6e1e888 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_cmdq_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_cmdq_regs.h @@ -136,4 +136,3 @@ #define mmTPC5_CMDQ_CQ_BUF_RDATA 0xF4930C #endif /* ASIC_REG_TPC5_CMDQ_REGS_H_ */ - diff --git 
a/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_qm_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_qm_regs.h index be2e68624709..ac0d3820cd6b 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_qm_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_qm_regs.h @@ -176,4 +176,3 @@ #define mmTPC5_QM_CQ_BUF_RDATA 0xF4830C #endif /* ASIC_REG_TPC5_QM_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_rtr_regs.h index 6f301c7bbc2f..57f83bc3b17d 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc5_rtr_regs.h @@ -320,4 +320,3 @@ #define mmTPC5_RTR_NON_LIN_SCRAMB 0xF40604 #endif /* ASIC_REG_TPC5_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_cfg_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_cfg_regs.h index 1e1168601c41..94e0191c06c1 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_cfg_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_cfg_regs.h @@ -884,4 +884,3 @@ #define mmTPC6_CFG_FUNC_MBIST_MEM_9 0xF86E2C #endif /* ASIC_REG_TPC6_CFG_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_cmdq_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_cmdq_regs.h index fbca6b47284e..7a1a0e87b225 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_cmdq_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_cmdq_regs.h @@ -136,4 +136,3 @@ #define mmTPC6_CMDQ_CQ_BUF_RDATA 0xF8930C #endif /* ASIC_REG_TPC6_CMDQ_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_qm_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_qm_regs.h index bf32465dabcb..80fa0fe0f60f 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_qm_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_qm_regs.h @@ -176,4 +176,3 @@ #define mmTPC6_QM_CQ_BUF_RDATA 0xF8830C #endif /* ASIC_REG_TPC6_QM_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_rtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_rtr_regs.h index 609bb90e1046..d6cae8b8af66 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_rtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc6_rtr_regs.h @@ -320,4 +320,3 @@ #define mmTPC6_RTR_NON_LIN_SCRAMB 0xF80604 #endif /* ASIC_REG_TPC6_RTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_cfg_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_cfg_regs.h index bf2fd0f73906..234147adb779 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_cfg_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_cfg_regs.h @@ -884,4 +884,3 @@ #define mmTPC7_CFG_FUNC_MBIST_MEM_9 0xFC6E2C #endif /* ASIC_REG_TPC7_CFG_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_cmdq_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_cmdq_regs.h index 65d83043bf63..4c160632fe7d 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_cmdq_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_cmdq_regs.h @@ -136,4 +136,3 @@ #define mmTPC7_CMDQ_CQ_BUF_RDATA 0xFC930C #endif /* ASIC_REG_TPC7_CMDQ_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_nrtr_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_nrtr_regs.h index 3d5848d87304..0c13d4d167aa 100644 --- 
a/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_nrtr_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_nrtr_regs.h @@ -224,4 +224,3 @@ #define mmTPC7_NRTR_NON_LIN_SCRAMB 0xFC0604 #endif /* ASIC_REG_TPC7_NRTR_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_qm_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_qm_regs.h index 25f5095f68fb..cbe11425bfb0 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_qm_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc7_qm_regs.h @@ -176,4 +176,3 @@ #define mmTPC7_QM_CQ_BUF_RDATA 0xFC830C #endif /* ASIC_REG_TPC7_QM_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/asic_reg/tpc_pll_regs.h b/drivers/misc/habanalabs/include/goya/asic_reg/tpc_pll_regs.h index 920231d0afa5..e25e19660a9d 100644 --- a/drivers/misc/habanalabs/include/goya/asic_reg/tpc_pll_regs.h +++ b/drivers/misc/habanalabs/include/goya/asic_reg/tpc_pll_regs.h @@ -102,4 +102,3 @@ #define mmTPC_PLL_FREQ_CALC_EN 0xE01440 #endif /* ASIC_REG_TPC_PLL_REGS_H_ */ - diff --git a/drivers/misc/habanalabs/include/goya/goya.h b/drivers/misc/habanalabs/include/goya/goya.h index 614149efa412..3f02a52ba4ce 100644 --- a/drivers/misc/habanalabs/include/goya/goya.h +++ b/drivers/misc/habanalabs/include/goya/goya.h @@ -8,10 +8,6 @@ #ifndef GOYA_H #define GOYA_H -#include "asic_reg/goya_regs.h" - -#include <linux/types.h> - #define SRAM_CFG_BAR_ID 0 #define MSIX_BAR_ID 2 #define DDR_BAR_ID 4 diff --git a/drivers/misc/habanalabs/include/goya/goya_async_events.h b/drivers/misc/habanalabs/include/goya/goya_async_events.h index 497937a17ee9..bb7a1aa3279e 100644 --- a/drivers/misc/habanalabs/include/goya/goya_async_events.h +++ b/drivers/misc/habanalabs/include/goya/goya_async_events.h @@ -9,7 +9,9 @@ #define __GOYA_ASYNC_EVENTS_H_ enum goya_async_event_id { + GOYA_ASYNC_EVENT_ID_PCIE_CORE = 32, GOYA_ASYNC_EVENT_ID_PCIE_IF = 33, + GOYA_ASYNC_EVENT_ID_PCIE_PHY = 34, GOYA_ASYNC_EVENT_ID_TPC0_ECC = 36, GOYA_ASYNC_EVENT_ID_TPC1_ECC = 39, GOYA_ASYNC_EVENT_ID_TPC2_ECC = 42, @@ -23,6 +25,8 @@ enum goya_async_event_id { GOYA_ASYNC_EVENT_ID_MMU_ECC = 63, GOYA_ASYNC_EVENT_ID_DMA_MACRO = 64, GOYA_ASYNC_EVENT_ID_DMA_ECC = 66, + GOYA_ASYNC_EVENT_ID_DDR0_PARITY = 69, + GOYA_ASYNC_EVENT_ID_DDR1_PARITY = 72, GOYA_ASYNC_EVENT_ID_CPU_IF_ECC = 75, GOYA_ASYNC_EVENT_ID_PSOC_MEM = 78, GOYA_ASYNC_EVENT_ID_PSOC_CORESIGHT = 79, @@ -72,6 +76,7 @@ enum goya_async_event_id { GOYA_ASYNC_EVENT_ID_MME_WACSD = 142, GOYA_ASYNC_EVENT_ID_PLL0 = 143, GOYA_ASYNC_EVENT_ID_PLL1 = 144, + GOYA_ASYNC_EVENT_ID_PLL2 = 145, GOYA_ASYNC_EVENT_ID_PLL3 = 146, GOYA_ASYNC_EVENT_ID_PLL4 = 147, GOYA_ASYNC_EVENT_ID_PLL5 = 148, @@ -81,6 +86,7 @@ enum goya_async_event_id { GOYA_ASYNC_EVENT_ID_PSOC = 160, GOYA_ASYNC_EVENT_ID_PCIE_FLR = 171, GOYA_ASYNC_EVENT_ID_PCIE_HOT_RESET = 172, + GOYA_ASYNC_EVENT_ID_PCIE_PERST = 173, GOYA_ASYNC_EVENT_ID_PCIE_QID0_ENG0 = 174, GOYA_ASYNC_EVENT_ID_PCIE_QID0_ENG1 = 175, GOYA_ASYNC_EVENT_ID_PCIE_QID0_ENG2 = 176, @@ -144,8 +150,11 @@ enum goya_async_event_id { GOYA_ASYNC_EVENT_ID_PSOC_GPIO_U16_0 = 330, GOYA_ASYNC_EVENT_ID_PSOC_GPIO_U16_1 = 331, GOYA_ASYNC_EVENT_ID_PSOC_GPIO_U16_2 = 332, + GOYA_ASYNC_EVENT_ID_PSOC_GPIO_U16_3 = 333, + GOYA_ASYNC_EVENT_ID_PSOC_GPIO_U16_4 = 334, GOYA_ASYNC_EVENT_ID_PSOC_GPIO_05_SW_RESET = 356, GOYA_ASYNC_EVENT_ID_PSOC_GPIO_10_VRHOT_ICRIT = 361, + GOYA_ASYNC_EVENT_ID_FAN = 425, GOYA_ASYNC_EVENT_ID_TPC0_CMDQ = 430, GOYA_ASYNC_EVENT_ID_TPC1_CMDQ = 431, GOYA_ASYNC_EVENT_ID_TPC2_CMDQ = 432, diff --git 
a/drivers/misc/habanalabs/include/goya/goya_coresight.h b/drivers/misc/habanalabs/include/goya/goya_coresight.h new file mode 100644 index 000000000000..6e933c0ca5cd --- /dev/null +++ b/drivers/misc/habanalabs/include/goya/goya_coresight.h @@ -0,0 +1,199 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * Copyright 2016-2018 HabanaLabs, Ltd. + * All Rights Reserved. + * + */ + +#ifndef GOYA_CORESIGHT_H +#define GOYA_CORESIGHT_H + +enum goya_debug_stm_regs_index { + GOYA_STM_FIRST = 0, + GOYA_STM_CPU = GOYA_STM_FIRST, + GOYA_STM_DMA_CH_0_CS, + GOYA_STM_DMA_CH_1_CS, + GOYA_STM_DMA_CH_2_CS, + GOYA_STM_DMA_CH_3_CS, + GOYA_STM_DMA_CH_4_CS, + GOYA_STM_DMA_MACRO_CS, + GOYA_STM_MME1_SBA, + GOYA_STM_MME3_SBB, + GOYA_STM_MME4_WACS2, + GOYA_STM_MME4_WACS, + GOYA_STM_MMU_CS, + GOYA_STM_PCIE, + GOYA_STM_PSOC, + GOYA_STM_TPC0_EML, + GOYA_STM_TPC1_EML, + GOYA_STM_TPC2_EML, + GOYA_STM_TPC3_EML, + GOYA_STM_TPC4_EML, + GOYA_STM_TPC5_EML, + GOYA_STM_TPC6_EML, + GOYA_STM_TPC7_EML, + GOYA_STM_LAST = GOYA_STM_TPC7_EML +}; + +enum goya_debug_etf_regs_index { + GOYA_ETF_FIRST = 0, + GOYA_ETF_CPU_0 = GOYA_ETF_FIRST, + GOYA_ETF_CPU_1, + GOYA_ETF_CPU_TRACE, + GOYA_ETF_DMA_CH_0_CS, + GOYA_ETF_DMA_CH_1_CS, + GOYA_ETF_DMA_CH_2_CS, + GOYA_ETF_DMA_CH_3_CS, + GOYA_ETF_DMA_CH_4_CS, + GOYA_ETF_DMA_MACRO_CS, + GOYA_ETF_MME1_SBA, + GOYA_ETF_MME3_SBB, + GOYA_ETF_MME4_WACS2, + GOYA_ETF_MME4_WACS, + GOYA_ETF_MMU_CS, + GOYA_ETF_PCIE, + GOYA_ETF_PSOC, + GOYA_ETF_TPC0_EML, + GOYA_ETF_TPC1_EML, + GOYA_ETF_TPC2_EML, + GOYA_ETF_TPC3_EML, + GOYA_ETF_TPC4_EML, + GOYA_ETF_TPC5_EML, + GOYA_ETF_TPC6_EML, + GOYA_ETF_TPC7_EML, + GOYA_ETF_LAST = GOYA_ETF_TPC7_EML +}; + +enum goya_debug_funnel_regs_index { + GOYA_FUNNEL_FIRST = 0, + GOYA_FUNNEL_CPU = GOYA_FUNNEL_FIRST, + GOYA_FUNNEL_DMA_CH_6_1, + GOYA_FUNNEL_DMA_MACRO_3_1, + GOYA_FUNNEL_MME0_RTR, + GOYA_FUNNEL_MME1_RTR, + GOYA_FUNNEL_MME2_RTR, + GOYA_FUNNEL_MME3_RTR, + GOYA_FUNNEL_MME4_RTR, + GOYA_FUNNEL_MME5_RTR, + GOYA_FUNNEL_PCIE, + GOYA_FUNNEL_PSOC, + GOYA_FUNNEL_TPC0_EML, + GOYA_FUNNEL_TPC1_EML, + GOYA_FUNNEL_TPC1_RTR, + GOYA_FUNNEL_TPC2_EML, + GOYA_FUNNEL_TPC2_RTR, + GOYA_FUNNEL_TPC3_EML, + GOYA_FUNNEL_TPC3_RTR, + GOYA_FUNNEL_TPC4_EML, + GOYA_FUNNEL_TPC4_RTR, + GOYA_FUNNEL_TPC5_EML, + GOYA_FUNNEL_TPC5_RTR, + GOYA_FUNNEL_TPC6_EML, + GOYA_FUNNEL_TPC6_RTR, + GOYA_FUNNEL_TPC7_EML, + GOYA_FUNNEL_LAST = GOYA_FUNNEL_TPC7_EML +}; + +enum goya_debug_bmon_regs_index { + GOYA_BMON_FIRST = 0, + GOYA_BMON_CPU_RD = GOYA_BMON_FIRST, + GOYA_BMON_CPU_WR, + GOYA_BMON_DMA_CH_0_0, + GOYA_BMON_DMA_CH_0_1, + GOYA_BMON_DMA_CH_1_0, + GOYA_BMON_DMA_CH_1_1, + GOYA_BMON_DMA_CH_2_0, + GOYA_BMON_DMA_CH_2_1, + GOYA_BMON_DMA_CH_3_0, + GOYA_BMON_DMA_CH_3_1, + GOYA_BMON_DMA_CH_4_0, + GOYA_BMON_DMA_CH_4_1, + GOYA_BMON_DMA_MACRO_0, + GOYA_BMON_DMA_MACRO_1, + GOYA_BMON_DMA_MACRO_2, + GOYA_BMON_DMA_MACRO_3, + GOYA_BMON_DMA_MACRO_4, + GOYA_BMON_DMA_MACRO_5, + GOYA_BMON_DMA_MACRO_6, + GOYA_BMON_DMA_MACRO_7, + GOYA_BMON_MME1_SBA_0, + GOYA_BMON_MME1_SBA_1, + GOYA_BMON_MME3_SBB_0, + GOYA_BMON_MME3_SBB_1, + GOYA_BMON_MME4_WACS2_0, + GOYA_BMON_MME4_WACS2_1, + GOYA_BMON_MME4_WACS2_2, + GOYA_BMON_MME4_WACS_0, + GOYA_BMON_MME4_WACS_1, + GOYA_BMON_MME4_WACS_2, + GOYA_BMON_MME4_WACS_3, + GOYA_BMON_MME4_WACS_4, + GOYA_BMON_MME4_WACS_5, + GOYA_BMON_MME4_WACS_6, + GOYA_BMON_MMU_0, + GOYA_BMON_MMU_1, + GOYA_BMON_PCIE_MSTR_RD, + GOYA_BMON_PCIE_MSTR_WR, + GOYA_BMON_PCIE_SLV_RD, + GOYA_BMON_PCIE_SLV_WR, + GOYA_BMON_TPC0_EML_0, + GOYA_BMON_TPC0_EML_1, + GOYA_BMON_TPC0_EML_2, + GOYA_BMON_TPC0_EML_3, + GOYA_BMON_TPC1_EML_0, + GOYA_BMON_TPC1_EML_1, + 
GOYA_BMON_TPC1_EML_2, + GOYA_BMON_TPC1_EML_3, + GOYA_BMON_TPC2_EML_0, + GOYA_BMON_TPC2_EML_1, + GOYA_BMON_TPC2_EML_2, + GOYA_BMON_TPC2_EML_3, + GOYA_BMON_TPC3_EML_0, + GOYA_BMON_TPC3_EML_1, + GOYA_BMON_TPC3_EML_2, + GOYA_BMON_TPC3_EML_3, + GOYA_BMON_TPC4_EML_0, + GOYA_BMON_TPC4_EML_1, + GOYA_BMON_TPC4_EML_2, + GOYA_BMON_TPC4_EML_3, + GOYA_BMON_TPC5_EML_0, + GOYA_BMON_TPC5_EML_1, + GOYA_BMON_TPC5_EML_2, + GOYA_BMON_TPC5_EML_3, + GOYA_BMON_TPC6_EML_0, + GOYA_BMON_TPC6_EML_1, + GOYA_BMON_TPC6_EML_2, + GOYA_BMON_TPC6_EML_3, + GOYA_BMON_TPC7_EML_0, + GOYA_BMON_TPC7_EML_1, + GOYA_BMON_TPC7_EML_2, + GOYA_BMON_TPC7_EML_3, + GOYA_BMON_LAST = GOYA_BMON_TPC7_EML_3 +}; + +enum goya_debug_spmu_regs_index { + GOYA_SPMU_FIRST = 0, + GOYA_SPMU_DMA_CH_0_CS = GOYA_SPMU_FIRST, + GOYA_SPMU_DMA_CH_1_CS, + GOYA_SPMU_DMA_CH_2_CS, + GOYA_SPMU_DMA_CH_3_CS, + GOYA_SPMU_DMA_CH_4_CS, + GOYA_SPMU_DMA_MACRO_CS, + GOYA_SPMU_MME1_SBA, + GOYA_SPMU_MME3_SBB, + GOYA_SPMU_MME4_WACS2, + GOYA_SPMU_MME4_WACS, + GOYA_SPMU_MMU_CS, + GOYA_SPMU_PCIE, + GOYA_SPMU_TPC0_EML, + GOYA_SPMU_TPC1_EML, + GOYA_SPMU_TPC2_EML, + GOYA_SPMU_TPC3_EML, + GOYA_SPMU_TPC4_EML, + GOYA_SPMU_TPC5_EML, + GOYA_SPMU_TPC6_EML, + GOYA_SPMU_TPC7_EML, + GOYA_SPMU_LAST = GOYA_SPMU_TPC7_EML +}; + +#endif /* GOYA_CORESIGHT_H */ diff --git a/drivers/misc/habanalabs/include/goya/goya_fw_if.h b/drivers/misc/habanalabs/include/goya/goya_fw_if.h index a9920cb4a07b..0fa80fe9f6cc 100644 --- a/drivers/misc/habanalabs/include/goya/goya_fw_if.h +++ b/drivers/misc/habanalabs/include/goya/goya_fw_if.h @@ -8,6 +8,8 @@ #ifndef GOYA_FW_IF_H #define GOYA_FW_IF_H +#define GOYA_EVENT_QUEUE_MSIX_IDX 5 + #define CPU_BOOT_ADDR 0x7FF8040000ull #define UBOOT_FW_OFFSET 0x100000 /* 1MB in SRAM */ diff --git a/drivers/misc/habanalabs/include/hl_boot_if.h b/drivers/misc/habanalabs/include/hl_boot_if.h index 7475732b9996..4cd04c090285 100644 --- a/drivers/misc/habanalabs/include/hl_boot_if.h +++ b/drivers/misc/habanalabs/include/hl_boot_if.h @@ -18,7 +18,8 @@ enum cpu_boot_status { CPU_BOOT_STATUS_IN_SPL, CPU_BOOT_STATUS_IN_UBOOT, CPU_BOOT_STATUS_DRAM_INIT_FAIL, - CPU_BOOT_STATUS_FIT_CORRUPTED + CPU_BOOT_STATUS_FIT_CORRUPTED, + CPU_BOOT_STATUS_UBOOT_NOT_READY, }; enum kmd_msg { diff --git a/drivers/misc/habanalabs/include/hw_ip/mmu/mmu_general.h b/drivers/misc/habanalabs/include/hw_ip/mmu/mmu_general.h index b680052ee3f0..71ea3c3e8ba3 100644 --- a/drivers/misc/habanalabs/include/hw_ip/mmu/mmu_general.h +++ b/drivers/misc/habanalabs/include/hw_ip/mmu/mmu_general.h @@ -14,16 +14,16 @@ #define PAGE_SIZE_4KB (_AC(1, UL) << PAGE_SHIFT_4KB) #define PAGE_MASK_2MB (~(PAGE_SIZE_2MB - 1)) -#define PAGE_PRESENT_MASK 0x0000000000001 -#define SWAP_OUT_MASK 0x0000000000004 -#define LAST_MASK 0x0000000000800 -#define PHYS_ADDR_MASK 0x3FFFFFFFFF000ull +#define PAGE_PRESENT_MASK 0x0000000000001ull +#define SWAP_OUT_MASK 0x0000000000004ull +#define LAST_MASK 0x0000000000800ull +#define PHYS_ADDR_MASK 0xFFFFFFFFFFFFF000ull #define HOP0_MASK 0x3000000000000ull #define HOP1_MASK 0x0FF8000000000ull #define HOP2_MASK 0x0007FC0000000ull -#define HOP3_MASK 0x000003FE00000 -#define HOP4_MASK 0x00000001FF000 -#define OFFSET_MASK 0x0000000000FFF +#define HOP3_MASK 0x000003FE00000ull +#define HOP4_MASK 0x00000001FF000ull +#define OFFSET_MASK 0x0000000000FFFull #define HOP0_SHIFT 48 #define HOP1_SHIFT 39 @@ -32,7 +32,7 @@ #define HOP4_SHIFT 12 #define PTE_PHYS_ADDR_SHIFT 12 -#define PTE_PHYS_ADDR_MASK ~0xFFF +#define PTE_PHYS_ADDR_MASK ~OFFSET_MASK #define HL_PTE_SIZE sizeof(u64) #define HOP_TABLE_SIZE PAGE_SIZE_4KB 
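The mmu_general.h hunk above does two things: it gives every mask an explicit ull suffix so PTE arithmetic is not done in int width, and it widens PHYS_ADDR_MASK, apparently so that the full-width (kernel-virtual) shadow-hop addresses introduced later in this diff survive the `& PHYS_ADDR_MASK` in get_next_hop_addr(). Together the masks carve a device virtual address into a 2-bit hop0 index, four 9-bit hop indices and a 12-bit page offset. A standalone sketch of the decomposition (HOP2_SHIFT/HOP3_SHIFT are not visible in the hunk; 30 and 21 are inferred from the masks):

#include <stdint.h>
#include <stdio.h>

/* mask values copied from the mmu_general.h hunk above */
#define HOP0_MASK   0x3000000000000ull   /* bits 49:48 */
#define HOP1_MASK   0x0FF8000000000ull   /* bits 47:39 */
#define HOP2_MASK   0x0007FC0000000ull   /* bits 38:30 */
#define HOP3_MASK   0x000003FE00000ull   /* bits 29:21 */
#define HOP4_MASK   0x00000001FF000ull   /* bits 20:12 */
#define OFFSET_MASK 0x0000000000FFFull   /* bits 11:0  */

int main(void)
{
	uint64_t va = 0x2ABCDE1234567ull;    /* arbitrary 50-bit example VA */

	/* each hop index selects one PTE inside that hop's table */
	printf("hop0=%llu hop1=%llu hop2=%llu hop3=%llu hop4=%llu off=0x%llx\n",
	       (unsigned long long)((va & HOP0_MASK) >> 48),
	       (unsigned long long)((va & HOP1_MASK) >> 39),
	       (unsigned long long)((va & HOP2_MASK) >> 30),
	       (unsigned long long)((va & HOP3_MASK) >> 21),
	       (unsigned long long)((va & HOP4_MASK) >> 12),
	       (unsigned long long)(va & OFFSET_MASK));
	return 0;
}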
diff --git a/drivers/misc/habanalabs/include/hw_ip/pci/pci_general.h b/drivers/misc/habanalabs/include/hw_ip/pci/pci_general.h new file mode 100644 index 000000000000..d232081d4e0f --- /dev/null +++ b/drivers/misc/habanalabs/include/hw_ip/pci/pci_general.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * Copyright 2016-2019 HabanaLabs, Ltd. + * All Rights Reserved. + * + */ + +#ifndef INCLUDE_PCI_GENERAL_H_ +#define INCLUDE_PCI_GENERAL_H_ + +/* PCI CONFIGURATION SPACE */ +#define mmPCI_CONFIG_ELBI_ADDR 0xFF0 +#define mmPCI_CONFIG_ELBI_DATA 0xFF4 +#define mmPCI_CONFIG_ELBI_CTRL 0xFF8 +#define PCI_CONFIG_ELBI_CTRL_WRITE (1 << 31) + +#define mmPCI_CONFIG_ELBI_STS 0xFFC +#define PCI_CONFIG_ELBI_STS_ERR (1 << 30) +#define PCI_CONFIG_ELBI_STS_DONE (1 << 31) +#define PCI_CONFIG_ELBI_STS_MASK (PCI_CONFIG_ELBI_STS_ERR | \ + PCI_CONFIG_ELBI_STS_DONE) + +#endif /* INCLUDE_PCI_GENERAL_H_ */ diff --git a/drivers/misc/habanalabs/irq.c b/drivers/misc/habanalabs/irq.c index e69a09c10e3f..ea9f72ff456c 100644 --- a/drivers/misc/habanalabs/irq.c +++ b/drivers/misc/habanalabs/irq.c @@ -222,7 +222,7 @@ int hl_cq_init(struct hl_device *hdev, struct hl_cq *q, u32 hw_queue_id) BUILD_BUG_ON(HL_CQ_SIZE_IN_BYTES > HL_PAGE_SIZE); - p = hdev->asic_funcs->dma_alloc_coherent(hdev, HL_CQ_SIZE_IN_BYTES, + p = hdev->asic_funcs->asic_dma_alloc_coherent(hdev, HL_CQ_SIZE_IN_BYTES, &q->bus_address, GFP_KERNEL | __GFP_ZERO); if (!p) return -ENOMEM; @@ -248,7 +248,7 @@ int hl_cq_init(struct hl_device *hdev, struct hl_cq *q, u32 hw_queue_id) */ void hl_cq_fini(struct hl_device *hdev, struct hl_cq *q) { - hdev->asic_funcs->dma_free_coherent(hdev, HL_CQ_SIZE_IN_BYTES, + hdev->asic_funcs->asic_dma_free_coherent(hdev, HL_CQ_SIZE_IN_BYTES, (void *) (uintptr_t) q->kernel_address, q->bus_address); } @@ -284,8 +284,9 @@ int hl_eq_init(struct hl_device *hdev, struct hl_eq *q) BUILD_BUG_ON(HL_EQ_SIZE_IN_BYTES > HL_PAGE_SIZE); - p = hdev->asic_funcs->dma_alloc_coherent(hdev, HL_EQ_SIZE_IN_BYTES, - &q->bus_address, GFP_KERNEL | __GFP_ZERO); + p = hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, + HL_EQ_SIZE_IN_BYTES, + &q->bus_address); if (!p) return -ENOMEM; @@ -308,8 +309,9 @@ void hl_eq_fini(struct hl_device *hdev, struct hl_eq *q) { flush_workqueue(hdev->eq_wq); - hdev->asic_funcs->dma_free_coherent(hdev, HL_EQ_SIZE_IN_BYTES, - (void *) (uintptr_t) q->kernel_address, q->bus_address); + hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev, + HL_EQ_SIZE_IN_BYTES, + (void *) (uintptr_t) q->kernel_address); } void hl_eq_reset(struct hl_device *hdev, struct hl_eq *q) diff --git a/drivers/misc/habanalabs/memory.c b/drivers/misc/habanalabs/memory.c index ce1fda40a8b8..d67d24c13efd 100644 --- a/drivers/misc/habanalabs/memory.c +++ b/drivers/misc/habanalabs/memory.c @@ -109,7 +109,7 @@ static int alloc_device_memory(struct hl_ctx *ctx, struct hl_mem_in *args, page_size); if (!phys_pg_pack->pages[i]) { dev_err(hdev->dev, - "ioctl failed to allocate page\n"); + "Failed to allocate device memory (out of memory)\n"); rc = -ENOMEM; goto page_err; } @@ -759,10 +759,6 @@ static int map_phys_page_pack(struct hl_ctx *ctx, u64 vaddr, for (i = 0 ; i < phys_pg_pack->npages ; i++) { paddr = phys_pg_pack->pages[i]; - /* For accessing the host we need to turn on bit 39 */ - if (phys_pg_pack->created_from_userptr) - paddr += hdev->asic_prop.host_phys_base_address; - rc = hl_mmu_map(ctx, next_vaddr, paddr, page_size); if (rc) { dev_err(hdev->dev, @@ -1046,10 +1042,17 @@ static int unmap_device_va(struct hl_ctx *ctx, u64 vaddr) 
mutex_lock(&ctx->mmu_lock); - for (i = 0 ; i < phys_pg_pack->npages ; i++, next_vaddr += page_size) + for (i = 0 ; i < phys_pg_pack->npages ; i++, next_vaddr += page_size) { if (hl_mmu_unmap(ctx, next_vaddr, page_size)) dev_warn_ratelimited(hdev->dev, - "unmap failed for vaddr: 0x%llx\n", next_vaddr); + "unmap failed for vaddr: 0x%llx\n", next_vaddr); + + /* unmapping on Palladium can be really long, so avoid a CPU + * soft lockup bug by sleeping a little between unmapping pages + */ + if (hdev->pldm) + usleep_range(500, 1000); + } hdev->asic_funcs->mmu_invalidate_cache(hdev, true); @@ -1083,6 +1086,64 @@ vm_type_err: return rc; } +static int mem_ioctl_no_mmu(struct hl_fpriv *hpriv, union hl_mem_args *args) +{ + struct hl_device *hdev = hpriv->hdev; + struct hl_ctx *ctx = hpriv->ctx; + u64 device_addr = 0; + u32 handle = 0; + int rc; + + switch (args->in.op) { + case HL_MEM_OP_ALLOC: + if (args->in.alloc.mem_size == 0) { + dev_err(hdev->dev, + "alloc size must be larger than 0\n"); + rc = -EINVAL; + goto out; + } + + /* Force contiguous as there are no real MMU + * translations to overcome physical memory gaps + */ + args->in.flags |= HL_MEM_CONTIGUOUS; + rc = alloc_device_memory(ctx, &args->in, &handle); + + memset(args, 0, sizeof(*args)); + args->out.handle = (__u64) handle; + break; + + case HL_MEM_OP_FREE: + rc = free_device_memory(ctx, args->in.free.handle); + break; + + case HL_MEM_OP_MAP: + if (args->in.flags & HL_MEM_USERPTR) { + device_addr = args->in.map_host.host_virt_addr; + rc = 0; + } else { + rc = get_paddr_from_handle(ctx, &args->in, + &device_addr); + } + + memset(args, 0, sizeof(*args)); + args->out.device_virt_addr = device_addr; + break; + + case HL_MEM_OP_UNMAP: + rc = 0; + break; + + default: + dev_err(hdev->dev, "Unknown opcode for memory IOCTL\n"); + rc = -ENOTTY; + break; + } + +out: + return rc; +} + int hl_mem_ioctl(struct hl_fpriv *hpriv, void *data) { union hl_mem_args *args = data; @@ -1094,104 +1155,54 @@ int hl_mem_ioctl(struct hl_fpriv *hpriv, void *data) if (hl_device_disabled_or_in_reset(hdev)) { dev_warn_ratelimited(hdev->dev, - "Device is disabled or in reset. Can't execute memory IOCTL\n"); + "Device is %s. Can't execute MEMORY IOCTL\n", + atomic_read(&hdev->in_reset) ? 
"in_reset" : "disabled"); return -EBUSY; } - if (hdev->mmu_enable) { - switch (args->in.op) { - case HL_MEM_OP_ALLOC: - if (!hdev->dram_supports_virtual_memory) { - dev_err(hdev->dev, - "DRAM alloc is not supported\n"); - rc = -EINVAL; - goto out; - } - if (args->in.alloc.mem_size == 0) { - dev_err(hdev->dev, - "alloc size must be larger than 0\n"); - rc = -EINVAL; - goto out; - } - rc = alloc_device_memory(ctx, &args->in, &handle); - - memset(args, 0, sizeof(*args)); - args->out.handle = (__u64) handle; - break; - - case HL_MEM_OP_FREE: - if (!hdev->dram_supports_virtual_memory) { - dev_err(hdev->dev, - "DRAM free is not supported\n"); - rc = -EINVAL; - goto out; - } - rc = free_device_memory(ctx, args->in.free.handle); - break; - - case HL_MEM_OP_MAP: - rc = map_device_va(ctx, &args->in, &device_addr); - - memset(args, 0, sizeof(*args)); - args->out.device_virt_addr = device_addr; - break; - - case HL_MEM_OP_UNMAP: - rc = unmap_device_va(ctx, - args->in.unmap.device_virt_addr); - break; + if (!hdev->mmu_enable) + return mem_ioctl_no_mmu(hpriv, args); - default: - dev_err(hdev->dev, "Unknown opcode for memory IOCTL\n"); - rc = -ENOTTY; - break; + switch (args->in.op) { + case HL_MEM_OP_ALLOC: + if (!hdev->dram_supports_virtual_memory) { + dev_err(hdev->dev, "DRAM alloc is not supported\n"); + rc = -EINVAL; + goto out; } - } else { - switch (args->in.op) { - case HL_MEM_OP_ALLOC: - if (args->in.alloc.mem_size == 0) { - dev_err(hdev->dev, - "alloc size must be larger than 0\n"); - rc = -EINVAL; - goto out; - } - /* Force contiguous as there are no real MMU - * translations to overcome physical memory gaps - */ - args->in.flags |= HL_MEM_CONTIGUOUS; - rc = alloc_device_memory(ctx, &args->in, &handle); + if (args->in.alloc.mem_size == 0) { + dev_err(hdev->dev, + "alloc size must be larger than 0\n"); + rc = -EINVAL; + goto out; + } + rc = alloc_device_memory(ctx, &args->in, &handle); - memset(args, 0, sizeof(*args)); - args->out.handle = (__u64) handle; - break; + memset(args, 0, sizeof(*args)); + args->out.handle = (__u64) handle; + break; - case HL_MEM_OP_FREE: - rc = free_device_memory(ctx, args->in.free.handle); - break; + case HL_MEM_OP_FREE: + rc = free_device_memory(ctx, args->in.free.handle); + break; - case HL_MEM_OP_MAP: - if (args->in.flags & HL_MEM_USERPTR) { - device_addr = args->in.map_host.host_virt_addr; - rc = 0; - } else { - rc = get_paddr_from_handle(ctx, &args->in, - &device_addr); - } + case HL_MEM_OP_MAP: + rc = map_device_va(ctx, &args->in, &device_addr); - memset(args, 0, sizeof(*args)); - args->out.device_virt_addr = device_addr; - break; + memset(args, 0, sizeof(*args)); + args->out.device_virt_addr = device_addr; + break; - case HL_MEM_OP_UNMAP: - rc = 0; - break; + case HL_MEM_OP_UNMAP: + rc = unmap_device_va(ctx, + args->in.unmap.device_virt_addr); + break; - default: - dev_err(hdev->dev, "Unknown opcode for memory IOCTL\n"); - rc = -ENOTTY; - break; - } + default: + dev_err(hdev->dev, "Unknown opcode for memory IOCTL\n"); + rc = -ENOTTY; + break; } out: diff --git a/drivers/misc/habanalabs/mmu.c b/drivers/misc/habanalabs/mmu.c index 3a5a2cec8305..533d9315b6fb 100644 --- a/drivers/misc/habanalabs/mmu.c +++ b/drivers/misc/habanalabs/mmu.c @@ -11,13 +11,15 @@ #include <linux/genalloc.h> #include <linux/slab.h> -static struct pgt_info *get_pgt_info(struct hl_ctx *ctx, u64 addr) +static inline u64 get_phys_addr(struct hl_ctx *ctx, u64 shadow_addr); + +static struct pgt_info *get_pgt_info(struct hl_ctx *ctx, u64 hop_addr) { struct pgt_info *pgt_info = NULL; - 
hash_for_each_possible(ctx->mmu_hash, pgt_info, node, - (unsigned long) addr) - if (addr == pgt_info->addr) + hash_for_each_possible(ctx->mmu_shadow_hash, pgt_info, node, + (unsigned long) hop_addr) + if (hop_addr == pgt_info->shadow_addr) break; return pgt_info; @@ -25,45 +27,109 @@ static struct pgt_info *get_pgt_info(struct hl_ctx *ctx, u64 addr) static void free_hop(struct hl_ctx *ctx, u64 hop_addr) { + struct hl_device *hdev = ctx->hdev; struct pgt_info *pgt_info = get_pgt_info(ctx, hop_addr); - gen_pool_free(pgt_info->ctx->hdev->mmu_pgt_pool, pgt_info->addr, - ctx->hdev->asic_prop.mmu_hop_table_size); + gen_pool_free(hdev->mmu_pgt_pool, pgt_info->phys_addr, + hdev->asic_prop.mmu_hop_table_size); hash_del(&pgt_info->node); - + kfree((u64 *) (uintptr_t) pgt_info->shadow_addr); kfree(pgt_info); } static u64 alloc_hop(struct hl_ctx *ctx) { struct hl_device *hdev = ctx->hdev; + struct asic_fixed_properties *prop = &hdev->asic_prop; struct pgt_info *pgt_info; - u64 addr; + u64 phys_addr, shadow_addr; pgt_info = kmalloc(sizeof(*pgt_info), GFP_KERNEL); if (!pgt_info) return ULLONG_MAX; - addr = (u64) gen_pool_alloc(hdev->mmu_pgt_pool, - hdev->asic_prop.mmu_hop_table_size); - if (!addr) { + phys_addr = (u64) gen_pool_alloc(hdev->mmu_pgt_pool, + prop->mmu_hop_table_size); + if (!phys_addr) { dev_err(hdev->dev, "failed to allocate page\n"); - kfree(pgt_info); - return ULLONG_MAX; + goto pool_add_err; } - pgt_info->addr = addr; + shadow_addr = (u64) (uintptr_t) kzalloc(prop->mmu_hop_table_size, + GFP_KERNEL); + if (!shadow_addr) + goto shadow_err; + + pgt_info->phys_addr = phys_addr; + pgt_info->shadow_addr = shadow_addr; pgt_info->ctx = ctx; pgt_info->num_of_ptes = 0; - hash_add(ctx->mmu_hash, &pgt_info->node, addr); + hash_add(ctx->mmu_shadow_hash, &pgt_info->node, shadow_addr); + + return shadow_addr; + +shadow_err: + gen_pool_free(hdev->mmu_pgt_pool, phys_addr, prop->mmu_hop_table_size); +pool_add_err: + kfree(pgt_info); + + return ULLONG_MAX; +} + +static inline u64 get_phys_hop0_addr(struct hl_ctx *ctx) +{ + return ctx->hdev->asic_prop.mmu_pgt_addr + + (ctx->asid * ctx->hdev->asic_prop.mmu_hop_table_size); +} + +static inline u64 get_hop0_addr(struct hl_ctx *ctx) +{ + return (u64) (uintptr_t) ctx->hdev->mmu_shadow_hop0 + + (ctx->asid * ctx->hdev->asic_prop.mmu_hop_table_size); +} + +static inline void flush(struct hl_ctx *ctx) +{ + /* flush all writes from all cores to reach PCI */ + mb(); + ctx->hdev->asic_funcs->read_pte(ctx->hdev, get_phys_hop0_addr(ctx)); +} + +/* transform the value to physical address when writing to H/W */ +static inline void write_pte(struct hl_ctx *ctx, u64 shadow_pte_addr, u64 val) +{ + /* + * The value to write is actually the address of the next shadow hop + + * flags at the 12 LSBs. + * Hence in order to get the value to write to the physical PTE, we + * clear the 12 LSBs and translate the shadow hop to its associated + * physical hop, and add back the original 12 LSBs. 
+ */ + u64 phys_val = get_phys_addr(ctx, val & PTE_PHYS_ADDR_MASK) | + (val & OFFSET_MASK); + + ctx->hdev->asic_funcs->write_pte(ctx->hdev, + get_phys_addr(ctx, shadow_pte_addr), + phys_val); + + *(u64 *) (uintptr_t) shadow_pte_addr = val; +} - return addr; +/* do not transform the value to physical address when writing to H/W */ +static inline void write_final_pte(struct hl_ctx *ctx, u64 shadow_pte_addr, + u64 val) +{ + ctx->hdev->asic_funcs->write_pte(ctx->hdev, + get_phys_addr(ctx, shadow_pte_addr), + val); + *(u64 *) (uintptr_t) shadow_pte_addr = val; } -static inline void clear_pte(struct hl_device *hdev, u64 pte_addr) +/* clear the last and present bits */ +static inline void clear_pte(struct hl_ctx *ctx, u64 pte_addr) { - /* clear the last and present bits */ - hdev->asic_funcs->write_pte(hdev, pte_addr, 0); + /* no need to transform the value to physical address */ + write_final_pte(ctx, pte_addr, 0); } static inline void get_pte(struct hl_ctx *ctx, u64 hop_addr) @@ -98,12 +164,6 @@ static inline int put_pte(struct hl_ctx *ctx, u64 hop_addr) return num_of_ptes_left; } -static inline u64 get_hop0_addr(struct hl_ctx *ctx) -{ - return ctx->hdev->asic_prop.mmu_pgt_addr + - (ctx->asid * ctx->hdev->asic_prop.mmu_hop_table_size); -} - static inline u64 get_hopN_pte_addr(struct hl_ctx *ctx, u64 hop_addr, u64 virt_addr, u64 mask, u64 shift) { @@ -136,7 +196,7 @@ static inline u64 get_hop4_pte_addr(struct hl_ctx *ctx, u64 hop_addr, u64 vaddr) return get_hopN_pte_addr(ctx, hop_addr, vaddr, HOP4_MASK, HOP4_SHIFT); } -static inline u64 get_next_hop_addr(u64 curr_pte) +static inline u64 get_next_hop_addr(struct hl_ctx *ctx, u64 curr_pte) { if (curr_pte & PAGE_PRESENT_MASK) return curr_pte & PHYS_ADDR_MASK; @@ -147,7 +207,7 @@ static inline u64 get_next_hop_addr(u64 curr_pte) static inline u64 get_alloc_next_hop_addr(struct hl_ctx *ctx, u64 curr_pte, bool *is_new_hop) { - u64 hop_addr = get_next_hop_addr(curr_pte); + u64 hop_addr = get_next_hop_addr(ctx, curr_pte); if (hop_addr == ULLONG_MAX) { hop_addr = alloc_hop(ctx); @@ -157,106 +217,30 @@ static inline u64 get_alloc_next_hop_addr(struct hl_ctx *ctx, u64 curr_pte, return hop_addr; } -/* - * hl_mmu_init - init the mmu module - * - * @hdev: pointer to the habanalabs device structure - * - * This function does the following: - * - Allocate max_asid zeroed hop0 pgts so no mapping is available - * - Enable mmu in hw - * - Invalidate the mmu cache - * - Create a pool of pages for pgts - * - Returns 0 on success - * - * This function depends on DMA QMAN to be working! 
- */ -int hl_mmu_init(struct hl_device *hdev) +/* translates shadow address inside hop to a physical address */ +static inline u64 get_phys_addr(struct hl_ctx *ctx, u64 shadow_addr) { - struct asic_fixed_properties *prop = &hdev->asic_prop; - int rc; + u64 page_mask = (ctx->hdev->asic_prop.mmu_hop_table_size - 1); + u64 shadow_hop_addr = shadow_addr & ~page_mask; + u64 pte_offset = shadow_addr & page_mask; + u64 phys_hop_addr; - if (!hdev->mmu_enable) - return 0; - - /* MMU HW init was already done in device hw_init() */ - - mutex_init(&hdev->mmu_cache_lock); - - hdev->mmu_pgt_pool = - gen_pool_create(__ffs(prop->mmu_hop_table_size), -1); - - if (!hdev->mmu_pgt_pool) { - dev_err(hdev->dev, "Failed to create page gen pool\n"); - rc = -ENOMEM; - goto err_pool_create; - } - - rc = gen_pool_add(hdev->mmu_pgt_pool, prop->mmu_pgt_addr + - prop->mmu_hop0_tables_total_size, - prop->mmu_pgt_size - prop->mmu_hop0_tables_total_size, - -1); - if (rc) { - dev_err(hdev->dev, "Failed to add memory to page gen pool\n"); - goto err_pool_add; - } - - return 0; - -err_pool_add: - gen_pool_destroy(hdev->mmu_pgt_pool); -err_pool_create: - mutex_destroy(&hdev->mmu_cache_lock); + if (shadow_hop_addr != get_hop0_addr(ctx)) + phys_hop_addr = get_pgt_info(ctx, shadow_hop_addr)->phys_addr; + else + phys_hop_addr = get_phys_hop0_addr(ctx); - return rc; + return phys_hop_addr + pte_offset; } -/* - * hl_mmu_fini - release the mmu module. - * - * @hdev: pointer to the habanalabs device structure - * - * This function does the following: - * - Disable mmu in hw - * - free the pgts pool - * - * All ctxs should be freed before calling this func - */ -void hl_mmu_fini(struct hl_device *hdev) -{ - if (!hdev->mmu_enable) - return; - - gen_pool_destroy(hdev->mmu_pgt_pool); - - mutex_destroy(&hdev->mmu_cache_lock); - - /* MMU HW fini will be done in device hw_fini() */ -} - -/** - * hl_mmu_ctx_init() - initialize a context for using the MMU module. - * @ctx: pointer to the context structure to initialize. - * - * Initialize a mutex to protect the concurrent mapping flow, a hash to hold all - * page tables hops related to this context and an optional DRAM default page - * mapping. - * Return: 0 on success, non-zero otherwise. 
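The write_pte()/write_final_pte()/get_phys_addr() trio above is the core of the new shadow page-table scheme: every device-side hop table gets a host-memory twin, updates are written to both, and the table walk reads only the host copy, so no slow read_pte() device round-trip is needed per hop. A toy model of the idea (standalone, all names illustrative, not driver code):

#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

struct toy_hop {
	uint64_t *shadow;  /* host copy: the CPU walk reads this */
	uint64_t phys;     /* device-side table: write-only from the host */
};

/* stand-in for asic_funcs->write_pte(); the real one is a slow device access */
static void device_write_pte(uint64_t phys_addr, uint64_t val)
{
	(void)phys_addr; (void)val;
}

static void toy_write_pte(struct toy_hop *hop, unsigned int idx, uint64_t val)
{
	device_write_pte(hop->phys + idx * sizeof(uint64_t), val);
	hop->shadow[idx] = val;    /* keep the twin in sync */
}

static uint64_t toy_read_pte(const struct toy_hop *hop, unsigned int idx)
{
	return hop->shadow[idx];   /* no device round-trip on the walk */
}

int main(void)
{
	struct toy_hop hop = { .shadow = calloc(512, sizeof(uint64_t)),
			       .phys = 0x20001000 };

	toy_write_pte(&hop, 3, 0x20002000 | 0x1 /* PRESENT */);
	printf("pte[3] = 0x%llx\n", (unsigned long long)toy_read_pte(&hop, 3));
	free(hop.shadow);
	return 0;
}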
- */ -int hl_mmu_ctx_init(struct hl_ctx *ctx) +static int dram_default_mapping_init(struct hl_ctx *ctx) { struct hl_device *hdev = ctx->hdev; struct asic_fixed_properties *prop = &hdev->asic_prop; - u64 num_of_hop3, total_hops, hop1_addr, hop2_addr, hop2_pte_addr, - hop3_pte_addr, pte_val; + u64 num_of_hop3, total_hops, hop0_addr, hop1_addr, hop2_addr, + hop2_pte_addr, hop3_pte_addr, pte_val; int rc, i, j, hop3_allocated = 0; - if (!hdev->mmu_enable) - return 0; - - mutex_init(&ctx->mmu_lock); - hash_init(ctx->mmu_hash); - if (!hdev->dram_supports_virtual_memory || !hdev->dram_default_page_mapping) return 0; @@ -269,10 +253,10 @@ int hl_mmu_ctx_init(struct hl_ctx *ctx) total_hops = num_of_hop3 + 2; ctx->dram_default_hops = kzalloc(HL_PTE_SIZE * total_hops, GFP_KERNEL); - if (!ctx->dram_default_hops) { - rc = -ENOMEM; - goto alloc_err; - } + if (!ctx->dram_default_hops) + return -ENOMEM; + + hop0_addr = get_hop0_addr(ctx); hop1_addr = alloc_hop(ctx); if (hop1_addr == ULLONG_MAX) { @@ -304,17 +288,17 @@ int hl_mmu_ctx_init(struct hl_ctx *ctx) /* need only pte 0 in hops 0 and 1 */ pte_val = (hop1_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK; - hdev->asic_funcs->write_pte(hdev, get_hop0_addr(ctx), pte_val); + write_pte(ctx, hop0_addr, pte_val); pte_val = (hop2_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK; - hdev->asic_funcs->write_pte(hdev, hop1_addr, pte_val); + write_pte(ctx, hop1_addr, pte_val); get_pte(ctx, hop1_addr); hop2_pte_addr = hop2_addr; for (i = 0 ; i < num_of_hop3 ; i++) { pte_val = (ctx->dram_default_hops[i] & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK; - hdev->asic_funcs->write_pte(hdev, hop2_pte_addr, pte_val); + write_pte(ctx, hop2_pte_addr, pte_val); get_pte(ctx, hop2_addr); hop2_pte_addr += HL_PTE_SIZE; } @@ -325,33 +309,183 @@ int hl_mmu_ctx_init(struct hl_ctx *ctx) for (i = 0 ; i < num_of_hop3 ; i++) { hop3_pte_addr = ctx->dram_default_hops[i]; for (j = 0 ; j < PTE_ENTRIES_IN_HOP ; j++) { - hdev->asic_funcs->write_pte(hdev, hop3_pte_addr, - pte_val); + write_final_pte(ctx, hop3_pte_addr, pte_val); get_pte(ctx, ctx->dram_default_hops[i]); hop3_pte_addr += HL_PTE_SIZE; } } - /* flush all writes to reach PCI */ - mb(); - hdev->asic_funcs->read_pte(hdev, hop2_addr); + flush(ctx); return 0; hop3_err: for (i = 0 ; i < hop3_allocated ; i++) free_hop(ctx, ctx->dram_default_hops[i]); + free_hop(ctx, hop2_addr); hop2_err: free_hop(ctx, hop1_addr); hop1_err: kfree(ctx->dram_default_hops); -alloc_err: - mutex_destroy(&ctx->mmu_lock); return rc; } +static void dram_default_mapping_fini(struct hl_ctx *ctx) +{ + struct hl_device *hdev = ctx->hdev; + struct asic_fixed_properties *prop = &hdev->asic_prop; + u64 num_of_hop3, total_hops, hop0_addr, hop1_addr, hop2_addr, + hop2_pte_addr, hop3_pte_addr; + int i, j; + + if (!hdev->dram_supports_virtual_memory || + !hdev->dram_default_page_mapping) + return; + + num_of_hop3 = prop->dram_size_for_default_page_mapping; + do_div(num_of_hop3, prop->dram_page_size); + do_div(num_of_hop3, PTE_ENTRIES_IN_HOP); + + hop0_addr = get_hop0_addr(ctx); + /* add hop1 and hop2 */ + total_hops = num_of_hop3 + 2; + hop1_addr = ctx->dram_default_hops[total_hops - 1]; + hop2_addr = ctx->dram_default_hops[total_hops - 2]; + + for (i = 0 ; i < num_of_hop3 ; i++) { + hop3_pte_addr = ctx->dram_default_hops[i]; + for (j = 0 ; j < PTE_ENTRIES_IN_HOP ; j++) { + clear_pte(ctx, hop3_pte_addr); + put_pte(ctx, ctx->dram_default_hops[i]); + hop3_pte_addr += HL_PTE_SIZE; + } + } + + hop2_pte_addr = hop2_addr; + hop2_pte_addr = hop2_addr; + for (i = 0 ; i < num_of_hop3 ; i++) 
{ + clear_pte(ctx, hop2_pte_addr); + put_pte(ctx, hop2_addr); + hop2_pte_addr += HL_PTE_SIZE; + } + + clear_pte(ctx, hop1_addr); + put_pte(ctx, hop1_addr); + clear_pte(ctx, hop0_addr); + + kfree(ctx->dram_default_hops); + + flush(ctx); +} + +/** + * hl_mmu_init() - initialize the MMU module. + * @hdev: habanalabs device structure. + * + * This function does the following: + * - Allocate max_asid zeroed hop0 pgts so no mapping is available. + * - Enable MMU in H/W. + * - Invalidate the MMU cache. + * - Create a pool of pages for pgt_infos. + * + * This function depends on DMA QMAN to be working! + * + * Return: 0 for success, non-zero for failure. + */ +int hl_mmu_init(struct hl_device *hdev) +{ + struct asic_fixed_properties *prop = &hdev->asic_prop; + int rc; + + if (!hdev->mmu_enable) + return 0; + + /* MMU H/W init was already done in device hw_init() */ + + mutex_init(&hdev->mmu_cache_lock); + + hdev->mmu_pgt_pool = + gen_pool_create(__ffs(prop->mmu_hop_table_size), -1); + + if (!hdev->mmu_pgt_pool) { + dev_err(hdev->dev, "Failed to create page gen pool\n"); + rc = -ENOMEM; + goto err_pool_create; + } + + rc = gen_pool_add(hdev->mmu_pgt_pool, prop->mmu_pgt_addr + + prop->mmu_hop0_tables_total_size, + prop->mmu_pgt_size - prop->mmu_hop0_tables_total_size, + -1); + if (rc) { + dev_err(hdev->dev, "Failed to add memory to page gen pool\n"); + goto err_pool_add; + } + + hdev->mmu_shadow_hop0 = kvmalloc_array(prop->max_asid, + prop->mmu_hop_table_size, + GFP_KERNEL | __GFP_ZERO); + if (!hdev->mmu_shadow_hop0) { + rc = -ENOMEM; + goto err_pool_add; + } + + return 0; + +err_pool_add: + gen_pool_destroy(hdev->mmu_pgt_pool); +err_pool_create: + mutex_destroy(&hdev->mmu_cache_lock); + + return rc; +} + +/** + * hl_mmu_fini() - release the MMU module. + * @hdev: habanalabs device structure. + * + * This function does the following: + * - Disable MMU in H/W. + * - Free the pgt_infos pool. + * + * All contexts should be freed before calling this function. + */ +void hl_mmu_fini(struct hl_device *hdev) +{ + if (!hdev->mmu_enable) + return; + + kvfree(hdev->mmu_shadow_hop0); + gen_pool_destroy(hdev->mmu_pgt_pool); + mutex_destroy(&hdev->mmu_cache_lock); + + /* MMU H/W fini will be done in device hw_fini() */ +} + +/** + * hl_mmu_ctx_init() - initialize a context for using the MMU module. + * @ctx: pointer to the context structure to initialize. + * + * Initialize a mutex to protect the concurrent mapping flow, a hash to hold all + * page tables hops related to this context. + * Return: 0 on success, non-zero otherwise. 
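hl_mmu_init() above reserves the front of the device PGT region for the fixed per-ASID hop0 tables and hands only the remainder to a genalloc pool, while the matching shadow hop0 area is one kvmalloc'ed slab of max_asid zeroed tables. Kernel-style sketch of the carve-up (illustrative sizes; it presumes mmu_hop0_tables_total_size equals max_asid * mmu_hop_table_size, which the code above does not spell out):

	u64 pgt_base = 0x8000000ull;		/* device-side PGT region base */
	size_t pgt_size = SZ_32M, hop_table_size = SZ_4K;
	u32 max_asid = 1024, asid = 7;
	struct gen_pool *pool;

	pool = gen_pool_create(__ffs(hop_table_size), -1);	/* 4 KB chunks */
	gen_pool_add(pool, pgt_base + max_asid * hop_table_size,
		     pgt_size - max_asid * hop_table_size, -1);

	/* hop0 of an ASID is fixed and addressed directly, never allocated */
	u64 hop0_addr = pgt_base + (u64)asid * hop_table_size;
	/* hops 1..4 are handed out one table at a time */
	unsigned long new_hop = gen_pool_alloc(pool, hop_table_size);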
+ */ +int hl_mmu_ctx_init(struct hl_ctx *ctx) +{ + struct hl_device *hdev = ctx->hdev; + + if (!hdev->mmu_enable) + return 0; + + mutex_init(&ctx->mmu_lock); + hash_init(ctx->mmu_phys_hash); + hash_init(ctx->mmu_shadow_hash); + + return dram_default_mapping_init(ctx); +} + /* * hl_mmu_ctx_fini - disable a ctx from using the mmu module * @@ -365,63 +499,23 @@ alloc_err: void hl_mmu_ctx_fini(struct hl_ctx *ctx) { struct hl_device *hdev = ctx->hdev; - struct asic_fixed_properties *prop = &hdev->asic_prop; struct pgt_info *pgt_info; struct hlist_node *tmp; - u64 num_of_hop3, total_hops, hop1_addr, hop2_addr, hop2_pte_addr, - hop3_pte_addr; - int i, j; + int i; - if (!ctx->hdev->mmu_enable) + if (!hdev->mmu_enable) return; - if (hdev->dram_supports_virtual_memory && - hdev->dram_default_page_mapping) { - - num_of_hop3 = prop->dram_size_for_default_page_mapping; - do_div(num_of_hop3, prop->dram_page_size); - do_div(num_of_hop3, PTE_ENTRIES_IN_HOP); - - /* add hop1 and hop2 */ - total_hops = num_of_hop3 + 2; - hop1_addr = ctx->dram_default_hops[total_hops - 1]; - hop2_addr = ctx->dram_default_hops[total_hops - 2]; - - for (i = 0 ; i < num_of_hop3 ; i++) { - hop3_pte_addr = ctx->dram_default_hops[i]; - for (j = 0 ; j < PTE_ENTRIES_IN_HOP ; j++) { - clear_pte(hdev, hop3_pte_addr); - put_pte(ctx, ctx->dram_default_hops[i]); - hop3_pte_addr += HL_PTE_SIZE; - } - } + dram_default_mapping_fini(ctx); - hop2_pte_addr = hop2_addr; - for (i = 0 ; i < num_of_hop3 ; i++) { - clear_pte(hdev, hop2_pte_addr); - put_pte(ctx, hop2_addr); - hop2_pte_addr += HL_PTE_SIZE; - } - - clear_pte(hdev, hop1_addr); - put_pte(ctx, hop1_addr); - clear_pte(hdev, get_hop0_addr(ctx)); - - kfree(ctx->dram_default_hops); - - /* flush all writes to reach PCI */ - mb(); - hdev->asic_funcs->read_pte(hdev, hop2_addr); - } - - if (!hash_empty(ctx->mmu_hash)) + if (!hash_empty(ctx->mmu_shadow_hash)) dev_err(hdev->dev, "ctx is freed while it has pgts in use\n"); - hash_for_each_safe(ctx->mmu_hash, i, tmp, pgt_info, node) { + hash_for_each_safe(ctx->mmu_shadow_hash, i, tmp, pgt_info, node) { dev_err(hdev->dev, "pgt_info of addr 0x%llx of asid %d was not destroyed, num_ptes: %d\n", - pgt_info->addr, ctx->asid, pgt_info->num_of_ptes); - free_hop(ctx, pgt_info->addr); + pgt_info->phys_addr, ctx->asid, pgt_info->num_of_ptes); + free_hop(ctx, pgt_info->shadow_addr); } mutex_destroy(&ctx->mmu_lock); @@ -437,45 +531,43 @@ static int _hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr) hop3_addr = 0, hop3_pte_addr = 0, hop4_addr = 0, hop4_pte_addr = 0, curr_pte; - int clear_hop3 = 1; - bool is_dram_addr, is_huge, is_dram_default_page_mapping; + bool is_dram_addr, is_huge, clear_hop3 = true; is_dram_addr = hl_mem_area_inside_range(virt_addr, PAGE_SIZE_2MB, prop->va_space_dram_start_address, prop->va_space_dram_end_address); hop0_addr = get_hop0_addr(ctx); - hop0_pte_addr = get_hop0_pte_addr(ctx, hop0_addr, virt_addr); - curr_pte = hdev->asic_funcs->read_pte(hdev, hop0_pte_addr); + curr_pte = *(u64 *) (uintptr_t) hop0_pte_addr; - hop1_addr = get_next_hop_addr(curr_pte); + hop1_addr = get_next_hop_addr(ctx, curr_pte); if (hop1_addr == ULLONG_MAX) goto not_mapped; hop1_pte_addr = get_hop1_pte_addr(ctx, hop1_addr, virt_addr); - curr_pte = hdev->asic_funcs->read_pte(hdev, hop1_pte_addr); + curr_pte = *(u64 *) (uintptr_t) hop1_pte_addr; - hop2_addr = get_next_hop_addr(curr_pte); + hop2_addr = get_next_hop_addr(ctx, curr_pte); if (hop2_addr == ULLONG_MAX) goto not_mapped; hop2_pte_addr = get_hop2_pte_addr(ctx, hop2_addr, virt_addr); - curr_pte = 
hdev->asic_funcs->read_pte(hdev, hop2_pte_addr); + curr_pte = *(u64 *) (uintptr_t) hop2_pte_addr; - hop3_addr = get_next_hop_addr(curr_pte); + hop3_addr = get_next_hop_addr(ctx, curr_pte); if (hop3_addr == ULLONG_MAX) goto not_mapped; hop3_pte_addr = get_hop3_pte_addr(ctx, hop3_addr, virt_addr); - curr_pte = hdev->asic_funcs->read_pte(hdev, hop3_pte_addr); + curr_pte = *(u64 *) (uintptr_t) hop3_pte_addr; is_huge = curr_pte & LAST_MASK; @@ -485,27 +577,24 @@ static int _hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr) return -EFAULT; } - is_dram_default_page_mapping = - hdev->dram_default_page_mapping && is_dram_addr; - if (!is_huge) { - hop4_addr = get_next_hop_addr(curr_pte); + hop4_addr = get_next_hop_addr(ctx, curr_pte); if (hop4_addr == ULLONG_MAX) goto not_mapped; hop4_pte_addr = get_hop4_pte_addr(ctx, hop4_addr, virt_addr); - curr_pte = hdev->asic_funcs->read_pte(hdev, hop4_pte_addr); + curr_pte = *(u64 *) (uintptr_t) hop4_pte_addr; - clear_hop3 = 0; + clear_hop3 = false; } - if (is_dram_default_page_mapping) { - u64 zero_pte = (prop->mmu_dram_default_page_addr & + if (hdev->dram_default_page_mapping && is_dram_addr) { + u64 default_pte = (prop->mmu_dram_default_page_addr & PTE_PHYS_ADDR_MASK) | LAST_MASK | PAGE_PRESENT_MASK; - if (curr_pte == zero_pte) { + if (curr_pte == default_pte) { dev_err(hdev->dev, "DRAM: hop3 PTE points to zero page, can't unmap, va: 0x%llx\n", virt_addr); @@ -519,40 +608,43 @@ static int _hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr) goto not_mapped; } - hdev->asic_funcs->write_pte(hdev, hop3_pte_addr, zero_pte); + write_final_pte(ctx, hop3_pte_addr, default_pte); put_pte(ctx, hop3_addr); } else { if (!(curr_pte & PAGE_PRESENT_MASK)) goto not_mapped; - clear_pte(hdev, hop4_addr ? hop4_pte_addr : hop3_pte_addr); + if (hop4_addr) + clear_pte(ctx, hop4_pte_addr); + else + clear_pte(ctx, hop3_pte_addr); if (hop4_addr && !put_pte(ctx, hop4_addr)) - clear_hop3 = 1; + clear_hop3 = true; if (!clear_hop3) goto flush; - clear_pte(hdev, hop3_pte_addr); + + clear_pte(ctx, hop3_pte_addr); if (put_pte(ctx, hop3_addr)) goto flush; - clear_pte(hdev, hop2_pte_addr); + + clear_pte(ctx, hop2_pte_addr); if (put_pte(ctx, hop2_addr)) goto flush; - clear_pte(hdev, hop1_pte_addr); + + clear_pte(ctx, hop1_pte_addr); if (put_pte(ctx, hop1_addr)) goto flush; - clear_pte(hdev, hop0_pte_addr); + + clear_pte(ctx, hop0_pte_addr); } flush: - /* flush all writes from all cores to reach PCI */ - mb(); - - hdev->asic_funcs->read_pte(hdev, - hop4_addr ? 
hop4_pte_addr : hop3_pte_addr); + flush(ctx); return 0; @@ -632,8 +724,7 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, hop4_addr = 0, hop4_pte_addr = 0, curr_pte = 0; bool hop1_new = false, hop2_new = false, hop3_new = false, - hop4_new = false, is_huge, is_dram_addr, - is_dram_default_page_mapping; + hop4_new = false, is_huge, is_dram_addr; int rc = -ENOMEM; /* @@ -654,59 +745,46 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, return -EFAULT; } - is_dram_default_page_mapping = - hdev->dram_default_page_mapping && is_dram_addr; - hop0_addr = get_hop0_addr(ctx); - hop0_pte_addr = get_hop0_pte_addr(ctx, hop0_addr, virt_addr); - - curr_pte = hdev->asic_funcs->read_pte(hdev, hop0_pte_addr); + curr_pte = *(u64 *) (uintptr_t) hop0_pte_addr; hop1_addr = get_alloc_next_hop_addr(ctx, curr_pte, &hop1_new); - if (hop1_addr == ULLONG_MAX) goto err; hop1_pte_addr = get_hop1_pte_addr(ctx, hop1_addr, virt_addr); - - curr_pte = hdev->asic_funcs->read_pte(hdev, hop1_pte_addr); + curr_pte = *(u64 *) (uintptr_t) hop1_pte_addr; hop2_addr = get_alloc_next_hop_addr(ctx, curr_pte, &hop2_new); - if (hop2_addr == ULLONG_MAX) goto err; hop2_pte_addr = get_hop2_pte_addr(ctx, hop2_addr, virt_addr); - - curr_pte = hdev->asic_funcs->read_pte(hdev, hop2_pte_addr); + curr_pte = *(u64 *) (uintptr_t) hop2_pte_addr; hop3_addr = get_alloc_next_hop_addr(ctx, curr_pte, &hop3_new); - if (hop3_addr == ULLONG_MAX) goto err; hop3_pte_addr = get_hop3_pte_addr(ctx, hop3_addr, virt_addr); - - curr_pte = hdev->asic_funcs->read_pte(hdev, hop3_pte_addr); + curr_pte = *(u64 *) (uintptr_t) hop3_pte_addr; if (!is_huge) { hop4_addr = get_alloc_next_hop_addr(ctx, curr_pte, &hop4_new); - if (hop4_addr == ULLONG_MAX) goto err; hop4_pte_addr = get_hop4_pte_addr(ctx, hop4_addr, virt_addr); - - curr_pte = hdev->asic_funcs->read_pte(hdev, hop4_pte_addr); + curr_pte = *(u64 *) (uintptr_t) hop4_pte_addr; } - if (is_dram_default_page_mapping) { - u64 zero_pte = (prop->mmu_dram_default_page_addr & + if (hdev->dram_default_page_mapping && is_dram_addr) { + u64 default_pte = (prop->mmu_dram_default_page_addr & PTE_PHYS_ADDR_MASK) | LAST_MASK | PAGE_PRESENT_MASK; - if (curr_pte != zero_pte) { + if (curr_pte != default_pte) { dev_err(hdev->dev, "DRAM: mapping already exists for virt_addr 0x%llx\n", virt_addr); @@ -722,27 +800,22 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, } } else if (curr_pte & PAGE_PRESENT_MASK) { dev_err(hdev->dev, - "mapping already exists for virt_addr 0x%llx\n", - virt_addr); + "mapping already exists for virt_addr 0x%llx\n", + virt_addr); dev_dbg(hdev->dev, "hop0 pte: 0x%llx (0x%llx)\n", - hdev->asic_funcs->read_pte(hdev, hop0_pte_addr), - hop0_pte_addr); + *(u64 *) (uintptr_t) hop0_pte_addr, hop0_pte_addr); dev_dbg(hdev->dev, "hop1 pte: 0x%llx (0x%llx)\n", - hdev->asic_funcs->read_pte(hdev, hop1_pte_addr), - hop1_pte_addr); + *(u64 *) (uintptr_t) hop1_pte_addr, hop1_pte_addr); dev_dbg(hdev->dev, "hop2 pte: 0x%llx (0x%llx)\n", - hdev->asic_funcs->read_pte(hdev, hop2_pte_addr), - hop2_pte_addr); + *(u64 *) (uintptr_t) hop2_pte_addr, hop2_pte_addr); dev_dbg(hdev->dev, "hop3 pte: 0x%llx (0x%llx)\n", - hdev->asic_funcs->read_pte(hdev, hop3_pte_addr), - hop3_pte_addr); + *(u64 *) (uintptr_t) hop3_pte_addr, hop3_pte_addr); if (!is_huge) dev_dbg(hdev->dev, "hop4 pte: 0x%llx (0x%llx)\n", - hdev->asic_funcs->read_pte(hdev, - hop4_pte_addr), - hop4_pte_addr); + *(u64 *) (uintptr_t) hop4_pte_addr, + hop4_pte_addr); rc = -EINVAL; goto err; @@ -751,28 
+824,26 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, curr_pte = (phys_addr & PTE_PHYS_ADDR_MASK) | LAST_MASK | PAGE_PRESENT_MASK; - hdev->asic_funcs->write_pte(hdev, - is_huge ? hop3_pte_addr : hop4_pte_addr, - curr_pte); + if (is_huge) + write_final_pte(ctx, hop3_pte_addr, curr_pte); + else + write_final_pte(ctx, hop4_pte_addr, curr_pte); if (hop1_new) { - curr_pte = (hop1_addr & PTE_PHYS_ADDR_MASK) | - PAGE_PRESENT_MASK; - ctx->hdev->asic_funcs->write_pte(ctx->hdev, hop0_pte_addr, - curr_pte); + curr_pte = + (hop1_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK; + write_pte(ctx, hop0_pte_addr, curr_pte); } if (hop2_new) { - curr_pte = (hop2_addr & PTE_PHYS_ADDR_MASK) | - PAGE_PRESENT_MASK; - ctx->hdev->asic_funcs->write_pte(ctx->hdev, hop1_pte_addr, - curr_pte); + curr_pte = + (hop2_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK; + write_pte(ctx, hop1_pte_addr, curr_pte); get_pte(ctx, hop1_addr); } if (hop3_new) { - curr_pte = (hop3_addr & PTE_PHYS_ADDR_MASK) | - PAGE_PRESENT_MASK; - ctx->hdev->asic_funcs->write_pte(ctx->hdev, hop2_pte_addr, - curr_pte); + curr_pte = + (hop3_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK; + write_pte(ctx, hop2_pte_addr, curr_pte); get_pte(ctx, hop2_addr); } @@ -780,8 +851,7 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, if (hop4_new) { curr_pte = (hop4_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK; - ctx->hdev->asic_funcs->write_pte(ctx->hdev, - hop3_pte_addr, curr_pte); + write_pte(ctx, hop3_pte_addr, curr_pte); get_pte(ctx, hop3_addr); } @@ -790,11 +860,7 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, get_pte(ctx, hop3_addr); } - /* flush all writes from all cores to reach PCI */ - mb(); - - hdev->asic_funcs->read_pte(hdev, - is_huge ? hop3_pte_addr : hop4_pte_addr); + flush(ctx); return 0; diff --git a/drivers/misc/habanalabs/pci.c b/drivers/misc/habanalabs/pci.c new file mode 100644 index 000000000000..0e78a04d63f4 --- /dev/null +++ b/drivers/misc/habanalabs/pci.c @@ -0,0 +1,408 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright 2016-2019 HabanaLabs, Ltd. + * All Rights Reserved. + */ + +#include "habanalabs.h" +#include "include/hw_ip/pci/pci_general.h" + +#include <linux/pci.h> + +/** + * hl_pci_bars_map() - Map PCI BARs. + * @hdev: Pointer to hl_device structure. + * @bar_name: Array of BAR names. + * @is_wc: Array with flag per BAR whether a write-combined mapping is needed. + * + * Request PCI regions and map them to kernel virtual addresses. + * + * Return: 0 on success, non-zero for failure. + */ +int hl_pci_bars_map(struct hl_device *hdev, const char * const name[3], + bool is_wc[3]) +{ + struct pci_dev *pdev = hdev->pdev; + int rc, i, bar; + + rc = pci_request_regions(pdev, HL_NAME); + if (rc) { + dev_err(hdev->dev, "Cannot obtain PCI resources\n"); + return rc; + } + + for (i = 0 ; i < 3 ; i++) { + bar = i * 2; /* 64-bit BARs */ + hdev->pcie_bar[bar] = is_wc[i] ? + pci_ioremap_wc_bar(pdev, bar) : + pci_ioremap_bar(pdev, bar); + if (!hdev->pcie_bar[bar]) { + dev_err(hdev->dev, "pci_ioremap%s_bar failed for %s\n", + is_wc[i] ? "_wc" : "", name[i]); + rc = -ENODEV; + goto err; + } + } + + return 0; + +err: + for (i = 2 ; i >= 0 ; i--) { + bar = i * 2; /* 64-bit BARs */ + if (hdev->pcie_bar[bar]) + iounmap(hdev->pcie_bar[bar]); + } + + pci_release_regions(pdev); + + return rc; +} + +/* + * hl_pci_bars_unmap() - Unmap PCI BARS. + * @hdev: Pointer to hl_device structure. + * + * Release all PCI BARs and unmap their virtual addresses. 
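In hl_pci_bars_map() above, `bar = i * 2` reflects that each of the three regions is a 64-bit BAR, and a 64-bit BAR consumes two consecutive BAR registers, so the usable bases are BARs 0, 2 and 4, matching SRAM_CFG_BAR_ID/MSIX_BAR_ID/DDR_BAR_ID in the goya.h hunk earlier in this diff. A hypothetical per-ASIC caller might look like this (the write-combine choice for the DDR BAR is illustrative):

static int example_pci_bars_map(struct hl_device *hdev)
{
	static const char * const name[] = {"SRAM_CFG", "MSIX", "DDR"};
	bool is_wc[] = {false, false, true};	/* WC only for the DDR BAR */

	return hl_pci_bars_map(hdev, name, is_wc);
}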
+ */ +static void hl_pci_bars_unmap(struct hl_device *hdev) +{ + struct pci_dev *pdev = hdev->pdev; + int i, bar; + + for (i = 2 ; i >= 0 ; i--) { + bar = i * 2; /* 64-bit BARs */ + iounmap(hdev->pcie_bar[bar]); + } + + pci_release_regions(pdev); +} + +/* + * hl_pci_elbi_write() - Write through the ELBI interface. + * @hdev: Pointer to hl_device structure. + * + * Return: 0 on success, negative value for failure. + */ +static int hl_pci_elbi_write(struct hl_device *hdev, u64 addr, u32 data) +{ + struct pci_dev *pdev = hdev->pdev; + ktime_t timeout; + u32 val; + + /* Clear previous status */ + pci_write_config_dword(pdev, mmPCI_CONFIG_ELBI_STS, 0); + + pci_write_config_dword(pdev, mmPCI_CONFIG_ELBI_ADDR, (u32) addr); + pci_write_config_dword(pdev, mmPCI_CONFIG_ELBI_DATA, data); + pci_write_config_dword(pdev, mmPCI_CONFIG_ELBI_CTRL, + PCI_CONFIG_ELBI_CTRL_WRITE); + + timeout = ktime_add_ms(ktime_get(), 10); + for (;;) { + pci_read_config_dword(pdev, mmPCI_CONFIG_ELBI_STS, &val); + if (val & PCI_CONFIG_ELBI_STS_MASK) + break; + if (ktime_compare(ktime_get(), timeout) > 0) { + pci_read_config_dword(pdev, mmPCI_CONFIG_ELBI_STS, + &val); + break; + } + + usleep_range(300, 500); + } + + if ((val & PCI_CONFIG_ELBI_STS_MASK) == PCI_CONFIG_ELBI_STS_DONE) + return 0; + + if (val & PCI_CONFIG_ELBI_STS_ERR) { + dev_err(hdev->dev, "Error writing to ELBI\n"); + return -EIO; + } + + if (!(val & PCI_CONFIG_ELBI_STS_MASK)) { + dev_err(hdev->dev, "ELBI write didn't finish in time\n"); + return -EIO; + } + + dev_err(hdev->dev, "ELBI write has undefined bits in status\n"); + return -EIO; +} + +/** + * hl_pci_iatu_write() - iatu write routine. + * @hdev: Pointer to hl_device structure. + * + * Return: 0 on success, negative value for failure. + */ +int hl_pci_iatu_write(struct hl_device *hdev, u32 addr, u32 data) +{ + struct asic_fixed_properties *prop = &hdev->asic_prop; + u32 dbi_offset; + int rc; + + dbi_offset = addr & 0xFFF; + + rc = hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0x00300000); + rc |= hl_pci_elbi_write(hdev, prop->pcie_dbi_base_address + dbi_offset, + data); + + if (rc) + return -EIO; + + return 0; +} + +/* + * hl_pci_reset_link_through_bridge() - Reset PCI link. + * @hdev: Pointer to hl_device structure. + */ +static void hl_pci_reset_link_through_bridge(struct hl_device *hdev) +{ + struct pci_dev *pdev = hdev->pdev; + struct pci_dev *parent_port; + u16 val; + + parent_port = pdev->bus->self; + pci_read_config_word(parent_port, PCI_BRIDGE_CONTROL, &val); + val |= PCI_BRIDGE_CTL_BUS_RESET; + pci_write_config_word(parent_port, PCI_BRIDGE_CONTROL, val); + ssleep(1); + + val &= ~(PCI_BRIDGE_CTL_BUS_RESET); + pci_write_config_word(parent_port, PCI_BRIDGE_CONTROL, val); + ssleep(3); +} + +/** + * hl_pci_set_dram_bar_base() - Set DDR BAR to map specific device address. + * @hdev: Pointer to hl_device structure. + * @inbound_region: Inbound region number. + * @bar: PCI BAR number. + * @addr: Address in DRAM. Must be aligned to DRAM bar size. + * + * Configure the iATU so that the DRAM bar will start at the specified address. + * + * Return: 0 on success, negative value for failure. 
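hl_pci_elbi_write() above uses a standard deadline-polling idiom: re-read the status register until a done/error bit appears, and after the deadline passes do one final read, so a completion that lands exactly at the timeout is not misreported as a failure. The generic shape of the loop (sketch; read_sts, its argument and the mask are placeholders, not driver API):

static int poll_status(u32 (*read_sts)(void *arg), void *arg, u32 done_mask,
		       unsigned int timeout_ms)
{
	ktime_t timeout = ktime_add_ms(ktime_get(), timeout_ms);
	u32 val;

	for (;;) {
		val = read_sts(arg);
		if (val & done_mask)
			return 0;
		if (ktime_compare(ktime_get(), timeout) > 0) {
			/* last chance: the bit may have set at the deadline */
			return (read_sts(arg) & done_mask) ? 0 : -ETIMEDOUT;
		}
		usleep_range(300, 500);
	}
}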
+ */ +int hl_pci_set_dram_bar_base(struct hl_device *hdev, u8 inbound_region, u8 bar, + u64 addr) +{ + struct asic_fixed_properties *prop = &hdev->asic_prop; + u32 offset; + int rc; + + switch (inbound_region) { + case 0: + offset = 0x100; + break; + case 1: + offset = 0x300; + break; + case 2: + offset = 0x500; + break; + default: + dev_err(hdev->dev, "Invalid inbound region %d\n", + inbound_region); + return -EINVAL; + } + + if (bar != 0 && bar != 2 && bar != 4) { + dev_err(hdev->dev, "Invalid PCI BAR %d\n", bar); + return -EINVAL; + } + + /* Point to the specified address */ + rc = hl_pci_iatu_write(hdev, offset + 0x14, lower_32_bits(addr)); + rc |= hl_pci_iatu_write(hdev, offset + 0x18, upper_32_bits(addr)); + rc |= hl_pci_iatu_write(hdev, offset + 0x0, 0); + /* Enable + BAR match + match enable + BAR number */ + rc |= hl_pci_iatu_write(hdev, offset + 0x4, 0xC0080000 | (bar << 8)); + + /* Return the DBI window to the default location */ + rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0); + rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr + 4, 0); + + if (rc) + dev_err(hdev->dev, "failed to map DRAM bar to 0x%08llx\n", + addr); + + return rc; +} + +/** + * hl_pci_init_iatu() - Initialize the iATU unit inside the PCI controller. + * @hdev: Pointer to hl_device structure. + * @sram_base_address: SRAM base address. + * @dram_base_address: DRAM base address. + * @host_phys_base_address: Base physical address of host memory for device + * transactions. + * @host_phys_size: Size of host memory for device transactions. + * + * This is needed in case the firmware doesn't initialize the iATU. + * + * Return: 0 on success, negative value for failure. + */ +int hl_pci_init_iatu(struct hl_device *hdev, u64 sram_base_address, + u64 dram_base_address, u64 host_phys_base_address, + u64 host_phys_size) +{ + struct asic_fixed_properties *prop = &hdev->asic_prop; + u64 host_phys_end_addr; + int rc = 0; + + /* Inbound Region 0 - Bar 0 - Point to SRAM base address */ + rc = hl_pci_iatu_write(hdev, 0x114, lower_32_bits(sram_base_address)); + rc |= hl_pci_iatu_write(hdev, 0x118, upper_32_bits(sram_base_address)); + rc |= hl_pci_iatu_write(hdev, 0x100, 0); + /* Enable + Bar match + match enable */ + rc |= hl_pci_iatu_write(hdev, 0x104, 0xC0080000); + + /* Point to DRAM */ + if (!hdev->asic_funcs->set_dram_bar_base) + return -EINVAL; + if (hdev->asic_funcs->set_dram_bar_base(hdev, dram_base_address) == + U64_MAX) + return -EIO; + + + /* Outbound Region 0 - Point to Host */ + host_phys_end_addr = host_phys_base_address + host_phys_size - 1; + rc |= hl_pci_iatu_write(hdev, 0x008, + lower_32_bits(host_phys_base_address)); + rc |= hl_pci_iatu_write(hdev, 0x00C, + upper_32_bits(host_phys_base_address)); + rc |= hl_pci_iatu_write(hdev, 0x010, lower_32_bits(host_phys_end_addr)); + rc |= hl_pci_iatu_write(hdev, 0x014, 0); + rc |= hl_pci_iatu_write(hdev, 0x018, 0); + rc |= hl_pci_iatu_write(hdev, 0x020, upper_32_bits(host_phys_end_addr)); + /* Increase region size */ + rc |= hl_pci_iatu_write(hdev, 0x000, 0x00002000); + /* Enable */ + rc |= hl_pci_iatu_write(hdev, 0x004, 0x80000000); + + /* Return the DBI window to the default location */ + rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0); + rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr + 4, 0); + + if (rc) + return -EIO; + + return 0; +} + +/** + * hl_pci_set_dma_mask() - Set DMA masks for the device. + * @hdev: Pointer to hl_device structure. + * @dma_mask: number of bits for the requested dma mask. 
+ * + * This function sets the DMA masks (regular and consistent) for a specified + * value. If it doesn't succeed, it tries to set it to a fall-back value + * + * Return: 0 on success, non-zero for failure. + */ +int hl_pci_set_dma_mask(struct hl_device *hdev, u8 dma_mask) +{ + struct pci_dev *pdev = hdev->pdev; + int rc; + + /* set DMA mask */ + rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(dma_mask)); + if (rc) { + dev_warn(hdev->dev, + "Failed to set pci dma mask to %d bits, error %d\n", + dma_mask, rc); + + dma_mask = hdev->dma_mask; + + rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(dma_mask)); + if (rc) { + dev_err(hdev->dev, + "Failed to set pci dma mask to %d bits, error %d\n", + dma_mask, rc); + return rc; + } + } + + /* + * We managed to set the dma mask, so update the dma mask field. If + * the set to the coherent mask will fail with that mask, we will + * fail the entire function + */ + hdev->dma_mask = dma_mask; + + rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(dma_mask)); + if (rc) { + dev_err(hdev->dev, + "Failed to set pci consistent dma mask to %d bits, error %d\n", + dma_mask, rc); + return rc; + } + + return 0; +} + +/** + * hl_pci_init() - PCI initialization code. + * @hdev: Pointer to hl_device structure. + * @dma_mask: number of bits for the requested dma mask. + * + * Set DMA masks, initialize the PCI controller and map the PCI BARs. + * + * Return: 0 on success, non-zero for failure. + */ +int hl_pci_init(struct hl_device *hdev, u8 dma_mask) +{ + struct pci_dev *pdev = hdev->pdev; + int rc; + + rc = hl_pci_set_dma_mask(hdev, dma_mask); + if (rc) + return rc; + + if (hdev->reset_pcilink) + hl_pci_reset_link_through_bridge(hdev); + + rc = pci_enable_device_mem(pdev); + if (rc) { + dev_err(hdev->dev, "can't enable PCI device\n"); + return rc; + } + + pci_set_master(pdev); + + rc = hdev->asic_funcs->init_iatu(hdev); + if (rc) { + dev_err(hdev->dev, "Failed to initialize iATU\n"); + goto disable_device; + } + + rc = hdev->asic_funcs->pci_bars_map(hdev); + if (rc) { + dev_err(hdev->dev, "Failed to initialize PCI BARs\n"); + goto disable_device; + } + + return 0; + +disable_device: + pci_clear_master(pdev); + pci_disable_device(pdev); + + return rc; +} + +/** + * hl_fw_fini() - PCI finalization code. + * @hdev: Pointer to hl_device structure + * + * Unmap PCI bars and disable PCI device. + */ +void hl_pci_fini(struct hl_device *hdev) +{ + hl_pci_bars_unmap(hdev); + + pci_clear_master(hdev->pdev); + pci_disable_device(hdev->pdev); +} diff --git a/drivers/misc/kgdbts.c b/drivers/misc/kgdbts.c index de20bdaa148d..8b01257783dd 100644 --- a/drivers/misc/kgdbts.c +++ b/drivers/misc/kgdbts.c @@ -1135,7 +1135,7 @@ static void kgdbts_put_char(u8 chr) static int param_set_kgdbts_var(const char *kmessage, const struct kernel_param *kp) { - int len = strlen(kmessage); + size_t len = strlen(kmessage); if (len >= MAX_CONFIG_LEN) { printk(KERN_ERR "kgdbts: config string too long\n"); @@ -1155,7 +1155,7 @@ static int param_set_kgdbts_var(const char *kmessage, strcpy(config, kmessage); /* Chop out \n char as a result of echo */ - if (config[len - 1] == '\n') + if (len && config[len - 1] == '\n') config[len - 1] = '\0'; /* Go and configure with the new params. */ diff --git a/drivers/misc/mei/Kconfig b/drivers/misc/mei/Kconfig index 74e2c667dce0..9d7b3719bfa0 100644 --- a/drivers/misc/mei/Kconfig +++ b/drivers/misc/mei/Kconfig @@ -1,3 +1,5 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (c) 2003-2019, Intel Corporation. All rights reserved. 
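hl_pci_set_dma_mask() above encodes a common fallback pattern: try the ASIC's preferred streaming DMA mask, drop to a previously known-good width on failure, and only then program the coherent mask with whatever width actually stuck, so the two masks never diverge. Minimal form of the pattern (sketch; the 48/32-bit widths are illustrative, not Goya's values):

static int example_set_dma_mask(struct pci_dev *pdev)
{
	u8 bits = 48;					/* preferred width */

	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(bits))) {
		bits = 32;				/* known-good fallback */
		if (pci_set_dma_mask(pdev, DMA_BIT_MASK(bits)))
			return -EIO;
	}
	/* keep the coherent mask consistent with the streaming mask */
	return pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(bits));
}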
config INTEL_MEI tristate "Intel Management Engine Interface" depends on X86 && PCI @@ -44,12 +46,4 @@ config INTEL_MEI_TXE Supported SoCs: Intel Bay Trail -config INTEL_MEI_HDCP - tristate "Intel HDCP2.2 services of ME Interface" - select INTEL_MEI_ME - depends on DRM_I915 - help - MEI Support for HDCP2.2 Services on Intel platforms. - - Enables the ME FW services required for HDCP2.2 support through - I915 display driver of Intel. +source "drivers/misc/mei/hdcp/Kconfig" diff --git a/drivers/misc/mei/Makefile b/drivers/misc/mei/Makefile index 8c2d9565a4cb..f1c76f7ee804 100644 --- a/drivers/misc/mei/Makefile +++ b/drivers/misc/mei/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 # +# Copyright (c) 2010-2019, Intel Corporation. All rights reserved. # Makefile - Intel Management Engine Interface (Intel MEI) Linux driver -# Copyright (c) 2010-2014, Intel Corporation. # obj-$(CONFIG_INTEL_MEI) += mei.o mei-objs := init.o diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c index 5fcac02233af..32e9b1aed2ca 100644 --- a/drivers/misc/mei/bus-fixup.c +++ b/drivers/misc/mei/bus-fixup.c @@ -1,17 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2013-2019, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2018, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ #include <linux/kernel.h> diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c index 65bec998eb6e..985bd4fd3328 100644 --- a/drivers/misc/mei/bus.c +++ b/drivers/misc/mei/bus.c @@ -1,16 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 /* + * Copyright (c) 2012-2019, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2012-2013, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ #include <linux/module.h> diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c index ca4c9cc218a2..1e3edbbacb1e 100644 --- a/drivers/misc/mei/client.c +++ b/drivers/misc/mei/client.c @@ -1,17 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2003-2019, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2012, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for - * more details. - * */ #include <linux/sched/signal.h> @@ -679,7 +669,7 @@ int mei_cl_unlink(struct mei_cl *cl) void mei_host_client_init(struct mei_device *dev) { - dev->dev_state = MEI_DEV_ENABLED; + mei_set_devstate(dev, MEI_DEV_ENABLED); dev->reset_count = 0; schedule_work(&dev->bus_rescan_work); diff --git a/drivers/misc/mei/client.h b/drivers/misc/mei/client.h index 64e318f589b4..c1f9e810cf81 100644 --- a/drivers/misc/mei/client.h +++ b/drivers/misc/mei/client.h @@ -1,17 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* - * + * Copyright (c) 2003-2018, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2012, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ #ifndef _MEI_CLIENT_H_ diff --git a/drivers/misc/mei/debugfs.c b/drivers/misc/mei/debugfs.c index 7b5df8fd6c5a..0970142bcace 100644 --- a/drivers/misc/mei/debugfs.c +++ b/drivers/misc/mei/debugfs.c @@ -1,18 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2012-2016, Intel Corporation. All rights reserved * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2012-2013, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ + #include <linux/slab.h> #include <linux/kernel.h> #include <linux/device.h> diff --git a/drivers/misc/mei/dma-ring.c b/drivers/misc/mei/dma-ring.c index 795641b82181..ef56f849b251 100644 --- a/drivers/misc/mei/dma-ring.c +++ b/drivers/misc/mei/dma-ring.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. + * Copyright(c) 2016-2018 Intel Corporation. All rights reserved. */ #include <linux/dma-mapping.h> #include <linux/mei.h> diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c index e6207f614816..a44094cdbc36 100644 --- a/drivers/misc/mei/hbm.c +++ b/drivers/misc/mei/hbm.c @@ -1,19 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2003-2019, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2012, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. 
- * */ - #include <linux/export.h> #include <linux/sched.h> #include <linux/wait.h> diff --git a/drivers/misc/mei/hbm.h b/drivers/misc/mei/hbm.h index 0171a7e79bab..5aa58cffdd2e 100644 --- a/drivers/misc/mei/hbm.h +++ b/drivers/misc/mei/hbm.h @@ -1,17 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* - * + * Copyright (c) 2003-2018, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2012, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ #ifndef _MEI_HBM_H_ diff --git a/drivers/misc/mei/hdcp/Kconfig b/drivers/misc/mei/hdcp/Kconfig new file mode 100644 index 000000000000..95b2d6d37f10 --- /dev/null +++ b/drivers/misc/mei/hdcp/Kconfig @@ -0,0 +1,13 @@ + +# SPDX-License-Identifier: GPL-2.0 +# Copyright (c) 2019, Intel Corporation. All rights reserved. +# +config INTEL_MEI_HDCP + tristate "Intel HDCP2.2 services of ME Interface" + select INTEL_MEI_ME + depends on DRM_I915 + help + MEI Support for HDCP2.2 Services on Intel platforms. + + Enables the ME FW services required for HDCP2.2 support through + I915 display driver of Intel. diff --git a/drivers/misc/mei/hdcp/Makefile b/drivers/misc/mei/hdcp/Makefile index adbe7506282d..3fbb56485ce8 100644 --- a/drivers/misc/mei/hdcp/Makefile +++ b/drivers/misc/mei/hdcp/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 # -# Copyright (c) 2019, Intel Corporation. +# Copyright (c) 2019, Intel Corporation. All rights reserved. # # Makefile - HDCP client driver for Intel MEI Bus Driver. diff --git a/drivers/misc/mei/hdcp/mei_hdcp.c b/drivers/misc/mei/hdcp/mei_hdcp.c index 90b6ae8e9dae..b07000202d4a 100644 --- a/drivers/misc/mei/hdcp/mei_hdcp.c +++ b/drivers/misc/mei/hdcp/mei_hdcp.c @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: (GPL-2.0) +// SPDX-License-Identifier: GPL-2.0 /* * Copyright © 2019 Intel Corporation * diff --git a/drivers/misc/mei/hdcp/mei_hdcp.h b/drivers/misc/mei/hdcp/mei_hdcp.h index 5f74b908e486..e4b1cd54c853 100644 --- a/drivers/misc/mei/hdcp/mei_hdcp.h +++ b/drivers/misc/mei/hdcp/mei_hdcp.h @@ -1,4 +1,4 @@ -/* SPDX-License-Identifier: (GPL-2.0+) */ +/* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright © 2019 Intel Corporation * diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h index bb1ee9834a02..d74b182e19f3 100644 --- a/drivers/misc/mei/hw-me-regs.h +++ b/drivers/misc/mei/hw-me-regs.h @@ -1,68 +1,8 @@ -/****************************************************************************** +/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ +/* + * Copyright (c) 2003-2019, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Intel MEI Interface Header - * - * This file is provided under a dual BSD/GPLv2 license. When using or - * redistributing this file, you may do so under either license. - * - * GPL LICENSE SUMMARY - * - * Copyright(c) 2003 - 2012 Intel Corporation. All rights reserved. 
- * - * This program is free software; you can redistribute it and/or modify - * it under the terms of version 2 of the GNU General Public License as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, - * USA - * - * The full GNU General Public License is included in this distribution - * in the file called LICENSE.GPL. - * - * Contact Information: - * Intel Corporation. - * linux-mei@linux.intel.com - * http://www.intel.com - * - * BSD LICENSE - * - * Copyright(c) 2003 - 2012 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - *****************************************************************************/ + */ #ifndef _MEI_HW_MEI_REGS_H_ #define _MEI_HW_MEI_REGS_H_ diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c index 8a47a6fc3fc7..abe1b1f4362f 100644 --- a/drivers/misc/mei/hw-me.c +++ b/drivers/misc/mei/hw-me.c @@ -1,17 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2003-2018, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2012, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. 
- * */ #include <linux/pci.h> diff --git a/drivers/misc/mei/hw-me.h b/drivers/misc/mei/hw-me.h index bbcc5fc106cd..08c84a0de4a8 100644 --- a/drivers/misc/mei/hw-me.h +++ b/drivers/misc/mei/hw-me.h @@ -1,21 +1,9 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* - * + * Copyright (c) 2012-2018, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2012, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ - - #ifndef _MEI_INTERFACE_H_ #define _MEI_INTERFACE_H_ diff --git a/drivers/misc/mei/hw-txe-regs.h b/drivers/misc/mei/hw-txe-regs.h index f19229c4e655..a92b306dac8b 100644 --- a/drivers/misc/mei/hw-txe-regs.h +++ b/drivers/misc/mei/hw-txe-regs.h @@ -1,63 +1,8 @@ -/****************************************************************************** +/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ +/* + * Copyright (c) 2013-2014, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Intel MEI Interface Header - * - * This file is provided under a dual BSD/GPLv2 license. When using or - * redistributing this file, you may do so under either license. - * - * GPL LICENSE SUMMARY - * - * Copyright(c) 2013 - 2014 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of version 2 of the GNU General Public License as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * The full GNU General Public License is included in this distribution - * in the file called COPYING - * - * Contact Information: - * Intel Corporation. - * linux-mei@linux.intel.com - * http://www.intel.com - * - * BSD LICENSE - * - * Copyright(c) 2013 - 2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - *****************************************************************************/ + */ #ifndef _MEI_HW_TXE_REGS_H_ #define _MEI_HW_TXE_REGS_H_ diff --git a/drivers/misc/mei/hw-txe.c b/drivers/misc/mei/hw-txe.c index 8449fe0367ff..5e58656b8e19 100644 --- a/drivers/misc/mei/hw-txe.c +++ b/drivers/misc/mei/hw-txe.c @@ -1,17 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2013-2014, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2013-2014, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ #include <linux/pci.h> diff --git a/drivers/misc/mei/hw-txe.h b/drivers/misc/mei/hw-txe.h index e1e8b66d7648..96511b04bf88 100644 --- a/drivers/misc/mei/hw-txe.h +++ b/drivers/misc/mei/hw-txe.h @@ -1,17 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* - * + * Copyright (c) 2013-2016, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2013-2014, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ #ifndef _MEI_HW_TXE_H_ diff --git a/drivers/misc/mei/hw.h b/drivers/misc/mei/hw.h index b7d2487b8409..d025a5f8317e 100644 --- a/drivers/misc/mei/hw.h +++ b/drivers/misc/mei/hw.h @@ -1,17 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* - * + * Copyright (c) 2003-2018, Intel Corporation. All rights reserved * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2012, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. 
- * */ #ifndef _MEI_HW_TYPES_H_ diff --git a/drivers/misc/mei/init.c b/drivers/misc/mei/init.c index eb026e2a0537..b9fef773e71b 100644 --- a/drivers/misc/mei/init.c +++ b/drivers/misc/mei/init.c @@ -1,17 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2012-2018, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2012, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ #include <linux/export.h> @@ -133,12 +123,12 @@ int mei_reset(struct mei_device *dev) /* enter reset flow */ interrupts_enabled = state != MEI_DEV_POWER_DOWN; - dev->dev_state = MEI_DEV_RESETTING; + mei_set_devstate(dev, MEI_DEV_RESETTING); dev->reset_count++; if (dev->reset_count > MEI_MAX_CONSEC_RESET) { dev_err(dev->dev, "reset: reached maximal consecutive resets: disabling the device\n"); - dev->dev_state = MEI_DEV_DISABLED; + mei_set_devstate(dev, MEI_DEV_DISABLED); return -ENODEV; } @@ -160,7 +150,7 @@ int mei_reset(struct mei_device *dev) if (state == MEI_DEV_POWER_DOWN) { dev_dbg(dev->dev, "powering down: end of reset\n"); - dev->dev_state = MEI_DEV_DISABLED; + mei_set_devstate(dev, MEI_DEV_DISABLED); return 0; } @@ -172,11 +162,11 @@ int mei_reset(struct mei_device *dev) dev_dbg(dev->dev, "link is established start sending messages.\n"); - dev->dev_state = MEI_DEV_INIT_CLIENTS; + mei_set_devstate(dev, MEI_DEV_INIT_CLIENTS); ret = mei_hbm_start_req(dev); if (ret) { dev_err(dev->dev, "hbm_start failed ret = %d\n", ret); - dev->dev_state = MEI_DEV_RESETTING; + mei_set_devstate(dev, MEI_DEV_RESETTING); return ret; } @@ -206,7 +196,7 @@ int mei_start(struct mei_device *dev) dev->reset_count = 0; do { - dev->dev_state = MEI_DEV_INITIALIZING; + mei_set_devstate(dev, MEI_DEV_INITIALIZING); ret = mei_reset(dev); if (ret == -ENODEV || dev->dev_state == MEI_DEV_DISABLED) { @@ -241,7 +231,7 @@ int mei_start(struct mei_device *dev) return 0; err: dev_err(dev->dev, "link layer initialization failed.\n"); - dev->dev_state = MEI_DEV_DISABLED; + mei_set_devstate(dev, MEI_DEV_DISABLED); mutex_unlock(&dev->device_lock); return -ENODEV; } @@ -260,7 +250,7 @@ int mei_restart(struct mei_device *dev) mutex_lock(&dev->device_lock); - dev->dev_state = MEI_DEV_POWER_UP; + mei_set_devstate(dev, MEI_DEV_POWER_UP); dev->reset_count = 0; err = mei_reset(dev); @@ -311,7 +301,7 @@ void mei_stop(struct mei_device *dev) dev_dbg(dev->dev, "stopping the device.\n"); mutex_lock(&dev->device_lock); - dev->dev_state = MEI_DEV_POWER_DOWN; + mei_set_devstate(dev, MEI_DEV_POWER_DOWN); mutex_unlock(&dev->device_lock); mei_cl_bus_remove_devices(dev); @@ -324,7 +314,7 @@ void mei_stop(struct mei_device *dev) mei_reset(dev); /* move device to disabled state unconditionally */ - dev->dev_state = MEI_DEV_DISABLED; + mei_set_devstate(dev, MEI_DEV_DISABLED); mutex_unlock(&dev->device_lock); } diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c index 055c2d89b310..c70a8c74cc57 100644 --- a/drivers/misc/mei/interrupt.c +++ b/drivers/misc/mei/interrupt.c @@ -1,20 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright 
(c) 2003-2018, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2012, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ - #include <linux/export.h> #include <linux/kthread.h> #include <linux/interrupt.h> diff --git a/drivers/misc/mei/main.c b/drivers/misc/mei/main.c index 87281b3695e6..ad02097d7fee 100644 --- a/drivers/misc/mei/main.c +++ b/drivers/misc/mei/main.c @@ -1,18 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2003-2018, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2018, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ + #include <linux/module.h> #include <linux/moduleparam.h> #include <linux/kernel.h> @@ -37,6 +28,12 @@ #include "mei_dev.h" #include "client.h" +static struct class *mei_class; +static dev_t mei_devt; +#define MEI_MAX_DEVS MINORMASK +static DEFINE_MUTEX(mei_minor_lock); +static DEFINE_IDR(mei_idr); + /** * mei_open - the open function * @@ -838,12 +835,65 @@ static ssize_t fw_ver_show(struct device *device, } static DEVICE_ATTR_RO(fw_ver); +/** + * dev_state_show - display device state + * + * @device: device pointer + * @attr: attribute pointer + * @buf: char out buffer + * + * Return: number of the bytes printed into buf or error + */ +static ssize_t dev_state_show(struct device *device, + struct device_attribute *attr, char *buf) +{ + struct mei_device *dev = dev_get_drvdata(device); + enum mei_dev_state dev_state; + + mutex_lock(&dev->device_lock); + dev_state = dev->dev_state; + mutex_unlock(&dev->device_lock); + + return sprintf(buf, "%s", mei_dev_state_str(dev_state)); +} +static DEVICE_ATTR_RO(dev_state); + +static int match_devt(struct device *dev, const void *data) +{ + const dev_t *devt = data; + + return dev->devt == *devt; +} + +/** + * mei_set_devstate: set to new device state and notify sysfs file.
+ * + * @dev: mei_device + * @state: new device state + */ +void mei_set_devstate(struct mei_device *dev, enum mei_dev_state state) +{ + struct device *clsdev; + + if (dev->dev_state == state) + return; + + dev->dev_state = state; + + clsdev = class_find_device(mei_class, NULL, &dev->cdev.dev, match_devt); + if (clsdev) { + sysfs_notify(&clsdev->kobj, NULL, "dev_state"); + put_device(clsdev); + } +} + static struct attribute *mei_attrs[] = { &dev_attr_fw_status.attr, &dev_attr_hbm_ver.attr, &dev_attr_hbm_ver_drv.attr, &dev_attr_tx_queue_limit.attr, &dev_attr_fw_ver.attr, + &dev_attr_dev_state.attr, NULL }; ATTRIBUTE_GROUPS(mei); @@ -867,12 +917,6 @@ static const struct file_operations mei_fops = { .llseek = no_llseek }; -static struct class *mei_class; -static dev_t mei_devt; -#define MEI_MAX_DEVS MINORMASK -static DEFINE_MUTEX(mei_minor_lock); -static DEFINE_IDR(mei_idr); - /** * mei_minor_get - obtain next free device minor number * diff --git a/drivers/misc/mei/mei-trace.c b/drivers/misc/mei/mei-trace.c index 374edde72a14..48d4c4fcefd2 100644 --- a/drivers/misc/mei/mei-trace.c +++ b/drivers/misc/mei/mei-trace.c @@ -1,17 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2015-2016, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2015, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ #include <linux/module.h> diff --git a/drivers/misc/mei/mei-trace.h b/drivers/misc/mei/mei-trace.h index b52e9b97a7c0..df758033dc93 100644 --- a/drivers/misc/mei/mei-trace.h +++ b/drivers/misc/mei/mei-trace.h @@ -1,17 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* - * + * Copyright (c) 2015-2016, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2015, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ #if !defined(_MEI_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ) diff --git a/drivers/misc/mei/mei_dev.h b/drivers/misc/mei/mei_dev.h index 685b78ce30a5..fca832fcac57 100644 --- a/drivers/misc/mei/mei_dev.h +++ b/drivers/misc/mei/mei_dev.h @@ -1,17 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0 */ /* - * + * Copyright (c) 2003-2018, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2018, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. 
- * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ #ifndef _MEI_DEV_H_ @@ -535,7 +525,6 @@ struct mei_device { struct dentry *dbgfs_dir; #endif /* CONFIG_DEBUG_FS */ - const struct mei_hw_ops *ops; char hw[0] __aligned(sizeof(void *)); }; @@ -594,6 +583,8 @@ int mei_restart(struct mei_device *dev); void mei_stop(struct mei_device *dev); void mei_cancel_work(struct mei_device *dev); +void mei_set_devstate(struct mei_device *dev, enum mei_dev_state state); + int mei_dmam_ring_alloc(struct mei_device *dev); void mei_dmam_ring_free(struct mei_device *dev); bool mei_dma_ring_is_allocated(struct mei_device *dev); diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c index 3ab946ad3257..7a2b3545a7f9 100644 --- a/drivers/misc/mei/pci-me.c +++ b/drivers/misc/mei/pci-me.c @@ -1,18 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2003-2019, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2003-2012, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. - * */ + #include <linux/module.h> #include <linux/moduleparam.h> #include <linux/kernel.h> diff --git a/drivers/misc/mei/pci-txe.c b/drivers/misc/mei/pci-txe.c index e1b909123fb0..2e37fc2e0fa8 100644 --- a/drivers/misc/mei/pci-txe.c +++ b/drivers/misc/mei/pci-txe.c @@ -1,17 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * + * Copyright (c) 2013-2017, Intel Corporation. All rights reserved. * Intel Management Engine Interface (Intel MEI) Linux driver - * Copyright (c) 2013-2014, Intel Corporation. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for - * more details. 
- * */ #include <linux/module.h> diff --git a/drivers/misc/sgi-xp/xpc_uv.c b/drivers/misc/sgi-xp/xpc_uv.c index 9e443df44b3b..0c6de97dd347 100644 --- a/drivers/misc/sgi-xp/xpc_uv.c +++ b/drivers/misc/sgi-xp/xpc_uv.c @@ -572,6 +572,7 @@ xpc_handle_activate_mq_msg_uv(struct xpc_partition *part, xpc_wakeup_channel_mgr(part); } + /* fall through */ case XPC_ACTIVATE_MQ_MSG_MARK_ENGAGED_UV: spin_lock_irqsave(&part_uv->flags_lock, irq_flags); part_uv->flags |= XPC_P_ENGAGED_UV; diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c index c48c3a1eb1f8..fcf31335a8b6 100644 --- a/drivers/net/thunderbolt.c +++ b/drivers/net/thunderbolt.c @@ -1282,6 +1282,7 @@ static int __maybe_unused tbnet_suspend(struct device *dev) tbnet_tear_down(net, true); } + tb_unregister_protocol_handler(&net->handler); return 0; } @@ -1290,6 +1291,8 @@ static int __maybe_unused tbnet_resume(struct device *dev) struct tb_service *svc = tb_to_service(dev); struct tbnet *net = tb_service_get_drvdata(svc); + tb_register_protocol_handler(&net->handler); + netif_carrier_off(net->dev); if (netif_running(net->dev)) { netif_device_attach(net->dev); diff --git a/drivers/nfc/mei_phy.c b/drivers/nfc/mei_phy.c index 8a04c5e02999..0f43bb389566 100644 --- a/drivers/nfc/mei_phy.c +++ b/drivers/nfc/mei_phy.c @@ -1,21 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * MEI Library for mei bus nfc device access - * - * Copyright (C) 2013 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. + * Copyright (c) 2013, Intel Corporation. * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, see <http://www.gnu.org/licenses/>. + * MEI Library for mei bus nfc device access */ - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/module.h> diff --git a/drivers/nfc/microread/mei.c b/drivers/nfc/microread/mei.c index eb5eddf1794e..5dad8847a9b3 100644 --- a/drivers/nfc/microread/mei.c +++ b/drivers/nfc/microread/mei.c @@ -1,19 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * HCI based Driver for Inside Secure microread NFC Chip - * - * Copyright (C) 2013 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. + * Copyright (C) 2013 Intel Corporation. All rights reserved. * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, see <http://www.gnu.org/licenses/>. 
+ * HCI based Driver for Inside Secure microread NFC Chip */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt diff --git a/drivers/nfc/pn544/mei.c b/drivers/nfc/pn544/mei.c index ad57a8ec00d6..579bc599f545 100644 --- a/drivers/nfc/pn544/mei.c +++ b/drivers/nfc/pn544/mei.c @@ -1,19 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 /* - * HCI based Driver for NXP pn544 NFC Chip - * * Copyright (C) 2013 Intel Corporation. All rights reserved. * - * This program is free software; you can redistribute it and/or modify it - * under the terms and conditions of the GNU General Public License, - * version 2, as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, see <http://www.gnu.org/licenses/>. + * HCI based Driver for NXP pn544 NFC Chip */ #include <linux/module.h> diff --git a/drivers/nvmem/Kconfig b/drivers/nvmem/Kconfig index 530d570724c9..6b2c4254c2fb 100644 --- a/drivers/nvmem/Kconfig +++ b/drivers/nvmem/Kconfig @@ -13,6 +13,16 @@ menuconfig NVMEM if NVMEM +config NVMEM_SYSFS + bool "/sys/bus/nvmem/devices/*/nvmem (sysfs interface)" + depends on SYSFS + default y + help + Say Y here to add a sysfs interface for NVMEM. + + This interface is mostly used by userspace applications to + read/write directly into nvmem. + config NVMEM_IMX_IIM tristate "i.MX IC Identification Module support" depends on ARCH_MXC || COMPILE_TEST @@ -25,8 +35,8 @@ config NVMEM_IMX_IIM will be called nvmem-imx-iim. config NVMEM_IMX_OCOTP - tristate "i.MX6 On-Chip OTP Controller support" - depends on SOC_IMX6 || SOC_IMX7D || COMPILE_TEST + tristate "i.MX 6/7/8 On-Chip OTP Controller support" + depends on ARCH_MXC || COMPILE_TEST depends on HAS_IOMEM help This is a driver for the On-Chip OTP Controller (OCOTP) available on @@ -113,6 +123,16 @@ config NVMEM_BCM_OCOTP This driver can also be built as a module. If so, the module will be called nvmem-bcm-ocotp. +config NVMEM_STM32_ROMEM + tristate "STMicroelectronics STM32 factory-programmed memory support" + depends on ARCH_STM32 || COMPILE_TEST + help + Say y here to enable read-only access for STMicroelectronics STM32 + factory-programmed memory area. + + This driver can also be built as a module. If so, the module + will be called nvmem-stm32-romem. 
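[Editor's note] With the new CONFIG_NVMEM_SYSFS option above enabled, userspace reads a provider's raw contents through the "nvmem" binary attribute. A minimal consumer sketch, assuming a hypothetical device path ("stm32-romem0" stands in for whatever name the registered provider actually gets):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[16];
	ssize_t n, i;
	/* hypothetical provider path under /sys/bus/nvmem/devices */
	int fd = open("/sys/bus/nvmem/devices/stm32-romem0/nvmem", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* the kernel's bin_attr read handler clamps the read to the
	 * device size and rounds the count down to the provider's
	 * word_size, so a short read here is normal near the end */
	n = pread(fd, buf, sizeof(buf), 0);
	close(fd);
	if (n < 0) {
		perror("pread");
		return 1;
	}
	for (i = 0; i < n; i++)
		printf("%02x ", buf[i]);
	printf("\n");
	return 0;
}

Whether the attribute is 0644, 0444, 0600, or 0400 follows from the read_only and root_only provider configuration, exactly as the attribute groups moved into nvmem-sysfs.c later in this series encode it.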
+ config NVMEM_SUNXI_SID tristate "Allwinner SoCs SID support" depends on ARCH_SUNXI diff --git a/drivers/nvmem/Makefile b/drivers/nvmem/Makefile index 2ece8ffffdda..c1fe4768dfef 100644 --- a/drivers/nvmem/Makefile +++ b/drivers/nvmem/Makefile @@ -6,6 +6,9 @@ obj-$(CONFIG_NVMEM) += nvmem_core.o nvmem_core-y := core.o +obj-$(CONFIG_NVMEM_SYSFS) += nvmem_sysfs.o +nvmem_sysfs-y := nvmem-sysfs.o + # Devices obj-$(CONFIG_NVMEM_BCM_OCOTP) += nvmem-bcm-ocotp.o nvmem-bcm-ocotp-y := bcm-ocotp.o @@ -26,6 +29,8 @@ nvmem_qfprom-y := qfprom.o obj-$(CONFIG_ROCKCHIP_EFUSE) += nvmem_rockchip_efuse.o nvmem_rockchip_efuse-y := rockchip-efuse.o obj-$(CONFIG_NVMEM_SUNXI_SID) += nvmem_sunxi_sid.o +nvmem_stm32_romem-y := stm32-romem.o +obj-$(CONFIG_NVMEM_STM32_ROMEM) += nvmem_stm32_romem.o nvmem_sunxi_sid-y := sunxi_sid.o obj-$(CONFIG_UNIPHIER_EFUSE) += nvmem-uniphier-efuse.o nvmem-uniphier-efuse-y := uniphier-efuse.o diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c index f24008b66826..c7892c3da91f 100644 --- a/drivers/nvmem/core.c +++ b/drivers/nvmem/core.c @@ -17,27 +17,7 @@ #include <linux/nvmem-provider.h> #include <linux/of.h> #include <linux/slab.h> - -struct nvmem_device { - struct module *owner; - struct device dev; - int stride; - int word_size; - int id; - struct kref refcnt; - size_t size; - bool read_only; - int flags; - enum nvmem_type type; - struct bin_attribute eeprom; - struct device *base_dev; - struct list_head cells; - nvmem_reg_read_t reg_read; - nvmem_reg_write_t reg_write; - void *priv; -}; - -#define FLAG_COMPAT BIT(0) +#include "nvmem.h" struct nvmem_cell { const char *name; @@ -61,18 +41,7 @@ static LIST_HEAD(nvmem_lookup_list); static BLOCKING_NOTIFIER_HEAD(nvmem_notifier); -static const char * const nvmem_type_str[] = { - [NVMEM_TYPE_UNKNOWN] = "Unknown", - [NVMEM_TYPE_EEPROM] = "EEPROM", - [NVMEM_TYPE_OTP] = "OTP", - [NVMEM_TYPE_BATTERY_BACKED] = "Battery backed", -}; -#ifdef CONFIG_DEBUG_LOCK_ALLOC -static struct lock_class_key eeprom_lock_key; -#endif - -#define to_nvmem_device(d) container_of(d, struct nvmem_device, dev) static int nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset, void *val, size_t bytes) { @@ -91,187 +60,6 @@ static int nvmem_reg_write(struct nvmem_device *nvmem, unsigned int offset, return -EINVAL; } -static ssize_t type_show(struct device *dev, - struct device_attribute *attr, char *buf) -{ - struct nvmem_device *nvmem = to_nvmem_device(dev); - - return sprintf(buf, "%s\n", nvmem_type_str[nvmem->type]); -} - -static DEVICE_ATTR_RO(type); - -static struct attribute *nvmem_attrs[] = { - &dev_attr_type.attr, - NULL, -}; - -static ssize_t bin_attr_nvmem_read(struct file *filp, struct kobject *kobj, - struct bin_attribute *attr, - char *buf, loff_t pos, size_t count) -{ - struct device *dev; - struct nvmem_device *nvmem; - int rc; - - if (attr->private) - dev = attr->private; - else - dev = container_of(kobj, struct device, kobj); - nvmem = to_nvmem_device(dev); - - /* Stop the user from reading */ - if (pos >= nvmem->size) - return 0; - - if (count < nvmem->word_size) - return -EINVAL; - - if (pos + count > nvmem->size) - count = nvmem->size - pos; - - count = round_down(count, nvmem->word_size); - - rc = nvmem_reg_read(nvmem, pos, buf, count); - - if (rc) - return rc; - - return count; -} - -static ssize_t bin_attr_nvmem_write(struct file *filp, struct kobject *kobj, - struct bin_attribute *attr, - char *buf, loff_t pos, size_t count) -{ - struct device *dev; - struct nvmem_device *nvmem; - int rc; - - if (attr->private) - dev = 
attr->private; - else - dev = container_of(kobj, struct device, kobj); - nvmem = to_nvmem_device(dev); - - /* Stop the user from writing */ - if (pos >= nvmem->size) - return -EFBIG; - - if (count < nvmem->word_size) - return -EINVAL; - - if (pos + count > nvmem->size) - count = nvmem->size - pos; - - count = round_down(count, nvmem->word_size); - - rc = nvmem_reg_write(nvmem, pos, buf, count); - - if (rc) - return rc; - - return count; -} - -/* default read/write permissions */ -static struct bin_attribute bin_attr_rw_nvmem = { - .attr = { - .name = "nvmem", - .mode = 0644, - }, - .read = bin_attr_nvmem_read, - .write = bin_attr_nvmem_write, -}; - -static struct bin_attribute *nvmem_bin_rw_attributes[] = { - &bin_attr_rw_nvmem, - NULL, -}; - -static const struct attribute_group nvmem_bin_rw_group = { - .bin_attrs = nvmem_bin_rw_attributes, - .attrs = nvmem_attrs, -}; - -static const struct attribute_group *nvmem_rw_dev_groups[] = { - &nvmem_bin_rw_group, - NULL, -}; - -/* read only permission */ -static struct bin_attribute bin_attr_ro_nvmem = { - .attr = { - .name = "nvmem", - .mode = 0444, - }, - .read = bin_attr_nvmem_read, -}; - -static struct bin_attribute *nvmem_bin_ro_attributes[] = { - &bin_attr_ro_nvmem, - NULL, -}; - -static const struct attribute_group nvmem_bin_ro_group = { - .bin_attrs = nvmem_bin_ro_attributes, - .attrs = nvmem_attrs, -}; - -static const struct attribute_group *nvmem_ro_dev_groups[] = { - &nvmem_bin_ro_group, - NULL, -}; - -/* default read/write permissions, root only */ -static struct bin_attribute bin_attr_rw_root_nvmem = { - .attr = { - .name = "nvmem", - .mode = 0600, - }, - .read = bin_attr_nvmem_read, - .write = bin_attr_nvmem_write, -}; - -static struct bin_attribute *nvmem_bin_rw_root_attributes[] = { - &bin_attr_rw_root_nvmem, - NULL, -}; - -static const struct attribute_group nvmem_bin_rw_root_group = { - .bin_attrs = nvmem_bin_rw_root_attributes, - .attrs = nvmem_attrs, -}; - -static const struct attribute_group *nvmem_rw_root_dev_groups[] = { - &nvmem_bin_rw_root_group, - NULL, -}; - -/* read only permission, root only */ -static struct bin_attribute bin_attr_ro_root_nvmem = { - .attr = { - .name = "nvmem", - .mode = 0400, - }, - .read = bin_attr_nvmem_read, -}; - -static struct bin_attribute *nvmem_bin_ro_root_attributes[] = { - &bin_attr_ro_root_nvmem, - NULL, -}; - -static const struct attribute_group nvmem_bin_ro_root_group = { - .bin_attrs = nvmem_bin_ro_root_attributes, - .attrs = nvmem_attrs, -}; - -static const struct attribute_group *nvmem_ro_root_dev_groups[] = { - &nvmem_bin_ro_root_group, - NULL, -}; - static void nvmem_release(struct device *dev) { struct nvmem_device *nvmem = to_nvmem_device(dev); @@ -422,43 +210,6 @@ err: return rval; } -/* - * nvmem_setup_compat() - Create an additional binary entry in - * drivers sys directory, to be backwards compatible with the older - * drivers/misc/eeprom drivers. 
- */ -static int nvmem_setup_compat(struct nvmem_device *nvmem, - const struct nvmem_config *config) -{ - int rval; - - if (!config->base_dev) - return -EINVAL; - - if (nvmem->read_only) - nvmem->eeprom = bin_attr_ro_root_nvmem; - else - nvmem->eeprom = bin_attr_rw_root_nvmem; - nvmem->eeprom.attr.name = "eeprom"; - nvmem->eeprom.size = nvmem->size; -#ifdef CONFIG_DEBUG_LOCK_ALLOC - nvmem->eeprom.attr.key = &eeprom_lock_key; -#endif - nvmem->eeprom.private = &nvmem->dev; - nvmem->base_dev = config->base_dev; - - rval = device_create_bin_file(nvmem->base_dev, &nvmem->eeprom); - if (rval) { - dev_err(&nvmem->dev, - "Failed to create eeprom binary file %d\n", rval); - return rval; - } - - nvmem->flags |= FLAG_COMPAT; - - return 0; -} - /** * nvmem_register_notifier() - Register a notifier block for nvmem events. * @@ -651,14 +402,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) nvmem->read_only = device_property_present(config->dev, "read-only") || config->read_only || !nvmem->reg_write; - if (config->root_only) - nvmem->dev.groups = nvmem->read_only ? - nvmem_ro_root_dev_groups : - nvmem_rw_root_dev_groups; - else - nvmem->dev.groups = nvmem->read_only ? - nvmem_ro_dev_groups : - nvmem_rw_dev_groups; + nvmem->dev.groups = nvmem_sysfs_get_groups(nvmem, config); device_initialize(&nvmem->dev); @@ -669,7 +413,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) goto err_put_device; if (config->compat) { - rval = nvmem_setup_compat(nvmem, config); + rval = nvmem_sysfs_setup_compat(nvmem, config); if (rval) goto err_device_del; } @@ -696,7 +440,7 @@ err_remove_cells: nvmem_device_remove_all_cells(nvmem); err_teardown_compat: if (config->compat) - device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom); + nvmem_sysfs_remove_compat(nvmem, config); err_device_del: device_del(&nvmem->dev); err_put_device: @@ -1166,7 +910,7 @@ EXPORT_SYMBOL_GPL(nvmem_cell_put); static void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell, void *buf) { u8 *p, *b; - int i, bit_offset = cell->bit_offset; + int i, extra, bit_offset = cell->bit_offset; p = b = buf; if (bit_offset) { @@ -1181,11 +925,16 @@ static void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell, void *buf) p = b; *b++ >>= bit_offset; } - - /* result fits in less bytes */ - if (cell->bytes != DIV_ROUND_UP(cell->nbits, BITS_PER_BYTE)) - *p-- = 0; + } else { + /* point to the msb */ + p += cell->bytes - 1; } + + /* result fits in less bytes */ + extra = cell->bytes - DIV_ROUND_UP(cell->nbits, BITS_PER_BYTE); + while (--extra >= 0) + *p-- = 0; + /* clear msb bits if any leftover in the last byte */ *p &= GENMASK((cell->nbits%BITS_PER_BYTE) - 1, 0); } @@ -1335,6 +1084,43 @@ int nvmem_cell_write(struct nvmem_cell *cell, void *buf, size_t len) EXPORT_SYMBOL_GPL(nvmem_cell_write); /** + * nvmem_cell_read_u16() - Read a cell value as an u16 + * + * @dev: Device that requests the nvmem cell. + * @cell_id: Name of nvmem cell to read. + * @val: pointer to output value. + * + * Return: 0 on success or negative errno. 
+ */ +int nvmem_cell_read_u16(struct device *dev, const char *cell_id, u16 *val) +{ + struct nvmem_cell *cell; + void *buf; + size_t len; + + cell = nvmem_cell_get(dev, cell_id); + if (IS_ERR(cell)) + return PTR_ERR(cell); + + buf = nvmem_cell_read(cell, &len); + if (IS_ERR(buf)) { + nvmem_cell_put(cell); + return PTR_ERR(buf); + } + if (len != sizeof(*val)) { + kfree(buf); + nvmem_cell_put(cell); + return -EINVAL; + } + memcpy(val, buf, sizeof(*val)); + kfree(buf); + nvmem_cell_put(cell); + + return 0; +} +EXPORT_SYMBOL_GPL(nvmem_cell_read_u16); + +/** * nvmem_cell_read_u32() - Read a cell value as an u32 * * @dev: Device that requests the nvmem cell. diff --git a/drivers/nvmem/imx-iim.c b/drivers/nvmem/imx-iim.c index 6651e4cdc002..34582293b985 100644 --- a/drivers/nvmem/imx-iim.c +++ b/drivers/nvmem/imx-iim.c @@ -104,7 +104,6 @@ static int imx_iim_probe(struct platform_device *pdev) { const struct of_device_id *of_id; struct device *dev = &pdev->dev; - struct resource *res; struct iim_priv *iim; struct nvmem_device *nvmem; struct nvmem_config cfg = {}; @@ -114,8 +113,7 @@ static int imx_iim_probe(struct platform_device *pdev) if (!iim) return -ENOMEM; - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - iim->base = devm_ioremap_resource(dev, res); + iim->base = devm_platform_ioremap_resource(pdev, 0); if (IS_ERR(iim->base)) return PTR_ERR(iim->base); diff --git a/drivers/nvmem/imx-ocotp.c b/drivers/nvmem/imx-ocotp.c index 08a9b1ef8ae4..4cf7b61e4bf5 100644 --- a/drivers/nvmem/imx-ocotp.c +++ b/drivers/nvmem/imx-ocotp.c @@ -444,6 +444,12 @@ static const struct ocotp_params imx7ulp_params = { .bank_address_words = 0, }; +static const struct ocotp_params imx8mq_params = { + .nregs = 256, + .bank_address_words = 4, + .set_timing = imx_ocotp_set_imx7_timing, +}; + static const struct of_device_id imx_ocotp_dt_ids[] = { { .compatible = "fsl,imx6q-ocotp", .data = &imx6q_params }, { .compatible = "fsl,imx6sl-ocotp", .data = &imx6sl_params }, @@ -453,6 +459,7 @@ static const struct of_device_id imx_ocotp_dt_ids[] = { { .compatible = "fsl,imx7d-ocotp", .data = &imx7d_params }, { .compatible = "fsl,imx6sll-ocotp", .data = &imx6sll_params }, { .compatible = "fsl,imx7ulp-ocotp", .data = &imx7ulp_params }, + { .compatible = "fsl,imx8mq-ocotp", .data = &imx8mq_params }, { }, }; MODULE_DEVICE_TABLE(of, imx_ocotp_dt_ids); @@ -460,7 +467,6 @@ MODULE_DEVICE_TABLE(of, imx_ocotp_dt_ids); static int imx_ocotp_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; - struct resource *res; struct ocotp_priv *priv; struct nvmem_device *nvmem; @@ -470,8 +476,7 @@ static int imx_ocotp_probe(struct platform_device *pdev) priv->dev = dev; - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - priv->base = devm_ioremap_resource(dev, res); + priv->base = devm_platform_ioremap_resource(pdev, 0); if (IS_ERR(priv->base)) return PTR_ERR(priv->base); diff --git a/drivers/nvmem/mxs-ocotp.c b/drivers/nvmem/mxs-ocotp.c index 53122f59c4b2..fbb7db6ee1f5 100644 --- a/drivers/nvmem/mxs-ocotp.c +++ b/drivers/nvmem/mxs-ocotp.c @@ -145,7 +145,6 @@ static int mxs_ocotp_probe(struct platform_device *pdev) struct device *dev = &pdev->dev; const struct mxs_data *data; struct mxs_ocotp *otp; - struct resource *res; const struct of_device_id *match; int ret; @@ -157,8 +156,7 @@ static int mxs_ocotp_probe(struct platform_device *pdev) if (!otp) return -ENOMEM; - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - otp->base = devm_ioremap_resource(dev, res); + otp->base = devm_platform_ioremap_resource(pdev, 0); 
if (IS_ERR(otp->base)) return PTR_ERR(otp->base); diff --git a/drivers/nvmem/nvmem-sysfs.c b/drivers/nvmem/nvmem-sysfs.c new file mode 100644 index 000000000000..6f303b91f6e7 --- /dev/null +++ b/drivers/nvmem/nvmem-sysfs.c @@ -0,0 +1,256 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2019, Linaro Limited + */ +#include "nvmem.h" + +static const char * const nvmem_type_str[] = { + [NVMEM_TYPE_UNKNOWN] = "Unknown", + [NVMEM_TYPE_EEPROM] = "EEPROM", + [NVMEM_TYPE_OTP] = "OTP", + [NVMEM_TYPE_BATTERY_BACKED] = "Battery backed", +}; + +#ifdef CONFIG_DEBUG_LOCK_ALLOC +static struct lock_class_key eeprom_lock_key; +#endif + +static ssize_t type_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct nvmem_device *nvmem = to_nvmem_device(dev); + + return sprintf(buf, "%s\n", nvmem_type_str[nvmem->type]); +} + +static DEVICE_ATTR_RO(type); + +static struct attribute *nvmem_attrs[] = { + &dev_attr_type.attr, + NULL, +}; + +static ssize_t bin_attr_nvmem_read(struct file *filp, struct kobject *kobj, + struct bin_attribute *attr, + char *buf, loff_t pos, size_t count) +{ + struct device *dev; + struct nvmem_device *nvmem; + int rc; + + if (attr->private) + dev = attr->private; + else + dev = container_of(kobj, struct device, kobj); + nvmem = to_nvmem_device(dev); + + /* Stop the user from reading */ + if (pos >= nvmem->size) + return 0; + + if (count < nvmem->word_size) + return -EINVAL; + + if (pos + count > nvmem->size) + count = nvmem->size - pos; + + count = round_down(count, nvmem->word_size); + + rc = nvmem->reg_read(nvmem->priv, pos, buf, count); + + if (rc) + return rc; + + return count; +} + +static ssize_t bin_attr_nvmem_write(struct file *filp, struct kobject *kobj, + struct bin_attribute *attr, + char *buf, loff_t pos, size_t count) +{ + struct device *dev; + struct nvmem_device *nvmem; + int rc; + + if (attr->private) + dev = attr->private; + else + dev = container_of(kobj, struct device, kobj); + nvmem = to_nvmem_device(dev); + + /* Stop the user from writing */ + if (pos >= nvmem->size) + return -EFBIG; + + if (count < nvmem->word_size) + return -EINVAL; + + if (pos + count > nvmem->size) + count = nvmem->size - pos; + + count = round_down(count, nvmem->word_size); + + rc = nvmem->reg_write(nvmem->priv, pos, buf, count); + + if (rc) + return rc; + + return count; +} + +/* default read/write permissions */ +static struct bin_attribute bin_attr_rw_nvmem = { + .attr = { + .name = "nvmem", + .mode = 0644, + }, + .read = bin_attr_nvmem_read, + .write = bin_attr_nvmem_write, +}; + +static struct bin_attribute *nvmem_bin_rw_attributes[] = { + &bin_attr_rw_nvmem, + NULL, +}; + +static const struct attribute_group nvmem_bin_rw_group = { + .bin_attrs = nvmem_bin_rw_attributes, + .attrs = nvmem_attrs, +}; + +static const struct attribute_group *nvmem_rw_dev_groups[] = { + &nvmem_bin_rw_group, + NULL, +}; + +/* read only permission */ +static struct bin_attribute bin_attr_ro_nvmem = { + .attr = { + .name = "nvmem", + .mode = 0444, + }, + .read = bin_attr_nvmem_read, +}; + +static struct bin_attribute *nvmem_bin_ro_attributes[] = { + &bin_attr_ro_nvmem, + NULL, +}; + +static const struct attribute_group nvmem_bin_ro_group = { + .bin_attrs = nvmem_bin_ro_attributes, + .attrs = nvmem_attrs, +}; + +static const struct attribute_group *nvmem_ro_dev_groups[] = { + &nvmem_bin_ro_group, + NULL, +}; + +/* default read/write permissions, root only */ +static struct bin_attribute bin_attr_rw_root_nvmem = { + .attr = { + .name = "nvmem", + .mode = 0600, + }, + .read = 
bin_attr_nvmem_read, + .write = bin_attr_nvmem_write, +}; + +static struct bin_attribute *nvmem_bin_rw_root_attributes[] = { + &bin_attr_rw_root_nvmem, + NULL, +}; + +static const struct attribute_group nvmem_bin_rw_root_group = { + .bin_attrs = nvmem_bin_rw_root_attributes, + .attrs = nvmem_attrs, +}; + +static const struct attribute_group *nvmem_rw_root_dev_groups[] = { + &nvmem_bin_rw_root_group, + NULL, +}; + +/* read only permission, root only */ +static struct bin_attribute bin_attr_ro_root_nvmem = { + .attr = { + .name = "nvmem", + .mode = 0400, + }, + .read = bin_attr_nvmem_read, +}; + +static struct bin_attribute *nvmem_bin_ro_root_attributes[] = { + &bin_attr_ro_root_nvmem, + NULL, +}; + +static const struct attribute_group nvmem_bin_ro_root_group = { + .bin_attrs = nvmem_bin_ro_root_attributes, + .attrs = nvmem_attrs, +}; + +static const struct attribute_group *nvmem_ro_root_dev_groups[] = { + &nvmem_bin_ro_root_group, + NULL, +}; + +const struct attribute_group **nvmem_sysfs_get_groups( + struct nvmem_device *nvmem, + const struct nvmem_config *config) +{ + if (config->root_only) + return nvmem->read_only ? + nvmem_ro_root_dev_groups : + nvmem_rw_root_dev_groups; + + return nvmem->read_only ? nvmem_ro_dev_groups : nvmem_rw_dev_groups; +} + +/* + * nvmem_setup_compat() - Create an additional binary entry in + * drivers sys directory, to be backwards compatible with the older + * drivers/misc/eeprom drivers. + */ +int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem, + const struct nvmem_config *config) +{ + int rval; + + if (!config->compat) + return 0; + + if (!config->base_dev) + return -EINVAL; + + if (nvmem->read_only) + nvmem->eeprom = bin_attr_ro_root_nvmem; + else + nvmem->eeprom = bin_attr_rw_root_nvmem; + nvmem->eeprom.attr.name = "eeprom"; + nvmem->eeprom.size = nvmem->size; +#ifdef CONFIG_DEBUG_LOCK_ALLOC + nvmem->eeprom.attr.key = &eeprom_lock_key; +#endif + nvmem->eeprom.private = &nvmem->dev; + nvmem->base_dev = config->base_dev; + + rval = device_create_bin_file(nvmem->base_dev, &nvmem->eeprom); + if (rval) { + dev_err(&nvmem->dev, + "Failed to create eeprom binary file %d\n", rval); + return rval; + } + + nvmem->flags |= FLAG_COMPAT; + + return 0; +} + +void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem, + const struct nvmem_config *config) +{ + if (config->compat) + device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom); +} diff --git a/drivers/nvmem/nvmem.h b/drivers/nvmem/nvmem.h new file mode 100644 index 000000000000..eb8ed7121fa3 --- /dev/null +++ b/drivers/nvmem/nvmem.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _DRIVERS_NVMEM_H +#define _DRIVERS_NVMEM_H + +#include <linux/device.h> +#include <linux/fs.h> +#include <linux/kref.h> +#include <linux/list.h> +#include <linux/nvmem-consumer.h> +#include <linux/nvmem-provider.h> + +struct nvmem_device { + struct module *owner; + struct device dev; + int stride; + int word_size; + int id; + struct kref refcnt; + size_t size; + bool read_only; + int flags; + enum nvmem_type type; + struct bin_attribute eeprom; + struct device *base_dev; + struct list_head cells; + nvmem_reg_read_t reg_read; + nvmem_reg_write_t reg_write; + void *priv; +}; + +#define to_nvmem_device(d) container_of(d, struct nvmem_device, dev) +#define FLAG_COMPAT BIT(0) + +#ifdef CONFIG_NVMEM_SYSFS +const struct attribute_group **nvmem_sysfs_get_groups( + struct nvmem_device *nvmem, + const struct nvmem_config *config); +int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem, + const struct nvmem_config 
*config); +void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem, + const struct nvmem_config *config); +#else +static inline const struct attribute_group **nvmem_sysfs_get_groups( + struct nvmem_device *nvmem, + const struct nvmem_config *config) +{ + return NULL; +} + +static inline int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem, + const struct nvmem_config *config) +{ + return -ENOSYS; +} +static inline void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem, + const struct nvmem_config *config) +{ +} +#endif /* CONFIG_NVMEM_SYSFS */ + +#endif /* _DRIVERS_NVMEM_H */ diff --git a/drivers/nvmem/stm32-romem.c b/drivers/nvmem/stm32-romem.c new file mode 100644 index 000000000000..354be526897f --- /dev/null +++ b/drivers/nvmem/stm32-romem.c @@ -0,0 +1,202 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * STM32 Factory-programmed memory read access driver + * + * Copyright (C) 2017, STMicroelectronics - All Rights Reserved + * Author: Fabrice Gasnier <fabrice.gasnier@st.com> for STMicroelectronics. + */ + +#include <linux/arm-smccc.h> +#include <linux/io.h> +#include <linux/module.h> +#include <linux/nvmem-provider.h> +#include <linux/of_device.h> + +/* BSEC secure service access from non-secure */ +#define STM32_SMC_BSEC 0x82001003 +#define STM32_SMC_READ_SHADOW 0x01 +#define STM32_SMC_PROG_OTP 0x02 +#define STM32_SMC_WRITE_SHADOW 0x03 +#define STM32_SMC_READ_OTP 0x04 + +/* shadow registers offset */ +#define STM32MP15_BSEC_DATA0 0x200 + +/* 32 (x 32-bits) lower shadow registers */ +#define STM32MP15_BSEC_NUM_LOWER 32 + +struct stm32_romem_cfg { + int size; +}; + +struct stm32_romem_priv { + void __iomem *base; + struct nvmem_config cfg; +}; + +static int stm32_romem_read(void *context, unsigned int offset, void *buf, + size_t bytes) +{ + struct stm32_romem_priv *priv = context; + u8 *buf8 = buf; + int i; + + for (i = offset; i < offset + bytes; i++) + *buf8++ = readb_relaxed(priv->base + i); + + return 0; +} + +static int stm32_bsec_smc(u8 op, u32 otp, u32 data, u32 *result) +{ +#if IS_ENABLED(CONFIG_HAVE_ARM_SMCCC) + struct arm_smccc_res res; + + arm_smccc_smc(STM32_SMC_BSEC, op, otp, data, 0, 0, 0, 0, &res); + if (res.a0) + return -EIO; + + if (result) + *result = (u32)res.a1; + + return 0; +#else + return -ENXIO; +#endif +} + +static int stm32_bsec_read(void *context, unsigned int offset, void *buf, + size_t bytes) +{ + struct stm32_romem_priv *priv = context; + struct device *dev = priv->cfg.dev; + u32 roffset, rbytes, val; + u8 *buf8 = buf, *val8 = (u8 *)&val; + int i, j = 0, ret, skip_bytes, size; + + /* Round unaligned access to 32-bits */ + roffset = rounddown(offset, 4); + skip_bytes = offset & 0x3; + rbytes = roundup(bytes + skip_bytes, 4); + + if (roffset + rbytes > priv->cfg.size) + return -EINVAL; + + for (i = roffset; (i < roffset + rbytes); i += 4) { + u32 otp = i >> 2; + + if (otp < STM32MP15_BSEC_NUM_LOWER) { + /* read lower data from shadow registers */ + val = readl_relaxed( + priv->base + STM32MP15_BSEC_DATA0 + i); + } else { + ret = stm32_bsec_smc(STM32_SMC_READ_SHADOW, otp, 0, + &val); + if (ret) { + dev_err(dev, "Can't read data%d (%d)\n", otp, + ret); + return ret; + } + } + /* skip first bytes in case of unaligned read */ + if (skip_bytes) + size = min(bytes, (size_t)(4 - skip_bytes)); + else + size = min(bytes, (size_t)4); + memcpy(&buf8[j], &val8[skip_bytes], size); + bytes -= size; + j += size; + skip_bytes = 0; + } + + return 0; +} + +static int stm32_bsec_write(void *context, unsigned int offset, void *buf, + size_t bytes) +{ + struct
stm32_romem_priv *priv = context; + struct device *dev = priv->cfg.dev; + u32 *buf32 = buf; + int ret, i; + + /* Allow only writing complete 32-bits aligned words */ + if ((bytes % 4) || (offset % 4)) + return -EINVAL; + + for (i = offset; i < offset + bytes; i += 4) { + ret = stm32_bsec_smc(STM32_SMC_PROG_OTP, i >> 2, *buf32++, + NULL); + if (ret) { + dev_err(dev, "Can't write data%d (%d)\n", i >> 2, ret); + return ret; + } + } + + return 0; +} + +static int stm32_romem_probe(struct platform_device *pdev) +{ + const struct stm32_romem_cfg *cfg; + struct device *dev = &pdev->dev; + struct stm32_romem_priv *priv; + struct resource *res; + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + priv->base = devm_ioremap_resource(dev, res); + if (IS_ERR(priv->base)) + return PTR_ERR(priv->base); + + priv->cfg.name = "stm32-romem"; + priv->cfg.word_size = 1; + priv->cfg.stride = 1; + priv->cfg.dev = dev; + priv->cfg.priv = priv; + priv->cfg.owner = THIS_MODULE; + + cfg = (const struct stm32_romem_cfg *) + of_match_device(dev->driver->of_match_table, dev)->data; + if (!cfg) { + priv->cfg.read_only = true; + priv->cfg.size = resource_size(res); + priv->cfg.reg_read = stm32_romem_read; + } else { + priv->cfg.size = cfg->size; + priv->cfg.reg_read = stm32_bsec_read; + priv->cfg.reg_write = stm32_bsec_write; + } + + return PTR_ERR_OR_ZERO(devm_nvmem_register(dev, &priv->cfg)); +} + +static const struct stm32_romem_cfg stm32mp15_bsec_cfg = { + .size = 384, /* 96 x 32-bits data words */ +}; + +static const struct of_device_id stm32_romem_of_match[] = { + { .compatible = "st,stm32f4-otp", }, { + .compatible = "st,stm32mp15-bsec", + .data = (void *)&stm32mp15_bsec_cfg, + }, { + }, +}; +MODULE_DEVICE_TABLE(of, stm32_romem_of_match); + +static struct platform_driver stm32_romem_driver = { + .probe = stm32_romem_probe, + .driver = { + .name = "stm32-romem", + .of_match_table = of_match_ptr(stm32_romem_of_match), + }, +}; +module_platform_driver(stm32_romem_driver); + +MODULE_AUTHOR("Fabrice Gasnier <fabrice.gasnier@st.com>"); +MODULE_DESCRIPTION("STMicroelectronics STM32 RO-MEM"); +MODULE_ALIAS("platform:nvmem-stm32-romem"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/nvmem/sunxi_sid.c b/drivers/nvmem/sunxi_sid.c index 570a2e354f30..a079a80ddf2c 100644 --- a/drivers/nvmem/sunxi_sid.c +++ b/drivers/nvmem/sunxi_sid.c @@ -1,18 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0+ /* * Allwinner sunXi SoCs Security ID support. * * Copyright (c) 2013 Oliver Schinagl <oliver@schinagl.nl> * Copyright (C) 2014 Maxime Ripard <maxime.ripard@free-electrons.com> - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. 
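The arithmetic in stm32_bsec_read() above is the subtle part of the new driver: BSEC OTP words are only addressable as 32-bit quantities, so an arbitrary (offset, bytes) request has to be rounded down to a word boundary, widened to a whole number of words, and the unwanted leading bytes skipped on the first copy. A minimal userspace sketch of the same arithmetic, assuming a little-endian CPU as the driver does; read_word() is a stand-in for the shadow-register/SMC access, not a real API:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t read_word(unsigned int otp)
{
        return 0x01020304u + otp;               /* fake OTP contents */
}

static void bsec_style_read(unsigned int offset, size_t bytes, uint8_t *buf)
{
        unsigned int roffset = offset & ~0x3u;  /* rounddown(offset, 4) */
        unsigned int skip = offset & 0x3u;
        unsigned int rbytes = (bytes + skip + 3) & ~0x3u; /* roundup to words */
        size_t j = 0;

        for (unsigned int i = roffset; i < roffset + rbytes; i += 4) {
                uint32_t val = read_word(i >> 2);
                size_t chunk = bytes < 4 - skip ? bytes : 4 - skip;

                memcpy(buf + j, (uint8_t *)&val + skip, chunk);
                bytes -= chunk;
                j += chunk;
                skip = 0;       /* only the first word can be partial */
        }
}

int main(void)
{
        uint8_t buf[5];

        bsec_style_read(3, 5, buf);     /* tail of word 0, all of word 1 */
        for (int k = 0; k < 5; k++)
                printf("%02x ", buf[k]);
        printf("\n");
        return 0;
}

The driver additionally rejects requests where roffset + rbytes would run past cfg.size, which the sketch omits.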
*/ #include <linux/device.h> @@ -35,13 +26,6 @@ #define SUN8I_SID_OP_LOCK (0xAC << 8) #define SUN8I_SID_READ BIT(1) -static struct nvmem_config econfig = { - .name = "sunxi-sid", - .read_only = true, - .stride = 4, - .word_size = 1, -}; - struct sunxi_sid_cfg { u32 value_offset; u32 size; @@ -53,33 +37,12 @@ struct sunxi_sid { u32 value_offset; }; -/* We read the entire key, due to a 32 bit read alignment requirement. Since we - * want to return the requested byte, this results in somewhat slower code and - * uses 4 times more reads as needed but keeps code simpler. Since the SID is - * only very rarely probed, this is not really an issue. - */ -static u8 sunxi_sid_read_byte(const struct sunxi_sid *sid, - const unsigned int offset) -{ - u32 sid_key; - - sid_key = ioread32be(sid->base + round_down(offset, 4)); - sid_key >>= (offset % 4) * 8; - - return sid_key; /* Only return the last byte */ -} - static int sunxi_sid_read(void *context, unsigned int offset, void *val, size_t bytes) { struct sunxi_sid *sid = context; - u8 *buf = val; - /* Offset the read operation to the real position of SID */ - offset += sid->value_offset; - - while (bytes--) - *buf++ = sunxi_sid_read_byte(sid, offset++); + memcpy_fromio(val, sid->base + sid->value_offset + offset, bytes); return 0; } @@ -115,36 +78,34 @@ static int sun8i_sid_register_readout(const struct sunxi_sid *sid, * to be not reliable at all. * Read by the registers instead. */ -static int sun8i_sid_read_byte_by_reg(const struct sunxi_sid *sid, - const unsigned int offset, - u8 *out) -{ - u32 word; - int ret; - - ret = sun8i_sid_register_readout(sid, offset & ~0x03, &word); - - if (ret) - return ret; - - *out = (word >> ((offset & 0x3) * 8)) & 0xff; - - return 0; -} - static int sun8i_sid_read_by_reg(void *context, unsigned int offset, void *val, size_t bytes) { struct sunxi_sid *sid = context; - u8 *buf = val; + u32 word; int ret; - while (bytes--) { - ret = sun8i_sid_read_byte_by_reg(sid, offset++, buf++); + /* .stride = 4 so offset is guaranteed to be aligned */ + while (bytes >= 4) { + ret = sun8i_sid_register_readout(sid, offset, val); if (ret) return ret; + + val += 4; + offset += 4; + bytes -= 4; } + if (!bytes) + return 0; + + /* Handle any trailing bytes */ + ret = sun8i_sid_register_readout(sid, offset, &word); + if (ret) + return ret; + + memcpy(val, &word, bytes); + return 0; } @@ -152,9 +113,10 @@ static int sunxi_sid_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct resource *res; + struct nvmem_config *nvmem_cfg; struct nvmem_device *nvmem; struct sunxi_sid *sid; - int i, size; + int size; char *randomness; const struct sunxi_sid_cfg *cfg; @@ -174,14 +136,23 @@ static int sunxi_sid_probe(struct platform_device *pdev) size = cfg->size; - econfig.size = size; - econfig.dev = dev; + nvmem_cfg = devm_kzalloc(dev, sizeof(*nvmem_cfg), GFP_KERNEL); + if (!nvmem_cfg) + return -ENOMEM; + + nvmem_cfg->dev = dev; + nvmem_cfg->name = "sunxi-sid"; + nvmem_cfg->read_only = true; + nvmem_cfg->size = cfg->size; + nvmem_cfg->word_size = 1; + nvmem_cfg->stride = 4; + nvmem_cfg->priv = sid; if (cfg->need_register_readout) - econfig.reg_read = sun8i_sid_read_by_reg; + nvmem_cfg->reg_read = sun8i_sid_read_by_reg; else - econfig.reg_read = sunxi_sid_read; - econfig.priv = sid; - nvmem = devm_nvmem_register(dev, &econfig); + nvmem_cfg->reg_read = sunxi_sid_read; + + nvmem = devm_nvmem_register(dev, nvmem_cfg); if (IS_ERR(nvmem)) return PTR_ERR(nvmem); @@ -189,9 +160,7 @@ static int sunxi_sid_probe(struct platform_device *pdev) 
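One sunxi_sid change worth calling out: the old code registered a file-scope static struct nvmem_config that probe mutated in place, which is fragile as soon as more than one SID instance (or a deferred re-probe) is possible, so the patch devm-allocates one config per device instead. A minimal sketch of that pattern, using a hypothetical "foo" driver rather than the sunxi code itself:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/nvmem-provider.h>
#include <linux/platform_device.h>
#include <linux/string.h>

static int foo_reg_read(void *ctx, unsigned int off, void *val, size_t bytes)
{
        memset(val, 0, bytes);  /* stand-in for real MMIO reads */
        return 0;
}

static int foo_probe(struct platform_device *pdev)
{
        struct nvmem_config *cfg;
        struct nvmem_device *nvmem;

        /* one config per device, freed automatically on detach */
        cfg = devm_kzalloc(&pdev->dev, sizeof(*cfg), GFP_KERNEL);
        if (!cfg)
                return -ENOMEM;

        cfg->dev = &pdev->dev;
        cfg->name = "foo-nvmem";
        cfg->read_only = true;
        cfg->size = 16;
        cfg->stride = 4;
        cfg->word_size = 1;
        cfg->reg_read = foo_reg_read;

        nvmem = devm_nvmem_register(&pdev->dev, cfg);
        return PTR_ERR_OR_ZERO(nvmem);
}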
if (!randomness) return -ENOMEM; - for (i = 0; i < size; i++) - econfig.reg_read(sid, i, &randomness[i], 1); - + nvmem_cfg->reg_read(sid, 0, randomness, size); add_device_randomness(randomness, size); kfree(randomness); @@ -219,11 +188,19 @@ static const struct sunxi_sid_cfg sun50i_a64_cfg = { .size = 0x100, }; +static const struct sunxi_sid_cfg sun50i_h6_cfg = { + .value_offset = 0x200, + .size = 0x200, +}; + static const struct of_device_id sunxi_sid_of_match[] = { { .compatible = "allwinner,sun4i-a10-sid", .data = &sun4i_a10_cfg }, { .compatible = "allwinner,sun7i-a20-sid", .data = &sun7i_a20_cfg }, + { .compatible = "allwinner,sun8i-a83t-sid", .data = &sun50i_a64_cfg }, { .compatible = "allwinner,sun8i-h3-sid", .data = &sun8i_h3_cfg }, { .compatible = "allwinner,sun50i-a64-sid", .data = &sun50i_a64_cfg }, + { .compatible = "allwinner,sun50i-h5-sid", .data = &sun50i_a64_cfg }, + { .compatible = "allwinner,sun50i-h6-sid", .data = &sun50i_h6_cfg }, {/* sentinel */}, }; MODULE_DEVICE_TABLE(of, sunxi_sid_of_match); diff --git a/drivers/parport/ieee1284.c b/drivers/parport/ieee1284.c index f12b9da69255..90fb73575495 100644 --- a/drivers/parport/ieee1284.c +++ b/drivers/parport/ieee1284.c @@ -722,7 +722,7 @@ ssize_t parport_read (struct parport *port, void *buffer, size_t len) if (parport_negotiate (port, IEEE1284_MODE_NIBBLE)) { return -EIO; } - /* fall through to NIBBLE */ + /* fall through - to NIBBLE */ case IEEE1284_MODE_NIBBLE: DPRINTK (KERN_DEBUG "%s: Using nibble mode\n", port->name); fn = port->ops->nibble_read_data; diff --git a/drivers/parport/parport_cs.c b/drivers/parport/parport_cs.c index e9b52e4a4648..e77044c2bf62 100644 --- a/drivers/parport/parport_cs.c +++ b/drivers/parport/parport_cs.c @@ -158,8 +158,9 @@ static int parport_config(struct pcmcia_device *link) return 0; failed: - parport_cs_release(link); - return -ENODEV; + parport_cs_release(link); + kfree(link->priv); + return -ENODEV; } /* parport_config */ static void parport_cs_release(struct pcmcia_device *link) diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c index 71f094c9ec68..f3585777324c 100644 --- a/drivers/slimbus/qcom-ngd-ctrl.c +++ b/drivers/slimbus/qcom-ngd-ctrl.c @@ -1342,6 +1342,10 @@ static int of_qcom_slim_ngd_register(struct device *parent, return -ENOMEM; ngd->pdev = platform_device_alloc(QCOM_SLIM_NGD_DRV_NAME, id); + if (!ngd->pdev) { + kfree(ngd); + return -ENOMEM; + } ngd->id = id; ngd->pdev->dev.parent = parent; ngd->pdev->driver_override = QCOM_SLIM_NGD_DRV_NAME; diff --git a/drivers/soundwire/Kconfig b/drivers/soundwire/Kconfig index 19c8efb9a5ee..53b55b79c4af 100644 --- a/drivers/soundwire/Kconfig +++ b/drivers/soundwire/Kconfig @@ -4,7 +4,7 @@ menuconfig SOUNDWIRE bool "SoundWire support" - ---help--- + help SoundWire is a 2-Pin interface with data and clock line ratified by the MIPI Alliance. SoundWire is used for transporting data typically related to audio functions. SoundWire interface is @@ -28,7 +28,7 @@ config SOUNDWIRE_INTEL select SOUNDWIRE_CADENCE select SOUNDWIRE_BUS depends on X86 && ACPI && SND_SOC - ---help--- + help SoundWire Intel Master driver. 
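The qcom-ngd-ctrl fix above is a textbook unchecked-allocation bug: platform_device_alloc() can return NULL, and the error path must also unwind the kzalloc that preceded it. The general shape, with hypothetical names (register_child and "example-child" are illustrative, not from the patch):

#include <linux/platform_device.h>
#include <linux/slab.h>

struct child {
        struct platform_device *pdev;
};

static int register_child(struct device *parent, int id)
{
        struct child *c;
        int ret;

        c = kzalloc(sizeof(*c), GFP_KERNEL);
        if (!c)
                return -ENOMEM;

        c->pdev = platform_device_alloc("example-child", id);
        if (!c->pdev) {                 /* the check the patch adds */
                kfree(c);
                return -ENOMEM;
        }

        c->pdev->dev.parent = parent;
        ret = platform_device_add(c->pdev);
        if (ret) {
                platform_device_put(c->pdev);   /* drops the device ref */
                kfree(c);
        }
        return ret;
}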
If you have an Intel platform which has a SoundWire Master then enable this config option to get the SoundWire support for that diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c index 1cbfedfc20ef..aac35fc3cf22 100644 --- a/drivers/soundwire/bus.c +++ b/drivers/soundwire/bus.c @@ -21,12 +21,12 @@ int sdw_add_bus_master(struct sdw_bus *bus) int ret; if (!bus->dev) { - pr_err("SoundWire bus has no device"); + pr_err("SoundWire bus has no device\n"); return -ENODEV; } if (!bus->ops) { - dev_err(bus->dev, "SoundWire Bus ops are not set"); + dev_err(bus->dev, "SoundWire Bus ops are not set\n"); return -EINVAL; } @@ -43,13 +43,14 @@ int sdw_add_bus_master(struct sdw_bus *bus) if (bus->ops->read_prop) { ret = bus->ops->read_prop(bus); if (ret < 0) { - dev_err(bus->dev, "Bus read properties failed:%d", ret); + dev_err(bus->dev, + "Bus read properties failed:%d\n", ret); return ret; } } /* - * Device numbers in SoundWire are 0 thru 15. Enumeration device + * Device numbers in SoundWire are 0 through 15. Enumeration device * number (0), Broadcast device number (15), Group numbers (12 and * 13) and Master device number (14) are not used for assignment so * mask these and other higher bits. @@ -172,7 +173,8 @@ static inline int do_transfer(struct sdw_bus *bus, struct sdw_msg *msg) } static inline int do_transfer_defer(struct sdw_bus *bus, - struct sdw_msg *msg, struct sdw_defer *defer) + struct sdw_msg *msg, + struct sdw_defer *defer) { int retry = bus->prop.err_threshold; enum sdw_command_response resp; @@ -224,7 +226,7 @@ int sdw_transfer(struct sdw_bus *bus, struct sdw_msg *msg) ret = do_transfer(bus, msg); if (ret != 0 && ret != -ENODATA) dev_err(bus->dev, "trf on Slave %d failed:%d\n", - msg->dev_num, ret); + msg->dev_num, ret); if (msg->page) sdw_reset_page(bus, msg->dev_num); @@ -243,7 +245,7 @@ int sdw_transfer(struct sdw_bus *bus, struct sdw_msg *msg) * Caller needs to hold the msg_lock lock while calling this */ int sdw_transfer_defer(struct sdw_bus *bus, struct sdw_msg *msg, - struct sdw_defer *defer) + struct sdw_defer *defer) { int ret; @@ -253,7 +255,7 @@ int sdw_transfer_defer(struct sdw_bus *bus, struct sdw_msg *msg, ret = do_transfer_defer(bus, msg, defer); if (ret != 0 && ret != -ENODATA) dev_err(bus->dev, "Defer trf on Slave %d failed:%d\n", - msg->dev_num, ret); + msg->dev_num, ret); if (msg->page) sdw_reset_page(bus, msg->dev_num); @@ -261,9 +263,8 @@ int sdw_transfer_defer(struct sdw_bus *bus, struct sdw_msg *msg, return ret; } - int sdw_fill_msg(struct sdw_msg *msg, struct sdw_slave *slave, - u32 addr, size_t count, u16 dev_num, u8 flags, u8 *buf) + u32 addr, size_t count, u16 dev_num, u8 flags, u8 *buf) { memset(msg, 0, sizeof(*msg)); msg->addr = addr; /* addr is 16 bit and truncated here */ @@ -271,8 +272,6 @@ int sdw_fill_msg(struct sdw_msg *msg, struct sdw_slave *slave, msg->dev_num = dev_num; msg->flags = flags; msg->buf = buf; - msg->ssp_sync = false; - msg->page = false; if (addr < SDW_REG_NO_PAGE) { /* no paging area */ return 0; @@ -284,7 +283,7 @@ int sdw_fill_msg(struct sdw_msg *msg, struct sdw_slave *slave, if (addr < SDW_REG_OPTIONAL_PAGE) { /* 32k but no page */ if (slave && !slave->prop.paging_support) return 0; - /* no need for else as that will fall thru to paging */ + /* no need for else as that will fall-through to paging */ } /* paging mandatory */ @@ -298,7 +297,7 @@ int sdw_fill_msg(struct sdw_msg *msg, struct sdw_slave *slave, return -EINVAL; } else if (!slave->prop.paging_support) { dev_err(&slave->dev, - "address %x needs paging but no 
support", addr); + "address %x needs paging but no support\n", addr); return -EINVAL; } @@ -323,7 +322,7 @@ int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val) int ret; ret = sdw_fill_msg(&msg, slave, addr, count, - slave->dev_num, SDW_MSG_FLAG_READ, val); + slave->dev_num, SDW_MSG_FLAG_READ, val); if (ret < 0) return ret; @@ -351,7 +350,7 @@ int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val) int ret; ret = sdw_fill_msg(&msg, slave, addr, count, - slave->dev_num, SDW_MSG_FLAG_WRITE, val); + slave->dev_num, SDW_MSG_FLAG_WRITE, val); if (ret < 0) return ret; @@ -393,7 +392,6 @@ EXPORT_SYMBOL(sdw_read); int sdw_write(struct sdw_slave *slave, u32 addr, u8 value) { return sdw_nwrite(slave, addr, 1, &value); - } EXPORT_SYMBOL(sdw_write); @@ -416,11 +414,10 @@ static struct sdw_slave *sdw_get_slave(struct sdw_bus *bus, int i) static int sdw_compare_devid(struct sdw_slave *slave, struct sdw_slave_id id) { - - if ((slave->id.unique_id != id.unique_id) || - (slave->id.mfg_id != id.mfg_id) || - (slave->id.part_id != id.part_id) || - (slave->id.class_id != id.class_id)) + if (slave->id.unique_id != id.unique_id || + slave->id.mfg_id != id.mfg_id || + slave->id.part_id != id.part_id || + slave->id.class_id != id.class_id) return -ENODEV; return 0; @@ -457,24 +454,23 @@ static int sdw_assign_device_num(struct sdw_slave *slave) dev_num = sdw_get_device_num(slave); mutex_unlock(&slave->bus->bus_lock); if (dev_num < 0) { - dev_err(slave->bus->dev, "Get dev_num failed: %d", - dev_num); + dev_err(slave->bus->dev, "Get dev_num failed: %d\n", + dev_num); return dev_num; } } else { dev_info(slave->bus->dev, - "Slave already registered dev_num:%d", - slave->dev_num); + "Slave already registered dev_num:%d\n", + slave->dev_num); /* Clear the slave->dev_num to transfer message on device 0 */ dev_num = slave->dev_num; slave->dev_num = 0; - } ret = sdw_write(slave, SDW_SCP_DEVNUMBER, dev_num); if (ret < 0) { - dev_err(&slave->dev, "Program device_num failed: %d", ret); + dev_err(&slave->dev, "Program device_num failed: %d\n", ret); return ret; } @@ -485,9 +481,9 @@ static int sdw_assign_device_num(struct sdw_slave *slave) } void sdw_extract_slave_id(struct sdw_bus *bus, - u64 addr, struct sdw_slave_id *id) + u64 addr, struct sdw_slave_id *id) { - dev_dbg(bus->dev, "SDW Slave Addr: %llx", addr); + dev_dbg(bus->dev, "SDW Slave Addr: %llx\n", addr); /* * Spec definition @@ -507,10 +503,9 @@ void sdw_extract_slave_id(struct sdw_bus *bus, id->class_id = addr & GENMASK(7, 0); dev_dbg(bus->dev, - "SDW Slave class_id %x, part_id %x, mfg_id %x, unique_id %x, version %x", + "SDW Slave class_id %x, part_id %x, mfg_id %x, unique_id %x, version %x\n", id->class_id, id->part_id, id->mfg_id, id->unique_id, id->sdw_version); - } static int sdw_program_device_num(struct sdw_bus *bus) @@ -525,7 +520,7 @@ static int sdw_program_device_num(struct sdw_bus *bus) /* No Slave, so use raw xfer api */ ret = sdw_fill_msg(&msg, NULL, SDW_SCP_DEVID_0, - SDW_NUM_DEV_ID_REGISTERS, 0, SDW_MSG_FLAG_READ, buf); + SDW_NUM_DEV_ID_REGISTERS, 0, SDW_MSG_FLAG_READ, buf); if (ret < 0) return ret; @@ -564,7 +559,7 @@ static int sdw_program_device_num(struct sdw_bus *bus) ret = sdw_assign_device_num(slave); if (ret) { dev_err(slave->bus->dev, - "Assign dev_num failed:%d", + "Assign dev_num failed:%d\n", ret); return ret; } @@ -573,9 +568,9 @@ static int sdw_program_device_num(struct sdw_bus *bus) } } - if (found == false) { + if (!found) { /* TODO: Park this device in Group 13 */ - dev_err(bus->dev, "Slave Entry not 
found"); + dev_err(bus->dev, "Slave Entry not found\n"); } count++; @@ -592,7 +587,7 @@ static int sdw_program_device_num(struct sdw_bus *bus) } static void sdw_modify_slave_status(struct sdw_slave *slave, - enum sdw_slave_status status) + enum sdw_slave_status status) { mutex_lock(&slave->bus->bus_lock); slave->status = status; @@ -600,7 +595,7 @@ static void sdw_modify_slave_status(struct sdw_slave *slave, } int sdw_configure_dpn_intr(struct sdw_slave *slave, - int port, bool enable, int mask) + int port, bool enable, int mask) { u32 addr; int ret; @@ -620,7 +615,7 @@ int sdw_configure_dpn_intr(struct sdw_slave *slave, ret = sdw_update(slave, addr, (mask | SDW_DPN_INT_PORT_READY), val); if (ret < 0) dev_err(slave->bus->dev, - "SDW_DPN_INTMASK write failed:%d", val); + "SDW_DPN_INTMASK write failed:%d\n", val); return ret; } @@ -644,7 +639,7 @@ static int sdw_initialize_slave(struct sdw_slave *slave) ret = sdw_update(slave, SDW_SCP_INTMASK1, val, val); if (ret < 0) { dev_err(slave->bus->dev, - "SDW_SCP_INTMASK1 write failed:%d", ret); + "SDW_SCP_INTMASK1 write failed:%d\n", ret); return ret; } @@ -659,7 +654,7 @@ static int sdw_initialize_slave(struct sdw_slave *slave) ret = sdw_update(slave, SDW_DP0_INTMASK, val, val); if (ret < 0) { dev_err(slave->bus->dev, - "SDW_DP0_INTMASK read failed:%d", ret); + "SDW_DP0_INTMASK read failed:%d\n", ret); return val; } @@ -674,14 +669,13 @@ static int sdw_handle_dp0_interrupt(struct sdw_slave *slave, u8 *slave_status) status = sdw_read(slave, SDW_DP0_INT); if (status < 0) { dev_err(slave->bus->dev, - "SDW_DP0_INT read failed:%d", status); + "SDW_DP0_INT read failed:%d\n", status); return status; } do { - if (status & SDW_DP0_INT_TEST_FAIL) { - dev_err(&slave->dev, "Test fail for port 0"); + dev_err(&slave->dev, "Test fail for port 0\n"); clear |= SDW_DP0_INT_TEST_FAIL; } @@ -696,7 +690,7 @@ static int sdw_handle_dp0_interrupt(struct sdw_slave *slave, u8 *slave_status) } if (status & SDW_DP0_INT_BRA_FAILURE) { - dev_err(&slave->dev, "BRA failed"); + dev_err(&slave->dev, "BRA failed\n"); clear |= SDW_DP0_INT_BRA_FAILURE; } @@ -712,7 +706,7 @@ static int sdw_handle_dp0_interrupt(struct sdw_slave *slave, u8 *slave_status) ret = sdw_write(slave, SDW_DP0_INT, clear); if (ret < 0) { dev_err(slave->bus->dev, - "SDW_DP0_INT write failed:%d", ret); + "SDW_DP0_INT write failed:%d\n", ret); return ret; } @@ -720,7 +714,7 @@ static int sdw_handle_dp0_interrupt(struct sdw_slave *slave, u8 *slave_status) status2 = sdw_read(slave, SDW_DP0_INT); if (status2 < 0) { dev_err(slave->bus->dev, - "SDW_DP0_INT read failed:%d", status2); + "SDW_DP0_INT read failed:%d\n", status2); return status2; } status &= status2; @@ -731,13 +725,13 @@ static int sdw_handle_dp0_interrupt(struct sdw_slave *slave, u8 *slave_status) } while (status != 0 && count < SDW_READ_INTR_CLEAR_RETRY); if (count == SDW_READ_INTR_CLEAR_RETRY) - dev_warn(slave->bus->dev, "Reached MAX_RETRY on DP0 read"); + dev_warn(slave->bus->dev, "Reached MAX_RETRY on DP0 read\n"); return ret; } static int sdw_handle_port_interrupt(struct sdw_slave *slave, - int port, u8 *slave_status) + int port, u8 *slave_status) { u8 clear = 0, impl_int_mask; int status, status2, ret, count = 0; @@ -750,15 +744,14 @@ static int sdw_handle_port_interrupt(struct sdw_slave *slave, status = sdw_read(slave, addr); if (status < 0) { dev_err(slave->bus->dev, - "SDW_DPN_INT read failed:%d", status); + "SDW_DPN_INT read failed:%d\n", status); return status; } do { - if (status & SDW_DPN_INT_TEST_FAIL) { - dev_err(&slave->dev, "Test fail for 
port:%d", port); + dev_err(&slave->dev, "Test fail for port:%d\n", port); clear |= SDW_DPN_INT_TEST_FAIL; } @@ -774,7 +767,6 @@ static int sdw_handle_port_interrupt(struct sdw_slave *slave, impl_int_mask = SDW_DPN_INT_IMPDEF1 | SDW_DPN_INT_IMPDEF2 | SDW_DPN_INT_IMPDEF3; - if (status & impl_int_mask) { clear |= impl_int_mask; *slave_status = clear; @@ -784,7 +776,7 @@ static int sdw_handle_port_interrupt(struct sdw_slave *slave, ret = sdw_write(slave, addr, clear); if (ret < 0) { dev_err(slave->bus->dev, - "SDW_DPN_INT write failed:%d", ret); + "SDW_DPN_INT write failed:%d\n", ret); return ret; } @@ -792,7 +784,7 @@ static int sdw_handle_port_interrupt(struct sdw_slave *slave, status2 = sdw_read(slave, addr); if (status2 < 0) { dev_err(slave->bus->dev, - "SDW_DPN_INT read failed:%d", status2); + "SDW_DPN_INT read failed:%d\n", status2); return status2; } status &= status2; @@ -820,17 +812,18 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave) sdw_modify_slave_status(slave, SDW_SLAVE_ALERT); /* Read Instat 1, Instat 2 and Instat 3 registers */ - buf = ret = sdw_read(slave, SDW_SCP_INT1); + ret = sdw_read(slave, SDW_SCP_INT1); if (ret < 0) { dev_err(slave->bus->dev, - "SDW_SCP_INT1 read failed:%d", ret); + "SDW_SCP_INT1 read failed:%d\n", ret); return ret; } + buf = ret; ret = sdw_nread(slave, SDW_SCP_INTSTAT2, 2, buf2); if (ret < 0) { dev_err(slave->bus->dev, - "SDW_SCP_INT2/3 read failed:%d", ret); + "SDW_SCP_INT2/3 read failed:%d\n", ret); return ret; } @@ -840,12 +833,12 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave) * interrupt */ if (buf & SDW_SCP_INT1_PARITY) { - dev_err(&slave->dev, "Parity error detected"); + dev_err(&slave->dev, "Parity error detected\n"); clear |= SDW_SCP_INT1_PARITY; } if (buf & SDW_SCP_INT1_BUS_CLASH) { - dev_err(&slave->dev, "Bus clash error detected"); + dev_err(&slave->dev, "Bus clash error detected\n"); clear |= SDW_SCP_INT1_BUS_CLASH; } @@ -869,8 +862,7 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave) port = port >> SDW_REG_SHIFT(SDW_SCP_INT1_PORT0_3); for_each_set_bit(bit, &port, 8) { sdw_handle_port_interrupt(slave, bit, - &port_status[bit]); - + &port_status[bit]); } /* Check if cascade 2 interrupt is present */ @@ -898,11 +890,11 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave) } /* Update the Slave driver */ - if (slave_notify && (slave->ops) && - (slave->ops->interrupt_callback)) { + if (slave_notify && slave->ops && + slave->ops->interrupt_callback) { slave_intr.control_port = clear; memcpy(slave_intr.port, &port_status, - sizeof(slave_intr.port)); + sizeof(slave_intr.port)); slave->ops->interrupt_callback(slave, &slave_intr); } @@ -911,7 +903,7 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave) ret = sdw_write(slave, SDW_SCP_INT1, clear); if (ret < 0) { dev_err(slave->bus->dev, - "SDW_SCP_INT1 write failed:%d", ret); + "SDW_SCP_INT1 write failed:%d\n", ret); return ret; } @@ -919,17 +911,18 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave) * Read status again to ensure no new interrupts arrived * while servicing interrupts. 
*/ - _buf = ret = sdw_read(slave, SDW_SCP_INT1); + ret = sdw_read(slave, SDW_SCP_INT1); if (ret < 0) { dev_err(slave->bus->dev, - "SDW_SCP_INT1 read failed:%d", ret); + "SDW_SCP_INT1 read failed:%d\n", ret); return ret; } + _buf = ret; ret = sdw_nread(slave, SDW_SCP_INTSTAT2, 2, _buf2); if (ret < 0) { dev_err(slave->bus->dev, - "SDW_SCP_INT2/3 read failed:%d", ret); + "SDW_SCP_INT2/3 read failed:%d\n", ret); return ret; } @@ -949,15 +942,15 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave) } while (stat != 0 && count < SDW_READ_INTR_CLEAR_RETRY); if (count == SDW_READ_INTR_CLEAR_RETRY) - dev_warn(slave->bus->dev, "Reached MAX_RETRY on alert read"); + dev_warn(slave->bus->dev, "Reached MAX_RETRY on alert read\n"); return ret; } static int sdw_update_slave_status(struct sdw_slave *slave, - enum sdw_slave_status status) + enum sdw_slave_status status) { - if ((slave->ops) && (slave->ops->update_status)) + if (slave->ops && slave->ops->update_status) return slave->ops->update_status(slave, status); return 0; @@ -969,7 +962,7 @@ static int sdw_update_slave_status(struct sdw_slave *slave, * @status: Status for all Slave(s) */ int sdw_handle_slave_status(struct sdw_bus *bus, - enum sdw_slave_status status[]) + enum sdw_slave_status status[]) { enum sdw_slave_status prev_status; struct sdw_slave *slave; @@ -978,7 +971,7 @@ int sdw_handle_slave_status(struct sdw_bus *bus, if (status[0] == SDW_SLAVE_ATTACHED) { ret = sdw_program_device_num(bus); if (ret) - dev_err(bus->dev, "Slave attach failed: %d", ret); + dev_err(bus->dev, "Slave attach failed: %d\n", ret); } /* Continue to check other slave statuses */ @@ -1006,7 +999,7 @@ int sdw_handle_slave_status(struct sdw_bus *bus, ret = sdw_handle_slave_alerts(slave); if (ret) dev_err(bus->dev, - "Slave %d alert handling failed: %d", + "Slave %d alert handling failed: %d\n", i, ret); break; @@ -1023,22 +1016,21 @@ int sdw_handle_slave_status(struct sdw_bus *bus, ret = sdw_initialize_slave(slave); if (ret) dev_err(bus->dev, - "Slave %d initialization failed: %d", + "Slave %d initialization failed: %d\n", i, ret); break; default: - dev_err(bus->dev, "Invalid slave %d status:%d", - i, status[i]); + dev_err(bus->dev, "Invalid slave %d status:%d\n", + i, status[i]); break; } ret = sdw_update_slave_status(slave, status[i]); if (ret) dev_err(slave->bus->dev, - "Update Slave status failed:%d", ret); - + "Update Slave status failed:%d\n", ret); } return ret; diff --git a/drivers/soundwire/bus.h b/drivers/soundwire/bus.h index c77de05b8100..3048ca153f22 100644 --- a/drivers/soundwire/bus.h +++ b/drivers/soundwire/bus.h @@ -1,5 +1,5 @@ -// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) -// Copyright(c) 2015-17 Intel Corporation. +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */ +/* Copyright(c) 2015-17 Intel Corporation. 
*/ #ifndef __SDW_BUS_H #define __SDW_BUS_H @@ -16,7 +16,7 @@ static inline int sdw_acpi_find_slaves(struct sdw_bus *bus) #endif void sdw_extract_slave_id(struct sdw_bus *bus, - u64 addr, struct sdw_slave_id *id); + u64 addr, struct sdw_slave_id *id); enum { SDW_MSG_FLAG_READ = 0, @@ -116,19 +116,19 @@ struct sdw_master_runtime { }; struct sdw_dpn_prop *sdw_get_slave_dpn_prop(struct sdw_slave *slave, - enum sdw_data_direction direction, - unsigned int port_num); + enum sdw_data_direction direction, + unsigned int port_num); int sdw_configure_dpn_intr(struct sdw_slave *slave, int port, - bool enable, int mask); + bool enable, int mask); int sdw_transfer(struct sdw_bus *bus, struct sdw_msg *msg); int sdw_transfer_defer(struct sdw_bus *bus, struct sdw_msg *msg, - struct sdw_defer *defer); + struct sdw_defer *defer); #define SDW_READ_INTR_CLEAR_RETRY 10 int sdw_fill_msg(struct sdw_msg *msg, struct sdw_slave *slave, - u32 addr, size_t count, u16 dev_num, u8 flags, u8 *buf); + u32 addr, size_t count, u16 dev_num, u8 flags, u8 *buf); /* Read-Modify-Write Slave register */ static inline int diff --git a/drivers/soundwire/bus_type.c b/drivers/soundwire/bus_type.c index 283b2832728e..2655602f0cfb 100644 --- a/drivers/soundwire/bus_type.c +++ b/drivers/soundwire/bus_type.c @@ -107,7 +107,7 @@ static int sdw_drv_probe(struct device *dev) slave->prop.clk_stop_timeout = 300; slave->bus->clk_stop_timeout = max_t(u32, slave->bus->clk_stop_timeout, - slave->prop.clk_stop_timeout); + slave->prop.clk_stop_timeout); return 0; } @@ -148,7 +148,7 @@ int __sdw_register_driver(struct sdw_driver *drv, struct module *owner) if (!drv->probe) { pr_err("driver %s didn't provide SDW probe routine\n", - drv->name); + drv->name); return -EINVAL; } diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c index cb6a331f448a..682789bb8ab3 100644 --- a/drivers/soundwire/cadence_master.c +++ b/drivers/soundwire/cadence_master.c @@ -42,7 +42,6 @@ #define CDNS_MCP_CONTROL_CMD_ACCEPT BIT(1) #define CDNS_MCP_CONTROL_BLOCK_WAKEUP BIT(0) - #define CDNS_MCP_CMDCTRL 0x8 #define CDNS_MCP_SSPSTAT 0xC #define CDNS_MCP_FRAME_SHAPE 0x10 @@ -226,9 +225,9 @@ static int cdns_clear_bit(struct sdw_cdns *cdns, int offset, u32 value) /* * IO Calls */ -static enum sdw_command_response cdns_fill_msg_resp( - struct sdw_cdns *cdns, - struct sdw_msg *msg, int count, int offset) +static enum sdw_command_response +cdns_fill_msg_resp(struct sdw_cdns *cdns, + struct sdw_msg *msg, int count, int offset) { int nack = 0, no_ack = 0; int i; @@ -263,7 +262,7 @@ static enum sdw_command_response cdns_fill_msg_resp( static enum sdw_command_response _cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd, - int offset, int count, bool defer) + int offset, int count, bool defer) { unsigned long time; u32 base, i, data; @@ -296,7 +295,7 @@ _cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd, /* wait for timeout or response */ time = wait_for_completion_timeout(&cdns->tx_complete, - msecs_to_jiffies(CDNS_TX_TIMEOUT)); + msecs_to_jiffies(CDNS_TX_TIMEOUT)); if (!time) { dev_err(cdns->dev, "IO transfer timed out\n"); msg->len = 0; @@ -306,8 +305,8 @@ _cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd, return cdns_fill_msg_resp(cdns, msg, count, offset); } -static enum sdw_command_response cdns_program_scp_addr( - struct sdw_cdns *cdns, struct sdw_msg *msg) +static enum sdw_command_response +cdns_program_scp_addr(struct sdw_cdns *cdns, struct sdw_msg *msg) { int nack = 0, no_ack = 0; unsigned long 
time; @@ -336,7 +335,7 @@ static enum sdw_command_response cdns_program_scp_addr( cdns_writel(cdns, base, data[1]); time = wait_for_completion_timeout(&cdns->tx_complete, - msecs_to_jiffies(CDNS_TX_TIMEOUT)); + msecs_to_jiffies(CDNS_TX_TIMEOUT)); if (!time) { dev_err(cdns->dev, "SCP Msg trf timed out\n"); msg->len = 0; @@ -347,10 +346,10 @@ static enum sdw_command_response cdns_program_scp_addr( for (i = 0; i < 2; i++) { if (!(cdns->response_buf[i] & CDNS_MCP_RESP_ACK)) { no_ack = 1; - dev_err(cdns->dev, "Program SCP Ack not received"); + dev_err(cdns->dev, "Program SCP Ack not received\n"); if (cdns->response_buf[i] & CDNS_MCP_RESP_NACK) { nack = 1; - dev_err(cdns->dev, "Program SCP NACK received"); + dev_err(cdns->dev, "Program SCP NACK received\n"); } } } @@ -358,11 +357,11 @@ static enum sdw_command_response cdns_program_scp_addr( /* For NACK, NO ack, don't return err if we are in Broadcast mode */ if (nack) { dev_err(cdns->dev, - "SCP_addrpage NACKed for Slave %d", msg->dev_num); + "SCP_addrpage NACKed for Slave %d\n", msg->dev_num); return SDW_CMD_FAIL; } else if (no_ack) { dev_dbg(cdns->dev, - "SCP_addrpage ignored for Slave %d", msg->dev_num); + "SCP_addrpage ignored for Slave %d\n", msg->dev_num); return SDW_CMD_IGNORED; } @@ -410,7 +409,7 @@ cdns_xfer_msg(struct sdw_bus *bus, struct sdw_msg *msg) for (i = 0; i < msg->len / CDNS_MCP_CMD_LEN; i++) { ret = _cdns_xfer_msg(cdns, msg, cmd, i * CDNS_MCP_CMD_LEN, - CDNS_MCP_CMD_LEN, false); + CDNS_MCP_CMD_LEN, false); if (ret < 0) goto exit; } @@ -419,7 +418,7 @@ cdns_xfer_msg(struct sdw_bus *bus, struct sdw_msg *msg) goto exit; ret = _cdns_xfer_msg(cdns, msg, cmd, i * CDNS_MCP_CMD_LEN, - msg->len % CDNS_MCP_CMD_LEN, false); + msg->len % CDNS_MCP_CMD_LEN, false); exit: return ret; @@ -428,7 +427,7 @@ EXPORT_SYMBOL(cdns_xfer_msg); enum sdw_command_response cdns_xfer_msg_defer(struct sdw_bus *bus, - struct sdw_msg *msg, struct sdw_defer *defer) + struct sdw_msg *msg, struct sdw_defer *defer) { struct sdw_cdns *cdns = bus_to_cdns(bus); int cmd = 0, ret; @@ -483,7 +482,7 @@ static void cdns_read_response(struct sdw_cdns *cdns) } static int cdns_update_slave_status(struct sdw_cdns *cdns, - u32 slave0, u32 slave1) + u32 slave0, u32 slave1) { enum sdw_slave_status status[SDW_MAX_DEVICES + 1]; bool is_slave = false; @@ -526,8 +525,8 @@ static int cdns_update_slave_status(struct sdw_cdns *cdns, /* first check if Slave reported multiple status */ if (set_status > 1) { dev_warn(cdns->dev, - "Slave reported multiple Status: %d\n", - status[i]); + "Slave reported multiple Status: %d\n", + status[i]); /* * TODO: we need to reread the status here by * issuing a PING cmd @@ -566,15 +565,15 @@ irqreturn_t sdw_cdns_irq(int irq, void *dev_id) if (cdns->defer) { cdns_fill_msg_resp(cdns, cdns->defer->msg, - cdns->defer->length, 0); + cdns->defer->length, 0); complete(&cdns->defer->complete); cdns->defer = NULL; - } else + } else { complete(&cdns->tx_complete); + } } if (int_status & CDNS_MCP_INT_CTRL_CLASH) { - /* Slave is driving bit slot during control word */ dev_err_ratelimited(cdns->dev, "Bus clash for control word\n"); int_status |= CDNS_MCP_INT_CTRL_CLASH; @@ -592,7 +591,7 @@ irqreturn_t sdw_cdns_irq(int irq, void *dev_id) if (int_status & CDNS_MCP_INT_SLAVE_MASK) { /* Mask the Slave interrupt and wake thread */ cdns_updatel(cdns, CDNS_MCP_INTMASK, - CDNS_MCP_INT_SLAVE_MASK, 0); + CDNS_MCP_INT_SLAVE_MASK, 0); int_status &= ~CDNS_MCP_INT_SLAVE_MASK; ret = IRQ_WAKE_THREAD; @@ -625,7 +624,7 @@ irqreturn_t sdw_cdns_thread(int irq, void *dev_id) /* clear 
and unmask Slave interrupt now */ cdns_writel(cdns, CDNS_MCP_INTSTAT, CDNS_MCP_INT_SLAVE_MASK); cdns_updatel(cdns, CDNS_MCP_INTMASK, - CDNS_MCP_INT_SLAVE_MASK, CDNS_MCP_INT_SLAVE_MASK); + CDNS_MCP_INT_SLAVE_MASK, CDNS_MCP_INT_SLAVE_MASK); return IRQ_HANDLED; } @@ -639,9 +638,9 @@ static int _cdns_enable_interrupt(struct sdw_cdns *cdns) u32 mask; cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK0, - CDNS_MCP_SLAVE_INTMASK0_MASK); + CDNS_MCP_SLAVE_INTMASK0_MASK); cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK1, - CDNS_MCP_SLAVE_INTMASK1_MASK); + CDNS_MCP_SLAVE_INTMASK1_MASK); mask = CDNS_MCP_INT_SLAVE_RSVD | CDNS_MCP_INT_SLAVE_ALERT | CDNS_MCP_INT_SLAVE_ATTACH | CDNS_MCP_INT_SLAVE_NATTACH | @@ -663,17 +662,17 @@ int sdw_cdns_enable_interrupt(struct sdw_cdns *cdns) _cdns_enable_interrupt(cdns); ret = cdns_clear_bit(cdns, CDNS_MCP_CONFIG_UPDATE, - CDNS_MCP_CONFIG_UPDATE_BIT); + CDNS_MCP_CONFIG_UPDATE_BIT); if (ret < 0) - dev_err(cdns->dev, "Config update timedout"); + dev_err(cdns->dev, "Config update timedout\n"); return ret; } EXPORT_SYMBOL(sdw_cdns_enable_interrupt); static int cdns_allocate_pdi(struct sdw_cdns *cdns, - struct sdw_cdns_pdi **stream, - u32 num, u32 pdi_offset) + struct sdw_cdns_pdi **stream, + u32 num, u32 pdi_offset) { struct sdw_cdns_pdi *pdi; int i; @@ -701,7 +700,7 @@ static int cdns_allocate_pdi(struct sdw_cdns *cdns, * @config: Stream configurations */ int sdw_cdns_pdi_init(struct sdw_cdns *cdns, - struct sdw_cdns_stream_config config) + struct sdw_cdns_stream_config config) { struct sdw_cdns_streams *stream; int offset, i, ret; @@ -770,7 +769,7 @@ int sdw_cdns_pdi_init(struct sdw_cdns *cdns, cdns->num_ports += stream->num_pdi; cdns->ports = devm_kcalloc(cdns->dev, cdns->num_ports, - sizeof(*cdns->ports), GFP_KERNEL); + sizeof(*cdns->ports), GFP_KERNEL); if (!cdns->ports) { ret = -ENOMEM; return ret; @@ -796,7 +795,7 @@ int sdw_cdns_init(struct sdw_cdns *cdns) /* Exit clock stop */ ret = cdns_clear_bit(cdns, CDNS_MCP_CONTROL, - CDNS_MCP_CONTROL_CLK_STOP_CLR); + CDNS_MCP_CONTROL_CLK_STOP_CLR); if (ret < 0) { dev_err(cdns->dev, "Couldn't exit from clock stop\n"); return ret; @@ -816,7 +815,7 @@ int sdw_cdns_init(struct sdw_cdns *cdns) /* Set cmd accept mode */ cdns_updatel(cdns, CDNS_MCP_CONTROL, CDNS_MCP_CONTROL_CMD_ACCEPT, - CDNS_MCP_CONTROL_CMD_ACCEPT); + CDNS_MCP_CONTROL_CMD_ACCEPT); /* Configure mcp config */ val = cdns_readl(cdns, CDNS_MCP_CONFIG); @@ -853,7 +852,7 @@ int cdns_bus_conf(struct sdw_bus *bus, struct sdw_bus_params *params) int divider; if (!params->curr_dr_freq) { - dev_err(cdns->dev, "NULL curr_dr_freq"); + dev_err(cdns->dev, "NULL curr_dr_freq\n"); return -EINVAL; } @@ -873,7 +872,7 @@ int cdns_bus_conf(struct sdw_bus *bus, struct sdw_bus_params *params) EXPORT_SYMBOL(cdns_bus_conf); static int cdns_port_params(struct sdw_bus *bus, - struct sdw_port_params *p_params, unsigned int bank) + struct sdw_port_params *p_params, unsigned int bank) { struct sdw_cdns *cdns = bus_to_cdns(bus); int dpn_config = 0, dpn_config_off; @@ -898,8 +897,8 @@ static int cdns_port_params(struct sdw_bus *bus, } static int cdns_transport_params(struct sdw_bus *bus, - struct sdw_transport_params *t_params, - enum sdw_reg_bank bank) + struct sdw_transport_params *t_params, + enum sdw_reg_bank bank) { struct sdw_cdns *cdns = bus_to_cdns(bus); int dpn_offsetctrl = 0, dpn_offsetctrl_off; @@ -952,7 +951,7 @@ static int cdns_transport_params(struct sdw_bus *bus, } static int cdns_port_enable(struct sdw_bus *bus, - struct sdw_enable_ch *enable_ch, unsigned int bank) + struct sdw_enable_ch 
*enable_ch, unsigned int bank) { struct sdw_cdns *cdns = bus_to_cdns(bus); int dpn_chnen_off, ch_mask; @@ -988,7 +987,7 @@ int sdw_cdns_probe(struct sdw_cdns *cdns) EXPORT_SYMBOL(sdw_cdns_probe); int cdns_set_sdw_stream(struct snd_soc_dai *dai, - void *stream, bool pcm, int direction) + void *stream, bool pcm, int direction) { struct sdw_cdns *cdns = snd_soc_dai_get_drvdata(dai); struct sdw_cdns_dma_data *dma; @@ -1026,12 +1025,13 @@ EXPORT_SYMBOL(cdns_set_sdw_stream); * Find and return a free PDI for a given PDI array */ static struct sdw_cdns_pdi *cdns_find_pdi(struct sdw_cdns *cdns, - unsigned int num, struct sdw_cdns_pdi *pdi) + unsigned int num, + struct sdw_cdns_pdi *pdi) { int i; for (i = 0; i < num; i++) { - if (pdi[i].assigned == true) + if (pdi[i].assigned) continue; pdi[i].assigned = true; return &pdi[i]; @@ -1050,8 +1050,8 @@ static struct sdw_cdns_pdi *cdns_find_pdi(struct sdw_cdns *cdns, * @pdi: PDI to be used */ void sdw_cdns_config_stream(struct sdw_cdns *cdns, - struct sdw_cdns_port *port, - u32 ch, u32 dir, struct sdw_cdns_pdi *pdi) + struct sdw_cdns_port *port, + u32 ch, u32 dir, struct sdw_cdns_pdi *pdi) { u32 offset, val = 0; @@ -1076,13 +1076,13 @@ EXPORT_SYMBOL(sdw_cdns_config_stream); * @ch_count: Channel count */ static int cdns_get_num_pdi(struct sdw_cdns *cdns, - struct sdw_cdns_pdi *pdi, - unsigned int num, u32 ch_count) + struct sdw_cdns_pdi *pdi, + unsigned int num, u32 ch_count) { int i, pdis = 0; for (i = 0; i < num; i++) { - if (pdi[i].assigned == true) + if (pdi[i].assigned) continue; if (pdi[i].ch_count < ch_count) @@ -1139,8 +1139,8 @@ EXPORT_SYMBOL(sdw_cdns_get_stream); * @dir: Data direction */ int sdw_cdns_alloc_stream(struct sdw_cdns *cdns, - struct sdw_cdns_streams *stream, - struct sdw_cdns_port *port, u32 ch, u32 dir) + struct sdw_cdns_streams *stream, + struct sdw_cdns_port *port, u32 ch, u32 dir) { struct sdw_cdns_pdi *pdi = NULL; @@ -1167,7 +1167,7 @@ int sdw_cdns_alloc_stream(struct sdw_cdns *cdns, EXPORT_SYMBOL(sdw_cdns_alloc_stream); void sdw_cdns_shutdown(struct snd_pcm_substream *substream, - struct snd_soc_dai *dai) + struct snd_soc_dai *dai) { struct sdw_cdns_dma_data *dma; diff --git a/drivers/soundwire/cadence_master.h b/drivers/soundwire/cadence_master.h index eb902b19c5a4..fe2af62958b1 100644 --- a/drivers/soundwire/cadence_master.h +++ b/drivers/soundwire/cadence_master.h @@ -1,5 +1,5 @@ -// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) -// Copyright(c) 2015-17 Intel Corporation. +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */ +/* Copyright(c) 2015-17 Intel Corporation. 
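The SPDX hunk just above (and the matching ones in bus.h and intel.h) is not churn: the documented convention in Documentation/process/license-rules.rst is the C++-style // tag in .c sources but a C-style block comment in headers, which may be processed by tools that do not accept //. In short:

/* in a .c file */
// SPDX-License-Identifier: GPL-2.0

/* in a .h file */
/* SPDX-License-Identifier: GPL-2.0 */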
*/ #include <sound/soc.h> #ifndef __SDW_CADENCE_H @@ -160,24 +160,24 @@ irqreturn_t sdw_cdns_thread(int irq, void *dev_id); int sdw_cdns_init(struct sdw_cdns *cdns); int sdw_cdns_pdi_init(struct sdw_cdns *cdns, - struct sdw_cdns_stream_config config); + struct sdw_cdns_stream_config config); int sdw_cdns_enable_interrupt(struct sdw_cdns *cdns); int sdw_cdns_get_stream(struct sdw_cdns *cdns, struct sdw_cdns_streams *stream, u32 ch, u32 dir); int sdw_cdns_alloc_stream(struct sdw_cdns *cdns, - struct sdw_cdns_streams *stream, - struct sdw_cdns_port *port, u32 ch, u32 dir); + struct sdw_cdns_streams *stream, + struct sdw_cdns_port *port, u32 ch, u32 dir); void sdw_cdns_config_stream(struct sdw_cdns *cdns, struct sdw_cdns_port *port, - u32 ch, u32 dir, struct sdw_cdns_pdi *pdi); + u32 ch, u32 dir, struct sdw_cdns_pdi *pdi); void sdw_cdns_shutdown(struct snd_pcm_substream *substream, - struct snd_soc_dai *dai); + struct snd_soc_dai *dai); int sdw_cdns_pcm_set_stream(struct snd_soc_dai *dai, - void *stream, int direction); + void *stream, int direction); int sdw_cdns_pdm_set_stream(struct snd_soc_dai *dai, - void *stream, int direction); + void *stream, int direction); enum sdw_command_response cdns_reset_page_addr(struct sdw_bus *bus, unsigned int dev_num); @@ -187,7 +187,7 @@ cdns_xfer_msg(struct sdw_bus *bus, struct sdw_msg *msg); enum sdw_command_response cdns_xfer_msg_defer(struct sdw_bus *bus, - struct sdw_msg *msg, struct sdw_defer *defer); + struct sdw_msg *msg, struct sdw_defer *defer); enum sdw_command_response cdns_reset_page_addr(struct sdw_bus *bus, unsigned int dev_num); @@ -195,5 +195,5 @@ cdns_reset_page_addr(struct sdw_bus *bus, unsigned int dev_num); int cdns_bus_conf(struct sdw_bus *bus, struct sdw_bus_params *params); int cdns_set_sdw_stream(struct snd_soc_dai *dai, - void *stream, bool pcm, int direction); + void *stream, bool pcm, int direction); #endif /* __SDW_CADENCE_H */ diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c index fd8d034cfec1..31336b0271b0 100644 --- a/drivers/soundwire/intel.c +++ b/drivers/soundwire/intel.c @@ -7,6 +7,7 @@ #include <linux/acpi.h> #include <linux/delay.h> +#include <linux/module.h> #include <linux/interrupt.h> #include <linux/platform_device.h> #include <sound/pcm_params.h> @@ -23,18 +24,18 @@ #define SDW_SHIM_IPPTR 0x8 #define SDW_SHIM_SYNC 0xC -#define SDW_SHIM_CTLSCAP(x) (0x010 + 0x60 * x) -#define SDW_SHIM_CTLS0CM(x) (0x012 + 0x60 * x) -#define SDW_SHIM_CTLS1CM(x) (0x014 + 0x60 * x) -#define SDW_SHIM_CTLS2CM(x) (0x016 + 0x60 * x) -#define SDW_SHIM_CTLS3CM(x) (0x018 + 0x60 * x) -#define SDW_SHIM_PCMSCAP(x) (0x020 + 0x60 * x) +#define SDW_SHIM_CTLSCAP(x) (0x010 + 0x60 * (x)) +#define SDW_SHIM_CTLS0CM(x) (0x012 + 0x60 * (x)) +#define SDW_SHIM_CTLS1CM(x) (0x014 + 0x60 * (x)) +#define SDW_SHIM_CTLS2CM(x) (0x016 + 0x60 * (x)) +#define SDW_SHIM_CTLS3CM(x) (0x018 + 0x60 * (x)) +#define SDW_SHIM_PCMSCAP(x) (0x020 + 0x60 * (x)) -#define SDW_SHIM_PCMSYCHM(x, y) (0x022 + (0x60 * x) + (0x2 * y)) -#define SDW_SHIM_PCMSYCHC(x, y) (0x042 + (0x60 * x) + (0x2 * y)) -#define SDW_SHIM_PDMSCAP(x) (0x062 + 0x60 * x) -#define SDW_SHIM_IOCTL(x) (0x06C + 0x60 * x) -#define SDW_SHIM_CTMCTL(x) (0x06E + 0x60 * x) +#define SDW_SHIM_PCMSYCHM(x, y) (0x022 + (0x60 * (x)) + (0x2 * (y))) +#define SDW_SHIM_PCMSYCHC(x, y) (0x042 + (0x60 * (x)) + (0x2 * (y))) +#define SDW_SHIM_PDMSCAP(x) (0x062 + 0x60 * (x)) +#define SDW_SHIM_IOCTL(x) (0x06C + 0x60 * (x)) +#define SDW_SHIM_CTMCTL(x) (0x06E + 0x60 * (x)) #define SDW_SHIM_WAKEEN 0x190 #define 
SDW_SHIM_WAKESTS 0x192 @@ -81,7 +82,7 @@ #define SDW_SHIM_WAKESTS_STATUS BIT(0) /* Intel ALH Register definitions */ -#define SDW_ALH_STRMZCFG(x) (0x000 + (0x4 * x)) +#define SDW_ALH_STRMZCFG(x) (0x000 + (0x4 * (x))) #define SDW_ALH_STRMZCFG_DMAT_VAL 0x3 #define SDW_ALH_STRMZCFG_DMAT GENMASK(7, 0) @@ -235,9 +236,9 @@ static int intel_shim_init(struct sdw_intel *sdw) /* Set SyncCPU bit */ sync_reg |= SDW_SHIM_SYNC_SYNCCPU; ret = intel_clear_bit(shim, SDW_SHIM_SYNC, sync_reg, - SDW_SHIM_SYNC_SYNCCPU); + SDW_SHIM_SYNC_SYNCCPU); if (ret < 0) - dev_err(sdw->cdns.dev, "Failed to set sync period: %d", ret); + dev_err(sdw->cdns.dev, "Failed to set sync period: %d\n", ret); return ret; } @@ -246,7 +247,7 @@ static int intel_shim_init(struct sdw_intel *sdw) * PDI routines */ static void intel_pdi_init(struct sdw_intel *sdw, - struct sdw_cdns_stream_config *config) + struct sdw_cdns_stream_config *config) { void __iomem *shim = sdw->res->shim; unsigned int link_id = sdw->instance; @@ -295,9 +296,9 @@ intel_pdi_get_ch_cap(struct sdw_intel *sdw, unsigned int pdi_num, bool pcm) } static int intel_pdi_get_ch_update(struct sdw_intel *sdw, - struct sdw_cdns_pdi *pdi, - unsigned int num_pdi, - unsigned int *num_ch, bool pcm) + struct sdw_cdns_pdi *pdi, + unsigned int num_pdi, + unsigned int *num_ch, bool pcm) { int i, ch_count = 0; @@ -312,16 +313,16 @@ static int intel_pdi_get_ch_update(struct sdw_intel *sdw, } static int intel_pdi_stream_ch_update(struct sdw_intel *sdw, - struct sdw_cdns_streams *stream, bool pcm) + struct sdw_cdns_streams *stream, bool pcm) { intel_pdi_get_ch_update(sdw, stream->bd, stream->num_bd, - &stream->num_ch_bd, pcm); + &stream->num_ch_bd, pcm); intel_pdi_get_ch_update(sdw, stream->in, stream->num_in, - &stream->num_ch_in, pcm); + &stream->num_ch_in, pcm); intel_pdi_get_ch_update(sdw, stream->out, stream->num_out, - &stream->num_ch_out, pcm); + &stream->num_ch_out, pcm); return 0; } @@ -386,9 +387,9 @@ intel_pdi_alh_configure(struct sdw_intel *sdw, struct sdw_cdns_pdi *pdi) } static int intel_config_stream(struct sdw_intel *sdw, - struct snd_pcm_substream *substream, - struct snd_soc_dai *dai, - struct snd_pcm_hw_params *hw_params, int link_id) + struct snd_pcm_substream *substream, + struct snd_soc_dai *dai, + struct snd_pcm_hw_params *hw_params, int link_id) { if (sdw->res->ops && sdw->res->ops->config_stream) return sdw->res->ops->config_stream(sdw->res->arg, @@ -453,9 +454,9 @@ static int intel_post_bank_switch(struct sdw_bus *bus) sync_reg |= SDW_SHIM_SYNC_SYNCGO; ret = intel_clear_bit(shim, SDW_SHIM_SYNC, sync_reg, - SDW_SHIM_SYNC_SYNCGO); + SDW_SHIM_SYNC_SYNCGO); if (ret < 0) - dev_err(sdw->cdns.dev, "Post bank switch failed: %d", ret); + dev_err(sdw->cdns.dev, "Post bank switch failed: %d\n", ret); return ret; } @@ -465,14 +466,14 @@ static int intel_post_bank_switch(struct sdw_bus *bus) */ static struct sdw_cdns_port *intel_alloc_port(struct sdw_intel *sdw, - u32 ch, u32 dir, bool pcm) + u32 ch, u32 dir, bool pcm) { struct sdw_cdns *cdns = &sdw->cdns; struct sdw_cdns_port *port = NULL; int i, ret = 0; for (i = 0; i < cdns->num_ports; i++) { - if (cdns->ports[i].assigned == true) + if (cdns->ports[i].assigned) continue; port = &cdns->ports[i]; @@ -525,8 +526,8 @@ static void intel_port_cleanup(struct sdw_cdns_dma_data *dma) } static int intel_hw_params(struct snd_pcm_substream *substream, - struct snd_pcm_hw_params *params, - struct snd_soc_dai *dai) + struct snd_pcm_hw_params *params, + struct snd_soc_dai *dai) { struct sdw_cdns *cdns = snd_soc_dai_get_drvdata(dai); 
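The SDW_SHIM_* and SDW_ALH_STRMZCFG changes above are pure macro hygiene, but the failure mode is worth spelling out: without parentheses around the parameter, operator precedence silently rewrites any expression passed as an argument. A two-line demonstration:

#include <stdio.h>

#define OFF_BAD(x)      (0x010 + 0x60 * x)
#define OFF_GOOD(x)     (0x010 + 0x60 * (x))

int main(void)
{
        int i = 1;

        /* expands to 0x010 + 0x60 * i + 1, not what the caller meant */
        printf("bad:  %#x\n", OFF_BAD(i + 1));          /* 0x71 */
        printf("good: %#x\n", OFF_GOOD(i + 1));         /* 0xd0 */
        return 0;
}

Presumably none of the current callers pass expressions, so this fixes a latent rather than a live bug.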
struct sdw_intel *sdw = cdns_to_intel(cdns); @@ -555,7 +556,7 @@ static int intel_hw_params(struct snd_pcm_substream *substream, } if (!dma->nr_ports) { - dev_err(dai->dev, "ports/resources not available"); + dev_err(dai->dev, "ports/resources not available\n"); return -EINVAL; } @@ -574,7 +575,7 @@ static int intel_hw_params(struct snd_pcm_substream *substream, /* Inform DSP about PDI stream number */ for (i = 0; i < dma->nr_ports; i++) { ret = intel_config_stream(sdw, substream, dai, params, - dma->port[i]->pdi->intel_alh_id); + dma->port[i]->pdi->intel_alh_id); if (ret) goto port_error; } @@ -604,9 +605,9 @@ static int intel_hw_params(struct snd_pcm_substream *substream, } ret = sdw_stream_add_master(&cdns->bus, &sconfig, - pconfig, dma->nr_ports, dma->stream); + pconfig, dma->nr_ports, dma->stream); if (ret) { - dev_err(cdns->dev, "add master to stream failed:%d", ret); + dev_err(cdns->dev, "add master to stream failed:%d\n", ret); goto stream_error; } @@ -634,8 +635,8 @@ intel_hw_free(struct snd_pcm_substream *substream, struct snd_soc_dai *dai) ret = sdw_stream_remove_master(&cdns->bus, dma->stream); if (ret < 0) - dev_err(dai->dev, "remove master from stream %s failed: %d", - dma->stream->name, ret); + dev_err(dai->dev, "remove master from stream %s failed: %d\n", + dma->stream->name, ret); intel_port_cleanup(dma); kfree(dma->port); @@ -643,13 +644,13 @@ intel_hw_free(struct snd_pcm_substream *substream, struct snd_soc_dai *dai) } static int intel_pcm_set_sdw_stream(struct snd_soc_dai *dai, - void *stream, int direction) + void *stream, int direction) { return cdns_set_sdw_stream(dai, stream, true, direction); } static int intel_pdm_set_sdw_stream(struct snd_soc_dai *dai, - void *stream, int direction) + void *stream, int direction) { return cdns_set_sdw_stream(dai, stream, false, direction); } @@ -673,9 +674,9 @@ static const struct snd_soc_component_driver dai_component = { }; static int intel_create_dai(struct sdw_cdns *cdns, - struct snd_soc_dai_driver *dais, - enum intel_pdi_type type, - u32 num, u32 off, u32 max_ch, bool pcm) + struct snd_soc_dai_driver *dais, + enum intel_pdi_type type, + u32 num, u32 off, u32 max_ch, bool pcm) { int i; @@ -685,14 +686,14 @@ static int intel_create_dai(struct sdw_cdns *cdns, /* TODO: Read supported rates/formats from hardware */ for (i = off; i < (off + num); i++) { dais[i].name = kasprintf(GFP_KERNEL, "SDW%d Pin%d", - cdns->instance, i); + cdns->instance, i); if (!dais[i].name) return -ENOMEM; if (type == INTEL_PDI_BD || type == INTEL_PDI_OUT) { - dais[i].playback.stream_name = kasprintf(GFP_KERNEL, - "SDW%d Tx%d", - cdns->instance, i); + dais[i].playback.stream_name = + kasprintf(GFP_KERNEL, "SDW%d Tx%d", + cdns->instance, i); if (!dais[i].playback.stream_name) { kfree(dais[i].name); return -ENOMEM; @@ -705,9 +706,9 @@ static int intel_create_dai(struct sdw_cdns *cdns, } if (type == INTEL_PDI_BD || type == INTEL_PDI_IN) { - dais[i].capture.stream_name = kasprintf(GFP_KERNEL, - "SDW%d Rx%d", - cdns->instance, i); + dais[i].capture.stream_name = + kasprintf(GFP_KERNEL, "SDW%d Rx%d", + cdns->instance, i); if (!dais[i].capture.stream_name) { kfree(dais[i].name); kfree(dais[i].playback.stream_name); @@ -748,45 +749,45 @@ static int intel_register_dai(struct sdw_intel *sdw) /* Create PCM DAIs */ stream = &cdns->pcm; - ret = intel_create_dai(cdns, dais, INTEL_PDI_IN, - stream->num_in, off, stream->num_ch_in, true); + ret = intel_create_dai(cdns, dais, INTEL_PDI_IN, stream->num_in, + off, stream->num_ch_in, true); if (ret) return ret; off += 
cdns->pcm.num_in; - ret = intel_create_dai(cdns, dais, INTEL_PDI_OUT, - cdns->pcm.num_out, off, stream->num_ch_out, true); + ret = intel_create_dai(cdns, dais, INTEL_PDI_OUT, cdns->pcm.num_out, + off, stream->num_ch_out, true); if (ret) return ret; off += cdns->pcm.num_out; - ret = intel_create_dai(cdns, dais, INTEL_PDI_BD, - cdns->pcm.num_bd, off, stream->num_ch_bd, true); + ret = intel_create_dai(cdns, dais, INTEL_PDI_BD, cdns->pcm.num_bd, + off, stream->num_ch_bd, true); if (ret) return ret; /* Create PDM DAIs */ stream = &cdns->pdm; off += cdns->pcm.num_bd; - ret = intel_create_dai(cdns, dais, INTEL_PDI_IN, - cdns->pdm.num_in, off, stream->num_ch_in, false); + ret = intel_create_dai(cdns, dais, INTEL_PDI_IN, cdns->pdm.num_in, + off, stream->num_ch_in, false); if (ret) return ret; off += cdns->pdm.num_in; - ret = intel_create_dai(cdns, dais, INTEL_PDI_OUT, - cdns->pdm.num_out, off, stream->num_ch_out, false); + ret = intel_create_dai(cdns, dais, INTEL_PDI_OUT, cdns->pdm.num_out, + off, stream->num_ch_out, false); if (ret) return ret; off += cdns->pdm.num_bd; - ret = intel_create_dai(cdns, dais, INTEL_PDI_BD, - cdns->pdm.num_bd, off, stream->num_ch_bd, false); + ret = intel_create_dai(cdns, dais, INTEL_PDI_BD, cdns->pdm.num_bd, + off, stream->num_ch_bd, false); if (ret) return ret; return snd_soc_register_component(cdns->dev, &dai_component, - dais, num_dai); + dais, num_dai); } static int intel_prop_read(struct sdw_bus *bus) @@ -796,8 +797,8 @@ static int intel_prop_read(struct sdw_bus *bus) /* BIOS is not giving some values correctly. So, lets override them */ bus->prop.num_freq = 1; - bus->prop.freq = devm_kcalloc(bus->dev, sizeof(*bus->prop.freq), - bus->prop.num_freq, GFP_KERNEL); + bus->prop.freq = devm_kcalloc(bus->dev, bus->prop.num_freq, + sizeof(*bus->prop.freq), GFP_KERNEL); if (!bus->prop.freq) return -ENOMEM; @@ -872,19 +873,18 @@ static int intel_probe(struct platform_device *pdev) intel_pdi_ch_update(sdw); /* Acquire IRQ */ - ret = request_threaded_irq(sdw->res->irq, sdw_cdns_irq, - sdw_cdns_thread, IRQF_SHARED, KBUILD_MODNAME, - &sdw->cdns); + ret = request_threaded_irq(sdw->res->irq, sdw_cdns_irq, sdw_cdns_thread, + IRQF_SHARED, KBUILD_MODNAME, &sdw->cdns); if (ret < 0) { dev_err(sdw->cdns.dev, "unable to grab IRQ %d, disabling device\n", - sdw->res->irq); + sdw->res->irq); goto err_init; } /* Register DAIs */ ret = intel_register_dai(sdw); if (ret) { - dev_err(sdw->cdns.dev, "DAI registration failed: %d", ret); + dev_err(sdw->cdns.dev, "DAI registration failed: %d\n", ret); snd_soc_unregister_component(sdw->cdns.dev); goto err_dai; } diff --git a/drivers/soundwire/intel.h b/drivers/soundwire/intel.h index c1a5bac6212e..71050e5f643d 100644 --- a/drivers/soundwire/intel.h +++ b/drivers/soundwire/intel.h @@ -1,5 +1,5 @@ -// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) -// Copyright(c) 2015-17 Intel Corporation. +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */ +/* Copyright(c) 2015-17 Intel Corporation. 
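Also easy to miss in intel_prop_read() above: the devm_kcalloc() arguments were swapped. The signature is devm_kcalloc(dev, n, size, gfp), element count first and element size second, mirroring kcalloc(). Because the allocator just multiplies the two, the swapped call still allocated the right number of bytes, so this is an intent and readability fix rather than a runtime bug:

        /* n elements of sizeof(*bus->prop.freq) each, zeroed, devm-managed */
        bus->prop.freq = devm_kcalloc(bus->dev, bus->prop.num_freq,
                                      sizeof(*bus->prop.freq), GFP_KERNEL);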
*/ #ifndef __SDW_INTEL_LOCAL_H #define __SDW_INTEL_LOCAL_H diff --git a/drivers/soundwire/intel_init.c b/drivers/soundwire/intel_init.c index 5c8a20d99878..d3d6b54c5791 100644 --- a/drivers/soundwire/intel_init.c +++ b/drivers/soundwire/intel_init.c @@ -8,6 +8,8 @@ */ #include <linux/acpi.h> +#include <linux/export.h> +#include <linux/module.h> #include <linux/platform_device.h> #include <linux/soundwire/sdw_intel.h> #include "intel.h" @@ -67,7 +69,7 @@ static struct sdw_intel_ctx /* Found controller, find links supported */ count = 0; ret = fwnode_property_read_u8_array(acpi_fwnode_handle(adev), - "mipi-sdw-master-count", &count, 1); + "mipi-sdw-master-count", &count, 1); /* Don't fail on error, continue and use hw value */ if (ret) { @@ -85,7 +87,7 @@ static struct sdw_intel_ctx /* Check count is within bounds */ if (count > SDW_MAX_LINKS) { dev_err(&adev->dev, "Link count %d exceeds max %d\n", - count, SDW_MAX_LINKS); + count, SDW_MAX_LINKS); return NULL; } @@ -104,7 +106,6 @@ static struct sdw_intel_ctx /* Create SDW Master devices */ for (i = 0; i < count; i++) { - link->res.irq = res->irq; link->res.registers = res->mmio_base + SDW_LINK_BASE + (SDW_LINK_SIZE * i); @@ -145,7 +146,7 @@ link_err: } static acpi_status sdw_intel_acpi_cb(acpi_handle handle, u32 level, - void *cdata, void **return_value) + void *cdata, void **return_value) { struct sdw_intel_res *res = cdata; struct acpi_device *adev; @@ -172,9 +173,9 @@ void *sdw_intel_init(acpi_handle *parent_handle, struct sdw_intel_res *res) acpi_status status; status = acpi_walk_namespace(ACPI_TYPE_DEVICE, - parent_handle, 1, - sdw_intel_acpi_cb, - NULL, res, NULL); + parent_handle, 1, + sdw_intel_acpi_cb, + NULL, res, NULL); if (ACPI_FAILURE(status)) return NULL; diff --git a/drivers/soundwire/mipi_disco.c b/drivers/soundwire/mipi_disco.c index fdeba0c3b589..c1f51d6a23d2 100644 --- a/drivers/soundwire/mipi_disco.c +++ b/drivers/soundwire/mipi_disco.c @@ -35,11 +35,12 @@ int sdw_master_read_prop(struct sdw_bus *bus) int nval, i; device_property_read_u32(bus->dev, - "mipi-sdw-sw-interface-revision", &prop->revision); + "mipi-sdw-sw-interface-revision", + &prop->revision); /* Find master handle */ snprintf(name, sizeof(name), - "mipi-sdw-master-%d-subproperties", bus->link_id); + "mipi-sdw-master-%d-subproperties", bus->link_id); link = device_get_named_child_node(bus->dev, name); if (!link) { @@ -48,23 +49,23 @@ int sdw_master_read_prop(struct sdw_bus *bus) } if (fwnode_property_read_bool(link, - "mipi-sdw-clock-stop-mode0-supported") == true) + "mipi-sdw-clock-stop-mode0-supported")) prop->clk_stop_mode = SDW_CLK_STOP_MODE0; if (fwnode_property_read_bool(link, - "mipi-sdw-clock-stop-mode1-supported") == true) + "mipi-sdw-clock-stop-mode1-supported")) prop->clk_stop_mode |= SDW_CLK_STOP_MODE1; fwnode_property_read_u32(link, - "mipi-sdw-max-clock-frequency", &prop->max_freq); + "mipi-sdw-max-clock-frequency", + &prop->max_freq); nval = fwnode_property_read_u32_array(link, "mipi-sdw-clock-frequencies-supported", NULL, 0); if (nval > 0) { - prop->num_freq = nval; prop->freq = devm_kcalloc(bus->dev, prop->num_freq, - sizeof(*prop->freq), GFP_KERNEL); + sizeof(*prop->freq), GFP_KERNEL); if (!prop->freq) return -ENOMEM; @@ -88,47 +89,49 @@ int sdw_master_read_prop(struct sdw_bus *bus) nval = fwnode_property_read_u32_array(link, "mipi-sdw-supported-clock-gears", NULL, 0); if (nval > 0) { - prop->num_clk_gears = nval; prop->clk_gears = devm_kcalloc(bus->dev, prop->num_clk_gears, - sizeof(*prop->clk_gears), GFP_KERNEL); + 
sizeof(*prop->clk_gears), + GFP_KERNEL); if (!prop->clk_gears) return -ENOMEM; fwnode_property_read_u32_array(link, - "mipi-sdw-supported-clock-gears", - prop->clk_gears, prop->num_clk_gears); + "mipi-sdw-supported-clock-gears", + prop->clk_gears, + prop->num_clk_gears); } fwnode_property_read_u32(link, "mipi-sdw-default-frame-rate", - &prop->default_frame_rate); + &prop->default_frame_rate); fwnode_property_read_u32(link, "mipi-sdw-default-frame-row-size", - &prop->default_row); + &prop->default_row); fwnode_property_read_u32(link, "mipi-sdw-default-frame-col-size", - &prop->default_col); + &prop->default_col); prop->dynamic_frame = fwnode_property_read_bool(link, "mipi-sdw-dynamic-frame-shape"); fwnode_property_read_u32(link, "mipi-sdw-command-error-threshold", - &prop->err_threshold); + &prop->err_threshold); return 0; } EXPORT_SYMBOL(sdw_master_read_prop); static int sdw_slave_read_dp0(struct sdw_slave *slave, - struct fwnode_handle *port, struct sdw_dp0_prop *dp0) + struct fwnode_handle *port, + struct sdw_dp0_prop *dp0) { int nval; fwnode_property_read_u32(port, "mipi-sdw-port-max-wordlength", - &dp0->max_word); + &dp0->max_word); fwnode_property_read_u32(port, "mipi-sdw-port-min-wordlength", - &dp0->min_word); + &dp0->min_word); nval = fwnode_property_read_u32_array(port, "mipi-sdw-port-wordlength-configs", NULL, 0); @@ -136,8 +139,8 @@ static int sdw_slave_read_dp0(struct sdw_slave *slave, dp0->num_words = nval; dp0->words = devm_kcalloc(&slave->dev, - dp0->num_words, sizeof(*dp0->words), - GFP_KERNEL); + dp0->num_words, sizeof(*dp0->words), + GFP_KERNEL); if (!dp0->words) return -ENOMEM; @@ -146,20 +149,21 @@ static int sdw_slave_read_dp0(struct sdw_slave *slave, dp0->words, dp0->num_words); } - dp0->flow_controlled = fwnode_property_read_bool( - port, "mipi-sdw-bra-flow-controlled"); + dp0->flow_controlled = fwnode_property_read_bool(port, + "mipi-sdw-bra-flow-controlled"); - dp0->simple_ch_prep_sm = fwnode_property_read_bool( - port, "mipi-sdw-simplified-channel-prepare-sm"); + dp0->simple_ch_prep_sm = fwnode_property_read_bool(port, + "mipi-sdw-simplified-channel-prepare-sm"); - dp0->device_interrupts = fwnode_property_read_bool( - port, "mipi-sdw-imp-def-dp0-interrupts-supported"); + dp0->device_interrupts = fwnode_property_read_bool(port, + "mipi-sdw-imp-def-dp0-interrupts-supported"); return 0; } static int sdw_slave_read_dpn(struct sdw_slave *slave, - struct sdw_dpn_prop *dpn, int count, int ports, char *type) + struct sdw_dpn_prop *dpn, int count, int ports, + char *type) { struct fwnode_handle *node; u32 bit, i = 0; @@ -173,7 +177,7 @@ static int sdw_slave_read_dpn(struct sdw_slave *slave, for_each_set_bit(bit, &addr, 32) { snprintf(name, sizeof(name), - "mipi-sdw-dp-%d-%s-subproperties", bit, type); + "mipi-sdw-dp-%d-%s-subproperties", bit, type); dpn[i].num = bit; @@ -184,18 +188,18 @@ static int sdw_slave_read_dpn(struct sdw_slave *slave, } fwnode_property_read_u32(node, "mipi-sdw-port-max-wordlength", - &dpn[i].max_word); + &dpn[i].max_word); fwnode_property_read_u32(node, "mipi-sdw-port-min-wordlength", - &dpn[i].min_word); + &dpn[i].min_word); nval = fwnode_property_read_u32_array(node, "mipi-sdw-port-wordlength-configs", NULL, 0); if (nval > 0) { - dpn[i].num_words = nval; dpn[i].words = devm_kcalloc(&slave->dev, - dpn[i].num_words, - sizeof(*dpn[i].words), GFP_KERNEL); + dpn[i].num_words, + sizeof(*dpn[i].words), + GFP_KERNEL); if (!dpn[i].words) return -ENOMEM; @@ -205,36 +209,36 @@ static int sdw_slave_read_dpn(struct sdw_slave *slave, } 
fwnode_property_read_u32(node, "mipi-sdw-data-port-type", - &dpn[i].type); + &dpn[i].type); fwnode_property_read_u32(node, - "mipi-sdw-max-grouping-supported", - &dpn[i].max_grouping); + "mipi-sdw-max-grouping-supported", + &dpn[i].max_grouping); dpn[i].simple_ch_prep_sm = fwnode_property_read_bool(node, "mipi-sdw-simplified-channelprepare-sm"); fwnode_property_read_u32(node, - "mipi-sdw-port-channelprepare-timeout", - &dpn[i].ch_prep_timeout); + "mipi-sdw-port-channelprepare-timeout", + &dpn[i].ch_prep_timeout); fwnode_property_read_u32(node, "mipi-sdw-imp-def-dpn-interrupts-supported", &dpn[i].device_interrupts); fwnode_property_read_u32(node, "mipi-sdw-min-channel-number", - &dpn[i].min_ch); + &dpn[i].min_ch); fwnode_property_read_u32(node, "mipi-sdw-max-channel-number", - &dpn[i].max_ch); + &dpn[i].max_ch); nval = fwnode_property_read_u32_array(node, "mipi-sdw-channel-number-list", NULL, 0); if (nval > 0) { - dpn[i].num_ch = nval; dpn[i].ch = devm_kcalloc(&slave->dev, dpn[i].num_ch, - sizeof(*dpn[i].ch), GFP_KERNEL); + sizeof(*dpn[i].ch), + GFP_KERNEL); if (!dpn[i].ch) return -ENOMEM; @@ -246,7 +250,6 @@ static int sdw_slave_read_dpn(struct sdw_slave *slave, nval = fwnode_property_read_u32_array(node, "mipi-sdw-channel-combination-list", NULL, 0); if (nval > 0) { - dpn[i].num_ch_combinations = nval; dpn[i].ch_combinations = devm_kcalloc(&slave->dev, dpn[i].num_ch_combinations, @@ -265,13 +268,13 @@ static int sdw_slave_read_dpn(struct sdw_slave *slave, "mipi-sdw-modes-supported", &dpn[i].modes); fwnode_property_read_u32(node, "mipi-sdw-max-async-buffer", - &dpn[i].max_async_buffer); + &dpn[i].max_async_buffer); dpn[i].block_pack_mode = fwnode_property_read_bool(node, "mipi-sdw-block-packing-mode"); fwnode_property_read_u32(node, "mipi-sdw-port-encoding-type", - &dpn[i].port_encoding); + &dpn[i].port_encoding); /* TODO: Read audio mode */ @@ -293,7 +296,7 @@ int sdw_slave_read_prop(struct sdw_slave *slave) int num_of_ports, nval, i, dp0 = 0; device_property_read_u32(dev, "mipi-sdw-sw-interface-revision", - &prop->mipi_revision); + &prop->mipi_revision); prop->wake_capable = device_property_read_bool(dev, "mipi-sdw-wake-up-unavailable"); @@ -311,10 +314,10 @@ int sdw_slave_read_prop(struct sdw_slave *slave) "mipi-sdw-simplified-clockstopprepare-sm-supported"); device_property_read_u32(dev, "mipi-sdw-clockstopprepare-timeout", - &prop->clk_stop_timeout); + &prop->clk_stop_timeout); device_property_read_u32(dev, "mipi-sdw-slave-channelprepare-timeout", - &prop->ch_prep_timeout); + &prop->ch_prep_timeout); device_property_read_u32(dev, "mipi-sdw-clockstopprepare-hard-reset-behavior", @@ -333,22 +336,22 @@ int sdw_slave_read_prop(struct sdw_slave *slave) "mipi-sdw-port15-read-behavior", &prop->p15_behave); device_property_read_u32(dev, "mipi-sdw-master-count", - &prop->master_count); + &prop->master_count); device_property_read_u32(dev, "mipi-sdw-source-port-list", - &prop->source_ports); + &prop->source_ports); device_property_read_u32(dev, "mipi-sdw-sink-port-list", - &prop->sink_ports); + &prop->sink_ports); /* Read dp0 properties */ port = device_get_named_child_node(dev, "mipi-sdw-dp-0-subproperties"); if (!port) { dev_dbg(dev, "DP0 node not found!!\n"); } else { - prop->dp0_prop = devm_kzalloc(&slave->dev, - sizeof(*prop->dp0_prop), GFP_KERNEL); + sizeof(*prop->dp0_prop), + GFP_KERNEL); if (!prop->dp0_prop) return -ENOMEM; @@ -364,23 +367,25 @@ int sdw_slave_read_prop(struct sdw_slave *slave) /* Allocate memory for set bits in port lists */ nval = hweight32(prop->source_ports); 
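The DPn parsing in this hunk is driven entirely by the port bitmasks from "mipi-sdw-source-port-list" and "mipi-sdw-sink-port-list": for_each_set_bit() visits each advertised port to build its "mipi-sdw-dp-%d-%s-subproperties" node name, and hweight32() sizes the property arrays by the number of set bits. A minimal userspace sketch of the same pattern, where the bitmask value is made up for illustration and __builtin_popcount()/__builtin_ctz() stand in for the kernel's hweight32() and for_each_set_bit():

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t ports = 0x0000001a;	/* hypothetical sink-port-list: DP1, DP3, DP4 */
	unsigned int bits = ports;
	int count = __builtin_popcount(ports);	/* kernel: hweight32() */
	char name[64];

	printf("allocating %d property slots\n", count);

	/* kernel: for_each_set_bit(bit, &addr, 32) */
	while (bits) {
		int bit = __builtin_ctz(bits);

		snprintf(name, sizeof(name),
			 "mipi-sdw-dp-%d-sink-subproperties", bit);
		printf("would parse %s\n", name);
		bits &= bits - 1;	/* clear the lowest set bit */
	}
	return 0;
}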
prop->src_dpn_prop = devm_kcalloc(&slave->dev, nval, - sizeof(*prop->src_dpn_prop), GFP_KERNEL); + sizeof(*prop->src_dpn_prop), + GFP_KERNEL); if (!prop->src_dpn_prop) return -ENOMEM; /* Read dpn properties for source port(s) */ sdw_slave_read_dpn(slave, prop->src_dpn_prop, nval, - prop->source_ports, "source"); + prop->source_ports, "source"); nval = hweight32(prop->sink_ports); prop->sink_dpn_prop = devm_kcalloc(&slave->dev, nval, - sizeof(*prop->sink_dpn_prop), GFP_KERNEL); + sizeof(*prop->sink_dpn_prop), + GFP_KERNEL); if (!prop->sink_dpn_prop) return -ENOMEM; /* Read dpn properties for sink port(s) */ sdw_slave_read_dpn(slave, prop->sink_dpn_prop, nval, - prop->sink_ports, "sink"); + prop->sink_ports, "sink"); /* some ports are bidirectional so check total ports by ORing */ nval = prop->source_ports | prop->sink_ports; @@ -388,7 +393,8 @@ int sdw_slave_read_prop(struct sdw_slave *slave) /* Allocate port_ready based on num_of_ports */ slave->port_ready = devm_kcalloc(&slave->dev, num_of_ports, - sizeof(*slave->port_ready), GFP_KERNEL); + sizeof(*slave->port_ready), + GFP_KERNEL); if (!slave->port_ready) return -ENOMEM; diff --git a/drivers/soundwire/slave.c b/drivers/soundwire/slave.c index ac103bd0c176..f39a5815e25d 100644 --- a/drivers/soundwire/slave.c +++ b/drivers/soundwire/slave.c @@ -14,7 +14,7 @@ static void sdw_slave_release(struct device *dev) } static int sdw_slave_add(struct sdw_bus *bus, - struct sdw_slave_id *id, struct fwnode_handle *fwnode) + struct sdw_slave_id *id, struct fwnode_handle *fwnode) { struct sdw_slave *slave; int ret; @@ -30,8 +30,8 @@ static int sdw_slave_add(struct sdw_bus *bus, /* name shall be sdw:link:mfg:part:class:unique */ dev_set_name(&slave->dev, "sdw:%x:%x:%x:%x:%x", - bus->link_id, id->mfg_id, id->part_id, - id->class_id, id->unique_id); + bus->link_id, id->mfg_id, id->part_id, + id->class_id, id->unique_id); slave->dev.release = sdw_slave_release; slave->dev.bus = &sdw_bus_type; @@ -84,11 +84,11 @@ int sdw_acpi_find_slaves(struct sdw_bus *bus) acpi_status status; status = acpi_evaluate_integer(adev->handle, - METHOD_NAME__ADR, NULL, &addr); + METHOD_NAME__ADR, NULL, &addr); if (ACPI_FAILURE(status)) { dev_err(bus->dev, "_ADR resolution failed: %x\n", - status); + status); return status; } diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c index bd879b1a76c8..d01060dbee96 100644 --- a/drivers/soundwire/stream.c +++ b/drivers/soundwire/stream.c @@ -52,10 +52,11 @@ static int sdw_find_row_index(int row) pr_warn("Requested row not found, selecting lowest row no: 48\n"); return 0; } + static int _sdw_program_slave_port_params(struct sdw_bus *bus, - struct sdw_slave *slave, - struct sdw_transport_params *t_params, - enum sdw_dpn_type type) + struct sdw_slave *slave, + struct sdw_transport_params *t_params, + enum sdw_dpn_type type) { u32 addr1, addr2, addr3, addr4; int ret; @@ -76,20 +77,20 @@ static int _sdw_program_slave_port_params(struct sdw_bus *bus, /* Program DPN_OffsetCtrl2 registers */ ret = sdw_write(slave, addr1, t_params->offset2); if (ret < 0) { - dev_err(bus->dev, "DPN_OffsetCtrl2 register write failed"); + dev_err(bus->dev, "DPN_OffsetCtrl2 register write failed\n"); return ret; } /* Program DPN_BlockCtrl3 register */ ret = sdw_write(slave, addr2, t_params->blk_pkg_mode); if (ret < 0) { - dev_err(bus->dev, "DPN_BlockCtrl3 register write failed"); + dev_err(bus->dev, "DPN_BlockCtrl3 register write failed\n"); return ret; } /* * Data ports are FULL, SIMPLE and REDUCED. 
This function handles - * FULL and REDUCED only and and beyond this point only FULL is + * FULL and REDUCED only and beyond this point only FULL is * handled, so bail out if we are not FULL data port type */ if (type != SDW_DPN_FULL) @@ -102,7 +103,7 @@ static int _sdw_program_slave_port_params(struct sdw_bus *bus, ret = sdw_write(slave, addr3, wbuf); if (ret < 0) { - dev_err(bus->dev, "DPN_SampleCtrl2 register write failed"); + dev_err(bus->dev, "DPN_SampleCtrl2 register write failed\n"); return ret; } @@ -113,14 +114,14 @@ static int _sdw_program_slave_port_params(struct sdw_bus *bus, ret = sdw_write(slave, addr4, wbuf); if (ret < 0) - dev_err(bus->dev, "DPN_HCtrl register write failed"); + dev_err(bus->dev, "DPN_HCtrl register write failed\n"); return ret; } static int sdw_program_slave_port_params(struct sdw_bus *bus, - struct sdw_slave_runtime *s_rt, - struct sdw_port_runtime *p_rt) + struct sdw_slave_runtime *s_rt, + struct sdw_port_runtime *p_rt) { struct sdw_transport_params *t_params = &p_rt->transport_params; struct sdw_port_params *p_params = &p_rt->port_params; @@ -131,8 +132,8 @@ static int sdw_program_slave_port_params(struct sdw_bus *bus, u8 wbuf; dpn_prop = sdw_get_slave_dpn_prop(s_rt->slave, - s_rt->direction, - t_params->port_num); + s_rt->direction, + t_params->port_num); if (!dpn_prop) return -EINVAL; @@ -159,7 +160,7 @@ static int sdw_program_slave_port_params(struct sdw_bus *bus, ret = sdw_update(s_rt->slave, addr1, 0xF, wbuf); if (ret < 0) { dev_err(&s_rt->slave->dev, - "DPN_PortCtrl register write failed for port %d", + "DPN_PortCtrl register write failed for port %d\n", t_params->port_num); return ret; } @@ -168,7 +169,7 @@ static int sdw_program_slave_port_params(struct sdw_bus *bus, ret = sdw_write(s_rt->slave, addr2, (p_params->bps - 1)); if (ret < 0) { dev_err(&s_rt->slave->dev, - "DPN_BlockCtrl1 register write failed for port %d", + "DPN_BlockCtrl1 register write failed for port %d\n", t_params->port_num); return ret; } @@ -178,7 +179,7 @@ static int sdw_program_slave_port_params(struct sdw_bus *bus, ret = sdw_write(s_rt->slave, addr3, wbuf); if (ret < 0) { dev_err(&s_rt->slave->dev, - "DPN_SampleCtrl1 register write failed for port %d", + "DPN_SampleCtrl1 register write failed for port %d\n", t_params->port_num); return ret; } @@ -187,7 +188,7 @@ static int sdw_program_slave_port_params(struct sdw_bus *bus, ret = sdw_write(s_rt->slave, addr4, t_params->offset1); if (ret < 0) { dev_err(&s_rt->slave->dev, - "DPN_OffsetCtrl1 register write failed for port %d", + "DPN_OffsetCtrl1 register write failed for port %d\n", t_params->port_num); return ret; } @@ -197,7 +198,7 @@ static int sdw_program_slave_port_params(struct sdw_bus *bus, ret = sdw_write(s_rt->slave, addr5, t_params->blk_grp_ctrl); if (ret < 0) { dev_err(&s_rt->slave->dev, - "DPN_BlockCtrl2 reg write failed for port %d", + "DPN_BlockCtrl2 reg write failed for port %d\n", t_params->port_num); return ret; } @@ -208,7 +209,7 @@ static int sdw_program_slave_port_params(struct sdw_bus *bus, ret = sdw_write(s_rt->slave, addr6, t_params->lane_ctrl); if (ret < 0) { dev_err(&s_rt->slave->dev, - "DPN_LaneCtrl register write failed for port %d", + "DPN_LaneCtrl register write failed for port %d\n", t_params->port_num); return ret; } @@ -216,10 +217,10 @@ static int sdw_program_slave_port_params(struct sdw_bus *bus, if (dpn_prop->type != SDW_DPN_SIMPLE) { ret = _sdw_program_slave_port_params(bus, s_rt->slave, - t_params, dpn_prop->type); + t_params, dpn_prop->type); if (ret < 0) dev_err(&s_rt->slave->dev, - 
"Transport reg write failed for port: %d", + "Transport reg write failed for port: %d\n", t_params->port_num); } @@ -227,13 +228,13 @@ static int sdw_program_slave_port_params(struct sdw_bus *bus, } static int sdw_program_master_port_params(struct sdw_bus *bus, - struct sdw_port_runtime *p_rt) + struct sdw_port_runtime *p_rt) { int ret; /* * we need to set transport and port parameters for the port. - * Transport parameters refers to the smaple interval, offsets and + * Transport parameters refers to the sample interval, offsets and * hstart/stop etc of the data. Port parameters refers to word * length, flow mode etc of the port */ @@ -244,8 +245,8 @@ static int sdw_program_master_port_params(struct sdw_bus *bus, return ret; return bus->port_ops->dpn_set_port_params(bus, - &p_rt->port_params, - bus->params.next_bank); + &p_rt->port_params, + bus->params.next_bank); } /** @@ -292,8 +293,9 @@ static int sdw_program_port_params(struct sdw_master_runtime *m_rt) * actual enable/disable is done with a bank switch */ static int sdw_enable_disable_slave_ports(struct sdw_bus *bus, - struct sdw_slave_runtime *s_rt, - struct sdw_port_runtime *p_rt, bool en) + struct sdw_slave_runtime *s_rt, + struct sdw_port_runtime *p_rt, + bool en) { struct sdw_transport_params *t_params = &p_rt->transport_params; u32 addr; @@ -315,19 +317,20 @@ static int sdw_enable_disable_slave_ports(struct sdw_bus *bus, if (ret < 0) dev_err(&s_rt->slave->dev, - "Slave chn_en reg write failed:%d port:%d", + "Slave chn_en reg write failed:%d port:%d\n", ret, t_params->port_num); return ret; } static int sdw_enable_disable_master_ports(struct sdw_master_runtime *m_rt, - struct sdw_port_runtime *p_rt, bool en) + struct sdw_port_runtime *p_rt, + bool en) { struct sdw_transport_params *t_params = &p_rt->transport_params; struct sdw_bus *bus = m_rt->bus; struct sdw_enable_ch enable_ch; - int ret = 0; + int ret; enable_ch.port_num = p_rt->num; enable_ch.ch_mask = p_rt->ch_mask; @@ -336,10 +339,11 @@ static int sdw_enable_disable_master_ports(struct sdw_master_runtime *m_rt, /* Perform Master port channel(s) enable/disable */ if (bus->port_ops->dpn_port_enable_ch) { ret = bus->port_ops->dpn_port_enable_ch(bus, - &enable_ch, bus->params.next_bank); + &enable_ch, + bus->params.next_bank); if (ret < 0) { dev_err(bus->dev, - "Master chn_en write failed:%d port:%d", + "Master chn_en write failed:%d port:%d\n", ret, t_params->port_num); return ret; } @@ -370,7 +374,7 @@ static int sdw_enable_disable_ports(struct sdw_master_runtime *m_rt, bool en) list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) { list_for_each_entry(s_port, &s_rt->port_list, port_node) { ret = sdw_enable_disable_slave_ports(m_rt->bus, s_rt, - s_port, en); + s_port, en); if (ret < 0) return ret; } @@ -387,7 +391,8 @@ static int sdw_enable_disable_ports(struct sdw_master_runtime *m_rt, bool en) } static int sdw_do_port_prep(struct sdw_slave_runtime *s_rt, - struct sdw_prepare_ch prep_ch, enum sdw_port_prep_ops cmd) + struct sdw_prepare_ch prep_ch, + enum sdw_port_prep_ops cmd) { const struct sdw_slave_ops *ops = s_rt->slave->ops; int ret; @@ -396,7 +401,8 @@ static int sdw_do_port_prep(struct sdw_slave_runtime *s_rt, ret = ops->port_prep(s_rt->slave, &prep_ch, cmd); if (ret < 0) { dev_err(&s_rt->slave->dev, - "Slave Port Prep cmd %d failed: %d", cmd, ret); + "Slave Port Prep cmd %d failed: %d\n", + cmd, ret); return ret; } } @@ -405,8 +411,9 @@ static int sdw_do_port_prep(struct sdw_slave_runtime *s_rt, } static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus, - 
struct sdw_slave_runtime *s_rt, - struct sdw_port_runtime *p_rt, bool prep) + struct sdw_slave_runtime *s_rt, + struct sdw_port_runtime *p_rt, + bool prep) { struct completion *port_ready = NULL; struct sdw_dpn_prop *dpn_prop; @@ -420,11 +427,11 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus, prep_ch.ch_mask = p_rt->ch_mask; dpn_prop = sdw_get_slave_dpn_prop(s_rt->slave, - s_rt->direction, - prep_ch.num); + s_rt->direction, + prep_ch.num); if (!dpn_prop) { dev_err(bus->dev, - "Slave Port:%d properties not found", prep_ch.num); + "Slave Port:%d properties not found\n", prep_ch.num); return -EINVAL; } @@ -442,7 +449,7 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus, */ if (prep && intr) { ret = sdw_configure_dpn_intr(s_rt->slave, p_rt->num, prep, - dpn_prop->device_interrupts); + dpn_prop->device_interrupts); if (ret < 0) return ret; } @@ -456,13 +463,13 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus, if (prep) ret = sdw_update(s_rt->slave, addr, - 0xFF, p_rt->ch_mask); + 0xFF, p_rt->ch_mask); else ret = sdw_update(s_rt->slave, addr, 0xFF, 0x0); if (ret < 0) { dev_err(&s_rt->slave->dev, - "Slave prep_ctrl reg write failed"); + "Slave prep_ctrl reg write failed\n"); return ret; } @@ -475,7 +482,7 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus, val &= p_rt->ch_mask; if (!time_left || val) { dev_err(&s_rt->slave->dev, - "Chn prep failed for port:%d", prep_ch.num); + "Chn prep failed for port:%d\n", prep_ch.num); return -ETIMEDOUT; } } @@ -486,13 +493,14 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus, /* Disable interrupt after Port de-prepare */ if (!prep && intr) ret = sdw_configure_dpn_intr(s_rt->slave, p_rt->num, prep, - dpn_prop->device_interrupts); + dpn_prop->device_interrupts); return ret; } static int sdw_prep_deprep_master_ports(struct sdw_master_runtime *m_rt, - struct sdw_port_runtime *p_rt, bool prep) + struct sdw_port_runtime *p_rt, + bool prep) { struct sdw_transport_params *t_params = &p_rt->transport_params; struct sdw_bus *bus = m_rt->bus; @@ -509,8 +517,8 @@ static int sdw_prep_deprep_master_ports(struct sdw_master_runtime *m_rt, if (ops->dpn_port_prep) { ret = ops->dpn_port_prep(bus, &prep_ch); if (ret < 0) { - dev_err(bus->dev, "Port prepare failed for port:%d", - t_params->port_num); + dev_err(bus->dev, "Port prepare failed for port:%d\n", + t_params->port_num); return ret; } } @@ -535,7 +543,7 @@ static int sdw_prep_deprep_ports(struct sdw_master_runtime *m_rt, bool prep) list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) { list_for_each_entry(p_rt, &s_rt->port_list, port_node) { ret = sdw_prep_deprep_slave_ports(m_rt->bus, s_rt, - p_rt, prep); + p_rt, prep); if (ret < 0) return ret; } @@ -578,8 +586,8 @@ static int sdw_notify_config(struct sdw_master_runtime *m_rt) if (slave->ops->bus_config) { ret = slave->ops->bus_config(slave, &bus->params); if (ret < 0) - dev_err(bus->dev, "Notify Slave: %d failed", - slave->dev_num); + dev_err(bus->dev, "Notify Slave: %d failed\n", + slave->dev_num); return ret; } } @@ -602,13 +610,14 @@ static int sdw_program_params(struct sdw_bus *bus) ret = sdw_program_port_params(m_rt); if (ret < 0) { dev_err(bus->dev, - "Program transport params failed: %d", ret); + "Program transport params failed: %d\n", ret); return ret; } ret = sdw_notify_config(m_rt); if (ret < 0) { - dev_err(bus->dev, "Notify bus config failed: %d", ret); + dev_err(bus->dev, + "Notify bus config failed: %d\n", ret); return ret; } @@ -618,7 +627,7 @@ static int sdw_program_params(struct 
sdw_bus *bus) ret = sdw_enable_disable_ports(m_rt, true); if (ret < 0) { - dev_err(bus->dev, "Enable channel failed: %d", ret); + dev_err(bus->dev, "Enable channel failed: %d\n", ret); return ret; } } @@ -658,7 +667,7 @@ static int sdw_bank_switch(struct sdw_bus *bus, int m_rt_count) addr = SDW_SCP_FRAMECTRL_B0; sdw_fill_msg(wr_msg, NULL, addr, 1, SDW_BROADCAST_DEV_NUM, - SDW_MSG_FLAG_WRITE, wbuf); + SDW_MSG_FLAG_WRITE, wbuf); wr_msg->ssp_sync = true; /* @@ -673,7 +682,7 @@ static int sdw_bank_switch(struct sdw_bus *bus, int m_rt_count) ret = sdw_transfer(bus, wr_msg); if (ret < 0) { - dev_err(bus->dev, "Slave frame_ctrl reg write failed"); + dev_err(bus->dev, "Slave frame_ctrl reg write failed\n"); goto error; } @@ -713,7 +722,7 @@ static int sdw_ml_sync_bank_switch(struct sdw_bus *bus) bus->bank_switch_timeout); if (!time_left) { - dev_err(bus->dev, "Controller Timed out on bank switch"); + dev_err(bus->dev, "Controller Timed out on bank switch\n"); return -ETIMEDOUT; } @@ -750,7 +759,7 @@ static int do_bank_switch(struct sdw_stream_runtime *stream) ret = ops->pre_bank_switch(bus); if (ret < 0) { dev_err(bus->dev, - "Pre bank switch op failed: %d", ret); + "Pre bank switch op failed: %d\n", ret); goto msg_unlock; } } @@ -763,9 +772,8 @@ static int do_bank_switch(struct sdw_stream_runtime *stream) */ ret = sdw_bank_switch(bus, stream->m_rt_count); if (ret < 0) { - dev_err(bus->dev, "Bank switch failed: %d", ret); + dev_err(bus->dev, "Bank switch failed: %d\n", ret); goto error; - } } @@ -784,12 +792,13 @@ static int do_bank_switch(struct sdw_stream_runtime *stream) ret = ops->post_bank_switch(bus); if (ret < 0) { dev_err(bus->dev, - "Post bank switch op failed: %d", ret); + "Post bank switch op failed: %d\n", + ret); goto error; } } else if (bus->multi_link && stream->m_rt_count > 1) { dev_err(bus->dev, - "Post bank switch ops not implemented"); + "Post bank switch ops not implemented\n"); goto error; } @@ -801,7 +810,7 @@ static int do_bank_switch(struct sdw_stream_runtime *stream) ret = sdw_ml_sync_bank_switch(bus); if (ret < 0) { dev_err(bus->dev, - "multi link bank switch failed: %d", ret); + "multi link bank switch failed: %d\n", ret); goto error; } @@ -812,7 +821,6 @@ static int do_bank_switch(struct sdw_stream_runtime *stream) error: list_for_each_entry(m_rt, &stream->master_list, stream_node) { - bus = m_rt->bus; kfree(bus->defer_msg.msg->buf); @@ -873,7 +881,7 @@ EXPORT_SYMBOL(sdw_alloc_stream); static struct sdw_master_runtime *sdw_find_master_rt(struct sdw_bus *bus, - struct sdw_stream_runtime *stream) + struct sdw_stream_runtime *stream) { struct sdw_master_runtime *m_rt = NULL; @@ -897,8 +905,8 @@ static struct sdw_master_runtime */ static struct sdw_master_runtime *sdw_alloc_master_rt(struct sdw_bus *bus, - struct sdw_stream_config *stream_config, - struct sdw_stream_runtime *stream) + struct sdw_stream_config *stream_config, + struct sdw_stream_runtime *stream) { struct sdw_master_runtime *m_rt; @@ -941,8 +949,8 @@ stream_config: */ static struct sdw_slave_runtime *sdw_alloc_slave_rt(struct sdw_slave *slave, - struct sdw_stream_config *stream_config, - struct sdw_stream_runtime *stream) + struct sdw_stream_config *stream_config, + struct sdw_stream_runtime *stream) { struct sdw_slave_runtime *s_rt = NULL; @@ -959,20 +967,19 @@ static struct sdw_slave_runtime } static void sdw_master_port_release(struct sdw_bus *bus, - struct sdw_master_runtime *m_rt) + struct sdw_master_runtime *m_rt) { struct sdw_port_runtime *p_rt, *_p_rt; - list_for_each_entry_safe(p_rt, _p_rt, - 
&m_rt->port_list, port_node) { + list_for_each_entry_safe(p_rt, _p_rt, &m_rt->port_list, port_node) { list_del(&p_rt->port_node); kfree(p_rt); } } static void sdw_slave_port_release(struct sdw_bus *bus, - struct sdw_slave *slave, - struct sdw_stream_runtime *stream) + struct sdw_slave *slave, + struct sdw_stream_runtime *stream) { struct sdw_port_runtime *p_rt, *_p_rt; struct sdw_master_runtime *m_rt; @@ -980,13 +987,11 @@ static void sdw_slave_port_release(struct sdw_bus *bus, list_for_each_entry(m_rt, &stream->master_list, stream_node) { list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) { - if (s_rt->slave != slave) continue; list_for_each_entry_safe(p_rt, _p_rt, - &s_rt->port_list, port_node) { - + &s_rt->port_list, port_node) { list_del(&p_rt->port_node); kfree(p_rt); } @@ -1003,7 +1008,7 @@ static void sdw_slave_port_release(struct sdw_bus *bus, * This function is to be called with bus_lock held. */ static void sdw_release_slave_stream(struct sdw_slave *slave, - struct sdw_stream_runtime *stream) + struct sdw_stream_runtime *stream) { struct sdw_slave_runtime *s_rt, *_s_rt; struct sdw_master_runtime *m_rt; @@ -1011,8 +1016,7 @@ static void sdw_release_slave_stream(struct sdw_slave *slave, list_for_each_entry(m_rt, &stream->master_list, stream_node) { /* Retrieve Slave runtime handle */ list_for_each_entry_safe(s_rt, _s_rt, - &m_rt->slave_rt_list, m_rt_node) { - + &m_rt->slave_rt_list, m_rt_node) { if (s_rt->slave == slave) { list_del(&s_rt->m_rt_node); kfree(s_rt); @@ -1034,7 +1038,7 @@ static void sdw_release_slave_stream(struct sdw_slave *slave, * no effect as Slave(s) runtime handle would already be freed up. */ static void sdw_release_master_stream(struct sdw_master_runtime *m_rt, - struct sdw_stream_runtime *stream) + struct sdw_stream_runtime *stream) { struct sdw_slave_runtime *s_rt, *_s_rt; @@ -1057,15 +1061,14 @@ static void sdw_release_master_stream(struct sdw_master_runtime *m_rt, * This removes and frees port_rt and master_rt from a stream */ int sdw_stream_remove_master(struct sdw_bus *bus, - struct sdw_stream_runtime *stream) + struct sdw_stream_runtime *stream) { struct sdw_master_runtime *m_rt, *_m_rt; mutex_lock(&bus->bus_lock); list_for_each_entry_safe(m_rt, _m_rt, - &stream->master_list, stream_node) { - + &stream->master_list, stream_node) { if (m_rt->bus != bus) continue; @@ -1092,7 +1095,7 @@ EXPORT_SYMBOL(sdw_stream_remove_master); * This removes and frees port_rt and slave_rt from a stream */ int sdw_stream_remove_slave(struct sdw_slave *slave, - struct sdw_stream_runtime *stream) + struct sdw_stream_runtime *stream) { mutex_lock(&slave->bus->bus_lock); @@ -1116,8 +1119,9 @@ EXPORT_SYMBOL(sdw_stream_remove_slave); * This function is to be called with bus_lock held. 
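The release helpers in these hunks all iterate with list_for_each_entry_safe() because they free the node they are visiting; the plain iterator would step through freed memory to reach the next element. The same idea in a standalone sketch with an ordinary singly linked list (types and names here are illustrative, not the driver's):

#include <stdio.h>
#include <stdlib.h>

struct port_rt {
	int num;
	struct port_rt *next;
};

/* Safe teardown: remember the successor before freeing the current node,
 * which is exactly what the _safe list iterators do with their extra cursor. */
static void release_all(struct port_rt **head)
{
	struct port_rt *p = *head, *next;

	while (p) {
		next = p->next;	/* kernel: the _p_rt "safe" cursor */
		free(p);
		p = next;
	}
	*head = NULL;
}

int main(void)
{
	struct port_rt *head = NULL;

	for (int i = 0; i < 3; i++) {
		struct port_rt *p = malloc(sizeof(*p));

		if (!p)
			break;
		p->num = i;
		p->next = head;
		head = p;
	}
	release_all(&head);
	return 0;
}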
*/ static int sdw_config_stream(struct device *dev, - struct sdw_stream_runtime *stream, - struct sdw_stream_config *stream_config, bool is_slave) + struct sdw_stream_runtime *stream, + struct sdw_stream_config *stream_config, + bool is_slave) { /* * Update the stream rate, channel and bps based on data @@ -1128,14 +1132,14 @@ static int sdw_config_stream(struct device *dev, * comparison and allow the value to be set and stored in stream */ if (stream->params.rate && - stream->params.rate != stream_config->frame_rate) { - dev_err(dev, "rate not matching, stream:%s", stream->name); + stream->params.rate != stream_config->frame_rate) { + dev_err(dev, "rate not matching, stream:%s\n", stream->name); return -EINVAL; } if (stream->params.bps && - stream->params.bps != stream_config->bps) { - dev_err(dev, "bps not matching, stream:%s", stream->name); + stream->params.bps != stream_config->bps) { + dev_err(dev, "bps not matching, stream:%s\n", stream->name); return -EINVAL; } @@ -1151,20 +1155,21 @@ static int sdw_config_stream(struct device *dev, } static int sdw_is_valid_port_range(struct device *dev, - struct sdw_port_runtime *p_rt) + struct sdw_port_runtime *p_rt) { if (!SDW_VALID_PORT_RANGE(p_rt->num)) { dev_err(dev, - "SoundWire: Invalid port number :%d", p_rt->num); + "SoundWire: Invalid port number :%d\n", p_rt->num); return -EINVAL; } return 0; } -static struct sdw_port_runtime *sdw_port_alloc(struct device *dev, - struct sdw_port_config *port_config, - int port_index) +static struct sdw_port_runtime +*sdw_port_alloc(struct device *dev, + struct sdw_port_config *port_config, + int port_index) { struct sdw_port_runtime *p_rt; @@ -1179,9 +1184,9 @@ static struct sdw_port_runtime *sdw_port_alloc(struct device *dev, } static int sdw_master_port_config(struct sdw_bus *bus, - struct sdw_master_runtime *m_rt, - struct sdw_port_config *port_config, - unsigned int num_ports) + struct sdw_master_runtime *m_rt, + struct sdw_port_config *port_config, + unsigned int num_ports) { struct sdw_port_runtime *p_rt; int i; @@ -1204,9 +1209,9 @@ static int sdw_master_port_config(struct sdw_bus *bus, } static int sdw_slave_port_config(struct sdw_slave *slave, - struct sdw_slave_runtime *s_rt, - struct sdw_port_config *port_config, - unsigned int num_config) + struct sdw_slave_runtime *s_rt, + struct sdw_port_config *port_config, + unsigned int num_config) { struct sdw_port_runtime *p_rt; int i, ret; @@ -1248,10 +1253,10 @@ static int sdw_slave_port_config(struct sdw_slave *slave, * @stream: SoundWire stream */ int sdw_stream_add_master(struct sdw_bus *bus, - struct sdw_stream_config *stream_config, - struct sdw_port_config *port_config, - unsigned int num_ports, - struct sdw_stream_runtime *stream) + struct sdw_stream_config *stream_config, + struct sdw_port_config *port_config, + unsigned int num_ports, + struct sdw_stream_runtime *stream) { struct sdw_master_runtime *m_rt = NULL; int ret; @@ -1265,7 +1270,7 @@ int sdw_stream_add_master(struct sdw_bus *bus, */ if (!bus->multi_link && stream->m_rt_count > 0) { dev_err(bus->dev, - "Multilink not supported, link %d", bus->link_id); + "Multilink not supported, link %d\n", bus->link_id); ret = -EINVAL; goto unlock; } @@ -1273,8 +1278,8 @@ int sdw_stream_add_master(struct sdw_bus *bus, m_rt = sdw_alloc_master_rt(bus, stream_config, stream); if (!m_rt) { dev_err(bus->dev, - "Master runtime config failed for stream:%s", - stream->name); + "Master runtime config failed for stream:%s\n", + stream->name); ret = -ENOMEM; goto unlock; } @@ -1313,10 +1318,10 @@ 
EXPORT_SYMBOL(sdw_stream_add_master); * */ int sdw_stream_add_slave(struct sdw_slave *slave, - struct sdw_stream_config *stream_config, - struct sdw_port_config *port_config, - unsigned int num_ports, - struct sdw_stream_runtime *stream) + struct sdw_stream_config *stream_config, + struct sdw_port_config *port_config, + unsigned int num_ports, + struct sdw_stream_runtime *stream) { struct sdw_slave_runtime *s_rt; struct sdw_master_runtime *m_rt; @@ -1331,8 +1336,8 @@ int sdw_stream_add_slave(struct sdw_slave *slave, m_rt = sdw_alloc_master_rt(slave->bus, stream_config, stream); if (!m_rt) { dev_err(&slave->dev, - "alloc master runtime failed for stream:%s", - stream->name); + "alloc master runtime failed for stream:%s\n", + stream->name); ret = -ENOMEM; goto error; } @@ -1340,8 +1345,8 @@ int sdw_stream_add_slave(struct sdw_slave *slave, s_rt = sdw_alloc_slave_rt(slave, stream_config, stream); if (!s_rt) { dev_err(&slave->dev, - "Slave runtime config failed for stream:%s", - stream->name); + "Slave runtime config failed for stream:%s\n", + stream->name); ret = -ENOMEM; goto stream_error; } @@ -1385,8 +1390,8 @@ EXPORT_SYMBOL(sdw_stream_add_slave); * @port_num: Port number */ struct sdw_dpn_prop *sdw_get_slave_dpn_prop(struct sdw_slave *slave, - enum sdw_data_direction direction, - unsigned int port_num) + enum sdw_data_direction direction, + unsigned int port_num) { struct sdw_dpn_prop *dpn_prop; u8 num_ports; @@ -1470,7 +1475,7 @@ static int _sdw_prepare_stream(struct sdw_stream_runtime *stream) /* TODO: Support Asynchronous mode */ if ((prop->max_freq % stream->params.rate) != 0) { - dev_err(bus->dev, "Async mode not supported"); + dev_err(bus->dev, "Async mode not supported\n"); return -EINVAL; } @@ -1482,15 +1487,14 @@ static int _sdw_prepare_stream(struct sdw_stream_runtime *stream) /* Program params */ ret = sdw_program_params(bus); if (ret < 0) { - dev_err(bus->dev, "Program params failed: %d", ret); + dev_err(bus->dev, "Program params failed: %d\n", ret); goto restore_params; } - } ret = do_bank_switch(stream); if (ret < 0) { - dev_err(bus->dev, "Bank switch failed: %d", ret); + dev_err(bus->dev, "Bank switch failed: %d\n", ret); goto restore_params; } @@ -1500,8 +1504,8 @@ static int _sdw_prepare_stream(struct sdw_stream_runtime *stream) /* Prepare port(s) on the new clock configuration */ ret = sdw_prep_deprep_ports(m_rt, true); if (ret < 0) { - dev_err(bus->dev, "Prepare port(s) failed ret = %d", - ret); + dev_err(bus->dev, "Prepare port(s) failed ret = %d\n", + ret); return ret; } } @@ -1527,7 +1531,7 @@ int sdw_prepare_stream(struct sdw_stream_runtime *stream) int ret = 0; if (!stream) { - pr_err("SoundWire: Handle not found for stream"); + pr_err("SoundWire: Handle not found for stream\n"); return -EINVAL; } @@ -1535,7 +1539,7 @@ int sdw_prepare_stream(struct sdw_stream_runtime *stream) ret = _sdw_prepare_stream(stream); if (ret < 0) - pr_err("Prepare for stream:%s failed: %d", stream->name, ret); + pr_err("Prepare for stream:%s failed: %d\n", stream->name, ret); sdw_release_bus_lock(stream); return ret; @@ -1555,21 +1559,22 @@ static int _sdw_enable_stream(struct sdw_stream_runtime *stream) /* Program params */ ret = sdw_program_params(bus); if (ret < 0) { - dev_err(bus->dev, "Program params failed: %d", ret); + dev_err(bus->dev, "Program params failed: %d\n", ret); return ret; } /* Enable port(s) */ ret = sdw_enable_disable_ports(m_rt, true); if (ret < 0) { - dev_err(bus->dev, "Enable port(s) failed ret: %d", ret); + dev_err(bus->dev, + "Enable port(s) failed ret: %d\n", 
ret); return ret; } } ret = do_bank_switch(stream); if (ret < 0) { - dev_err(bus->dev, "Bank switch failed: %d", ret); + dev_err(bus->dev, "Bank switch failed: %d\n", ret); return ret; } @@ -1589,7 +1594,7 @@ int sdw_enable_stream(struct sdw_stream_runtime *stream) int ret = 0; if (!stream) { - pr_err("SoundWire: Handle not found for stream"); + pr_err("SoundWire: Handle not found for stream\n"); return -EINVAL; } @@ -1597,7 +1602,7 @@ int sdw_enable_stream(struct sdw_stream_runtime *stream) ret = _sdw_enable_stream(stream); if (ret < 0) - pr_err("Enable for stream:%s failed: %d", stream->name, ret); + pr_err("Enable for stream:%s failed: %d\n", stream->name, ret); sdw_release_bus_lock(stream); return ret; @@ -1615,7 +1620,7 @@ static int _sdw_disable_stream(struct sdw_stream_runtime *stream) /* Disable port(s) */ ret = sdw_enable_disable_ports(m_rt, false); if (ret < 0) { - dev_err(bus->dev, "Disable port(s) failed: %d", ret); + dev_err(bus->dev, "Disable port(s) failed: %d\n", ret); return ret; } } @@ -1626,7 +1631,7 @@ static int _sdw_disable_stream(struct sdw_stream_runtime *stream) /* Program params */ ret = sdw_program_params(bus); if (ret < 0) { - dev_err(bus->dev, "Program params failed: %d", ret); + dev_err(bus->dev, "Program params failed: %d\n", ret); return ret; } } @@ -1646,7 +1651,7 @@ int sdw_disable_stream(struct sdw_stream_runtime *stream) int ret = 0; if (!stream) { - pr_err("SoundWire: Handle not found for stream"); + pr_err("SoundWire: Handle not found for stream\n"); return -EINVAL; } @@ -1654,7 +1659,7 @@ int sdw_disable_stream(struct sdw_stream_runtime *stream) ret = _sdw_disable_stream(stream); if (ret < 0) - pr_err("Disable for stream:%s failed: %d", stream->name, ret); + pr_err("Disable for stream:%s failed: %d\n", stream->name, ret); sdw_release_bus_lock(stream); return ret; @@ -1672,7 +1677,8 @@ static int _sdw_deprepare_stream(struct sdw_stream_runtime *stream) /* De-prepare port(s) */ ret = sdw_prep_deprep_ports(m_rt, false); if (ret < 0) { - dev_err(bus->dev, "De-prepare port(s) failed: %d", ret); + dev_err(bus->dev, + "De-prepare port(s) failed: %d\n", ret); return ret; } @@ -1683,10 +1689,9 @@ static int _sdw_deprepare_stream(struct sdw_stream_runtime *stream) /* Program params */ ret = sdw_program_params(bus); if (ret < 0) { - dev_err(bus->dev, "Program params failed: %d", ret); + dev_err(bus->dev, "Program params failed: %d\n", ret); return ret; } - } stream->state = SDW_STREAM_DEPREPARED; @@ -1705,14 +1710,14 @@ int sdw_deprepare_stream(struct sdw_stream_runtime *stream) int ret = 0; if (!stream) { - pr_err("SoundWire: Handle not found for stream"); + pr_err("SoundWire: Handle not found for stream\n"); return -EINVAL; } sdw_acquire_bus_lock(stream); ret = _sdw_deprepare_stream(stream); if (ret < 0) - pr_err("De-prepare for stream:%d failed: %d", ret, ret); + pr_err("De-prepare for stream:%d failed: %d\n", ret, ret); sdw_release_bus_lock(stream); return ret; diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile index f2f0de27252b..833bdee3cec7 100644 --- a/drivers/thunderbolt/Makefile +++ b/drivers/thunderbolt/Makefile @@ -1,3 +1,3 @@ obj-${CONFIG_THUNDERBOLT} := thunderbolt.o -thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel_pci.o eeprom.o -thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o +thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o +thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o diff --git a/drivers/thunderbolt/cap.c 
b/drivers/thunderbolt/cap.c index 9553305c63ea..8bf8e031f0bc 100644 --- a/drivers/thunderbolt/cap.c +++ b/drivers/thunderbolt/cap.c @@ -13,6 +13,7 @@ #define CAP_OFFSET_MAX 0xff #define VSE_CAP_OFFSET_MAX 0xffff +#define TMU_ACCESS_EN BIT(20) struct tb_cap_any { union { @@ -22,28 +23,53 @@ struct tb_cap_any { }; } __packed; -/** - * tb_port_find_cap() - Find port capability - * @port: Port to find the capability for - * @cap: Capability to look - * - * Returns offset to start of capability or %-ENOENT if no such - * capability was found. Negative errno is returned if there was an - * error. - */ -int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap) +static int tb_port_enable_tmu(struct tb_port *port, bool enable) { - u32 offset; + struct tb_switch *sw = port->sw; + u32 value, offset; + int ret; /* - * DP out adapters claim to implement TMU capability but in - * reality they do not so we hard code the adapter specific - * capability offset here. + * Legacy devices need to have TMU access enabled before port + * space can be fully accessed. */ - if (port->config.type == TB_TYPE_DP_HDMI_OUT) - offset = 0x39; + if (tb_switch_is_lr(sw)) + offset = 0x26; + else if (tb_switch_is_er(sw)) + offset = 0x2a; else - offset = 0x1; + return 0; + + ret = tb_sw_read(sw, &value, TB_CFG_SWITCH, offset, 1); + if (ret) + return ret; + + if (enable) + value |= TMU_ACCESS_EN; + else + value &= ~TMU_ACCESS_EN; + + return tb_sw_write(sw, &value, TB_CFG_SWITCH, offset, 1); +} + +static void tb_port_dummy_read(struct tb_port *port) +{ + /* + * When reading from next capability pointer location in port + * config space the read data is not cleared on LR. To avoid + * reading stale data on next read perform one dummy read after + * port capabilities are walked. + */ + if (tb_switch_is_lr(port->sw)) { + u32 dummy; + + tb_port_read(port, &dummy, TB_CFG_PORT, 0, 1); + } +} + +static int __tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap) +{ + u32 offset = 1; do { struct tb_cap_any header; @@ -62,6 +88,31 @@ int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap) return -ENOENT; } +/** + * tb_port_find_cap() - Find port capability + * @port: Port to find the capability for + * @cap: Capability to look + * + * Returns offset to start of capability or %-ENOENT if no such + * capability was found. Negative errno is returned if there was an + * error. 
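The __tb_port_find_cap() loop factored out above is a classic capability-list walk (the same shape as PCI capability chains): start at a known offset, compare the capability ID, and follow the "next" pointer until it is zero. A toy, self-contained version over an array standing in for port config space; the layout and values are invented purely for illustration:

#include <stdio.h>

/* Each entry: low byte = capability ID, high byte = offset of next entry. */
static const unsigned short cfg[8] = {
	[1] = 0x0301,	/* cap 0x01 at offset 1, next at 3 */
	[3] = 0x0504,	/* cap 0x04 at offset 3, next at 5 */
	[5] = 0x0002,	/* cap 0x02 at offset 5, end of chain */
};

static int find_cap(int cap)
{
	int offset = 1;

	do {
		int id   = cfg[offset] & 0xff;
		int next = cfg[offset] >> 8;

		if (id == cap)
			return offset;
		offset = next;
	} while (offset);

	return -1;	/* kernel: -ENOENT */
}

int main(void)
{
	printf("cap 0x04 at offset %d\n", find_cap(0x04));	/* 3 */
	printf("cap 0x07 at offset %d\n", find_cap(0x07));	/* -1 */
	return 0;
}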
+ */ +int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap) +{ + int ret; + + ret = tb_port_enable_tmu(port, true); + if (ret) + return ret; + + ret = __tb_port_find_cap(port, cap); + + tb_port_dummy_read(port); + tb_port_enable_tmu(port, false); + + return ret; +} + static int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap) { int offset = sw->config.first_cap_offset; diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c index 73b386de4d15..2427d73be731 100644 --- a/drivers/thunderbolt/ctl.c +++ b/drivers/thunderbolt/ctl.c @@ -720,7 +720,7 @@ int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port, .port = port, .error = error, }; - tb_ctl_info(ctl, "resetting error on %llx:%x.\n", route, port); + tb_ctl_dbg(ctl, "resetting error on %llx:%x.\n", route, port); return tb_ctl_tx(ctl, &pkg, sizeof(pkg), TB_CFG_PKG_ERROR); } diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c index e3fc920af682..f1c10378fa3e 100644 --- a/drivers/thunderbolt/icm.c +++ b/drivers/thunderbolt/icm.c @@ -42,7 +42,6 @@ #define ICM_TIMEOUT 5000 /* ms */ #define ICM_APPROVE_TIMEOUT 10000 /* ms */ #define ICM_MAX_LINK 4 -#define ICM_MAX_DEPTH 6 /** * struct icm - Internal connection manager private data @@ -469,10 +468,15 @@ static void add_switch(struct tb_switch *parent_sw, u64 route, pm_runtime_get_sync(&parent_sw->dev); sw = tb_switch_alloc(parent_sw->tb, &parent_sw->dev, route); - if (!sw) + if (IS_ERR(sw)) goto out; sw->uuid = kmemdup(uuid, sizeof(*uuid), GFP_KERNEL); + if (!sw->uuid) { + tb_sw_warn(sw, "cannot allocate memory for switch\n"); + tb_switch_put(sw); + goto out; + } sw->connection_id = connection_id; sw->connection_key = connection_key; sw->link = link; @@ -709,7 +713,7 @@ icm_fr_device_disconnected(struct tb *tb, const struct icm_pkg_header *hdr) depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >> ICM_LINK_INFO_DEPTH_SHIFT; - if (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) { + if (link > ICM_MAX_LINK || depth > TB_SWITCH_MAX_DEPTH) { tb_warn(tb, "invalid topology %u.%u, ignoring\n", link, depth); return; } @@ -739,7 +743,7 @@ icm_fr_xdomain_connected(struct tb *tb, const struct icm_pkg_header *hdr) depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >> ICM_LINK_INFO_DEPTH_SHIFT; - if (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) { + if (link > ICM_MAX_LINK || depth > TB_SWITCH_MAX_DEPTH) { tb_warn(tb, "invalid topology %u.%u, ignoring\n", link, depth); return; } @@ -793,9 +797,11 @@ icm_fr_xdomain_connected(struct tb *tb, const struct icm_pkg_header *hdr) * connected another host to the same port, remove the switch * first. */ - sw = get_switch_at_route(tb->root_switch, route); - if (sw) + sw = tb_switch_find_by_route(tb, route); + if (sw) { remove_switch(sw); + tb_switch_put(sw); + } sw = tb_switch_find_by_link_depth(tb, link, depth); if (!sw) { @@ -1138,9 +1144,11 @@ icm_tr_xdomain_connected(struct tb *tb, const struct icm_pkg_header *hdr) * connected another host to the same port, remove the switch * first. 
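Several hunks in icm.c follow from tb_switch_alloc() switching from returning NULL to returning ERR_PTR()-encoded errno values, so callers now test with IS_ERR() and extract the code with PTR_ERR(). The kernel packs small negative errnos into the top of the pointer range; a minimal userspace re-implementation of that encoding, simplified and for illustration only:

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static void *alloc_switch(int fail)
{
	static int dummy;

	if (fail)
		return ERR_PTR(-ENOMEM);	/* error encoded in the pointer */
	return &dummy;
}

int main(void)
{
	void *sw = alloc_switch(1);

	if (IS_ERR(sw))
		printf("alloc failed: %ld\n", PTR_ERR(sw));	/* -12 */
	return 0;
}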
*/ - sw = get_switch_at_route(tb->root_switch, route); - if (sw) + sw = tb_switch_find_by_route(tb, route); + if (sw) { remove_switch(sw); + tb_switch_put(sw); + } sw = tb_switch_find_by_route(tb, get_parent_route(route)); if (!sw) { @@ -1191,6 +1199,8 @@ static struct pci_dev *get_upstream_port(struct pci_dev *pdev) case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE: case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE: case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE: + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE: + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE: return parent; } @@ -1560,7 +1570,7 @@ static int icm_firmware_start(struct tb *tb, struct tb_nhi *nhi) if (val & REG_FW_STS_ICM_EN) return 0; - dev_info(&nhi->pdev->dev, "starting ICM firmware\n"); + dev_dbg(&nhi->pdev->dev, "starting ICM firmware\n"); ret = icm_firmware_reset(tb, nhi); if (ret) @@ -1753,16 +1763,10 @@ static void icm_unplug_children(struct tb_switch *sw) for (i = 1; i <= sw->config.max_port_number; i++) { struct tb_port *port = &sw->ports[i]; - if (tb_is_upstream_port(port)) - continue; - if (port->xdomain) { + if (port->xdomain) port->xdomain->is_unplugged = true; - continue; - } - if (!port->remote) - continue; - - icm_unplug_children(port->remote->sw); + else if (tb_port_has_remote(port)) + icm_unplug_children(port->remote->sw); } } @@ -1773,23 +1777,16 @@ static void icm_free_unplugged_children(struct tb_switch *sw) for (i = 1; i <= sw->config.max_port_number; i++) { struct tb_port *port = &sw->ports[i]; - if (tb_is_upstream_port(port)) - continue; - if (port->xdomain && port->xdomain->is_unplugged) { tb_xdomain_remove(port->xdomain); port->xdomain = NULL; - continue; - } - - if (!port->remote) - continue; - - if (port->remote->sw->is_unplugged) { - tb_switch_remove(port->remote->sw); - port->remote = NULL; - } else { - icm_free_unplugged_children(port->remote->sw); + } else if (tb_port_has_remote(port)) { + if (port->remote->sw->is_unplugged) { + tb_switch_remove(port->remote->sw); + port->remote = NULL; + } else { + icm_free_unplugged_children(port->remote->sw); + } } } } @@ -1853,8 +1850,8 @@ static int icm_start(struct tb *tb) tb->root_switch = tb_switch_alloc_safe_mode(tb, &tb->dev, 0); else tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0); - if (!tb->root_switch) - return -ENODEV; + if (IS_ERR(tb->root_switch)) + return PTR_ERR(tb->root_switch); /* * NVM upgrade has not been tested on Apple systems and they diff --git a/drivers/thunderbolt/lc.c b/drivers/thunderbolt/lc.c new file mode 100644 index 000000000000..ae1e92611c3e --- /dev/null +++ b/drivers/thunderbolt/lc.c @@ -0,0 +1,179 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Thunderbolt link controller support + * + * Copyright (C) 2019, Intel Corporation + * Author: Mika Westerberg <mika.westerberg@linux.intel.com> + */ + +#include "tb.h" + +/** + * tb_lc_read_uuid() - Read switch UUID from link controller common register + * @sw: Switch whose UUID is read + * @uuid: UUID is placed here + */ +int tb_lc_read_uuid(struct tb_switch *sw, u32 *uuid) +{ + if (!sw->cap_lc) + return -EINVAL; + return tb_sw_read(sw, uuid, TB_CFG_SWITCH, sw->cap_lc + TB_LC_FUSE, 4); +} + +static int read_lc_desc(struct tb_switch *sw, u32 *desc) +{ + if (!sw->cap_lc) + return -EINVAL; + return tb_sw_read(sw, desc, TB_CFG_SWITCH, sw->cap_lc + TB_LC_DESC, 1); +} + +static int find_port_lc_cap(struct tb_port *port) +{ + struct tb_switch *sw = port->sw; + int start, phys, ret, size; + u32 desc; + + ret = read_lc_desc(sw, &desc); + if (ret) + return ret; + + /* Start of port 
LC registers */ + start = (desc & TB_LC_DESC_SIZE_MASK) >> TB_LC_DESC_SIZE_SHIFT; + size = (desc & TB_LC_DESC_PORT_SIZE_MASK) >> TB_LC_DESC_PORT_SIZE_SHIFT; + phys = tb_phy_port_from_link(port->port); + + return sw->cap_lc + start + phys * size; +} + +static int tb_lc_configure_lane(struct tb_port *port, bool configure) +{ + bool upstream = tb_is_upstream_port(port); + struct tb_switch *sw = port->sw; + u32 ctrl, lane; + int cap, ret; + + if (sw->generation < 2) + return 0; + + cap = find_port_lc_cap(port); + if (cap < 0) + return cap; + + ret = tb_sw_read(sw, &ctrl, TB_CFG_SWITCH, cap + TB_LC_SX_CTRL, 1); + if (ret) + return ret; + + /* Resolve correct lane */ + if (port->port % 2) + lane = TB_LC_SX_CTRL_L1C; + else + lane = TB_LC_SX_CTRL_L2C; + + if (configure) { + ctrl |= lane; + if (upstream) + ctrl |= TB_LC_SX_CTRL_UPSTREAM; + } else { + ctrl &= ~lane; + if (upstream) + ctrl &= ~TB_LC_SX_CTRL_UPSTREAM; + } + + return tb_sw_write(sw, &ctrl, TB_CFG_SWITCH, cap + TB_LC_SX_CTRL, 1); +} + +/** + * tb_lc_configure_link() - Let LC know about configured link + * @sw: Switch that is being added + * + * Informs LC of both parent switch and @sw that there is established + * link between the two. + */ +int tb_lc_configure_link(struct tb_switch *sw) +{ + struct tb_port *up, *down; + int ret; + + if (!sw->config.enabled || !tb_route(sw)) + return 0; + + up = tb_upstream_port(sw); + down = tb_port_at(tb_route(sw), tb_to_switch(sw->dev.parent)); + + /* Configure parent link toward this switch */ + ret = tb_lc_configure_lane(down, true); + if (ret) + return ret; + + /* Configure upstream link from this switch to the parent */ + ret = tb_lc_configure_lane(up, true); + if (ret) + tb_lc_configure_lane(down, false); + + return ret; +} + +/** + * tb_lc_unconfigure_link() - Let LC know about unconfigured link + * @sw: Switch to unconfigure + * + * Informs LC of both parent switch and @sw that the link between the + * two does not exist anymore. + */ +void tb_lc_unconfigure_link(struct tb_switch *sw) +{ + struct tb_port *up, *down; + + if (sw->is_unplugged || !sw->config.enabled || !tb_route(sw)) + return; + + up = tb_upstream_port(sw); + down = tb_port_at(tb_route(sw), tb_to_switch(sw->dev.parent)); + + tb_lc_configure_lane(up, false); + tb_lc_configure_lane(down, false); +} + +/** + * tb_lc_set_sleep() - Inform LC that the switch is going to sleep + * @sw: Switch to set sleep + * + * Let the switch link controllers know that the switch is going to + * sleep. + */ +int tb_lc_set_sleep(struct tb_switch *sw) +{ + int start, size, nlc, ret, i; + u32 desc; + + if (sw->generation < 2) + return 0; + + ret = read_lc_desc(sw, &desc); + if (ret) + return ret; + + /* Figure out number of link controllers */ + nlc = desc & TB_LC_DESC_NLC_MASK; + start = (desc & TB_LC_DESC_SIZE_MASK) >> TB_LC_DESC_SIZE_SHIFT; + size = (desc & TB_LC_DESC_PORT_SIZE_MASK) >> TB_LC_DESC_PORT_SIZE_SHIFT; + + /* For each link controller set sleep bit */ + for (i = 0; i < nlc; i++) { + unsigned int offset = sw->cap_lc + start + i * size; + u32 ctrl; + + ret = tb_sw_read(sw, &ctrl, TB_CFG_SWITCH, + offset + TB_LC_SX_CTRL, 1); + if (ret) + return ret; + + ctrl |= TB_LC_SX_CTRL_SLP; + ret = tb_sw_write(sw, &ctrl, TB_CFG_SWITCH, + offset + TB_LC_SX_CTRL, 1); + if (ret) + return ret; + } + + return 0; +} diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c index 9aa44f9762a3..cac1ead5e302 100644 --- a/drivers/thunderbolt/nhi.c +++ b/drivers/thunderbolt/nhi.c @@ -27,8 +27,7 @@ * use this ring for anything else. 
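The new lc.c derives everything from a single descriptor register: the number of link controllers, where their register blocks start relative to the LC capability, and the per-controller stride, then computes each block's location as base + start + i * size. A small sketch of that decode; the mask and shift values below are placeholders, not the real TB_LC_DESC_* definitions:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical field layout of the LC descriptor word. */
#define DESC_NLC_MASK        0x0000000f
#define DESC_START_MASK      0x0000ff00
#define DESC_START_SHIFT     8
#define DESC_PORT_SIZE_MASK  0x00ff0000
#define DESC_PORT_SIZE_SHIFT 16

int main(void)
{
	uint32_t desc = 0x00200802;	/* 2 LCs, blocks start at +8, stride 0x20 */
	unsigned int cap_lc = 0x100;	/* assumed capability offset */

	unsigned int nlc   = desc & DESC_NLC_MASK;
	unsigned int start = (desc & DESC_START_MASK) >> DESC_START_SHIFT;
	unsigned int size  = (desc & DESC_PORT_SIZE_MASK) >> DESC_PORT_SIZE_SHIFT;

	for (unsigned int i = 0; i < nlc; i++)
		printf("LC%u registers at %#x\n", i, cap_lc + start + i * size);
	return 0;
}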
*/ #define RING_E2E_UNUSED_HOPID 2 -/* HopIDs 0-7 are reserved by the Thunderbolt protocol */ -#define RING_FIRST_USABLE_HOPID 8 +#define RING_FIRST_USABLE_HOPID TB_PATH_MIN_HOPID /* * Minimal number of vectors when we use MSI-X. Two for control channel diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c index a11956522bac..afe5f8391ebf 100644 --- a/drivers/thunderbolt/path.c +++ b/drivers/thunderbolt/path.c @@ -1,62 +1,330 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Thunderbolt Cactus Ridge driver - path/tunnel functionality + * Thunderbolt driver - path/tunnel functionality * * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> + * Copyright (C) 2019, Intel Corporation */ #include <linux/slab.h> #include <linux/errno.h> +#include <linux/delay.h> +#include <linux/ktime.h> #include "tb.h" - -static void tb_dump_hop(struct tb_port *port, struct tb_regs_hop *hop) +static void tb_dump_hop(const struct tb_path_hop *hop, const struct tb_regs_hop *regs) { - tb_port_dbg(port, " Hop through port %d to hop %d (%s)\n", - hop->out_port, hop->next_hop, - hop->enable ? "enabled" : "disabled"); + const struct tb_port *port = hop->in_port; + + tb_port_dbg(port, " In HopID: %d => Out port: %d Out HopID: %d\n", + hop->in_hop_index, regs->out_port, regs->next_hop); tb_port_dbg(port, " Weight: %d Priority: %d Credits: %d Drop: %d\n", - hop->weight, hop->priority, - hop->initial_credits, hop->drop_packages); + regs->weight, regs->priority, + regs->initial_credits, regs->drop_packages); tb_port_dbg(port, " Counter enabled: %d Counter index: %d\n", - hop->counter_enable, hop->counter); + regs->counter_enable, regs->counter); tb_port_dbg(port, " Flow Control (In/Eg): %d/%d Shared Buffer (In/Eg): %d/%d\n", - hop->ingress_fc, hop->egress_fc, - hop->ingress_shared_buffer, hop->egress_shared_buffer); + regs->ingress_fc, regs->egress_fc, + regs->ingress_shared_buffer, regs->egress_shared_buffer); tb_port_dbg(port, " Unknown1: %#x Unknown2: %#x Unknown3: %#x\n", - hop->unknown1, hop->unknown2, hop->unknown3); + regs->unknown1, regs->unknown2, regs->unknown3); +} + +static struct tb_port *tb_path_find_dst_port(struct tb_port *src, int src_hopid, + int dst_hopid) +{ + struct tb_port *port, *out_port = NULL; + struct tb_regs_hop hop; + struct tb_switch *sw; + int i, ret, hopid; + + hopid = src_hopid; + port = src; + + for (i = 0; port && i < TB_PATH_MAX_HOPS; i++) { + sw = port->sw; + + ret = tb_port_read(port, &hop, TB_CFG_HOPS, 2 * hopid, 2); + if (ret) { + tb_port_warn(port, "failed to read path at %d\n", hopid); + return NULL; + } + + if (!hop.enable) + return NULL; + + out_port = &sw->ports[hop.out_port]; + hopid = hop.next_hop; + port = out_port->remote; + } + + return out_port && hopid == dst_hopid ? out_port : NULL; +} + +static int tb_path_find_src_hopid(struct tb_port *src, + const struct tb_port *dst, int dst_hopid) +{ + struct tb_port *out; + int i; + + for (i = TB_PATH_MIN_HOPID; i <= src->config.max_in_hop_id; i++) { + out = tb_path_find_dst_port(src, i, dst_hopid); + if (out == dst) + return i; + } + + return 0; +} + +/** + * tb_path_discover() - Discover a path + * @src: First input port of a path + * @src_hopid: Starting HopID of a path (%-1 if don't care) + * @dst: Expected destination port of the path (%NULL if don't care) + * @dst_hopid: HopID to the @dst (%-1 if don't care) + * @last: Last port is filled here if not %NULL + * @name: Name of the path + * + * Follows a path starting from @src and @src_hopid to the last output + * port of the path. 
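tb_path_find_dst_port() above follows a routed path one hop entry at a time: each entry names the egress port and the HopID to look up at the far end of the link, and a disabled entry means the path ends there. A toy walk over a flattened hop table showing that chaining; this is an illustration of the mechanism only (in the driver each lookup happens on the next switch reached through out_port->remote, not in one table):

#include <stdio.h>
#include <stdbool.h>

struct hop_entry {
	bool enable;
	int out_port;
	int next_hop;	/* HopID to follow at the next switch */
};

int main(void)
{
	/* Hop table indexed by HopID; entry 8 -> 9 -> 10, then the path ends. */
	struct hop_entry hops[16] = {
		[8]  = { true, 3, 9 },
		[9]  = { true, 5, 10 },
		[10] = { false, 0, 0 },
	};
	int hopid = 8;

	while (hops[hopid].enable) {
		printf("HopID %d -> out port %d, next HopID %d\n",
		       hopid, hops[hopid].out_port, hops[hopid].next_hop);
		hopid = hops[hopid].next_hop;
	}
	return 0;
}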
Allocates HopIDs for the visited ports. Call + * tb_path_free() to release the path and allocated HopIDs when the path + * is not needed anymore. + * + * Note function discovers also incomplete paths so caller should check + * that the @dst port is the expected one. If it is not, the path can be + * cleaned up by calling tb_path_deactivate() before tb_path_free(). + * + * Return: Discovered path on success, %NULL in case of failure + */ +struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid, + struct tb_port *dst, int dst_hopid, + struct tb_port **last, const char *name) +{ + struct tb_port *out_port; + struct tb_regs_hop hop; + struct tb_path *path; + struct tb_switch *sw; + struct tb_port *p; + size_t num_hops; + int ret, i, h; + + if (src_hopid < 0 && dst) { + /* + * For incomplete paths the intermediate HopID can be + * different from the one used by the protocol adapter + * so in that case find a path that ends on @dst with + * matching @dst_hopid. That should give us the correct + * HopID for the @src. + */ + src_hopid = tb_path_find_src_hopid(src, dst, dst_hopid); + if (!src_hopid) + return NULL; + } + + p = src; + h = src_hopid; + num_hops = 0; + + for (i = 0; p && i < TB_PATH_MAX_HOPS; i++) { + sw = p->sw; + + ret = tb_port_read(p, &hop, TB_CFG_HOPS, 2 * h, 2); + if (ret) { + tb_port_warn(p, "failed to read path at %d\n", h); + return NULL; + } + + /* If the hop is not enabled we got an incomplete path */ + if (!hop.enable) + break; + + out_port = &sw->ports[hop.out_port]; + if (last) + *last = out_port; + + h = hop.next_hop; + p = out_port->remote; + num_hops++; + } + + path = kzalloc(sizeof(*path), GFP_KERNEL); + if (!path) + return NULL; + + path->name = name; + path->tb = src->sw->tb; + path->path_length = num_hops; + path->activated = true; + + path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL); + if (!path->hops) { + kfree(path); + return NULL; + } + + p = src; + h = src_hopid; + + for (i = 0; i < num_hops; i++) { + int next_hop; + + sw = p->sw; + + ret = tb_port_read(p, &hop, TB_CFG_HOPS, 2 * h, 2); + if (ret) { + tb_port_warn(p, "failed to read path at %d\n", h); + goto err; + } + + if (tb_port_alloc_in_hopid(p, h, h) < 0) + goto err; + + out_port = &sw->ports[hop.out_port]; + next_hop = hop.next_hop; + + if (tb_port_alloc_out_hopid(out_port, next_hop, next_hop) < 0) { + tb_port_release_in_hopid(p, h); + goto err; + } + + path->hops[i].in_port = p; + path->hops[i].in_hop_index = h; + path->hops[i].in_counter_index = -1; + path->hops[i].out_port = out_port; + path->hops[i].next_hop_index = next_hop; + + h = next_hop; + p = out_port->remote; + } + + return path; + +err: + tb_port_warn(src, "failed to discover path starting at HopID %d\n", + src_hopid); + tb_path_free(path); + return NULL; } /** - * tb_path_alloc() - allocate a thunderbolt path + * tb_path_alloc() - allocate a thunderbolt path between two ports + * @tb: Domain pointer + * @src: Source port of the path + * @src_hopid: HopID used for the first ingress port in the path + * @dst: Destination port of the path + * @dst_hopid: HopID used for the last egress port in the path + * @link_nr: Preferred link if there are dual links on the path + * @name: Name of the path + * + * Creates path between two ports starting with given @src_hopid. Reserves + * HopIDs for each port (they can be different from @src_hopid depending on + * how many HopIDs each port already have reserved). If there are dual + * links on the path, prioritizes using @link_nr. 
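tb_path_alloc() below sizes its hop array from the route strings alone: a Thunderbolt route packs one downstream port number per byte, so a switch's depth is the number of significant bytes in its route, and the hop count between two switches on the same branch is the depth difference plus one for the source adapter. A quick check of that arithmetic, assuming the one-byte-per-hop route encoding that tb_route_length() operates on:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Depth of a switch = number of significant bytes in its 64-bit route. */
static int route_length(uint64_t route)
{
	int len = 0;

	while (route) {
		len++;
		route >>= 8;
	}
	return len;
}

int main(void)
{
	uint64_t src_route = 0x0000;	/* host router, depth 0 */
	uint64_t dst_route = 0x0503;	/* port 3, then port 5: depth 2 */
	int num_hops = abs(route_length(src_route) -
			   route_length(dst_route)) + 1;

	printf("path needs %d hop entries\n", num_hops);	/* 3 */
	return 0;
}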
* * Return: Returns a tb_path on success or NULL on failure. */ -struct tb_path *tb_path_alloc(struct tb *tb, int num_hops) +struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid, + struct tb_port *dst, int dst_hopid, int link_nr, + const char *name) { - struct tb_path *path = kzalloc(sizeof(*path), GFP_KERNEL); + struct tb_port *in_port, *out_port; + int in_hopid, out_hopid; + struct tb_path *path; + size_t num_hops; + int i, ret; + + path = kzalloc(sizeof(*path), GFP_KERNEL); if (!path) return NULL; + + /* + * Number of hops on a path is the distance between the two + * switches plus the source adapter port. + */ + num_hops = abs(tb_route_length(tb_route(src->sw)) - + tb_route_length(tb_route(dst->sw))) + 1; + path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL); if (!path->hops) { kfree(path); return NULL; } + + in_hopid = src_hopid; + out_port = NULL; + + for (i = 0; i < num_hops; i++) { + in_port = tb_next_port_on_path(src, dst, out_port); + if (!in_port) + goto err; + + if (in_port->dual_link_port && in_port->link_nr != link_nr) + in_port = in_port->dual_link_port; + + ret = tb_port_alloc_in_hopid(in_port, in_hopid, in_hopid); + if (ret < 0) + goto err; + in_hopid = ret; + + out_port = tb_next_port_on_path(src, dst, in_port); + if (!out_port) + goto err; + + if (out_port->dual_link_port && out_port->link_nr != link_nr) + out_port = out_port->dual_link_port; + + if (i == num_hops - 1) + ret = tb_port_alloc_out_hopid(out_port, dst_hopid, + dst_hopid); + else + ret = tb_port_alloc_out_hopid(out_port, -1, -1); + + if (ret < 0) + goto err; + out_hopid = ret; + + path->hops[i].in_hop_index = in_hopid; + path->hops[i].in_port = in_port; + path->hops[i].in_counter_index = -1; + path->hops[i].out_port = out_port; + path->hops[i].next_hop_index = out_hopid; + + in_hopid = out_hopid; + } + path->tb = tb; path->path_length = num_hops; + path->name = name; + return path; + +err: + tb_path_free(path); + return NULL; } /** - * tb_path_free() - free a deactivated path + * tb_path_free() - free a path + * @path: Path to free + * + * Frees a path. The path does not need to be deactivated. 
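Both tb_path_alloc() and tb_path_discover() unwind through a single err label: whatever HopIDs were reserved so far are released by tb_path_free(), which tolerates a partially filled hop array because untouched entries still have NULL ports. The same centralized-cleanup shape in a standalone sketch, where the resources are plain heap buffers and the failure is simulated, purely for illustration:

#include <stdio.h>
#include <stdlib.h>

struct path {
	int n;
	char **bufs;
};

static void path_free(struct path *p)
{
	/* Safe on partially built paths: unused slots are still NULL. */
	if (p->bufs) {
		for (int i = 0; i < p->n; i++)
			free(p->bufs[i]);
		free(p->bufs);
	}
	free(p);
}

static struct path *path_alloc(int n, int fail_at)
{
	struct path *p = calloc(1, sizeof(*p));

	if (!p)
		return NULL;
	p->n = n;
	p->bufs = calloc(n, sizeof(*p->bufs));
	if (!p->bufs)
		goto err;

	for (int i = 0; i < n; i++) {
		if (i == fail_at)	/* simulated allocation failure */
			goto err;
		p->bufs[i] = malloc(16);
		if (!p->bufs[i])
			goto err;
	}
	return p;

err:
	path_free(p);
	return NULL;
}

int main(void)
{
	printf("%p\n", (void *)path_alloc(4, 2));	/* NULL, fully unwound */
	return 0;
}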
*/ void tb_path_free(struct tb_path *path) { - if (path->activated) { - tb_WARN(path->tb, "trying to free an activated path\n") - return; + int i; + + for (i = 0; i < path->path_length; i++) { + const struct tb_path_hop *hop = &path->hops[i]; + + if (hop->in_port) + tb_port_release_in_hopid(hop->in_port, + hop->in_hop_index); + if (hop->out_port) + tb_port_release_out_hopid(hop->out_port, + hop->next_hop_index); } + kfree(path->hops); kfree(path); } @@ -74,14 +342,65 @@ static void __tb_path_deallocate_nfc(struct tb_path *path, int first_hop) } } +static int __tb_path_deactivate_hop(struct tb_port *port, int hop_index, + bool clear_fc) +{ + struct tb_regs_hop hop; + ktime_t timeout; + int ret; + + /* Disable the path */ + ret = tb_port_read(port, &hop, TB_CFG_HOPS, 2 * hop_index, 2); + if (ret) + return ret; + + /* Already disabled */ + if (!hop.enable) + return 0; + + hop.enable = 0; + + ret = tb_port_write(port, &hop, TB_CFG_HOPS, 2 * hop_index, 2); + if (ret) + return ret; + + /* Wait until it is drained */ + timeout = ktime_add_ms(ktime_get(), 500); + do { + ret = tb_port_read(port, &hop, TB_CFG_HOPS, 2 * hop_index, 2); + if (ret) + return ret; + + if (!hop.pending) { + if (clear_fc) { + /* Clear flow control */ + hop.ingress_fc = 0; + hop.egress_fc = 0; + hop.ingress_shared_buffer = 0; + hop.egress_shared_buffer = 0; + + return tb_port_write(port, &hop, TB_CFG_HOPS, + 2 * hop_index, 2); + } + + return 0; + } + + usleep_range(10, 20); + } while (ktime_before(ktime_get(), timeout)); + + return -ETIMEDOUT; +} + static void __tb_path_deactivate_hops(struct tb_path *path, int first_hop) { int i, res; - struct tb_regs_hop hop = { }; + for (i = first_hop; i < path->path_length; i++) { - res = tb_port_write(path->hops[i].in_port, &hop, TB_CFG_HOPS, - 2 * path->hops[i].in_hop_index, 2); - if (res) + res = __tb_path_deactivate_hop(path->hops[i].in_port, + path->hops[i].in_hop_index, + path->clear_fc); + if (res && res != -ENODEV) tb_port_warn(path->hops[i].in_port, "hop deactivation failed for hop %d, index %d\n", i, path->hops[i].in_hop_index); @@ -94,12 +413,12 @@ void tb_path_deactivate(struct tb_path *path) tb_WARN(path->tb, "trying to deactivate an inactive path\n"); return; } - tb_info(path->tb, - "deactivating path from %llx:%x to %llx:%x\n", - tb_route(path->hops[0].in_port->sw), - path->hops[0].in_port->port, - tb_route(path->hops[path->path_length - 1].out_port->sw), - path->hops[path->path_length - 1].out_port->port); + tb_dbg(path->tb, + "deactivating %s path from %llx:%x to %llx:%x\n", + path->name, tb_route(path->hops[0].in_port->sw), + path->hops[0].in_port->port, + tb_route(path->hops[path->path_length - 1].out_port->sw), + path->hops[path->path_length - 1].out_port->port); __tb_path_deactivate_hops(path, 0); __tb_path_deallocate_nfc(path, 0); path->activated = false; @@ -122,12 +441,12 @@ int tb_path_activate(struct tb_path *path) return -EINVAL; } - tb_info(path->tb, - "activating path from %llx:%x to %llx:%x\n", - tb_route(path->hops[0].in_port->sw), - path->hops[0].in_port->port, - tb_route(path->hops[path->path_length - 1].out_port->sw), - path->hops[path->path_length - 1].out_port->port); + tb_dbg(path->tb, + "activating %s path from %llx:%x to %llx:%x\n", + path->name, tb_route(path->hops[0].in_port->sw), + path->hops[0].in_port->port, + tb_route(path->hops[path->path_length - 1].out_port->sw), + path->hops[path->path_length - 1].out_port->port); /* Clear counters. 
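The new __tb_path_deactivate_hop() above is a disable-then-drain sequence: clear the enable bit, then poll the entry's pending flag with short sleeps until it clears or a 500 ms deadline expires. A userspace equivalent of that deadline loop using clock_gettime(); the check_drained() stub is hypothetical, standing in for the hop register read the kernel performs:

#include <stdio.h>
#include <stdbool.h>
#include <time.h>
#include <errno.h>

static bool check_drained(void)
{
	static int polls;

	return ++polls > 3;	/* stub: "drains" after a few polls */
}

static long long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

static int wait_drained(int timeout_ms)
{
	long long deadline = now_ms() + timeout_ms;

	do {
		if (check_drained())
			return 0;
		/* kernel: usleep_range(10, 20) */
		nanosleep(&(struct timespec){ .tv_nsec = 15000 }, NULL);
	} while (now_ms() < deadline);

	return -ETIMEDOUT;
}

int main(void)
{
	printf("wait_drained: %d\n", wait_drained(500));	/* 0 */
	return 0;
}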
*/ for (i = path->path_length - 1; i >= 0; i--) { @@ -153,30 +472,14 @@ int tb_path_activate(struct tb_path *path) for (i = path->path_length - 1; i >= 0; i--) { struct tb_regs_hop hop = { 0 }; - /* - * We do (currently) not tear down paths setup by the firmeware. - * If a firmware device is unplugged and plugged in again then - * it can happen that we reuse some of the hops from the (now - * defunct) firmeware path. This causes the hotplug operation to - * fail (the pci device does not show up). Clearing the hop - * before overwriting it fixes the problem. - * - * Should be removed once we discover and tear down firmeware - * paths. - */ - res = tb_port_write(path->hops[i].in_port, &hop, TB_CFG_HOPS, - 2 * path->hops[i].in_hop_index, 2); - if (res) { - __tb_path_deactivate_hops(path, i); - __tb_path_deallocate_nfc(path, 0); - goto err; - } + /* If it is left active deactivate it first */ + __tb_path_deactivate_hop(path->hops[i].in_port, + path->hops[i].in_hop_index, path->clear_fc); /* dword 0 */ hop.next_hop = path->hops[i].next_hop_index; hop.out_port = path->hops[i].out_port->port; - /* TODO: figure out why these are good values */ - hop.initial_credits = (i == path->path_length - 1) ? 16 : 7; + hop.initial_credits = path->hops[i].initial_credits; hop.unknown1 = 0; hop.enable = 1; @@ -198,9 +501,8 @@ int tb_path_activate(struct tb_path *path) & out_mask; hop.unknown3 = 0; - tb_port_info(path->hops[i].in_port, "Writing hop %d, index %d", - i, path->hops[i].in_hop_index); - tb_dump_hop(path->hops[i].in_port, &hop); + tb_port_dbg(path->hops[i].in_port, "Writing hop %d\n", i); + tb_dump_hop(&path->hops[i], &hop); res = tb_port_write(path->hops[i].in_port, &hop, TB_CFG_HOPS, 2 * path->hops[i].in_hop_index, 2); if (res) { @@ -210,7 +512,7 @@ int tb_path_activate(struct tb_path *path) } } path->activated = true; - tb_info(path->tb, "path activation complete\n"); + tb_dbg(path->tb, "path activation complete\n"); return 0; err: tb_WARN(path->tb, "path activation failed\n"); diff --git a/drivers/thunderbolt/property.c b/drivers/thunderbolt/property.c index b2f0d6386cee..d5b0cdb8f0b1 100644 --- a/drivers/thunderbolt/property.c +++ b/drivers/thunderbolt/property.c @@ -176,6 +176,10 @@ static struct tb_property_dir *__tb_property_parse_dir(const u32 *block, } else { dir->uuid = kmemdup(&block[dir_offset], sizeof(*dir->uuid), GFP_KERNEL); + if (!dir->uuid) { + tb_property_free_dir(dir); + return NULL; + } content_offset = dir_offset + 4; content_len = dir_len - 4; /* Length includes UUID */ } @@ -548,6 +552,11 @@ int tb_property_add_data(struct tb_property_dir *parent, const char *key, property->length = size / 4; property->value.data = kzalloc(size, GFP_KERNEL); + if (!property->value.data) { + kfree(property); + return -ENOMEM; + } + memcpy(property->value.data, buf, buflen); list_add_tail(&property->list, &parent->properties); @@ -578,7 +587,12 @@ int tb_property_add_text(struct tb_property_dir *parent, const char *key, return -ENOMEM; property->length = size / 4; - property->value.data = kzalloc(size, GFP_KERNEL); + property->value.text = kzalloc(size, GFP_KERNEL); + if (!property->value.text) { + kfree(property); + return -ENOMEM; + } + strcpy(property->value.text, text); list_add_tail(&property->list, &parent->properties); diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index cd96994dc094..c1b016574fb4 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -10,15 +10,13 @@ #include <linux/idr.h> #include <linux/nvmem-provider.h> #include 
<linux/pm_runtime.h> +#include <linux/sched/signal.h> #include <linux/sizes.h> #include <linux/slab.h> #include <linux/vmalloc.h> #include "tb.h" -/* Switch authorization from userspace is serialized by this lock */ -static DEFINE_MUTEX(switch_lock); - /* Switch NVM support */ #define NVM_DEVID 0x05 @@ -254,8 +252,8 @@ static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val, struct tb_switch *sw = priv; int ret = 0; - if (mutex_lock_interruptible(&switch_lock)) - return -ERESTARTSYS; + if (!mutex_trylock(&sw->tb->lock)) + return restart_syscall(); /* * Since writing the NVM image might require some special steps, @@ -275,7 +273,7 @@ static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val, memcpy(sw->nvm->buf + offset, val, bytes); unlock: - mutex_unlock(&switch_lock); + mutex_unlock(&sw->tb->lock); return ret; } @@ -364,10 +362,7 @@ static int tb_switch_nvm_add(struct tb_switch *sw) } nvm->non_active = nvm_dev; - mutex_lock(&switch_lock); sw->nvm = nvm; - mutex_unlock(&switch_lock); - return 0; err_nvm_active: @@ -384,10 +379,8 @@ static void tb_switch_nvm_remove(struct tb_switch *sw) { struct tb_switch_nvm *nvm; - mutex_lock(&switch_lock); nvm = sw->nvm; sw->nvm = NULL; - mutex_unlock(&switch_lock); if (!nvm) return; @@ -500,23 +493,22 @@ int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged) if (state < 0) return state; if (state == TB_PORT_DISABLED) { - tb_port_info(port, "is disabled (state: 0)\n"); + tb_port_dbg(port, "is disabled (state: 0)\n"); return 0; } if (state == TB_PORT_UNPLUGGED) { if (wait_if_unplugged) { /* used during resume */ - tb_port_info(port, - "is unplugged (state: 7), retrying...\n"); + tb_port_dbg(port, + "is unplugged (state: 7), retrying...\n"); msleep(100); continue; } - tb_port_info(port, "is unplugged (state: 7)\n"); + tb_port_dbg(port, "is unplugged (state: 7)\n"); return 0; } if (state == TB_PORT_UP) { - tb_port_info(port, - "is connected, link is up (state: 2)\n"); + tb_port_dbg(port, "is connected, link is up (state: 2)\n"); return 1; } @@ -524,9 +516,9 @@ int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged) * After plug-in the state is TB_PORT_CONNECTING. Give it some * time. */ - tb_port_info(port, - "is connected, link is not up (state: %d), retrying...\n", - state); + tb_port_dbg(port, + "is connected, link is not up (state: %d), retrying...\n", + state); msleep(100); } tb_port_warn(port, @@ -544,19 +536,47 @@ int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged) */ int tb_port_add_nfc_credits(struct tb_port *port, int credits) { - if (credits == 0) + u32 nfc_credits; + + if (credits == 0 || port->sw->is_unplugged) return 0; - tb_port_info(port, - "adding %#x NFC credits (%#x -> %#x)", - credits, - port->config.nfc_credits, - port->config.nfc_credits + credits); - port->config.nfc_credits += credits; + + nfc_credits = port->config.nfc_credits & TB_PORT_NFC_CREDITS_MASK; + nfc_credits += credits; + + tb_port_dbg(port, "adding %d NFC credits to %lu", + credits, port->config.nfc_credits & TB_PORT_NFC_CREDITS_MASK); + + port->config.nfc_credits &= ~TB_PORT_NFC_CREDITS_MASK; + port->config.nfc_credits |= nfc_credits; + return tb_port_write(port, &port->config.nfc_credits, TB_CFG_PORT, 4, 1); } /** + * tb_port_set_initial_credits() - Set initial port link credits allocated + * @port: Port to set the initial credits + * @credits: Number of credits to allocate + * + * Set initial credits value to be used for ingress shared buffering.
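/*
 * The switch_lock removal above follows one pattern throughout the
 * file: sysfs and NVMem callbacks now take the domain-wide tb->lock,
 * and because they may run concurrently with connection manager work
 * that already holds it, they use mutex_trylock() plus
 * restart_syscall() (hence the new <linux/sched/signal.h> include)
 * instead of blocking. A sketch of the pattern with a hypothetical
 * attribute:
 */
static ssize_t example_show(struct device *dev, struct device_attribute *attr,
			    char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);
	ssize_t ret;

	if (!mutex_trylock(&sw->tb->lock))
		return restart_syscall();	/* the VFS retries transparently */

	ret = sprintf(buf, "%u\n", sw->authorized);
	mutex_unlock(&sw->tb->lock);
	return ret;
}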
+ */ +int tb_port_set_initial_credits(struct tb_port *port, u32 credits) +{ + u32 data; + int ret; + + ret = tb_port_read(port, &data, TB_CFG_PORT, 5, 1); + if (ret) + return ret; + + data &= ~TB_PORT_LCA_MASK; + data |= (credits << TB_PORT_LCA_SHIFT) & TB_PORT_LCA_MASK; + + return tb_port_write(port, &data, TB_CFG_PORT, 5, 1); +} + +/** * tb_port_clear_counter() - clear a counter in TB_CFG_COUNTER * * Return: Returns 0 on success or an error code on failure. @@ -564,7 +584,7 @@ int tb_port_add_nfc_credits(struct tb_port *port, int credits) int tb_port_clear_counter(struct tb_port *port, int counter) { u32 zero[3] = { 0, 0, 0 }; - tb_port_info(port, "clearing counter %d\n", counter); + tb_port_dbg(port, "clearing counter %d\n", counter); return tb_port_write(port, zero, TB_CFG_COUNTERS, 3 * counter, 3); } @@ -593,15 +613,304 @@ static int tb_init_port(struct tb_port *port) port->cap_phy = cap; else tb_port_WARN(port, "non switch port without a PHY\n"); + } else if (port->port != 0) { + cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP); + if (cap > 0) + port->cap_adap = cap; } tb_dump_port(port->sw->tb, &port->config); - /* TODO: Read dual link port, DP port and more from EEPROM. */ + /* Control port does not need HopID allocation */ + if (port->port) { + ida_init(&port->in_hopids); + ida_init(&port->out_hopids); + } + return 0; } +static int tb_port_alloc_hopid(struct tb_port *port, bool in, int min_hopid, + int max_hopid) +{ + int port_max_hopid; + struct ida *ida; + + if (in) { + port_max_hopid = port->config.max_in_hop_id; + ida = &port->in_hopids; + } else { + port_max_hopid = port->config.max_out_hop_id; + ida = &port->out_hopids; + } + + /* HopIDs 0-7 are reserved */ + if (min_hopid < TB_PATH_MIN_HOPID) + min_hopid = TB_PATH_MIN_HOPID; + + if (max_hopid < 0 || max_hopid > port_max_hopid) + max_hopid = port_max_hopid; + + return ida_simple_get(ida, min_hopid, max_hopid + 1, GFP_KERNEL); +} + +/** + * tb_port_alloc_in_hopid() - Allocate input HopID from port + * @port: Port to allocate HopID for + * @min_hopid: Minimum acceptable input HopID + * @max_hopid: Maximum acceptable input HopID + * + * Return: HopID between @min_hopid and @max_hopid or negative errno in + * case of error. + */ +int tb_port_alloc_in_hopid(struct tb_port *port, int min_hopid, int max_hopid) +{ + return tb_port_alloc_hopid(port, true, min_hopid, max_hopid); +} + +/** + * tb_port_alloc_out_hopid() - Allocate output HopID from port + * @port: Port to allocate HopID for + * @min_hopid: Minimum acceptable output HopID + * @max_hopid: Maximum acceptable output HopID + * + * Return: HopID between @min_hopid and @max_hopid or negative errno in + * case of error. 
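/*
 * The IDA-backed allocators clamp the minimum to TB_PATH_MIN_HOPID
 * (HopIDs 0-7 are reserved by the protocol) and treat the maximum as
 * inclusive, hence the max_hopid + 1 passed to ida_simple_get().
 * Illustrative calls (not from the commit):
 */
	int hopid;

	/* any free input HopID >= 8, up to the port maximum */
	hopid = tb_port_alloc_in_hopid(port, -1, -1);

	/* exactly HopID 8, or a negative errno if it is already taken */
	hopid = tb_port_alloc_in_hopid(port, 8, 8);

	if (hopid >= 0)
		tb_port_release_in_hopid(port, hopid);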
+ */ +int tb_port_alloc_out_hopid(struct tb_port *port, int min_hopid, int max_hopid) +{ + return tb_port_alloc_hopid(port, false, min_hopid, max_hopid); +} + +/** + * tb_port_release_in_hopid() - Release allocated input HopID from port + * @port: Port whose HopID to release + * @hopid: HopID to release + */ +void tb_port_release_in_hopid(struct tb_port *port, int hopid) +{ + ida_simple_remove(&port->in_hopids, hopid); +} + +/** + * tb_port_release_out_hopid() - Release allocated output HopID from port + * @port: Port whose HopID to release + * @hopid: HopID to release + */ +void tb_port_release_out_hopid(struct tb_port *port, int hopid) +{ + ida_simple_remove(&port->out_hopids, hopid); +} + +/** + * tb_next_port_on_path() - Return next port for given port on a path + * @start: Start port of the walk + * @end: End port of the walk + * @prev: Previous port (%NULL if this is the first) + * + * This function can be used to walk from one port to another if they + * are connected through zero or more switches. If the @prev is dual + * link port, the function follows that link and returns another end on + * that same link. + * + * If the @end port has been reached, return %NULL. + * + * Domain tb->lock must be held when this function is called. + */ +struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end, + struct tb_port *prev) +{ + struct tb_port *next; + + if (!prev) + return start; + + if (prev->sw == end->sw) { + if (prev == end) + return NULL; + return end; + } + + if (start->sw->config.depth < end->sw->config.depth) { + if (prev->remote && + prev->remote->sw->config.depth > prev->sw->config.depth) + next = prev->remote; + else + next = tb_port_at(tb_route(end->sw), prev->sw); + } else { + if (tb_is_upstream_port(prev)) { + next = prev->remote; + } else { + next = tb_upstream_port(prev->sw); + /* + * Keep the same link if prev and next are both + * dual link ports. + */ + if (next->dual_link_port && + next->link_nr != prev->link_nr) { + next = next->dual_link_port; + } + } + } + + return next; +} + +/** + * tb_port_is_enabled() - Is the adapter port enabled + * @port: Port to check + */ +bool tb_port_is_enabled(struct tb_port *port) +{ + switch (port->config.type) { + case TB_TYPE_PCIE_UP: + case TB_TYPE_PCIE_DOWN: + return tb_pci_port_is_enabled(port); + + case TB_TYPE_DP_HDMI_IN: + case TB_TYPE_DP_HDMI_OUT: + return tb_dp_port_is_enabled(port); + + default: + return false; + } +} + +/** + * tb_pci_port_is_enabled() - Is the PCIe adapter port enabled + * @port: PCIe port to check + */ +bool tb_pci_port_is_enabled(struct tb_port *port) +{ + u32 data; + + if (tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1)) + return false; + + return !!(data & TB_PCI_EN); +} + +/** + * tb_pci_port_enable() - Enable PCIe adapter port + * @port: PCIe port to enable + * @enable: Enable/disable the PCIe adapter + */ +int tb_pci_port_enable(struct tb_port *port, bool enable) +{ + u32 word = enable ? TB_PCI_EN : 0x0; + if (!port->cap_adap) + return -ENXIO; + return tb_port_write(port, &word, TB_CFG_PORT, port->cap_adap, 1); +} + +/** + * tb_dp_port_hpd_is_active() - Is HPD already active + * @port: DP out port to check + * + * Checks if the DP OUT adapter port has HDP bit already set. 
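/*
 * tb_next_port_on_path() folds the whole traversal into a single loop:
 * passing %NULL yields @start, and the call after @end returns %NULL.
 * A hedged sketch (src and dst assumed already looked up; tb->lock
 * held, as the kernel-doc above requires):
 */
	struct tb_port *p = NULL;

	/* visits src, both ports of every switch in between, then dst */
	while ((p = tb_next_port_on_path(src, dst, p)))
		tb_port_dbg(p, "on path\n");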
+ */ +int tb_dp_port_hpd_is_active(struct tb_port *port) +{ + u32 data; + int ret; + + ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap + 2, 1); + if (ret) + return ret; + + return !!(data & TB_DP_HDP); +} + +/** + * tb_dp_port_hpd_clear() - Clear HPD from DP IN port + * @port: Port to clear HPD + * + * If the DP IN port has HDP set, this function can be used to clear it. + */ +int tb_dp_port_hpd_clear(struct tb_port *port) +{ + u32 data; + int ret; + + ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap + 3, 1); + if (ret) + return ret; + + data |= TB_DP_HPDC; + return tb_port_write(port, &data, TB_CFG_PORT, port->cap_adap + 3, 1); +} + +/** + * tb_dp_port_set_hops() - Set video/aux Hop IDs for DP port + * @port: DP IN/OUT port to set hops + * @video: Video Hop ID + * @aux_tx: AUX TX Hop ID + * @aux_rx: AUX RX Hop ID + * + * Programs specified Hop IDs for DP IN/OUT port. + */ +int tb_dp_port_set_hops(struct tb_port *port, unsigned int video, + unsigned int aux_tx, unsigned int aux_rx) +{ + u32 data[2]; + int ret; + + ret = tb_port_read(port, data, TB_CFG_PORT, port->cap_adap, + ARRAY_SIZE(data)); + if (ret) + return ret; + + data[0] &= ~TB_DP_VIDEO_HOPID_MASK; + data[1] &= ~(TB_DP_AUX_RX_HOPID_MASK | TB_DP_AUX_TX_HOPID_MASK); + + data[0] |= (video << TB_DP_VIDEO_HOPID_SHIFT) & TB_DP_VIDEO_HOPID_MASK; + data[1] |= aux_tx & TB_DP_AUX_TX_HOPID_MASK; + data[1] |= (aux_rx << TB_DP_AUX_RX_HOPID_SHIFT) & TB_DP_AUX_RX_HOPID_MASK; + + return tb_port_write(port, data, TB_CFG_PORT, port->cap_adap, + ARRAY_SIZE(data)); +} + +/** + * tb_dp_port_is_enabled() - Is DP adapter port enabled + * @port: DP adapter port to check + */ +bool tb_dp_port_is_enabled(struct tb_port *port) +{ + u32 data; + + if (tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1)) + return false; + + return !!(data & (TB_DP_VIDEO_EN | TB_DP_AUX_EN)); +} + +/** + * tb_dp_port_enable() - Enables/disables DP paths of a port + * @port: DP IN/OUT port + * @enable: Enable/disable DP path + * + * Once Hop IDs are programmed DP paths can be enabled or disabled by + * calling this function. + */ +int tb_dp_port_enable(struct tb_port *port, bool enable) +{ + u32 data; + int ret; + + ret = tb_port_read(port, &data, TB_CFG_PORT, port->cap_adap, 1); + if (ret) + return ret; + + if (enable) + data |= TB_DP_VIDEO_EN | TB_DP_AUX_EN; + else + data &= ~(TB_DP_VIDEO_EN | TB_DP_AUX_EN); + + return tb_port_write(port, &data, TB_CFG_PORT, port->cap_adap, 1); +} + /* switch utility functions */ static void tb_dump_switch(struct tb *tb, struct tb_regs_switch_header *sw) @@ -644,24 +953,6 @@ int tb_switch_reset(struct tb *tb, u64 route) return res.err; } -struct tb_switch *get_switch_at_route(struct tb_switch *sw, u64 route) -{ - u8 next_port = route; /* - * Routes use a stride of 8 bits, - * eventhough a port index has 6 bits at most. 
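/*
 * The DP adapter helpers are meant to be called in a fixed order:
 * clear any stale hotplug indication, program the HopIDs, then flip
 * the video/AUX enable bits. A sketch with illustrative HopID values:
 */
	int ret;

	ret = tb_dp_port_hpd_clear(in);			/* DP IN side */
	if (ret)
		return ret;

	ret = tb_dp_port_set_hops(in, 8, 9, 10);	/* video, AUX TX, AUX RX */
	if (ret)
		return ret;

	return tb_dp_port_enable(in, true);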
- * */ - if (route == 0) - return sw; - if (next_port > sw->config.max_port_number) - return NULL; - if (tb_is_upstream_port(&sw->ports[next_port])) - return NULL; - if (!sw->ports[next_port].remote) - return NULL; - return get_switch_at_route(sw->ports[next_port].remote->sw, - route >> TB_ROUTE_SHIFT); -} - /** * tb_plug_events_active() - enable/disable plug events on a switch * @@ -716,8 +1007,8 @@ static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val) { int ret = -EINVAL; - if (mutex_lock_interruptible(&switch_lock)) - return -ERESTARTSYS; + if (!mutex_trylock(&sw->tb->lock)) + return restart_syscall(); if (sw->authorized) goto unlock; @@ -760,7 +1051,7 @@ static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val) } unlock: - mutex_unlock(&switch_lock); + mutex_unlock(&sw->tb->lock); return ret; } @@ -817,15 +1108,15 @@ static ssize_t key_show(struct device *dev, struct device_attribute *attr, struct tb_switch *sw = tb_to_switch(dev); ssize_t ret; - if (mutex_lock_interruptible(&switch_lock)) - return -ERESTARTSYS; + if (!mutex_trylock(&sw->tb->lock)) + return restart_syscall(); if (sw->key) ret = sprintf(buf, "%*phN\n", TB_SWITCH_KEY_SIZE, sw->key); else ret = sprintf(buf, "\n"); - mutex_unlock(&switch_lock); + mutex_unlock(&sw->tb->lock); return ret; } @@ -842,8 +1133,8 @@ static ssize_t key_store(struct device *dev, struct device_attribute *attr, else if (hex2bin(key, buf, sizeof(key))) return -EINVAL; - if (mutex_lock_interruptible(&switch_lock)) - return -ERESTARTSYS; + if (!mutex_trylock(&sw->tb->lock)) + return restart_syscall(); if (sw->authorized) { ret = -EBUSY; @@ -858,7 +1149,7 @@ static ssize_t key_store(struct device *dev, struct device_attribute *attr, } } - mutex_unlock(&switch_lock); + mutex_unlock(&sw->tb->lock); return ret; } static DEVICE_ATTR(key, 0600, key_show, key_store); @@ -904,8 +1195,8 @@ static ssize_t nvm_authenticate_store(struct device *dev, bool val; int ret; - if (mutex_lock_interruptible(&switch_lock)) - return -ERESTARTSYS; + if (!mutex_trylock(&sw->tb->lock)) + return restart_syscall(); /* If NVMem devices are not yet added */ if (!sw->nvm) { @@ -953,7 +1244,7 @@ static ssize_t nvm_authenticate_store(struct device *dev, } exit_unlock: - mutex_unlock(&switch_lock); + mutex_unlock(&sw->tb->lock); if (ret) return ret; @@ -967,8 +1258,8 @@ static ssize_t nvm_version_show(struct device *dev, struct tb_switch *sw = tb_to_switch(dev); int ret; - if (mutex_lock_interruptible(&switch_lock)) - return -ERESTARTSYS; + if (!mutex_trylock(&sw->tb->lock)) + return restart_syscall(); if (sw->safe_mode) ret = -ENODATA; @@ -977,7 +1268,7 @@ static ssize_t nvm_version_show(struct device *dev, else ret = sprintf(buf, "%x.%x\n", sw->nvm->major, sw->nvm->minor); - mutex_unlock(&switch_lock); + mutex_unlock(&sw->tb->lock); return ret; } @@ -1063,9 +1354,17 @@ static const struct attribute_group *switch_groups[] = { static void tb_switch_release(struct device *dev) { struct tb_switch *sw = tb_to_switch(dev); + int i; dma_port_free(sw->dma_port); + for (i = 1; i <= sw->config.max_port_number; i++) { + if (!sw->ports[i].disabled) { + ida_destroy(&sw->ports[i].in_hopids); + ida_destroy(&sw->ports[i].out_hopids); + } + } + kfree(sw->uuid); kfree(sw->device_name); kfree(sw->vendor_name); @@ -1150,24 +1449,32 @@ static int tb_switch_get_generation(struct tb_switch *sw) * separately. The returned switch should be released by calling * tb_switch_put(). 
* - * Return: Pointer to the allocated switch or %NULL in case of failure + * Return: Pointer to the allocated switch or ERR_PTR() in case of + * failure. */ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent, u64 route) { - int i; - int cap; struct tb_switch *sw; - int upstream_port = tb_cfg_get_upstream_port(tb->ctl, route); + int upstream_port; + int i, ret, depth; + + /* Make sure we do not exceed maximum topology limit */ + depth = tb_route_length(route); + if (depth > TB_SWITCH_MAX_DEPTH) + return ERR_PTR(-EADDRNOTAVAIL); + + upstream_port = tb_cfg_get_upstream_port(tb->ctl, route); if (upstream_port < 0) - return NULL; + return ERR_PTR(upstream_port); sw = kzalloc(sizeof(*sw), GFP_KERNEL); if (!sw) - return NULL; + return ERR_PTR(-ENOMEM); sw->tb = tb; - if (tb_cfg_read(tb->ctl, &sw->config, route, 0, TB_CFG_SWITCH, 0, 5)) + ret = tb_cfg_read(tb->ctl, &sw->config, route, 0, TB_CFG_SWITCH, 0, 5); + if (ret) goto err_free_sw_ports; tb_dbg(tb, "current switch config:\n"); @@ -1175,16 +1482,18 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent, /* configure switch */ sw->config.upstream_port_number = upstream_port; - sw->config.depth = tb_route_length(route); - sw->config.route_lo = route; - sw->config.route_hi = route >> 32; + sw->config.depth = depth; + sw->config.route_hi = upper_32_bits(route); + sw->config.route_lo = lower_32_bits(route); sw->config.enabled = 0; /* initialize ports */ sw->ports = kcalloc(sw->config.max_port_number + 1, sizeof(*sw->ports), GFP_KERNEL); - if (!sw->ports) + if (!sw->ports) { + ret = -ENOMEM; goto err_free_sw_ports; + } for (i = 0; i <= sw->config.max_port_number; i++) { /* minimum setup for tb_find_cap and tb_drom_read to work */ @@ -1194,12 +1503,16 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent, sw->generation = tb_switch_get_generation(sw); - cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS); - if (cap < 0) { + ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS); + if (ret < 0) { tb_sw_warn(sw, "cannot find TB_VSE_CAP_PLUG_EVENTS aborting\n"); goto err_free_sw_ports; } - sw->cap_plug_events = cap; + sw->cap_plug_events = ret; + + ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER); + if (ret > 0) + sw->cap_lc = ret; /* Root switch is always authorized */ if (!route) @@ -1218,7 +1531,7 @@ err_free_sw_ports: kfree(sw->ports); kfree(sw); - return NULL; + return ERR_PTR(ret); } /** @@ -1233,7 +1546,7 @@ err_free_sw_ports: * * The returned switch must be released by calling tb_switch_put(). 
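/*
 * With tb_switch_alloc() returning ERR_PTR() instead of %NULL, callers
 * can tell "out of memory" apart from "config space unreachable" or
 * "topology too deep"; tb_scan_port() later in this patch uses exactly
 * that to fall back to XDomain discovery. Caller-side sketch:
 */
	struct tb_switch *sw;

	sw = tb_switch_alloc(tb, parent, route);
	if (IS_ERR(sw)) {
		/*
		 * -EIO: upstream port unreadable (possibly another host);
		 * -EADDRNOTAVAIL: deeper than TB_SWITCH_MAX_DEPTH.
		 */
		if (PTR_ERR(sw) == -EIO || PTR_ERR(sw) == -EADDRNOTAVAIL)
			tb_scan_xdomain(port);
		return;
	}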
* - * Return: Pointer to the allocated switch or %NULL in case of failure + * Return: Pointer to the allocated switch or ERR_PTR() in case of failure */ struct tb_switch * tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route) @@ -1242,7 +1555,7 @@ tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route) sw = kzalloc(sizeof(*sw), GFP_KERNEL); if (!sw) - return NULL; + return ERR_PTR(-ENOMEM); sw->tb = tb; sw->config.depth = tb_route_length(route); @@ -1291,25 +1604,27 @@ int tb_switch_configure(struct tb_switch *sw) if (ret) return ret; + ret = tb_lc_configure_link(sw); + if (ret) + return ret; + return tb_plug_events_active(sw, true); } -static void tb_switch_set_uuid(struct tb_switch *sw) +static int tb_switch_set_uuid(struct tb_switch *sw) { u32 uuid[4]; - int cap; + int ret; if (sw->uuid) - return; + return 0; /* * The newer controllers include fused UUID as part of link * controller specific registers */ - cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER); - if (cap > 0) { - tb_sw_read(sw, uuid, TB_CFG_SWITCH, cap + 3, 4); - } else { + ret = tb_lc_read_uuid(sw, uuid); + if (ret) { /* * ICM generates UUID based on UID and fills the upper * two words with ones. This is not strictly following @@ -1323,6 +1638,9 @@ static void tb_switch_set_uuid(struct tb_switch *sw) } sw->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL); + if (!sw->uuid) + return -ENOMEM; + return 0; } static int tb_switch_add_dma_port(struct tb_switch *sw) @@ -1372,7 +1690,9 @@ static int tb_switch_add_dma_port(struct tb_switch *sw) if (status) { tb_sw_info(sw, "switch flash authentication failed\n"); - tb_switch_set_uuid(sw); + ret = tb_switch_set_uuid(sw); + if (ret) + return ret; nvm_set_auth_status(sw, status); } @@ -1422,7 +1742,9 @@ int tb_switch_add(struct tb_switch *sw) } tb_sw_dbg(sw, "uid: %#llx\n", sw->uid); - tb_switch_set_uuid(sw); + ret = tb_switch_set_uuid(sw); + if (ret) + return ret; for (i = 0; i <= sw->config.max_port_number; i++) { if (sw->ports[i].disabled) { @@ -1484,18 +1806,18 @@ void tb_switch_remove(struct tb_switch *sw) /* port 0 is the switch itself and never has a remote */ for (i = 1; i <= sw->config.max_port_number; i++) { - if (tb_is_upstream_port(&sw->ports[i])) - continue; - if (sw->ports[i].remote) + if (tb_port_has_remote(&sw->ports[i])) { tb_switch_remove(sw->ports[i].remote->sw); - sw->ports[i].remote = NULL; - if (sw->ports[i].xdomain) + sw->ports[i].remote = NULL; + } else if (sw->ports[i].xdomain) { tb_xdomain_remove(sw->ports[i].xdomain); - sw->ports[i].xdomain = NULL; + sw->ports[i].xdomain = NULL; + } } if (!sw->is_unplugged) tb_plug_events_active(sw, false); + tb_lc_unconfigure_link(sw); tb_switch_nvm_remove(sw); @@ -1520,8 +1842,10 @@ void tb_sw_set_unplugged(struct tb_switch *sw) } sw->is_unplugged = true; for (i = 0; i <= sw->config.max_port_number; i++) { - if (!tb_is_upstream_port(&sw->ports[i]) && sw->ports[i].remote) + if (tb_port_has_remote(&sw->ports[i])) tb_sw_set_unplugged(sw->ports[i].remote->sw); + else if (sw->ports[i].xdomain) + sw->ports[i].xdomain->is_unplugged = true; } } @@ -1537,6 +1861,17 @@ int tb_switch_resume(struct tb_switch *sw) if (tb_route(sw)) { u64 uid; + /* + * Check first that we can still read the switch config + * space. It may be that there is now another domain + * connected. 
+ */ + err = tb_cfg_get_upstream_port(sw->tb->ctl, tb_route(sw)); + if (err < 0) { + tb_sw_info(sw, "switch not present anymore\n"); + return err; + } + err = tb_drom_read_uid_only(sw, &uid); if (err) { tb_sw_warn(sw, "uid read failed\n"); @@ -1555,6 +1890,10 @@ int tb_switch_resume(struct tb_switch *sw) if (err) return err; + err = tb_lc_configure_link(sw); + if (err) + return err; + err = tb_plug_events_active(sw, true); if (err) return err; @@ -1562,15 +1901,23 @@ int tb_switch_resume(struct tb_switch *sw) /* check for surviving downstream switches */ for (i = 1; i <= sw->config.max_port_number; i++) { struct tb_port *port = &sw->ports[i]; - if (tb_is_upstream_port(port)) - continue; - if (!port->remote) + + if (!tb_port_has_remote(port) && !port->xdomain) continue; - if (tb_wait_for_port(port, true) <= 0 - || tb_switch_resume(port->remote->sw)) { + + if (tb_wait_for_port(port, true) <= 0) { tb_port_warn(port, "lost during suspend, disconnecting\n"); - tb_sw_set_unplugged(port->remote->sw); + if (tb_port_has_remote(port)) + tb_sw_set_unplugged(port->remote->sw); + else if (port->xdomain) + port->xdomain->is_unplugged = true; + } else if (tb_port_has_remote(port)) { + if (tb_switch_resume(port->remote->sw)) { + tb_port_warn(port, + "lost during suspend, disconnecting\n"); + tb_sw_set_unplugged(port->remote->sw); + } } } return 0; @@ -1584,13 +1931,11 @@ void tb_switch_suspend(struct tb_switch *sw) return; for (i = 1; i <= sw->config.max_port_number; i++) { - if (!tb_is_upstream_port(&sw->ports[i]) && sw->ports[i].remote) + if (tb_port_has_remote(&sw->ports[i])) tb_switch_suspend(sw->ports[i].remote->sw); } - /* - * TODO: invoke tb_cfg_prepare_to_sleep here? does not seem to have any - * effect? - */ + + tb_lc_set_sleep(sw); } struct tb_sw_lookup { diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 30e02c716f6c..1f7a9e1cc09c 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -1,8 +1,9 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Thunderbolt Cactus Ridge driver - bus logic (NHI independent) + * Thunderbolt driver - bus logic (NHI independent) * * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> + * Copyright (C) 2019, Intel Corporation */ #include <linux/slab.h> @@ -12,7 +13,7 @@ #include "tb.h" #include "tb_regs.h" -#include "tunnel_pci.h" +#include "tunnel.h" /** * struct tb_cm - Simple Thunderbolt connection manager @@ -27,8 +28,100 @@ struct tb_cm { bool hotplug_active; }; +struct tb_hotplug_event { + struct work_struct work; + struct tb *tb; + u64 route; + u8 port; + bool unplug; +}; + +static void tb_handle_hotplug(struct work_struct *work); + +static void tb_queue_hotplug(struct tb *tb, u64 route, u8 port, bool unplug) +{ + struct tb_hotplug_event *ev; + + ev = kmalloc(sizeof(*ev), GFP_KERNEL); + if (!ev) + return; + + ev->tb = tb; + ev->route = route; + ev->port = port; + ev->unplug = unplug; + INIT_WORK(&ev->work, tb_handle_hotplug); + queue_work(tb->wq, &ev->work); +} + /* enumeration & hot plug handling */ +static void tb_discover_tunnels(struct tb_switch *sw) +{ + struct tb *tb = sw->tb; + struct tb_cm *tcm = tb_priv(tb); + struct tb_port *port; + int i; + + for (i = 1; i <= sw->config.max_port_number; i++) { + struct tb_tunnel *tunnel = NULL; + + port = &sw->ports[i]; + switch (port->config.type) { + case TB_TYPE_DP_HDMI_IN: + tunnel = tb_tunnel_discover_dp(tb, port); + break; + + case TB_TYPE_PCIE_DOWN: + tunnel = tb_tunnel_discover_pci(tb, port); + break; + + default: + break; + } + + if (!tunnel) + continue; + + if 
(tb_tunnel_is_pci(tunnel)) { + struct tb_switch *parent = tunnel->dst_port->sw; + + while (parent != tunnel->src_port->sw) { + parent->boot = true; + parent = tb_switch_parent(parent); + } + } + + list_add_tail(&tunnel->list, &tcm->tunnel_list); + } + + for (i = 1; i <= sw->config.max_port_number; i++) { + if (tb_port_has_remote(&sw->ports[i])) + tb_discover_tunnels(sw->ports[i].remote->sw); + } +} + +static void tb_scan_xdomain(struct tb_port *port) +{ + struct tb_switch *sw = port->sw; + struct tb *tb = sw->tb; + struct tb_xdomain *xd; + u64 route; + + route = tb_downstream_route(port); + xd = tb_xdomain_find_by_route(tb, route); + if (xd) { + tb_xdomain_put(xd); + return; + } + + xd = tb_xdomain_alloc(tb, &sw->dev, route, tb->root_switch->uuid, + NULL); + if (xd) { + tb_port_at(route, sw)->xdomain = xd; + tb_xdomain_add(xd); + } +} static void tb_scan_port(struct tb_port *port); @@ -47,9 +140,21 @@ static void tb_scan_switch(struct tb_switch *sw) */ static void tb_scan_port(struct tb_port *port) { + struct tb_cm *tcm = tb_priv(port->sw->tb); + struct tb_port *upstream_port; struct tb_switch *sw; + if (tb_is_upstream_port(port)) return; + + if (tb_port_is_dpout(port) && tb_dp_port_hpd_is_active(port) == 1 && + !tb_dp_port_is_enabled(port)) { + tb_port_dbg(port, "DP adapter HPD set, queuing hotplug\n"); + tb_queue_hotplug(port->sw->tb, tb_route(port->sw), port->port, + false); + return; + } + if (port->config.type != TB_TYPE_PORT) return; if (port->dual_link_port && port->link_nr) @@ -60,45 +165,95 @@ static void tb_scan_port(struct tb_port *port) if (tb_wait_for_port(port, false) <= 0) return; if (port->remote) { - tb_port_WARN(port, "port already has a remote!\n"); + tb_port_dbg(port, "port already has a remote\n"); return; } sw = tb_switch_alloc(port->sw->tb, &port->sw->dev, tb_downstream_route(port)); - if (!sw) + if (IS_ERR(sw)) { + /* + * If there is an error accessing the connected switch + * it may be connected to another domain. Also we allow + * the other domain to be connected to a max depth switch. + */ + if (PTR_ERR(sw) == -EIO || PTR_ERR(sw) == -EADDRNOTAVAIL) + tb_scan_xdomain(port); return; + } if (tb_switch_configure(sw)) { tb_switch_put(sw); return; } - sw->authorized = true; + /* + * If there was previously another domain connected remove it + * first. + */ + if (port->xdomain) { + tb_xdomain_remove(port->xdomain); + port->xdomain = NULL; + } + + /* + * Do not send uevents until we have discovered all existing + * tunnels and know which switches were authorized already by + * the boot firmware. 
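/*
 * The uevent suppression here pairs with tb_scan_finalize_switch()
 * further down in this patch: switches found during the initial scan
 * are registered silently and announced only once their boot-firmware
 * authorization state is known. The generic shape of the pattern:
 */
	/* during the initial scan, before tb_switch_add() */
	dev_set_uevent_suppress(&sw->dev, true);

	/* later, once sw->boot and sw->authorized are settled */
	dev_set_uevent_suppress(&sw->dev, false);
	kobject_uevent(&sw->dev.kobj, KOBJ_ADD);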
+ */ + if (!tcm->hotplug_active) + dev_set_uevent_suppress(&sw->dev, true); if (tb_switch_add(sw)) { tb_switch_put(sw); return; } - port->remote = tb_upstream_port(sw); - tb_upstream_port(sw)->remote = port; + /* Link the switches using both links if available */ + upstream_port = tb_upstream_port(sw); + port->remote = upstream_port; + upstream_port->remote = port; + if (port->dual_link_port && upstream_port->dual_link_port) { + port->dual_link_port->remote = upstream_port->dual_link_port; + upstream_port->dual_link_port->remote = port->dual_link_port; + } + tb_scan_switch(sw); } +static int tb_free_tunnel(struct tb *tb, enum tb_tunnel_type type, + struct tb_port *src_port, struct tb_port *dst_port) +{ + struct tb_cm *tcm = tb_priv(tb); + struct tb_tunnel *tunnel; + + list_for_each_entry(tunnel, &tcm->tunnel_list, list) { + if (tunnel->type == type && + ((src_port && src_port == tunnel->src_port) || + (dst_port && dst_port == tunnel->dst_port))) { + tb_tunnel_deactivate(tunnel); + list_del(&tunnel->list); + tb_tunnel_free(tunnel); + return 0; + } + } + + return -ENODEV; +} + /** * tb_free_invalid_tunnels() - destroy tunnels of devices that have gone away */ static void tb_free_invalid_tunnels(struct tb *tb) { struct tb_cm *tcm = tb_priv(tb); - struct tb_pci_tunnel *tunnel; - struct tb_pci_tunnel *n; + struct tb_tunnel *tunnel; + struct tb_tunnel *n; list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) { - if (tb_pci_is_invalid(tunnel)) { - tb_pci_deactivate(tunnel); + if (tb_tunnel_is_invalid(tunnel)) { + tb_tunnel_deactivate(tunnel); list_del(&tunnel->list); - tb_pci_free(tunnel); + tb_tunnel_free(tunnel); } } } @@ -111,136 +266,232 @@ static void tb_free_unplugged_children(struct tb_switch *sw) int i; for (i = 1; i <= sw->config.max_port_number; i++) { struct tb_port *port = &sw->ports[i]; - if (tb_is_upstream_port(port)) - continue; - if (!port->remote) + + if (!tb_port_has_remote(port)) continue; + if (port->remote->sw->is_unplugged) { tb_switch_remove(port->remote->sw); port->remote = NULL; + if (port->dual_link_port) + port->dual_link_port->remote = NULL; } else { tb_free_unplugged_children(port->remote->sw); } } } - /** - * find_pci_up_port() - return the first PCIe up port on @sw or NULL + * tb_find_port() - return the first port of @type on @sw or NULL + * @sw: Switch to find the port from + * @type: Port type to look for */ -static struct tb_port *tb_find_pci_up_port(struct tb_switch *sw) +static struct tb_port *tb_find_port(struct tb_switch *sw, + enum tb_port_type type) { int i; for (i = 1; i <= sw->config.max_port_number; i++) - if (sw->ports[i].config.type == TB_TYPE_PCIE_UP) + if (sw->ports[i].config.type == type) return &sw->ports[i]; return NULL; } /** - * find_unused_down_port() - return the first inactive PCIe down port on @sw + * tb_find_unused_port() - return the first inactive port on @sw + * @sw: Switch to find the port on + * @type: Port type to look for */ -static struct tb_port *tb_find_unused_down_port(struct tb_switch *sw) +static struct tb_port *tb_find_unused_port(struct tb_switch *sw, + enum tb_port_type type) { int i; - int cap; - int res; - int data; + for (i = 1; i <= sw->config.max_port_number; i++) { if (tb_is_upstream_port(&sw->ports[i])) continue; - if (sw->ports[i].config.type != TB_TYPE_PCIE_DOWN) - continue; - cap = tb_port_find_cap(&sw->ports[i], TB_PORT_CAP_ADAP); - if (cap < 0) + if (sw->ports[i].config.type != type) continue; - res = tb_port_read(&sw->ports[i], &data, TB_CFG_PORT, cap, 1); - if (res < 0) + if (!sw->ports[i].cap_adap) 
continue; - if (data & 0x80000000) + if (tb_port_is_enabled(&sw->ports[i])) continue; return &sw->ports[i]; } return NULL; } -/** - * tb_activate_pcie_devices() - scan for and activate PCIe devices - * - * This method is somewhat ad hoc. For now it only supports one device - * per port and only devices at depth 1. - */ -static void tb_activate_pcie_devices(struct tb *tb) +static struct tb_port *tb_find_pcie_down(struct tb_switch *sw, + const struct tb_port *port) +{ + /* + * To keep plugging devices consistently in the same PCIe + * hierarchy, do mapping here for root switch downstream PCIe + * ports. + */ + if (!tb_route(sw)) { + int phy_port = tb_phy_port_from_link(port->port); + int index; + + /* + * Hard-coded Thunderbolt port to PCIe down port mapping + * per controller. + */ + if (tb_switch_is_cr(sw)) + index = !phy_port ? 6 : 7; + else if (tb_switch_is_fr(sw)) + index = !phy_port ? 6 : 8; + else + goto out; + + /* Validate the hard-coding */ + if (WARN_ON(index > sw->config.max_port_number)) + goto out; + if (WARN_ON(!tb_port_is_pcie_down(&sw->ports[index]))) + goto out; + if (WARN_ON(tb_pci_port_is_enabled(&sw->ports[index]))) + goto out; + + return &sw->ports[index]; + } + +out: + return tb_find_unused_port(sw, TB_TYPE_PCIE_DOWN); +} + +static int tb_tunnel_dp(struct tb *tb, struct tb_port *out) { - int i; - int cap; - u32 data; - struct tb_switch *sw; - struct tb_port *up_port; - struct tb_port *down_port; - struct tb_pci_tunnel *tunnel; struct tb_cm *tcm = tb_priv(tb); + struct tb_switch *sw = out->sw; + struct tb_tunnel *tunnel; + struct tb_port *in; + + if (tb_port_is_enabled(out)) + return 0; + + do { + sw = tb_to_switch(sw->dev.parent); + if (!sw) + return 0; + in = tb_find_unused_port(sw, TB_TYPE_DP_HDMI_IN); + } while (!in); + + tunnel = tb_tunnel_alloc_dp(tb, in, out); + if (!tunnel) { + tb_port_dbg(out, "DP tunnel allocation failed\n"); + return -ENOMEM; + } - /* scan for pcie devices at depth 1*/ - for (i = 1; i <= tb->root_switch->config.max_port_number; i++) { - if (tb_is_upstream_port(&tb->root_switch->ports[i])) - continue; - if (tb->root_switch->ports[i].config.type != TB_TYPE_PORT) - continue; - if (!tb->root_switch->ports[i].remote) - continue; - sw = tb->root_switch->ports[i].remote->sw; - up_port = tb_find_pci_up_port(sw); - if (!up_port) { - tb_sw_info(sw, "no PCIe devices found, aborting\n"); - continue; - } + if (tb_tunnel_activate(tunnel)) { + tb_port_info(out, "DP tunnel activation failed, aborting\n"); + tb_tunnel_free(tunnel); + return -EIO; + } - /* check whether port is already activated */ - cap = tb_port_find_cap(up_port, TB_PORT_CAP_ADAP); - if (cap < 0) - continue; - if (tb_port_read(up_port, &data, TB_CFG_PORT, cap, 1)) - continue; - if (data & 0x80000000) { - tb_port_info(up_port, - "PCIe port already activated, aborting\n"); - continue; - } + list_add_tail(&tunnel->list, &tcm->tunnel_list); + return 0; +} - down_port = tb_find_unused_down_port(tb->root_switch); - if (!down_port) { - tb_port_info(up_port, - "All PCIe down ports are occupied, aborting\n"); - continue; - } - tunnel = tb_pci_alloc(tb, up_port, down_port); - if (!tunnel) { - tb_port_info(up_port, - "PCIe tunnel allocation failed, aborting\n"); - continue; - } +static void tb_teardown_dp(struct tb *tb, struct tb_port *out) +{ + tb_free_tunnel(tb, TB_TUNNEL_DP, NULL, out); +} - if (tb_pci_activate(tunnel)) { - tb_port_info(up_port, - "PCIe tunnel activation failed, aborting\n"); - tb_pci_free(tunnel); - continue; - } +static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw) +{ + 
struct tb_port *up, *down, *port; + struct tb_cm *tcm = tb_priv(tb); + struct tb_switch *parent_sw; + struct tb_tunnel *tunnel; + + up = tb_find_port(sw, TB_TYPE_PCIE_UP); + if (!up) + return 0; - /* + * Look up available down port. Since we are chaining it should + * be found right above this switch. + */ + parent_sw = tb_to_switch(sw->dev.parent); + port = tb_port_at(tb_route(sw), parent_sw); + down = tb_find_pcie_down(parent_sw, port); + if (!down) + return 0; + + tunnel = tb_tunnel_alloc_pci(tb, up, down); + if (!tunnel) + return -ENOMEM; + + if (tb_tunnel_activate(tunnel)) { + tb_port_info(up, + "PCIe tunnel activation failed, aborting\n"); + tb_tunnel_free(tunnel); + return -EIO; + } + + list_add_tail(&tunnel->list, &tcm->tunnel_list); + return 0; } -/* hotplug handling */ +static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +{ + struct tb_cm *tcm = tb_priv(tb); + struct tb_port *nhi_port, *dst_port; + struct tb_tunnel *tunnel; + struct tb_switch *sw; -struct tb_hotplug_event { - struct work_struct work; - struct tb *tb; - u64 route; - u8 port; - bool unplug; -}; + sw = tb_to_switch(xd->dev.parent); + dst_port = tb_port_at(xd->route, sw); + nhi_port = tb_find_port(tb->root_switch, TB_TYPE_NHI); + + mutex_lock(&tb->lock); + tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, xd->transmit_ring, + xd->transmit_path, xd->receive_ring, + xd->receive_path); + if (!tunnel) { + mutex_unlock(&tb->lock); + return -ENOMEM; + } + + if (tb_tunnel_activate(tunnel)) { + tb_port_info(nhi_port, + "DMA tunnel activation failed, aborting\n"); + tb_tunnel_free(tunnel); + mutex_unlock(&tb->lock); + return -EIO; + } + + list_add_tail(&tunnel->list, &tcm->tunnel_list); + mutex_unlock(&tb->lock); + return 0; +} + +static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +{ + struct tb_port *dst_port; + struct tb_switch *sw; + + sw = tb_to_switch(xd->dev.parent); + dst_port = tb_port_at(xd->route, sw); + + /* + * It is possible that the tunnel was already torn down (in + * case of cable disconnect) so it is fine if we cannot find it + * here anymore. 
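/*
 * All three protocol tunnels in this file share the same bring-up
 * shape; a condensed sketch (the DMA variant additionally holds
 * tb->lock across the sequence, as above):
 */
	tunnel = tb_tunnel_alloc_pci(tb, up, down);	/* or _dp()/_dma() */
	if (!tunnel)
		return -ENOMEM;

	if (tb_tunnel_activate(tunnel)) {
		tb_tunnel_free(tunnel);
		return -EIO;
	}

	list_add_tail(&tunnel->list, &tcm->tunnel_list);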
+ */ + tb_free_tunnel(tb, TB_TUNNEL_DMA, NULL, dst_port); +} + +static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) +{ + if (!xd->is_unplugged) { + mutex_lock(&tb->lock); + __tb_disconnect_xdomain_paths(tb, xd); + mutex_unlock(&tb->lock); + } + return 0; +} + +/* hotplug handling */ /** * tb_handle_hotplug() - handle hotplug event @@ -258,7 +509,7 @@ static void tb_handle_hotplug(struct work_struct *work) if (!tcm->hotplug_active) goto out; /* during init, suspend or shutdown */ - sw = get_switch_at_route(tb->root_switch, ev->route); + sw = tb_switch_find_by_route(tb, ev->route); if (!sw) { tb_warn(tb, "hotplug event from non existent switch %llx:%x (unplug: %d)\n", @@ -269,43 +520,60 @@ static void tb_handle_hotplug(struct work_struct *work) tb_warn(tb, "hotplug event from non existent port %llx:%x (unplug: %d)\n", ev->route, ev->port, ev->unplug); - goto out; + goto put_sw; } port = &sw->ports[ev->port]; if (tb_is_upstream_port(port)) { - tb_warn(tb, - "hotplug event for upstream port %llx:%x (unplug: %d)\n", - ev->route, ev->port, ev->unplug); - goto out; + tb_dbg(tb, "hotplug event for upstream port %llx:%x (unplug: %d)\n", + ev->route, ev->port, ev->unplug); + goto put_sw; } if (ev->unplug) { - if (port->remote) { - tb_port_info(port, "unplugged\n"); + if (tb_port_has_remote(port)) { + tb_port_dbg(port, "switch unplugged\n"); tb_sw_set_unplugged(port->remote->sw); tb_free_invalid_tunnels(tb); tb_switch_remove(port->remote->sw); port->remote = NULL; + if (port->dual_link_port) + port->dual_link_port->remote = NULL; + } else if (port->xdomain) { + struct tb_xdomain *xd = tb_xdomain_get(port->xdomain); + + tb_port_dbg(port, "xdomain unplugged\n"); + /* + * Service drivers are unbound during + * tb_xdomain_remove() so setting XDomain as + * unplugged here prevents deadlock if they call + * tb_xdomain_disable_paths(). We will tear down + * the path below. 
+ */ + xd->is_unplugged = true; + tb_xdomain_remove(xd); + port->xdomain = NULL; + __tb_disconnect_xdomain_paths(tb, xd); + tb_xdomain_put(xd); + } else if (tb_port_is_dpout(port)) { + tb_teardown_dp(tb, port); } else { - tb_port_info(port, - "got unplug event for disconnected port, ignoring\n"); + tb_port_dbg(port, + "got unplug event for disconnected port, ignoring\n"); } } else if (port->remote) { - tb_port_info(port, - "got plug event for connected port, ignoring\n"); + tb_port_dbg(port, "got plug event for connected port, ignoring\n"); } else { - tb_port_info(port, "hotplug: scanning\n"); - tb_scan_port(port); - if (!port->remote) { - tb_port_info(port, "hotplug: no switch found\n"); - } else if (port->remote->sw->config.depth > 1) { - tb_sw_warn(port->remote->sw, - "hotplug: chaining not supported\n"); - } else { - tb_sw_info(port->remote->sw, - "hotplug: activating pcie devices\n"); - tb_activate_pcie_devices(tb); + if (tb_port_is_null(port)) { + tb_port_dbg(port, "hotplug: scanning\n"); + tb_scan_port(port); + if (!port->remote) + tb_port_dbg(port, "hotplug: no switch found\n"); + } else if (tb_port_is_dpout(port)) { + tb_tunnel_dp(tb, port); } } + +put_sw: + tb_switch_put(sw); out: mutex_unlock(&tb->lock); kfree(ev); @@ -320,7 +588,6 @@ static void tb_handle_event(struct tb *tb, enum tb_cfg_pkg_type type, const void *buf, size_t size) { const struct cfg_event_pkg *pkg = buf; - struct tb_hotplug_event *ev; u64 route; if (type != TB_CFG_PKG_EVENT) { @@ -336,40 +603,59 @@ static void tb_handle_event(struct tb *tb, enum tb_cfg_pkg_type type, pkg->port); } - ev = kmalloc(sizeof(*ev), GFP_KERNEL); - if (!ev) - return; - INIT_WORK(&ev->work, tb_handle_hotplug); - ev->tb = tb; - ev->route = route; - ev->port = pkg->port; - ev->unplug = pkg->unplug; - queue_work(tb->wq, &ev->work); + tb_queue_hotplug(tb, route, pkg->port, pkg->unplug); } static void tb_stop(struct tb *tb) { struct tb_cm *tcm = tb_priv(tb); - struct tb_pci_tunnel *tunnel; - struct tb_pci_tunnel *n; + struct tb_tunnel *tunnel; + struct tb_tunnel *n; /* tunnels are only present after everything has been initialized */ list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) { - tb_pci_deactivate(tunnel); - tb_pci_free(tunnel); + /* + * DMA tunnels require the driver to be functional so we + * tear them down. Other protocol tunnels can be left + * intact. + */ + if (tb_tunnel_is_dma(tunnel)) + tb_tunnel_deactivate(tunnel); + tb_tunnel_free(tunnel); } tb_switch_remove(tb->root_switch); tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */ } +static int tb_scan_finalize_switch(struct device *dev, void *data) +{ + if (tb_is_switch(dev)) { + struct tb_switch *sw = tb_to_switch(dev); + + /* + * If we found that the switch was already setup by the + * boot firmware, mark it as authorized now before we + * send uevent to userspace. + */ + if (sw->boot) + sw->authorized = 1; + + dev_set_uevent_suppress(dev, false); + kobject_uevent(&dev->kobj, KOBJ_ADD); + device_for_each_child(dev, NULL, tb_scan_finalize_switch); + } + + return 0; +} + static int tb_start(struct tb *tb) { struct tb_cm *tcm = tb_priv(tb); int ret; tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0); - if (!tb->root_switch) - return -ENOMEM; + if (IS_ERR(tb->root_switch)) + return PTR_ERR(tb->root_switch); /* * ICM firmware upgrade needs running firmware and in native @@ -393,7 +679,11 @@ static int tb_start(struct tb *tb) /* Full scan to discover devices added before the driver was loaded. 
*/ tb_scan_switch(tb->root_switch); - tb_activate_pcie_devices(tb); + /* Find out tunnels created by the boot firmware */ + tb_discover_tunnels(tb->root_switch); + /* Make the discovered switches available to the userspace */ + device_for_each_child(&tb->root_switch->dev, NULL, + tb_scan_finalize_switch); /* Allow tb_handle_hotplug to progress events */ tcm->hotplug_active = true; @@ -415,7 +705,7 @@ static int tb_suspend_noirq(struct tb *tb) static int tb_resume_noirq(struct tb *tb) { struct tb_cm *tcm = tb_priv(tb); - struct tb_pci_tunnel *tunnel, *n; + struct tb_tunnel *tunnel, *n; tb_dbg(tb, "resuming...\n"); @@ -426,7 +716,7 @@ static int tb_resume_noirq(struct tb *tb) tb_free_invalid_tunnels(tb); tb_free_unplugged_children(tb->root_switch); list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) - tb_pci_restart(tunnel); + tb_tunnel_restart(tunnel); if (!list_empty(&tcm->tunnel_list)) { /* * the pcie links need some time to get going. @@ -442,12 +732,50 @@ static int tb_resume_noirq(struct tb *tb) return 0; } +static int tb_free_unplugged_xdomains(struct tb_switch *sw) +{ + int i, ret = 0; + + for (i = 1; i <= sw->config.max_port_number; i++) { + struct tb_port *port = &sw->ports[i]; + + if (tb_is_upstream_port(port)) + continue; + if (port->xdomain && port->xdomain->is_unplugged) { + tb_xdomain_remove(port->xdomain); + port->xdomain = NULL; + ret++; + } else if (port->remote) { + ret += tb_free_unplugged_xdomains(port->remote->sw); + } + } + + return ret; +} + +static void tb_complete(struct tb *tb) +{ + /* + * Release any unplugged XDomains and if there is a case where + * another domain is swapped in place of unplugged XDomain we + * need to run another rescan. + */ + mutex_lock(&tb->lock); + if (tb_free_unplugged_xdomains(tb->root_switch)) + tb_scan_switch(tb->root_switch); + mutex_unlock(&tb->lock); +} + static const struct tb_cm_ops tb_cm_ops = { .start = tb_start, .stop = tb_stop, .suspend_noirq = tb_suspend_noirq, .resume_noirq = tb_resume_noirq, + .complete = tb_complete, .handle_event = tb_handle_event, + .approve_switch = tb_tunnel_pci, + .approve_xdomain_paths = tb_approve_xdomain_paths, + .disconnect_xdomain_paths = tb_disconnect_xdomain_paths, }; struct tb *tb_probe(struct tb_nhi *nhi) @@ -462,7 +790,7 @@ struct tb *tb_probe(struct tb_nhi *nhi) if (!tb) return NULL; - tb->security_level = TB_SECURITY_NONE; + tb->security_level = TB_SECURITY_USER; tb->cm_ops = &tb_cm_ops; tcm = tb_priv(tb); diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 52584c4003e3..b12c8f33d89c 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -43,6 +43,7 @@ struct tb_switch_nvm { }; #define TB_SWITCH_KEY_SIZE 32 +#define TB_SWITCH_MAX_DEPTH 6 /** * struct tb_switch - a thunderbolt switch @@ -62,6 +63,7 @@ struct tb_switch_nvm { * @device_name: Name of the device (or %NULL if not known) * @generation: Switch Thunderbolt generation * @cap_plug_events: Offset to the plug events capability (%0 if not found) + * @cap_lc: Offset to the link controller capability (%0 if not found) * @is_unplugged: The switch is going away * @drom: DROM of the switch (%NULL if not found) * @nvm: Pointer to the NVM if the switch has one (%NULL otherwise) @@ -70,7 +72,6 @@ struct tb_switch_nvm { * @boot: Whether the switch was already authorized on boot or not * @rpm: The switch supports runtime PM * @authorized: Whether the switch is authorized by user or policy - * @work: Work used to automatically authorize a switch * @security_level: Switch supported security level * @key: 
Contains the key used to challenge the device or %NULL if not * supported. Size of the key is %TB_SWITCH_KEY_SIZE. @@ -80,8 +81,7 @@ struct tb_switch_nvm { * @depth: Depth in the chain this switch is connected (ICM only) * * When the switch is being added or removed to the domain (other - * switches) you need to have domain lock held. For switch authorization - * internal switch_lock is enough. + * switches) you need to have domain lock held. */ struct tb_switch { struct device dev; @@ -97,6 +97,7 @@ struct tb_switch { const char *device_name; unsigned int generation; int cap_plug_events; + int cap_lc; bool is_unplugged; u8 *drom; struct tb_switch_nvm *nvm; @@ -105,7 +106,6 @@ struct tb_switch { bool boot; bool rpm; unsigned int authorized; - struct work_struct work; enum tb_security_level security_level; u8 *key; u8 connection_id; @@ -121,11 +121,14 @@ struct tb_switch { * @remote: Remote port (%NULL if not connected) * @xdomain: Remote host (%NULL if not connected) * @cap_phy: Offset, zero if not found + * @cap_adap: Offset of the adapter specific capability (%0 if not present) * @port: Port number on switch * @disabled: Disabled by eeprom * @dual_link_port: If the switch is connected using two ports, points * to the other port. * @link_nr: Is this primary or secondary port on the dual_link. + * @in_hopids: Currently allocated input HopIDs + * @out_hopids: Currently allocated output HopIDs */ struct tb_port { struct tb_regs_port_header config; @@ -133,19 +136,35 @@ struct tb_port { struct tb_port *remote; struct tb_xdomain *xdomain; int cap_phy; + int cap_adap; u8 port; bool disabled; struct tb_port *dual_link_port; u8 link_nr:1; + struct ida in_hopids; + struct ida out_hopids; }; /** * struct tb_path_hop - routing information for a tb_path + * @in_port: Ingress port of a switch + * @out_port: Egress port of a switch where the packet is routed out + * (must be on the same switch than @in_port) + * @in_hop_index: HopID where the path configuration entry is placed in + * the path config space of @in_port. + * @in_counter_index: Used counter index (not used in the driver + * currently, %-1 to disable) + * @next_hop_index: HopID of the packet when it is routed out from @out_port + * @initial_credits: Number of initial flow control credits allocated for + * the path * * Hop configuration is always done on the IN port of a switch. * in_port and out_port have to be on the same switch. Packets arriving on * in_port with "hop" = in_hop_index will get routed to through out_port. The - * next hop to take (on out_port->remote) is determined by next_hop_index. + * next hop to take (on out_port->remote) is determined by + * next_hop_index. When routing packet to another switch (out->remote is + * set) the @next_hop_index must match the @in_hop_index of that next + * hop to make routing possible. * * in_counter_index is the index of a counter (in TB_CFG_COUNTERS) on the in * port. @@ -154,44 +173,71 @@ struct tb_path_hop { struct tb_port *in_port; struct tb_port *out_port; int in_hop_index; - int in_counter_index; /* write -1 to disable counters for this hop. 
*/ + int in_counter_index; int next_hop_index; + unsigned int initial_credits; }; /** * enum tb_path_port - path options mask + * @TB_PATH_NONE: Do not activate on any hop on path + * @TB_PATH_SOURCE: Activate on the first hop (out of src) + * @TB_PATH_INTERNAL: Activate on the intermediate hops (not the first/last) + * @TB_PATH_DESTINATION: Activate on the last hop (into dst) + * @TB_PATH_ALL: Activate on all hops on the path */ enum tb_path_port { TB_PATH_NONE = 0, - TB_PATH_SOURCE = 1, /* activate on the first hop (out of src) */ - TB_PATH_INTERNAL = 2, /* activate on other hops (not the first/last) */ - TB_PATH_DESTINATION = 4, /* activate on the last hop (into dst) */ + TB_PATH_SOURCE = 1, + TB_PATH_INTERNAL = 2, + TB_PATH_DESTINATION = 4, TB_PATH_ALL = 7, }; /** * struct tb_path - a unidirectional path between two ports + * @tb: Pointer to the domain structure + * @name: Name of the path (used for debugging) + * @nfc_credits: Number of non flow controlled credits allocated for the path + * @ingress_shared_buffer: Shared buffering used for ingress ports on the path + * @egress_shared_buffer: Shared buffering used for egress ports on the path + * @ingress_fc_enable: Flow control for ingress ports on the path + * @egress_fc_enable: Flow control for egress ports on the path + * @priority: Priority group of the path + * @weight: Weight of the path inside the priority group + * @drop_packages: Drop packages from queue tail or head + * @activated: Is the path active + * @clear_fc: Clear all flow control from the path config space entries + * when deactivating this path + * @hops: Path hops + * @path_length: How many hops the path uses * - * A path consists of a number of hops (see tb_path_hop). To establish a PCIe - * tunnel two paths have to be created between the two PCIe ports. - * + * A path consists of a number of hops (see &struct tb_path_hop). To + * establish a PCIe tunnel two paths have to be created between the two + * PCIe ports. */ struct tb_path { struct tb *tb; - int nfc_credits; /* non flow controlled credits */ + const char *name; + int nfc_credits; enum tb_path_port ingress_shared_buffer; enum tb_path_port egress_shared_buffer; enum tb_path_port ingress_fc_enable; enum tb_path_port egress_fc_enable; - int priority:3; + unsigned int priority:3; int weight:4; bool drop_packages; bool activated; + bool clear_fc; struct tb_path_hop *hops; - int path_length; /* number of hops */ + int path_length; }; +/* HopIDs 0-7 are reserved by the Thunderbolt protocol */ +#define TB_PATH_MIN_HOPID 8 +#define TB_PATH_MAX_HOPS 7 + /** * struct tb_cm_ops - Connection manager specific operations vector * @driver_ready: Called right after control channel is started. Used by @@ -261,7 +307,20 @@ static inline struct tb_port *tb_upstream_port(struct tb_switch *sw) return &sw->ports[sw->config.upstream_port_number]; } -static inline u64 tb_route(struct tb_switch *sw) +/** + * tb_is_upstream_port() - Is the port upstream facing + * @port: Port to check + * + * Returns true if @port is an upstream facing port. In case of dual link + * ports, both return true.
+ */ +static inline bool tb_is_upstream_port(const struct tb_port *port) +{ + const struct tb_port *upstream_port = tb_upstream_port(port->sw); + return port == upstream_port || port->dual_link_port == upstream_port; +} + +static inline u64 tb_route(const struct tb_switch *sw) { return ((u64) sw->config.route_hi) << 32 | sw->config.route_lo; } @@ -276,9 +335,54 @@ static inline struct tb_port *tb_port_at(u64 route, struct tb_switch *sw) return &sw->ports[port]; } +/** + * tb_port_has_remote() - Does the port have switch connected downstream + * @port: Port to check + * + * Returns true only when the port is primary port and has remote set. + */ +static inline bool tb_port_has_remote(const struct tb_port *port) +{ + if (tb_is_upstream_port(port)) + return false; + if (!port->remote) + return false; + if (port->dual_link_port && port->link_nr) + return false; + + return true; +} + +static inline bool tb_port_is_null(const struct tb_port *port) +{ + return port && port->port && port->config.type == TB_TYPE_PORT; +} + +static inline bool tb_port_is_pcie_down(const struct tb_port *port) +{ + return port && port->config.type == TB_TYPE_PCIE_DOWN; +} + +static inline bool tb_port_is_pcie_up(const struct tb_port *port) +{ + return port && port->config.type == TB_TYPE_PCIE_UP; +} + +static inline bool tb_port_is_dpin(const struct tb_port *port) +{ + return port && port->config.type == TB_TYPE_DP_HDMI_IN; +} + +static inline bool tb_port_is_dpout(const struct tb_port *port) +{ + return port && port->config.type == TB_TYPE_DP_HDMI_OUT; +} + static inline int tb_sw_read(struct tb_switch *sw, void *buffer, enum tb_cfg_space space, u32 offset, u32 length) { + if (sw->is_unplugged) + return -ENODEV; return tb_cfg_read(sw->tb->ctl, buffer, tb_route(sw), @@ -291,6 +395,8 @@ static inline int tb_sw_read(struct tb_switch *sw, void *buffer, static inline int tb_sw_write(struct tb_switch *sw, void *buffer, enum tb_cfg_space space, u32 offset, u32 length) { + if (sw->is_unplugged) + return -ENODEV; return tb_cfg_write(sw->tb->ctl, buffer, tb_route(sw), @@ -303,6 +409,8 @@ static inline int tb_sw_write(struct tb_switch *sw, void *buffer, static inline int tb_port_read(struct tb_port *port, void *buffer, enum tb_cfg_space space, u32 offset, u32 length) { + if (port->sw->is_unplugged) + return -ENODEV; return tb_cfg_read(port->sw->tb->ctl, buffer, tb_route(port->sw), @@ -315,6 +423,8 @@ static inline int tb_port_read(struct tb_port *port, void *buffer, static inline int tb_port_write(struct tb_port *port, const void *buffer, enum tb_cfg_space space, u32 offset, u32 length) { + if (port->sw->is_unplugged) + return -ENODEV; return tb_cfg_write(port->sw->tb->ctl, buffer, tb_route(port->sw), @@ -332,7 +442,7 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer, #define __TB_SW_PRINT(level, sw, fmt, arg...) \ do { \ - struct tb_switch *__sw = (sw); \ + const struct tb_switch *__sw = (sw); \ level(__sw->tb, "%llx: " fmt, \ tb_route(__sw), ## arg); \ } while (0) @@ -343,7 +453,7 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer, #define __TB_PORT_PRINT(level, _port, fmt, arg...) 
@@ -343,7 +453,7 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer, #define __TB_PORT_PRINT(level, _port, fmt, arg...) \ do { \ - struct tb_port *__port = (_port); \ + const struct tb_port *__port = (_port); \ level(__port->sw->tb, "%llx:%x: " fmt, \ tb_route(__port->sw), __port->port, ## arg); \ } while (0) @@ -385,6 +495,13 @@ int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd); int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd); int tb_domain_disconnect_all_paths(struct tb *tb); +static inline struct tb *tb_domain_get(struct tb *tb) +{ + if (tb) + get_device(&tb->dev); + return tb; +} + static inline void tb_domain_put(struct tb *tb) { put_device(&tb->dev); @@ -401,7 +518,6 @@ void tb_switch_suspend(struct tb_switch *sw); int tb_switch_resume(struct tb_switch *sw); int tb_switch_reset(struct tb *tb, u64 route); void tb_sw_set_unplugged(struct tb_switch *sw); -struct tb_switch *get_switch_at_route(struct tb_switch *sw, u64 route); struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, u8 depth); struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid); @@ -431,14 +547,74 @@ static inline struct tb_switch *tb_to_switch(struct device *dev) return NULL; } +static inline struct tb_switch *tb_switch_parent(struct tb_switch *sw) +{ + return tb_to_switch(sw->dev.parent); +} + +static inline bool tb_switch_is_lr(const struct tb_switch *sw) +{ + return sw->config.device_id == PCI_DEVICE_ID_INTEL_LIGHT_RIDGE; +} + +static inline bool tb_switch_is_er(const struct tb_switch *sw) +{ + return sw->config.device_id == PCI_DEVICE_ID_INTEL_EAGLE_RIDGE; +} + +static inline bool tb_switch_is_cr(const struct tb_switch *sw) +{ + switch (sw->config.device_id) { + case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_2C: + case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C: + return true; + default: + return false; + } +} + +static inline bool tb_switch_is_fr(const struct tb_switch *sw) +{ + switch (sw->config.device_id) { + case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE: + case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE: + return true; + default: + return false; + } +} + int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged); int tb_port_add_nfc_credits(struct tb_port *port, int credits); +int tb_port_set_initial_credits(struct tb_port *port, u32 credits); int tb_port_clear_counter(struct tb_port *port, int counter); +int tb_port_alloc_in_hopid(struct tb_port *port, int hopid, int max_hopid); +void tb_port_release_in_hopid(struct tb_port *port, int hopid); +int tb_port_alloc_out_hopid(struct tb_port *port, int hopid, int max_hopid); +void tb_port_release_out_hopid(struct tb_port *port, int hopid); +struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end, + struct tb_port *prev); int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec); int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap); - -struct tb_path *tb_path_alloc(struct tb *tb, int num_hops); +bool tb_port_is_enabled(struct tb_port *port); + +bool tb_pci_port_is_enabled(struct tb_port *port); +int tb_pci_port_enable(struct tb_port *port, bool enable); + +int tb_dp_port_hpd_is_active(struct tb_port *port); +int tb_dp_port_hpd_clear(struct tb_port *port); +int tb_dp_port_set_hops(struct tb_port *port, unsigned int video, + unsigned int aux_tx, unsigned int aux_rx); +bool tb_dp_port_is_enabled(struct tb_port *port); +int tb_dp_port_enable(struct tb_port *port, bool enable); + +struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid, + struct tb_port *dst, int dst_hopid, + struct tb_port **last, const char *name); +struct tb_path
*tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid, + struct tb_port *dst, int dst_hopid, int link_nr, + const char *name); void tb_path_free(struct tb_path *path); int tb_path_activate(struct tb_path *path); void tb_path_deactivate(struct tb_path *path); @@ -447,17 +623,16 @@ bool tb_path_is_invalid(struct tb_path *path); int tb_drom_read(struct tb_switch *sw); int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid); +int tb_lc_read_uuid(struct tb_switch *sw, u32 *uuid); +int tb_lc_configure_link(struct tb_switch *sw); +void tb_lc_unconfigure_link(struct tb_switch *sw); +int tb_lc_set_sleep(struct tb_switch *sw); static inline int tb_route_length(u64 route) { return (fls64(route) + TB_ROUTE_SHIFT - 1) / TB_ROUTE_SHIFT; } -static inline bool tb_is_upstream_port(struct tb_port *port) -{ - return port == tb_upstream_port(port->sw); -} - /** * tb_downstream_route() - get route to downstream switch * diff --git a/drivers/thunderbolt/tb_msgs.h b/drivers/thunderbolt/tb_msgs.h index 02c84aa3d018..afbe1d29bb03 100644 --- a/drivers/thunderbolt/tb_msgs.h +++ b/drivers/thunderbolt/tb_msgs.h @@ -492,6 +492,17 @@ struct tb_xdp_header { u32 type; }; +struct tb_xdp_uuid { + struct tb_xdp_header hdr; +}; + +struct tb_xdp_uuid_response { + struct tb_xdp_header hdr; + uuid_t src_uuid; + u32 src_route_hi; + u32 src_route_lo; +}; + struct tb_xdp_properties { struct tb_xdp_header hdr; uuid_t src_uuid; diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h index 6f1ff04ee195..deb9d4a977b9 100644 --- a/drivers/thunderbolt/tb_regs.h +++ b/drivers/thunderbolt/tb_regs.h @@ -211,6 +211,38 @@ struct tb_regs_port_header { } __packed; +/* DWORD 4 */ +#define TB_PORT_NFC_CREDITS_MASK GENMASK(19, 0) +#define TB_PORT_MAX_CREDITS_SHIFT 20 +#define TB_PORT_MAX_CREDITS_MASK GENMASK(26, 20) +/* DWORD 5 */ +#define TB_PORT_LCA_SHIFT 22 +#define TB_PORT_LCA_MASK GENMASK(28, 22) + +/* Display Port adapter registers */ + +/* DWORD 0 */ +#define TB_DP_VIDEO_HOPID_SHIFT 16 +#define TB_DP_VIDEO_HOPID_MASK GENMASK(26, 16) +#define TB_DP_AUX_EN BIT(30) +#define TB_DP_VIDEO_EN BIT(31) +/* DWORD 1 */ +#define TB_DP_AUX_TX_HOPID_MASK GENMASK(10, 0) +#define TB_DP_AUX_RX_HOPID_SHIFT 11 +#define TB_DP_AUX_RX_HOPID_MASK GENMASK(21, 11) +/* DWORD 2 */ +#define TB_DP_HDP BIT(6) +/* DWORD 3 */ +#define TB_DP_HPDC BIT(9) +/* DWORD 4 */ +#define TB_DP_LOCAL_CAP 0x4 +/* DWORD 5 */ +#define TB_DP_REMOTE_CAP 0x5 + +/* PCIe adapter registers */ + +#define TB_PCI_EN BIT(31) +
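/*
 * Illustration (hypothetical helpers, not part of the patch): the DP
 * adapter masks above are applied to the adapter config DWORDs. For
 * instance, DWORD 0 carries the video HopID plus the AUX/video enable
 * bits:
 */
static inline bool example_dp_video_enabled(u32 dword0)
{
	return dword0 & TB_DP_VIDEO_EN;
}

static inline unsigned int example_dp_video_hopid(u32 dword0)
{
	return (dword0 & TB_DP_VIDEO_HOPID_MASK) >> TB_DP_VIDEO_HOPID_SHIFT;
}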
/* Hop register from TB_CFG_HOPS. 8 bytes per entry. */ struct tb_regs_hop { /* DWORD 0 */ @@ -234,8 +266,24 @@ struct tb_regs_hop { bool egress_fc:1; bool ingress_shared_buffer:1; bool egress_shared_buffer:1; - u32 unknown3:4; /* set to zero */ + bool pending:1; + u32 unknown3:3; /* set to zero */ } __packed; +/* Common link controller registers */ +#define TB_LC_DESC 0x02 +#define TB_LC_DESC_NLC_MASK GENMASK(3, 0) +#define TB_LC_DESC_SIZE_SHIFT 8 +#define TB_LC_DESC_SIZE_MASK GENMASK(15, 8) +#define TB_LC_DESC_PORT_SIZE_SHIFT 16 +#define TB_LC_DESC_PORT_SIZE_MASK GENMASK(27, 16) +#define TB_LC_FUSE 0x03 + +/* Link controller registers */ +#define TB_LC_SX_CTRL 0x96 +#define TB_LC_SX_CTRL_L1C BIT(16) +#define TB_LC_SX_CTRL_L2C BIT(20) +#define TB_LC_SX_CTRL_UPSTREAM BIT(30) +#define TB_LC_SX_CTRL_SLP BIT(31) #endif diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c new file mode 100644 index 000000000000..31d0234837e4 --- /dev/null +++ b/drivers/thunderbolt/tunnel.c @@ -0,0 +1,691 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Thunderbolt driver - Tunneling support + * + * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> + * Copyright (C) 2019, Intel Corporation + */ + +#include <linux/slab.h> +#include <linux/list.h> + +#include "tunnel.h" +#include "tb.h" + +/* PCIe adapters always use HopID 8 for both directions */ +#define TB_PCI_HOPID 8 + +#define TB_PCI_PATH_DOWN 0 +#define TB_PCI_PATH_UP 1 + +/* DP adapters use HopID 8 for AUX and 9 for Video */ +#define TB_DP_AUX_TX_HOPID 8 +#define TB_DP_AUX_RX_HOPID 8 +#define TB_DP_VIDEO_HOPID 9 + +#define TB_DP_VIDEO_PATH_OUT 0 +#define TB_DP_AUX_PATH_OUT 1 +#define TB_DP_AUX_PATH_IN 2 + +#define TB_DMA_PATH_OUT 0 +#define TB_DMA_PATH_IN 1 + +static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA" }; + +#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \ + do { \ + struct tb_tunnel *__tunnel = (tunnel); \ + level(__tunnel->tb, "%llx:%x <-> %llx:%x (%s): " fmt, \ + tb_route(__tunnel->src_port->sw), \ + __tunnel->src_port->port, \ + tb_route(__tunnel->dst_port->sw), \ + __tunnel->dst_port->port, \ + tb_tunnel_names[__tunnel->type], \ + ## arg); \ + } while (0) + +#define tb_tunnel_WARN(tunnel, fmt, arg...) \ + __TB_TUNNEL_PRINT(tb_WARN, tunnel, fmt, ##arg) +#define tb_tunnel_warn(tunnel, fmt, arg...) \ + __TB_TUNNEL_PRINT(tb_warn, tunnel, fmt, ##arg) +#define tb_tunnel_info(tunnel, fmt, arg...) \ + __TB_TUNNEL_PRINT(tb_info, tunnel, fmt, ##arg) +#define tb_tunnel_dbg(tunnel, fmt, arg...)
\ + __TB_TUNNEL_PRINT(tb_dbg, tunnel, fmt, ##arg) + +static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths, + enum tb_tunnel_type type) +{ + struct tb_tunnel *tunnel; + + tunnel = kzalloc(sizeof(*tunnel), GFP_KERNEL); + if (!tunnel) + return NULL; + + tunnel->paths = kcalloc(npaths, sizeof(tunnel->paths[0]), GFP_KERNEL); + if (!tunnel->paths) { + tb_tunnel_free(tunnel); + return NULL; + } + + INIT_LIST_HEAD(&tunnel->list); + tunnel->tb = tb; + tunnel->npaths = npaths; + tunnel->type = type; + + return tunnel; +} + +static int tb_pci_activate(struct tb_tunnel *tunnel, bool activate) +{ + int res; + + res = tb_pci_port_enable(tunnel->src_port, activate); + if (res) + return res; + + if (tb_port_is_pcie_up(tunnel->dst_port)) + return tb_pci_port_enable(tunnel->dst_port, activate); + + return 0; +} + +static void tb_pci_init_path(struct tb_path *path) +{ + path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL; + path->egress_shared_buffer = TB_PATH_NONE; + path->ingress_fc_enable = TB_PATH_ALL; + path->ingress_shared_buffer = TB_PATH_NONE; + path->priority = 3; + path->weight = 1; + path->drop_packages = 0; + path->nfc_credits = 0; + path->hops[0].initial_credits = 7; + path->hops[1].initial_credits = 16; +} + +/** + * tb_tunnel_discover_pci() - Discover existing PCIe tunnels + * @tb: Pointer to the domain structure + * @down: PCIe downstream adapter + * + * If @down adapter is active, follows the tunnel to the PCIe upstream + * adapter and back. Returns the discovered tunnel or %NULL if there was + * no tunnel. + */ +struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down) +{ + struct tb_tunnel *tunnel; + struct tb_path *path; + + if (!tb_pci_port_is_enabled(down)) + return NULL; + + tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_PCI); + if (!tunnel) + return NULL; + + tunnel->activate = tb_pci_activate; + tunnel->src_port = down; + + /* + * Discover both paths even if they are not complete. We will + * clean them up by calling tb_tunnel_deactivate() below in that + * case. + */ + path = tb_path_discover(down, TB_PCI_HOPID, NULL, -1, + &tunnel->dst_port, "PCIe Up"); + if (!path) { + /* Just disable the downstream port */ + tb_pci_port_enable(down, false); + goto err_free; + } + tunnel->paths[TB_PCI_PATH_UP] = path; + tb_pci_init_path(tunnel->paths[TB_PCI_PATH_UP]); + + path = tb_path_discover(tunnel->dst_port, -1, down, TB_PCI_HOPID, NULL, + "PCIe Down"); + if (!path) + goto err_deactivate; + tunnel->paths[TB_PCI_PATH_DOWN] = path; + tb_pci_init_path(tunnel->paths[TB_PCI_PATH_DOWN]); + + /* Validate that the tunnel is complete */ + if (!tb_port_is_pcie_up(tunnel->dst_port)) { + tb_port_warn(tunnel->dst_port, + "path does not end on a PCIe adapter, cleaning up\n"); + goto err_deactivate; + } + + if (down != tunnel->src_port) { + tb_tunnel_warn(tunnel, "path is not complete, cleaning up\n"); + goto err_deactivate; + } + + if (!tb_pci_port_is_enabled(tunnel->dst_port)) { + tb_tunnel_warn(tunnel, + "tunnel is not fully activated, cleaning up\n"); + goto err_deactivate; + } + + tb_tunnel_dbg(tunnel, "discovered\n"); + return tunnel; + +err_deactivate: + tb_tunnel_deactivate(tunnel); +err_free: + tb_tunnel_free(tunnel); + + return NULL; +} +
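/*
 * Usage sketch (hypothetical connection-manager code, not part of the
 * patch; error handling trimmed): a PCIe tunnel is either discovered
 * from already-programmed hardware with tb_tunnel_discover_pci() or
 * allocated fresh as below, then activated. Teardown is deactivate
 * plus free.
 */
static int example_establish_pcie(struct tb *tb, struct tb_port *up,
				  struct tb_port *down)
{
	struct tb_tunnel *tunnel;
	int ret;

	tunnel = tb_tunnel_alloc_pci(tb, up, down);
	if (!tunnel)
		return -ENOMEM;

	ret = tb_tunnel_activate(tunnel);
	if (ret) {
		tb_tunnel_free(tunnel);
		return ret;
	}

	/* later: tb_tunnel_deactivate(tunnel); tb_tunnel_free(tunnel); */
	return 0;
}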
+/** + * tb_tunnel_alloc_pci() - allocate a PCIe tunnel + * @tb: Pointer to the domain structure + * @up: PCIe upstream adapter port + * @down: PCIe downstream adapter port + * + * Allocate a PCIe tunnel. The ports must be of type TB_TYPE_PCIE_UP and + * TB_TYPE_PCIE_DOWN. + * + * Return: Returns a tb_tunnel on success or NULL on failure. + */ +struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up, + struct tb_port *down) +{ + struct tb_tunnel *tunnel; + struct tb_path *path; + + tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_PCI); + if (!tunnel) + return NULL; + + tunnel->activate = tb_pci_activate; + tunnel->src_port = down; + tunnel->dst_port = up; + + path = tb_path_alloc(tb, down, TB_PCI_HOPID, up, TB_PCI_HOPID, 0, + "PCIe Down"); + if (!path) { + tb_tunnel_free(tunnel); + return NULL; + } + tb_pci_init_path(path); + tunnel->paths[TB_PCI_PATH_UP] = path; + + path = tb_path_alloc(tb, up, TB_PCI_HOPID, down, TB_PCI_HOPID, 0, + "PCIe Up"); + if (!path) { + tb_tunnel_free(tunnel); + return NULL; + } + tb_pci_init_path(path); + tunnel->paths[TB_PCI_PATH_DOWN] = path; + + return tunnel; +} + +static int tb_dp_xchg_caps(struct tb_tunnel *tunnel) +{ + struct tb_port *out = tunnel->dst_port; + struct tb_port *in = tunnel->src_port; + u32 in_dp_cap, out_dp_cap; + int ret; + + /* + * Copy DP_LOCAL_CAP register to DP_REMOTE_CAP register for + * newer generation hardware. + */ + if (in->sw->generation < 2 || out->sw->generation < 2) + return 0; + + /* Read both DP_LOCAL_CAP registers */ + ret = tb_port_read(in, &in_dp_cap, TB_CFG_PORT, + in->cap_adap + TB_DP_LOCAL_CAP, 1); + if (ret) + return ret; + + ret = tb_port_read(out, &out_dp_cap, TB_CFG_PORT, + out->cap_adap + TB_DP_LOCAL_CAP, 1); + if (ret) + return ret; + + /* Write IN local caps to OUT remote caps */ + ret = tb_port_write(out, &in_dp_cap, TB_CFG_PORT, + out->cap_adap + TB_DP_REMOTE_CAP, 1); + if (ret) + return ret; + + return tb_port_write(in, &out_dp_cap, TB_CFG_PORT, + in->cap_adap + TB_DP_REMOTE_CAP, 1); +} + +static int tb_dp_activate(struct tb_tunnel *tunnel, bool active) +{ + int ret; + + if (active) { + struct tb_path **paths; + int last; + + paths = tunnel->paths; + last = paths[TB_DP_VIDEO_PATH_OUT]->path_length - 1; + + tb_dp_port_set_hops(tunnel->src_port, + paths[TB_DP_VIDEO_PATH_OUT]->hops[0].in_hop_index, + paths[TB_DP_AUX_PATH_OUT]->hops[0].in_hop_index, + paths[TB_DP_AUX_PATH_IN]->hops[last].next_hop_index); + + tb_dp_port_set_hops(tunnel->dst_port, + paths[TB_DP_VIDEO_PATH_OUT]->hops[last].next_hop_index, + paths[TB_DP_AUX_PATH_IN]->hops[0].in_hop_index, + paths[TB_DP_AUX_PATH_OUT]->hops[last].next_hop_index); + } else { + tb_dp_port_hpd_clear(tunnel->src_port); + tb_dp_port_set_hops(tunnel->src_port, 0, 0, 0); + if (tb_port_is_dpout(tunnel->dst_port)) + tb_dp_port_set_hops(tunnel->dst_port, 0, 0, 0); + } + + ret = tb_dp_port_enable(tunnel->src_port, active); + if (ret) + return ret; + + if (tb_port_is_dpout(tunnel->dst_port)) + return tb_dp_port_enable(tunnel->dst_port, active); + + return 0; +} + +static void tb_dp_init_aux_path(struct tb_path *path) +{ + int i; + + path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL; + path->egress_shared_buffer = TB_PATH_NONE; + path->ingress_fc_enable = TB_PATH_ALL; + path->ingress_shared_buffer = TB_PATH_NONE; + path->priority = 2; + path->weight = 1; + + for (i = 0; i < path->path_length; i++) + path->hops[i].initial_credits = 1; +} + +static void tb_dp_init_video_path(struct tb_path *path, bool discover) +{ + u32 nfc_credits = path->hops[0].in_port->config.nfc_credits; + + path->egress_fc_enable = TB_PATH_NONE; + path->egress_shared_buffer = TB_PATH_NONE; + path->ingress_fc_enable = TB_PATH_NONE; + path->ingress_shared_buffer = TB_PATH_NONE; + path->priority = 1; + path->weight = 1; + + if (discover) { + path->nfc_credits =
nfc_credits & TB_PORT_NFC_CREDITS_MASK; + } else { + u32 max_credits; + + max_credits = (nfc_credits & TB_PORT_MAX_CREDITS_MASK) >> + TB_PORT_MAX_CREDITS_SHIFT; + /* Leave some credits for AUX path */ + path->nfc_credits = min(max_credits - 2, 12U); + } +} + +/** + * tb_tunnel_discover_dp() - Discover existing Display Port tunnels + * @tb: Pointer to the domain structure + * @in: DP in adapter + * + * If @in adapter is active, follows the tunnel to the DP out adapter + * and back. + * + * Return: The discovered DP tunnel or %NULL if no tunnel was found. + */ +struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in) +{ + struct tb_tunnel *tunnel; + struct tb_port *port; + struct tb_path *path; + + if (!tb_dp_port_is_enabled(in)) + return NULL; + + tunnel = tb_tunnel_alloc(tb, 3, TB_TUNNEL_DP); + if (!tunnel) + return NULL; + + tunnel->init = tb_dp_xchg_caps; + tunnel->activate = tb_dp_activate; + tunnel->src_port = in; + + path = tb_path_discover(in, TB_DP_VIDEO_HOPID, NULL, -1, + &tunnel->dst_port, "Video"); + if (!path) { + /* Just disable the DP IN port */ + tb_dp_port_enable(in, false); + goto err_free; + } + tunnel->paths[TB_DP_VIDEO_PATH_OUT] = path; + tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT], true); + + path = tb_path_discover(in, TB_DP_AUX_TX_HOPID, NULL, -1, NULL, "AUX TX"); + if (!path) + goto err_deactivate; + tunnel->paths[TB_DP_AUX_PATH_OUT] = path; + tb_dp_init_aux_path(tunnel->paths[TB_DP_AUX_PATH_OUT]); + + path = tb_path_discover(tunnel->dst_port, -1, in, TB_DP_AUX_RX_HOPID, + &port, "AUX RX"); + if (!path) + goto err_deactivate; + tunnel->paths[TB_DP_AUX_PATH_IN] = path; + tb_dp_init_aux_path(tunnel->paths[TB_DP_AUX_PATH_IN]); + + /* Validate that the tunnel is complete */ + if (!tb_port_is_dpout(tunnel->dst_port)) { + tb_port_warn(in, "path does not end on a DP adapter, cleaning up\n"); + goto err_deactivate; + } + + if (!tb_dp_port_is_enabled(tunnel->dst_port)) + goto err_deactivate; + + if (!tb_dp_port_hpd_is_active(tunnel->dst_port)) + goto err_deactivate; + + if (port != tunnel->src_port) { + tb_tunnel_warn(tunnel, "path is not complete, cleaning up\n"); + goto err_deactivate; + } + + tb_tunnel_dbg(tunnel, "discovered\n"); + return tunnel; + +err_deactivate: + tb_tunnel_deactivate(tunnel); +err_free: + tb_tunnel_free(tunnel); + + return NULL; +} +
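/*
 * Usage sketch (hypothetical caller, not part of the patch): a DP
 * tunnel always consists of three paths -- video out, AUX out and
 * AUX in -- so the single allocation below sets up all of them
 * between the DP IN and DP OUT adapters:
 *
 *	tunnel = tb_tunnel_alloc_dp(tb, in, out);
 *	if (tunnel && tb_tunnel_activate(tunnel)) {
 *		tb_tunnel_free(tunnel);
 *		tunnel = NULL;
 *	}
 */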
+/** + * tb_tunnel_alloc_dp() - allocate a Display Port tunnel + * @tb: Pointer to the domain structure + * @in: DP in adapter port + * @out: DP out adapter port + * + * Allocates a tunnel between @in and @out that is capable of tunneling + * Display Port traffic. + * + * Return: Returns a tb_tunnel on success or NULL on failure. + */ +struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in, + struct tb_port *out) +{ + struct tb_tunnel *tunnel; + struct tb_path **paths; + struct tb_path *path; + + if (WARN_ON(!in->cap_adap || !out->cap_adap)) + return NULL; + + tunnel = tb_tunnel_alloc(tb, 3, TB_TUNNEL_DP); + if (!tunnel) + return NULL; + + tunnel->init = tb_dp_xchg_caps; + tunnel->activate = tb_dp_activate; + tunnel->src_port = in; + tunnel->dst_port = out; + + paths = tunnel->paths; + + path = tb_path_alloc(tb, in, TB_DP_VIDEO_HOPID, out, TB_DP_VIDEO_HOPID, + 1, "Video"); + if (!path) + goto err_free; + tb_dp_init_video_path(path, false); + paths[TB_DP_VIDEO_PATH_OUT] = path; + + path = tb_path_alloc(tb, in, TB_DP_AUX_TX_HOPID, out, + TB_DP_AUX_TX_HOPID, 1, "AUX TX"); + if (!path) + goto err_free; + tb_dp_init_aux_path(path); + paths[TB_DP_AUX_PATH_OUT] = path; + + path = tb_path_alloc(tb, out, TB_DP_AUX_RX_HOPID, in, + TB_DP_AUX_RX_HOPID, 1, "AUX RX"); + if (!path) + goto err_free; + tb_dp_init_aux_path(path); + paths[TB_DP_AUX_PATH_IN] = path; + + return tunnel; + +err_free: + tb_tunnel_free(tunnel); + return NULL; +} + +static u32 tb_dma_credits(struct tb_port *nhi) +{ + u32 max_credits; + + max_credits = (nhi->config.nfc_credits & TB_PORT_MAX_CREDITS_MASK) >> + TB_PORT_MAX_CREDITS_SHIFT; + return min(max_credits, 13U); +} + +static int tb_dma_activate(struct tb_tunnel *tunnel, bool active) +{ + struct tb_port *nhi = tunnel->src_port; + u32 credits; + + credits = active ? tb_dma_credits(nhi) : 0; + return tb_port_set_initial_credits(nhi, credits); +} + +static void tb_dma_init_path(struct tb_path *path, unsigned int isb, + unsigned int efc, u32 credits) +{ + int i; + + path->egress_fc_enable = efc; + path->ingress_fc_enable = TB_PATH_ALL; + path->egress_shared_buffer = TB_PATH_NONE; + path->ingress_shared_buffer = isb; + path->priority = 5; + path->weight = 1; + path->clear_fc = true; + + for (i = 0; i < path->path_length; i++) + path->hops[i].initial_credits = credits; +} + +/** + * tb_tunnel_alloc_dma() - allocate a DMA tunnel + * @tb: Pointer to the domain structure + * @nhi: Host controller port + * @dst: Destination null port which the other domain is connected to + * @transmit_ring: NHI ring number used to send packets towards the + * other domain + * @transmit_path: HopID used for transmitting packets + * @receive_ring: NHI ring number used to receive packets from the + * other domain + * @receive_path: HopID used for receiving packets + * + * Return: Returns a tb_tunnel on success or NULL on failure.
+ */ +struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, + struct tb_port *dst, int transmit_ring, + int transmit_path, int receive_ring, + int receive_path) +{ + struct tb_tunnel *tunnel; + struct tb_path *path; + u32 credits; + + tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_DMA); + if (!tunnel) + return NULL; + + tunnel->activate = tb_dma_activate; + tunnel->src_port = nhi; + tunnel->dst_port = dst; + + credits = tb_dma_credits(nhi); + + path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0, "DMA RX"); + if (!path) { + tb_tunnel_free(tunnel); + return NULL; + } + tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL, + credits); + tunnel->paths[TB_DMA_PATH_IN] = path; + + path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0, "DMA TX"); + if (!path) { + tb_tunnel_free(tunnel); + return NULL; + } + tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits); + tunnel->paths[TB_DMA_PATH_OUT] = path; + + return tunnel; +} + +/** + * tb_tunnel_free() - free a tunnel + * @tunnel: Tunnel to be freed + * + * Frees a tunnel. The tunnel does not need to be deactivated. + */ +void tb_tunnel_free(struct tb_tunnel *tunnel) +{ + int i; + + if (!tunnel) + return; + + for (i = 0; i < tunnel->npaths; i++) { + if (tunnel->paths[i]) + tb_path_free(tunnel->paths[i]); + } + + kfree(tunnel->paths); + kfree(tunnel); +} + +/** + * tb_tunnel_is_invalid - check whether an activated path is still valid + * @tunnel: Tunnel to check + */ +bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel) +{ + int i; + + for (i = 0; i < tunnel->npaths; i++) { + WARN_ON(!tunnel->paths[i]->activated); + if (tb_path_is_invalid(tunnel->paths[i])) + return true; + } + + return false; +} + +/** + * tb_tunnel_restart() - activate a tunnel after a hardware reset + * @tunnel: Tunnel to restart + * + * Return: 0 on success and negative errno in case of failure + */ +int tb_tunnel_restart(struct tb_tunnel *tunnel) +{ + int res, i; + + tb_tunnel_dbg(tunnel, "activating\n"); + + /* + * Make sure all paths are properly disabled before enabling + * them again. + */ + for (i = 0; i < tunnel->npaths; i++) { + if (tunnel->paths[i]->activated) { + tb_path_deactivate(tunnel->paths[i]); + tunnel->paths[i]->activated = false; + } + } + + if (tunnel->init) { + res = tunnel->init(tunnel); + if (res) + return res; + } + + for (i = 0; i < tunnel->npaths; i++) { + res = tb_path_activate(tunnel->paths[i]); + if (res) + goto err; + } + + if (tunnel->activate) { + res = tunnel->activate(tunnel, true); + if (res) + goto err; + } + + return 0; + +err: + tb_tunnel_warn(tunnel, "activation failed\n"); + tb_tunnel_deactivate(tunnel); + return res; +} +
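/*
 * Usage sketch (hypothetical XDomain service code, not part of the
 * patch; the ring and HopID numbers are made up): DMA tunnels follow
 * the same lifecycle as PCIe and DP ones, and tb_tunnel_restart()
 * brings an existing tunnel back up after a reset without
 * reallocating it:
 *
 *	tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, 1, 8, 1, 8);
 *	if (tunnel && tb_tunnel_activate(tunnel)) {
 *		tb_tunnel_free(tunnel);
 *		tunnel = NULL;
 *	}
 *	...
 *	if (tunnel && tb_tunnel_is_invalid(tunnel))
 *		tb_tunnel_restart(tunnel);
 */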
+/** + * tb_tunnel_activate() - activate a tunnel + * @tunnel: Tunnel to activate + * + * Return: Returns 0 on success or an error code on failure. + */ +int tb_tunnel_activate(struct tb_tunnel *tunnel) +{ + int i; + + for (i = 0; i < tunnel->npaths; i++) { + if (tunnel->paths[i]->activated) { + tb_tunnel_WARN(tunnel, + "trying to activate an already activated tunnel\n"); + return -EINVAL; + } + } + + return tb_tunnel_restart(tunnel); +} + +/** + * tb_tunnel_deactivate() - deactivate a tunnel + * @tunnel: Tunnel to deactivate + */ +void tb_tunnel_deactivate(struct tb_tunnel *tunnel) +{ + int i; + + tb_tunnel_dbg(tunnel, "deactivating\n"); + + if (tunnel->activate) + tunnel->activate(tunnel, false); + + for (i = 0; i < tunnel->npaths; i++) { + if (tunnel->paths[i] && tunnel->paths[i]->activated) + tb_path_deactivate(tunnel->paths[i]); + } +} diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h new file mode 100644 index 000000000000..c68bbcd3a62c --- /dev/null +++ b/drivers/thunderbolt/tunnel.h @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Thunderbolt driver - Tunneling support + * + * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> + * Copyright (C) 2019, Intel Corporation + */ + +#ifndef TB_TUNNEL_H_ +#define TB_TUNNEL_H_ + +#include "tb.h" + +enum tb_tunnel_type { + TB_TUNNEL_PCI, + TB_TUNNEL_DP, + TB_TUNNEL_DMA, +}; + +/** + * struct tb_tunnel - Tunnel between two ports + * @tb: Pointer to the domain + * @src_port: Source port of the tunnel + * @dst_port: Destination port of the tunnel. For discovered, incomplete + * tunnels this may be %NULL or a null adapter port instead. + * @paths: All paths required by the tunnel + * @npaths: Number of paths in @paths + * @init: Optional tunnel specific initialization + * @activate: Optional tunnel specific activation/deactivation + * @list: Tunnels are linked using this field + * @type: Type of the tunnel + */ +struct tb_tunnel { + struct tb *tb; + struct tb_port *src_port; + struct tb_port *dst_port; + struct tb_path **paths; + size_t npaths; + int (*init)(struct tb_tunnel *tunnel); + int (*activate)(struct tb_tunnel *tunnel, bool activate); + struct list_head list; + enum tb_tunnel_type type; +}; + +struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down); +struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up, + struct tb_port *down); +struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in); +struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in, + struct tb_port *out); +struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, + struct tb_port *dst, int transmit_ring, + int transmit_path, int receive_ring, + int receive_path); + +void tb_tunnel_free(struct tb_tunnel *tunnel); +int tb_tunnel_activate(struct tb_tunnel *tunnel); +int tb_tunnel_restart(struct tb_tunnel *tunnel); +void tb_tunnel_deactivate(struct tb_tunnel *tunnel); +bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel); + +static inline bool tb_tunnel_is_pci(const struct tb_tunnel *tunnel) +{ + return tunnel->type == TB_TUNNEL_PCI; +} + +static inline bool tb_tunnel_is_dp(const struct tb_tunnel *tunnel) +{ + return tunnel->type == TB_TUNNEL_DP; +} + +static inline bool tb_tunnel_is_dma(const struct tb_tunnel *tunnel) +{ + return tunnel->type == TB_TUNNEL_DMA; +} + +#endif + diff --git a/drivers/thunderbolt/tunnel_pci.c b/drivers/thunderbolt/tunnel_pci.c deleted file mode 100644 index 0637537ea53f..000000000000 --- a/drivers/thunderbolt/tunnel_pci.c +++ /dev/null @@ -1,226 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Thunderbolt Cactus Ridge driver - PCIe tunnel - * - * Copyright (c) 2014
Andreas Noever <andreas.noever@gmail.com> - */ - -#include <linux/slab.h> -#include <linux/list.h> - -#include "tunnel_pci.h" -#include "tb.h" - -#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \ - do { \ - struct tb_pci_tunnel *__tunnel = (tunnel); \ - level(__tunnel->tb, "%llx:%x <-> %llx:%x (PCI): " fmt, \ - tb_route(__tunnel->down_port->sw), \ - __tunnel->down_port->port, \ - tb_route(__tunnel->up_port->sw), \ - __tunnel->up_port->port, \ - ## arg); \ - } while (0) - -#define tb_tunnel_WARN(tunnel, fmt, arg...) \ - __TB_TUNNEL_PRINT(tb_WARN, tunnel, fmt, ##arg) -#define tb_tunnel_warn(tunnel, fmt, arg...) \ - __TB_TUNNEL_PRINT(tb_warn, tunnel, fmt, ##arg) -#define tb_tunnel_info(tunnel, fmt, arg...) \ - __TB_TUNNEL_PRINT(tb_info, tunnel, fmt, ##arg) - -static void tb_pci_init_path(struct tb_path *path) -{ - path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL; - path->egress_shared_buffer = TB_PATH_NONE; - path->ingress_fc_enable = TB_PATH_ALL; - path->ingress_shared_buffer = TB_PATH_NONE; - path->priority = 3; - path->weight = 1; - path->drop_packages = 0; - path->nfc_credits = 0; -} - -/** - * tb_pci_alloc() - allocate a pci tunnel - * - * Allocate a PCI tunnel. The ports must be of type TB_TYPE_PCIE_UP and - * TB_TYPE_PCIE_DOWN. - * - * Currently only paths consisting of two hops are supported (that is the - * ports must be on "adjacent" switches). - * - * The paths are hard-coded to use hop 8 (the only working hop id available on - * my thunderbolt devices). Therefore at most ONE path per device may be - * activated. - * - * Return: Returns a tb_pci_tunnel on success or NULL on failure. - */ -struct tb_pci_tunnel *tb_pci_alloc(struct tb *tb, struct tb_port *up, - struct tb_port *down) -{ - struct tb_pci_tunnel *tunnel = kzalloc(sizeof(*tunnel), GFP_KERNEL); - if (!tunnel) - goto err; - tunnel->tb = tb; - tunnel->down_port = down; - tunnel->up_port = up; - INIT_LIST_HEAD(&tunnel->list); - tunnel->path_to_up = tb_path_alloc(up->sw->tb, 2); - if (!tunnel->path_to_up) - goto err; - tunnel->path_to_down = tb_path_alloc(up->sw->tb, 2); - if (!tunnel->path_to_down) - goto err; - tb_pci_init_path(tunnel->path_to_up); - tb_pci_init_path(tunnel->path_to_down); - - tunnel->path_to_up->hops[0].in_port = down; - tunnel->path_to_up->hops[0].in_hop_index = 8; - tunnel->path_to_up->hops[0].in_counter_index = -1; - tunnel->path_to_up->hops[0].out_port = tb_upstream_port(up->sw)->remote; - tunnel->path_to_up->hops[0].next_hop_index = 8; - - tunnel->path_to_up->hops[1].in_port = tb_upstream_port(up->sw); - tunnel->path_to_up->hops[1].in_hop_index = 8; - tunnel->path_to_up->hops[1].in_counter_index = -1; - tunnel->path_to_up->hops[1].out_port = up; - tunnel->path_to_up->hops[1].next_hop_index = 8; - - tunnel->path_to_down->hops[0].in_port = up; - tunnel->path_to_down->hops[0].in_hop_index = 8; - tunnel->path_to_down->hops[0].in_counter_index = -1; - tunnel->path_to_down->hops[0].out_port = tb_upstream_port(up->sw); - tunnel->path_to_down->hops[0].next_hop_index = 8; - - tunnel->path_to_down->hops[1].in_port = - tb_upstream_port(up->sw)->remote; - tunnel->path_to_down->hops[1].in_hop_index = 8; - tunnel->path_to_down->hops[1].in_counter_index = -1; - tunnel->path_to_down->hops[1].out_port = down; - tunnel->path_to_down->hops[1].next_hop_index = 8; - return tunnel; - -err: - if (tunnel) { - if (tunnel->path_to_down) - tb_path_free(tunnel->path_to_down); - if (tunnel->path_to_up) - tb_path_free(tunnel->path_to_up); - kfree(tunnel); - } - return NULL; -} - -/** - * tb_pci_free() - free a 
tunnel - * - * The tunnel must have been deactivated. - */ -void tb_pci_free(struct tb_pci_tunnel *tunnel) -{ - if (tunnel->path_to_up->activated || tunnel->path_to_down->activated) { - tb_tunnel_WARN(tunnel, "trying to free an activated tunnel\n"); - return; - } - tb_path_free(tunnel->path_to_up); - tb_path_free(tunnel->path_to_down); - kfree(tunnel); -} - -/** - * tb_pci_is_invalid - check whether an activated path is still valid - */ -bool tb_pci_is_invalid(struct tb_pci_tunnel *tunnel) -{ - WARN_ON(!tunnel->path_to_up->activated); - WARN_ON(!tunnel->path_to_down->activated); - - return tb_path_is_invalid(tunnel->path_to_up) - || tb_path_is_invalid(tunnel->path_to_down); -} - -/** - * tb_pci_port_active() - activate/deactivate PCI capability - * - * Return: Returns 0 on success or an error code on failure. - */ -static int tb_pci_port_active(struct tb_port *port, bool active) -{ - u32 word = active ? 0x80000000 : 0x0; - int cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP); - if (cap < 0) { - tb_port_warn(port, "TB_PORT_CAP_ADAP not found: %d\n", cap); - return cap; - } - return tb_port_write(port, &word, TB_CFG_PORT, cap, 1); -} - -/** - * tb_pci_restart() - activate a tunnel after a hardware reset - */ -int tb_pci_restart(struct tb_pci_tunnel *tunnel) -{ - int res; - tunnel->path_to_up->activated = false; - tunnel->path_to_down->activated = false; - - tb_tunnel_info(tunnel, "activating\n"); - - res = tb_path_activate(tunnel->path_to_up); - if (res) - goto err; - res = tb_path_activate(tunnel->path_to_down); - if (res) - goto err; - - res = tb_pci_port_active(tunnel->down_port, true); - if (res) - goto err; - - res = tb_pci_port_active(tunnel->up_port, true); - if (res) - goto err; - return 0; -err: - tb_tunnel_warn(tunnel, "activation failed\n"); - tb_pci_deactivate(tunnel); - return res; -} - -/** - * tb_pci_activate() - activate a tunnel - * - * Return: Returns 0 on success or an error code on failure. - */ -int tb_pci_activate(struct tb_pci_tunnel *tunnel) -{ - if (tunnel->path_to_up->activated || tunnel->path_to_down->activated) { - tb_tunnel_WARN(tunnel, - "trying to activate an already activated tunnel\n"); - return -EINVAL; - } - - return tb_pci_restart(tunnel); -} - - - -/** - * tb_pci_deactivate() - deactivate a tunnel - */ -void tb_pci_deactivate(struct tb_pci_tunnel *tunnel) -{ - tb_tunnel_info(tunnel, "deactivating\n"); - /* - * TODO: enable reset by writing 0x04000000 to TB_CAP_PCIE + 1 on up - * port. Seems to have no effect? 
- */ - tb_pci_port_active(tunnel->up_port, false); - tb_pci_port_active(tunnel->down_port, false); - if (tunnel->path_to_down->activated) - tb_path_deactivate(tunnel->path_to_down); - if (tunnel->path_to_up->activated) - tb_path_deactivate(tunnel->path_to_up); -} - diff --git a/drivers/thunderbolt/tunnel_pci.h b/drivers/thunderbolt/tunnel_pci.h deleted file mode 100644 index f9b65fa1fd4d..000000000000 --- a/drivers/thunderbolt/tunnel_pci.h +++ /dev/null @@ -1,31 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * Thunderbolt Cactus Ridge driver - PCIe tunnel - * - * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> - */ - -#ifndef TB_PCI_H_ -#define TB_PCI_H_ - -#include "tb.h" - -struct tb_pci_tunnel { - struct tb *tb; - struct tb_port *up_port; - struct tb_port *down_port; - struct tb_path *path_to_up; - struct tb_path *path_to_down; - struct list_head list; -}; - -struct tb_pci_tunnel *tb_pci_alloc(struct tb *tb, struct tb_port *up, - struct tb_port *down); -void tb_pci_free(struct tb_pci_tunnel *tunnel); -int tb_pci_activate(struct tb_pci_tunnel *tunnel); -int tb_pci_restart(struct tb_pci_tunnel *tunnel); -void tb_pci_deactivate(struct tb_pci_tunnel *tunnel); -bool tb_pci_is_invalid(struct tb_pci_tunnel *tunnel); - -#endif - diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c index e27dd8beb94b..5118d46702d5 100644 --- a/drivers/thunderbolt/xdomain.c +++ b/drivers/thunderbolt/xdomain.c @@ -18,6 +18,7 @@ #include "tb.h" #define XDOMAIN_DEFAULT_TIMEOUT 5000 /* ms */ +#define XDOMAIN_UUID_RETRIES 10 #define XDOMAIN_PROPERTIES_RETRIES 60 #define XDOMAIN_PROPERTIES_CHANGED_RETRIES 10 @@ -222,6 +223,50 @@ static int tb_xdp_handle_error(const struct tb_xdp_header *hdr) return 0; } +static int tb_xdp_uuid_request(struct tb_ctl *ctl, u64 route, int retry, + uuid_t *uuid) +{ + struct tb_xdp_uuid_response res; + struct tb_xdp_uuid req; + int ret; + + memset(&req, 0, sizeof(req)); + tb_xdp_fill_header(&req.hdr, route, retry % 4, UUID_REQUEST, + sizeof(req)); + + memset(&res, 0, sizeof(res)); + ret = __tb_xdomain_request(ctl, &req, sizeof(req), + TB_CFG_PKG_XDOMAIN_REQ, &res, sizeof(res), + TB_CFG_PKG_XDOMAIN_RESP, + XDOMAIN_DEFAULT_TIMEOUT); + if (ret) + return ret; + + ret = tb_xdp_handle_error(&res.hdr); + if (ret) + return ret; + + uuid_copy(uuid, &res.src_uuid); + return 0; +} + +static int tb_xdp_uuid_response(struct tb_ctl *ctl, u64 route, u8 sequence, + const uuid_t *uuid) +{ + struct tb_xdp_uuid_response res; + + memset(&res, 0, sizeof(res)); + tb_xdp_fill_header(&res.hdr, route, sequence, UUID_RESPONSE, + sizeof(res)); + + uuid_copy(&res.src_uuid, uuid); + res.src_route_hi = upper_32_bits(route); + res.src_route_lo = lower_32_bits(route); + + return __tb_xdomain_response(ctl, &res, sizeof(res), + TB_CFG_PKG_XDOMAIN_RESP); +} + static int tb_xdp_error_response(struct tb_ctl *ctl, u64 route, u8 sequence, enum tb_xdp_error error) { @@ -512,7 +557,14 @@ static void tb_xdp_handle_request(struct work_struct *work) break; } + case UUID_REQUEST_OLD: + case UUID_REQUEST: + ret = tb_xdp_uuid_response(ctl, route, sequence, uuid); + break; + default: + tb_xdp_error_response(ctl, route, sequence, + ERROR_NOT_SUPPORTED); break; } @@ -524,9 +576,11 @@ static void tb_xdp_handle_request(struct work_struct *work) out: kfree(xw->pkg); kfree(xw); + + tb_domain_put(tb); } -static void +static bool tb_xdp_schedule_request(struct tb *tb, const struct tb_xdp_header *hdr, size_t size) { @@ -534,13 +588,18 @@ tb_xdp_schedule_request(struct tb *tb, const struct tb_xdp_header 
*hdr, xw = kmalloc(sizeof(*xw), GFP_KERNEL); if (!xw) - return; + return false; INIT_WORK(&xw->work, tb_xdp_handle_request); xw->pkg = kmemdup(hdr, size, GFP_KERNEL); - xw->tb = tb; + if (!xw->pkg) { + kfree(xw); + return false; + } + xw->tb = tb_domain_get(tb); - queue_work(tb->wq, &xw->work); + schedule_work(&xw->work); + return true; } /** @@ -740,6 +799,7 @@ static void enumerate_services(struct tb_xdomain *xd) struct tb_service *svc; struct tb_property *p; struct device *dev; + int id; /* * First remove all services that are not available anymore in @@ -768,7 +828,12 @@ static void enumerate_services(struct tb_xdomain *xd) break; } - svc->id = ida_simple_get(&xd->service_ids, 0, 0, GFP_KERNEL); + id = ida_simple_get(&xd->service_ids, 0, 0, GFP_KERNEL); + if (id < 0) { + kfree(svc); + break; + } + svc->id = id; svc->dev.bus = &tb_bus_type; svc->dev.type = &tb_service_type; svc->dev.parent = &xd->dev; @@ -826,6 +891,55 @@ static void tb_xdomain_restore_paths(struct tb_xdomain *xd) } } +static void tb_xdomain_get_uuid(struct work_struct *work) +{ + struct tb_xdomain *xd = container_of(work, typeof(*xd), + get_uuid_work.work); + struct tb *tb = xd->tb; + uuid_t uuid; + int ret; + + ret = tb_xdp_uuid_request(tb->ctl, xd->route, xd->uuid_retries, &uuid); + if (ret < 0) { + if (xd->uuid_retries-- > 0) { + queue_delayed_work(xd->tb->wq, &xd->get_uuid_work, + msecs_to_jiffies(100)); + } else { + dev_dbg(&xd->dev, "failed to read remote UUID\n"); + } + return; + } + + if (uuid_equal(&uuid, xd->local_uuid)) { + dev_dbg(&xd->dev, "intra-domain loop detected\n"); + return; + } + + /* + * If the UUID is different, there is another domain connected + * so mark this one unplugged and wait for the connection + * manager to replace it. + */ + if (xd->remote_uuid && !uuid_equal(&uuid, xd->remote_uuid)) { + dev_dbg(&xd->dev, "remote UUID is different, unplugging\n"); + xd->is_unplugged = true; + return; + } + + /* First time fill in the missing UUID */ + if (!xd->remote_uuid) { + xd->remote_uuid = kmemdup(&uuid, sizeof(uuid_t), GFP_KERNEL); + if (!xd->remote_uuid) + return; + } + + /* Now we can start the normal properties exchange */ + queue_delayed_work(xd->tb->wq, &xd->properties_changed_work, + msecs_to_jiffies(100)); + queue_delayed_work(xd->tb->wq, &xd->get_properties_work, + msecs_to_jiffies(1000)); +} + static void tb_xdomain_get_properties(struct work_struct *work) { struct tb_xdomain *xd = container_of(work, typeof(*xd), @@ -1032,21 +1146,29 @@ static void tb_xdomain_release(struct device *dev) static void start_handshake(struct tb_xdomain *xd) { + xd->uuid_retries = XDOMAIN_UUID_RETRIES; xd->properties_retries = XDOMAIN_PROPERTIES_RETRIES; xd->properties_changed_retries = XDOMAIN_PROPERTIES_CHANGED_RETRIES; - /* Start exchanging properties with the other host */ - queue_delayed_work(xd->tb->wq, &xd->properties_changed_work, - msecs_to_jiffies(100)); - queue_delayed_work(xd->tb->wq, &xd->get_properties_work, - msecs_to_jiffies(1000)); + if (xd->needs_uuid) { + queue_delayed_work(xd->tb->wq, &xd->get_uuid_work, + msecs_to_jiffies(100)); + } else { + /* Start exchanging properties with the other host */ + queue_delayed_work(xd->tb->wq, &xd->properties_changed_work, + msecs_to_jiffies(100)); + queue_delayed_work(xd->tb->wq, &xd->get_properties_work, + msecs_to_jiffies(1000)); + } } static void stop_handshake(struct tb_xdomain *xd) { + xd->uuid_retries = 0; xd->properties_retries = 0; xd->properties_changed_retries = 0; + cancel_delayed_work_sync(&xd->get_uuid_work); 
cancel_delayed_work_sync(&xd->get_properties_work); cancel_delayed_work_sync(&xd->properties_changed_work); } @@ -1089,7 +1211,7 @@ EXPORT_SYMBOL_GPL(tb_xdomain_type); * other domain is reached). * @route: Route string used to reach the other domain * @local_uuid: Our local domain UUID - * @remote_uuid: UUID of the other domain + * @remote_uuid: UUID of the other domain (optional) * * Allocates new XDomain structure and returns pointer to that. The * object must be released by calling tb_xdomain_put(). @@ -1108,6 +1230,7 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent, xd->route = route; ida_init(&xd->service_ids); mutex_init(&xd->lock); + INIT_DELAYED_WORK(&xd->get_uuid_work, tb_xdomain_get_uuid); INIT_DELAYED_WORK(&xd->get_properties_work, tb_xdomain_get_properties); INIT_DELAYED_WORK(&xd->properties_changed_work, tb_xdomain_properties_changed); @@ -1116,9 +1239,14 @@ struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent, if (!xd->local_uuid) goto err_free; - xd->remote_uuid = kmemdup(remote_uuid, sizeof(uuid_t), GFP_KERNEL); - if (!xd->remote_uuid) - goto err_free_local_uuid; + if (remote_uuid) { + xd->remote_uuid = kmemdup(remote_uuid, sizeof(uuid_t), + GFP_KERNEL); + if (!xd->remote_uuid) + goto err_free_local_uuid; + } else { + xd->needs_uuid = true; + } device_initialize(&xd->dev); xd->dev.parent = get_device(parent); @@ -1282,14 +1410,12 @@ static struct tb_xdomain *switch_find_xdomain(struct tb_switch *sw, struct tb_port *port = &sw->ports[i]; struct tb_xdomain *xd; - if (tb_is_upstream_port(port)) - continue; - if (port->xdomain) { xd = port->xdomain; if (lookup->uuid) { - if (uuid_equal(xd->remote_uuid, lookup->uuid)) + if (xd->remote_uuid && + uuid_equal(xd->remote_uuid, lookup->uuid)) return xd; } else if (lookup->link && lookup->link == xd->link && @@ -1299,7 +1425,7 @@ static struct tb_xdomain *switch_find_xdomain(struct tb_switch *sw, lookup->route == xd->route) { return xd; } - } else if (port->remote) { + } else if (tb_port_has_remote(port)) { xd = switch_find_xdomain(port->remote->sw, lookup); if (xd) return xd; @@ -1416,10 +1542,8 @@ bool tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type type, * handlers in turn. 
*/ if (uuid_equal(&hdr->uuid, &tb_xdp_uuid)) { - if (type == TB_CFG_PKG_XDOMAIN_REQ) { - tb_xdp_schedule_request(tb, hdr, size); - return true; - } + if (type == TB_CFG_PKG_XDOMAIN_REQ) + return tb_xdp_schedule_request(tb, hdr, size); return false; } diff --git a/drivers/uio/uio_fsl_elbc_gpcm.c b/drivers/uio/uio_fsl_elbc_gpcm.c index 0ee3cd3c25ee..450e2f5c9b43 100644 --- a/drivers/uio/uio_fsl_elbc_gpcm.c +++ b/drivers/uio/uio_fsl_elbc_gpcm.c @@ -68,8 +68,8 @@ static ssize_t reg_show(struct device *dev, struct device_attribute *attr, static ssize_t reg_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count); -DEVICE_ATTR(reg_br, S_IRUGO|S_IWUSR|S_IWGRP, reg_show, reg_store); -DEVICE_ATTR(reg_or, S_IRUGO|S_IWUSR|S_IWGRP, reg_show, reg_store); +static DEVICE_ATTR(reg_br, 0664, reg_show, reg_store); +static DEVICE_ATTR(reg_or, 0664, reg_show, reg_store); static ssize_t reg_show(struct device *dev, struct device_attribute *attr, char *buf) diff --git a/drivers/virt/vboxguest/vboxguest_core.c b/drivers/virt/vboxguest/vboxguest_core.c index 8ca333f21292..2307b0329aec 100644 --- a/drivers/virt/vboxguest/vboxguest_core.c +++ b/drivers/virt/vboxguest/vboxguest_core.c @@ -1298,6 +1298,20 @@ static int vbg_ioctl_hgcm_disconnect(struct vbg_dev *gdev, return ret; } +static bool vbg_param_valid(enum vmmdev_hgcm_function_parameter_type type) +{ + switch (type) { + case VMMDEV_HGCM_PARM_TYPE_32BIT: + case VMMDEV_HGCM_PARM_TYPE_64BIT: + case VMMDEV_HGCM_PARM_TYPE_LINADDR: + case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: + case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: + return true; + default: + return false; + } +} + static int vbg_ioctl_hgcm_call(struct vbg_dev *gdev, struct vbg_session *session, bool f32bit, struct vbg_ioctl_hgcm_call *call) @@ -1333,6 +1347,23 @@ static int vbg_ioctl_hgcm_call(struct vbg_dev *gdev, } call->hdr.size_out = actual_size; + /* Validate parameter types */ + if (f32bit) { + struct vmmdev_hgcm_function_parameter32 *parm = + VBG_IOCTL_HGCM_CALL_PARMS32(call); + + for (i = 0; i < call->parm_count; i++) + if (!vbg_param_valid(parm[i].type)) + return -EINVAL; + } else { + struct vmmdev_hgcm_function_parameter *parm = + VBG_IOCTL_HGCM_CALL_PARMS(call); + + for (i = 0; i < call->parm_count; i++) + if (!vbg_param_valid(parm[i].type)) + return -EINVAL; + } + /* * Validate the client id. */ diff --git a/drivers/w1/masters/ds2482.c b/drivers/w1/masters/ds2482.c index 8b5e598ffdb3..8f2b25f1614c 100644 --- a/drivers/w1/masters/ds2482.c +++ b/drivers/w1/masters/ds2482.c @@ -37,6 +37,11 @@ module_param_named(active_pullup, ds2482_active_pullup, int, 0644); MODULE_PARM_DESC(active_pullup, "Active pullup (apply to all buses): " \ "0-disable, 1-enable (default)"); +/* extra configurations - e.g. 1WS */ +static int extra_config; +module_param(extra_config, int, S_IRUGO | S_IWUSR); +MODULE_PARM_DESC(extra_config, "Extra Configuration settings 1=APU,2=PPM,3=SPU,8=1WS"); + /** * The DS2482 registers - there are 3 registers that are addressed by a read * pointer. The read pointer is set by the last command executed. @@ -70,8 +75,6 @@ MODULE_PARM_DESC(active_pullup, "Active pullup (apply to all buses): " \ #define DS2482_REG_CFG_PPM 0x02 /* presence pulse masking */ #define DS2482_REG_CFG_APU 0x01 /* active pull-up */ -/* extra configurations - e.g. 1WS */ -static int extra_config; /** * Write and verify codes for the CHANNEL_SELECT command (DS2482-800 only). 
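/*
 * Worked example for the ds2482_calculate_config() change below
 * (parameter values assumed for illustration): with module parameters
 * extra_config=8 (1WS) and active_pullup=1, the config image computed
 * by ds2482_calculate_config(0x00) becomes
 * 0x08 | DS2482_REG_CFG_APU (0x01) == 0x09. Folding extra_config into
 * the helper means every config write now carries the extra bits,
 * instead of only the call sites that passed them explicitly.
 */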
@@ -130,6 +133,8 @@ struct ds2482_data { */ static inline u8 ds2482_calculate_config(u8 conf) { + conf |= extra_config; + if (ds2482_active_pullup) conf |= DS2482_REG_CFG_APU; @@ -405,7 +410,7 @@ static u8 ds2482_w1_reset_bus(void *data) /* If the chip did reset since detect, re-config it */ if (err & DS2482_REG_STS_RST) ds2482_send_cmd_data(pdev, DS2482_CMD_WRITE_CONFIG, - ds2482_calculate_config(extra_config)); + ds2482_calculate_config(0x00)); } mutex_unlock(&pdev->access_lock); @@ -431,7 +436,8 @@ static u8 ds2482_w1_set_pullup(void *data, int delay) ds2482_wait_1wire_idle(pdev); /* note: it seems like both SPU and APU have to be set! */ retval = ds2482_send_cmd_data(pdev, DS2482_CMD_WRITE_CONFIG, - ds2482_calculate_config(extra_config|DS2482_REG_CFG_SPU|DS2482_REG_CFG_APU)); + ds2482_calculate_config(DS2482_REG_CFG_SPU | + DS2482_REG_CFG_APU)); ds2482_wait_1wire_idle(pdev); } @@ -484,7 +490,7 @@ static int ds2482_probe(struct i2c_client *client, /* Set all config items to 0 (off) */ ds2482_send_cmd_data(data, DS2482_CMD_WRITE_CONFIG, - ds2482_calculate_config(extra_config)); + ds2482_calculate_config(0x00)); mutex_init(&data->access_lock); @@ -559,7 +565,5 @@ module_i2c_driver(ds2482_driver); MODULE_AUTHOR("Ben Gardner <bgardner@wabtec.com>"); MODULE_DESCRIPTION("DS2482 driver"); -module_param(extra_config, int, S_IRUGO | S_IWUSR); -MODULE_PARM_DESC(extra_config, "Extra Configuration settings 1=APU,2=PPM,3=SPU,8=1WS"); MODULE_LICENSE("GPL"); diff --git a/drivers/w1/slaves/w1_ds2408.c b/drivers/w1/slaves/w1_ds2408.c index b535d5ec35b6..92e8f0755b9a 100644 --- a/drivers/w1/slaves/w1_ds2408.c +++ b/drivers/w1/slaves/w1_ds2408.c @@ -138,14 +138,37 @@ static ssize_t status_control_read(struct file *filp, struct kobject *kobj, W1_F29_REG_CONTROL_AND_STATUS, buf); } +#ifdef CONFIG_W1_SLAVE_DS2408_READBACK +static bool optional_read_back_valid(struct w1_slave *sl, u8 expected) +{ + u8 w1_buf[3]; + + if (w1_reset_resume_command(sl->master)) + return false; + + w1_buf[0] = W1_F29_FUNC_READ_PIO_REGS; + w1_buf[1] = W1_F29_REG_OUTPUT_LATCH_STATE; + w1_buf[2] = 0; + + w1_write_block(sl->master, w1_buf, 3); + + return (w1_read_8(sl->master) == expected); +} +#else +static bool optional_read_back_valid(struct w1_slave *sl, u8 expected) +{ + return true; +} +#endif + static ssize_t output_write(struct file *filp, struct kobject *kobj, struct bin_attribute *bin_attr, char *buf, loff_t off, size_t count) { struct w1_slave *sl = kobj_to_w1_slave(kobj); u8 w1_buf[3]; - u8 readBack; unsigned int retries = W1_F29_RETRIES; + ssize_t bytes_written = -EIO; if (count != 1 || off != 0) return -EFAULT; @@ -155,54 +178,33 @@ static ssize_t output_write(struct file *filp, struct kobject *kobj, dev_dbg(&sl->dev, "mutex locked"); if (w1_reset_select_slave(sl)) - goto error; + goto out; - while (retries--) { + do { w1_buf[0] = W1_F29_FUNC_CHANN_ACCESS_WRITE; w1_buf[1] = *buf; w1_buf[2] = ~(*buf); - w1_write_block(sl->master, w1_buf, 3); - readBack = w1_read_8(sl->master); + w1_write_block(sl->master, w1_buf, 3); - if (readBack != W1_F29_SUCCESS_CONFIRM_BYTE) { - if (w1_reset_resume_command(sl->master)) - goto error; - /* try again, the slave is ready for a command */ - continue; + if (w1_read_8(sl->master) == W1_F29_SUCCESS_CONFIRM_BYTE && + optional_read_back_valid(sl, *buf)) { + bytes_written = 1; + goto out; } -#ifdef CONFIG_W1_SLAVE_DS2408_READBACK - /* here the master could read another byte which - would be the PIO reg (the actual pin logic state) - since in this driver we don't know which pins are - in and
outs, there's no value to read the state and - compare. with (*buf) so end this command abruptly: */ if (w1_reset_resume_command(sl->master)) - goto error; + goto out; /* unrecoverable error */ + /* try again, the slave is ready for a command */ + } while (--retries); - /* go read back the output latches */ - /* (the direct effect of the write above) */ - w1_buf[0] = W1_F29_FUNC_READ_PIO_REGS; - w1_buf[1] = W1_F29_REG_OUTPUT_LATCH_STATE; - w1_buf[2] = 0; - w1_write_block(sl->master, w1_buf, 3); - /* read the result of the READ_PIO_REGS command */ - if (w1_read_8(sl->master) == *buf) -#endif - { - /* success! */ - mutex_unlock(&sl->master->bus_mutex); - dev_dbg(&sl->dev, - "mutex unlocked, retries:%d", retries); - return 1; - } - } -error: +out: mutex_unlock(&sl->master->bus_mutex); - dev_dbg(&sl->dev, "mutex unlocked in error, retries:%d", retries); - return -EIO; + dev_dbg(&sl->dev, "%s, mutex unlocked retries:%d\n", + (bytes_written > 0) ? "succeeded" : "error", retries); + + return bytes_written; } diff --git a/drivers/w1/w1_io.c b/drivers/w1/w1_io.c index 0364d3329c52..3516ce6718d9 100644 --- a/drivers/w1/w1_io.c +++ b/drivers/w1/w1_io.c @@ -432,8 +432,7 @@ int w1_reset_resume_command(struct w1_master *dev) if (w1_reset_bus(dev)) return -1; - /* This will make only the last matched slave perform a skip ROM. */ - w1_write_8(dev, W1_RESUME_CMD); + w1_write_8(dev, dev->slave_count > 1 ? W1_RESUME_CMD : W1_SKIP_ROM); return 0; } EXPORT_SYMBOL_GPL(w1_reset_resume_command); diff --git a/fs/char_dev.c b/fs/char_dev.c index a279c58fe360..d18cad28c1c3 100644 --- a/fs/char_dev.c +++ b/fs/char_dev.c @@ -88,22 +88,31 @@ static int find_dynamic_major(void) /* * Register a single major with a specified minor range. * - * If major == 0 this functions will dynamically allocate a major and return - * its number. - * - * If major > 0 this function will attempt to reserve the passed range of - * minors and will return zero on success. + * If major == 0 this function will dynamically allocate an unused major. + * If major > 0 this function will attempt to reserve the range of minors + * with the given major. * - * Returns a -ve errno on failure.
*/ static struct char_device_struct * __register_chrdev_region(unsigned int major, unsigned int baseminor, int minorct, const char *name) { - struct char_device_struct *cd, **cp; - int ret = 0; + struct char_device_struct *cd, *curr, *prev = NULL; + int ret = -EBUSY; int i; + if (major >= CHRDEV_MAJOR_MAX) { + pr_err("CHRDEV \"%s\" major requested (%u) is greater than the maximum (%u)\n", + name, major, CHRDEV_MAJOR_MAX-1); + return ERR_PTR(-EINVAL); + } + + if (minorct > MINORMASK + 1 - baseminor) { + pr_err("CHRDEV \"%s\" minor range requested (%u-%u) is out of range of maximum range (%u-%u) for a single major\n", + name, baseminor, baseminor + minorct - 1, 0, MINORMASK); + return ERR_PTR(-EINVAL); + } + cd = kzalloc(sizeof(struct char_device_struct), GFP_KERNEL); if (cd == NULL) return ERR_PTR(-ENOMEM); @@ -120,10 +129,20 @@ __register_chrdev_region(unsigned int major, unsigned int baseminor, major = ret; } - if (major >= CHRDEV_MAJOR_MAX) { - pr_err("CHRDEV \"%s\" major requested (%u) is greater than the maximum (%u)\n", - name, major, CHRDEV_MAJOR_MAX-1); - ret = -EINVAL; + i = major_to_index(major); + for (curr = chrdevs[i]; curr; prev = curr, curr = curr->next) { + if (curr->major < major) + continue; + + if (curr->major > major) + break; + + if (curr->baseminor + curr->minorct <= baseminor) + continue; + + if (curr->baseminor >= baseminor + minorct) + break; + goto out; } @@ -132,37 +151,14 @@ __register_chrdev_region(unsigned int major, unsigned int baseminor, cd->minorct = minorct; strlcpy(cd->name, name, sizeof(cd->name)); - i = major_to_index(major); - - for (cp = &chrdevs[i]; *cp; cp = &(*cp)->next) - if ((*cp)->major > major || - ((*cp)->major == major && - (((*cp)->baseminor >= baseminor) || - ((*cp)->baseminor + (*cp)->minorct > baseminor)))) - break; - - /* Check for overlapping minor ranges. */ - if (*cp && (*cp)->major == major) { - int old_min = (*cp)->baseminor; - int old_max = (*cp)->baseminor + (*cp)->minorct - 1; - int new_min = baseminor; - int new_max = baseminor + minorct - 1; - - /* New driver overlaps from the left. */ - if (new_max >= old_min && new_max <= old_max) { - ret = -EBUSY; - goto out; - } - - /* New driver overlaps from the right. 
*/ - if (new_min <= old_max && new_min >= old_min) { - ret = -EBUSY; - goto out; - } + if (!prev) { + cd->next = curr; + chrdevs[i] = cd; + } else { + cd->next = prev->next; + prev->next = cd; } - cd->next = *cp; - *cp = cd; mutex_unlock(&chrdevs_lock); return cd; out: diff --git a/include/linux/coresight-pmu.h b/include/linux/coresight-pmu.h index a1a959ba24ff..b0e35eec6499 100644 --- a/include/linux/coresight-pmu.h +++ b/include/linux/coresight-pmu.h @@ -12,11 +12,13 @@ /* ETMv3.5/PTM's ETMCR config bit */ #define ETM_OPT_CYCACC 12 +#define ETM_OPT_CTXTID 14 #define ETM_OPT_TS 28 #define ETM_OPT_RETSTK 29 /* ETMv4 CONFIGR programming bits for the ETM OPTs */ #define ETM4_CFG_BIT_CYCACC 4 +#define ETM4_CFG_BIT_CTXTID 6 #define ETM4_CFG_BIT_TS 11 #define ETM4_CFG_BIT_RETSTK 12 diff --git a/include/linux/coresight.h b/include/linux/coresight.h index 7b87965f7a65..62a520df8add 100644 --- a/include/linux/coresight.h +++ b/include/linux/coresight.h @@ -192,9 +192,10 @@ struct coresight_device { */ struct coresight_ops_sink { int (*enable)(struct coresight_device *csdev, u32 mode, void *data); - void (*disable)(struct coresight_device *csdev); - void *(*alloc_buffer)(struct coresight_device *csdev, int cpu, - void **pages, int nr_pages, bool overwrite); + int (*disable)(struct coresight_device *csdev); + void *(*alloc_buffer)(struct coresight_device *csdev, + struct perf_event *event, void **pages, + int nr_pages, bool overwrite); void (*free_buffer)(void *config); unsigned long (*update_buffer)(struct coresight_device *csdev, struct perf_output_handle *handle, diff --git a/include/linux/mei_cl_bus.h b/include/linux/mei_cl_bus.h index 03b6ba2a63f8..52aa4821093a 100644 --- a/include/linux/mei_cl_bus.h +++ b/include/linux/mei_cl_bus.h @@ -1,4 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2013-2016, Intel Corporation. All rights reserved. + */ #ifndef _LINUX_MEI_CL_BUS_H #define _LINUX_MEI_CL_BUS_H diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h index 312bfa5efd80..8f8be5b00060 100644 --- a/include/linux/nvmem-consumer.h +++ b/include/linux/nvmem-consumer.h @@ -61,6 +61,7 @@ void nvmem_cell_put(struct nvmem_cell *cell); void devm_nvmem_cell_put(struct device *dev, struct nvmem_cell *cell); void *nvmem_cell_read(struct nvmem_cell *cell, size_t *len); int nvmem_cell_write(struct nvmem_cell *cell, void *buf, size_t len); +int nvmem_cell_read_u16(struct device *dev, const char *cell_id, u16 *val); int nvmem_cell_read_u32(struct device *dev, const char *cell_id, u32 *val); /* direct nvmem device read/write interface */ @@ -122,6 +123,12 @@ static inline int nvmem_cell_write(struct nvmem_cell *cell, return -EOPNOTSUPP; } +static inline int nvmem_cell_read_u16(struct device *dev, + const char *cell_id, u16 *val) +{ + return -EOPNOTSUPP; +} + static inline int nvmem_cell_read_u32(struct device *dev, const char *cell_id, u32 *val) { diff --git a/include/linux/soundwire/sdw.h b/include/linux/soundwire/sdw.h index df313913e856..35662d9c2c62 100644 --- a/include/linux/soundwire/sdw.h +++ b/include/linux/soundwire/sdw.h @@ -1,5 +1,5 @@ -// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) -// Copyright(c) 2015-17 Intel Corporation. +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */ +/* Copyright(c) 2015-17 Intel Corporation. 
*/ #ifndef __SOUNDWIRE_H #define __SOUNDWIRE_H @@ -36,7 +36,7 @@ struct sdw_slave; #define SDW_FRAME_CTRL_BITS 48 #define SDW_MAX_DEVICES 11 -#define SDW_VALID_PORT_RANGE(n) (n <= 14 && n >= 1) +#define SDW_VALID_PORT_RANGE(n) ((n) <= 14 && (n) >= 1) #define SDW_DAI_ID_RANGE_START 100 #define SDW_DAI_ID_RANGE_END 200 @@ -470,14 +470,14 @@ struct sdw_bus_params { struct sdw_slave_ops { int (*read_prop)(struct sdw_slave *sdw); int (*interrupt_callback)(struct sdw_slave *slave, - struct sdw_slave_intr_status *status); + struct sdw_slave_intr_status *status); int (*update_status)(struct sdw_slave *slave, - enum sdw_slave_status status); + enum sdw_slave_status status); int (*bus_config)(struct sdw_slave *slave, - struct sdw_bus_params *params); + struct sdw_bus_params *params); int (*port_prep)(struct sdw_slave *slave, - struct sdw_prepare_ch *prepare_ch, - enum sdw_port_prep_ops pre_ops); + struct sdw_prepare_ch *prepare_ch, + enum sdw_port_prep_ops pre_ops); }; /** diff --git a/include/linux/soundwire/sdw_intel.h b/include/linux/soundwire/sdw_intel.h index 2b9573b8aedd..4d70da45363d 100644 --- a/include/linux/soundwire/sdw_intel.h +++ b/include/linux/soundwire/sdw_intel.h @@ -1,5 +1,5 @@ -// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) -// Copyright(c) 2015-17 Intel Corporation. +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */ +/* Copyright(c) 2015-17 Intel Corporation. */ #ifndef __SDW_INTEL_H #define __SDW_INTEL_H @@ -11,7 +11,7 @@ */ struct sdw_intel_ops { int (*config_stream)(void *arg, void *substream, - void *dai, void *hw_params, int stream_num); + void *dai, void *hw_params, int stream_num); }; /** diff --git a/include/linux/soundwire/sdw_registers.h b/include/linux/soundwire/sdw_registers.h index df472b1ab410..a686f7988156 100644 --- a/include/linux/soundwire/sdw_registers.h +++ b/include/linux/soundwire/sdw_registers.h @@ -1,5 +1,5 @@ -// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) -// Copyright(c) 2015-17 Intel Corporation. +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */ +/* Copyright(c) 2015-17 Intel Corporation. */ #ifndef __SDW_REGISTERS_H #define __SDW_REGISTERS_H @@ -73,7 +73,6 @@ #define SDW_SCP_INTSTAT2_SCP3_CASCADE BIT(7) #define SDW_SCP_INTSTAT2_PORT4_10 GENMASK(6, 0) - #define SDW_SCP_INTSTAT3 0x43 #define SDW_SCP_INTSTAT3_PORT11_14 GENMASK(3, 0) diff --git a/include/linux/soundwire/sdw_type.h b/include/linux/soundwire/sdw_type.h index 9fd553e553e9..9c756b5a0dfe 100644 --- a/include/linux/soundwire/sdw_type.h +++ b/include/linux/soundwire/sdw_type.h @@ -1,5 +1,5 @@ -// SPDX-License-Identifier: GPL-2.0 -// Copyright(c) 2015-17 Intel Corporation. +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright(c) 2015-17 Intel Corporation. 
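The SDW_VALID_PORT_RANGE() change above is the standard macro-hygiene fix: each use of the argument gets its own parentheses so that low-precedence expressions cannot re-associate with the comparisons. A standalone demonstration of what the old form got wrong (userspace C, names invented here):

#include <stdio.h>

#define OLD_VALID(n) (n <= 14 && n >= 1)
#define NEW_VALID(n) ((n) <= 14 && (n) >= 1)

int main(void)
{
        unsigned int x = 0x12;  /* low nibble is 2, a valid port number */

        /*
         * OLD_VALID(x & 0xf) expands to (x & 0xf <= 14 && x & 0xf >= 1).
         * Since <= and >= bind tighter than &, that collapses to
         * (x & 1) && (x & 1), a test of the lowest bit only.
         */
        printf("old: %d\n", OLD_VALID(x & 0xf)); /* 0: wrong */
        printf("new: %d\n", NEW_VALID(x & 0xf)); /* 1: correct */
        return 0;
}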
*/ #ifndef __SOUNDWIRE_TYPES_H #define __SOUNDWIRE_TYPES_H @@ -11,7 +11,7 @@ extern struct bus_type sdw_bus_type; #define sdw_register_driver(drv) \ __sdw_register_driver(drv, THIS_MODULE) -int __sdw_register_driver(struct sdw_driver *drv, struct module *); +int __sdw_register_driver(struct sdw_driver *drv, struct module *owner); void sdw_unregister_driver(struct sdw_driver *drv); int sdw_slave_modalias(const struct sdw_slave *slave, char *buf, size_t size); diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h index bf6ec83e60ee..2d7e012db03f 100644 --- a/include/linux/thunderbolt.h +++ b/include/linux/thunderbolt.h @@ -181,6 +181,8 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir); * @device_name: Name of the device (or %NULL if not known) * @is_unplugged: The XDomain is unplugged * @resume: The XDomain is being resumed + * @needs_uuid: If the XDomain does not have @remote_uuid it will be + * queried first * @transmit_path: HopID which the remote end expects us to transmit * @transmit_ring: Local ring (hop) where outgoing packets are pushed * @receive_path: HopID which we expect the remote end to transmit @@ -189,6 +191,9 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir); * @properties: Properties exported by the remote domain * @property_block_gen: Generation of @properties * @properties_lock: Lock protecting @properties. + * @get_uuid_work: Work used to retrieve @remote_uuid + * @uuid_retries: Number of times left @remote_uuid is requested before + * giving up * @get_properties_work: Work used to get remote domain properties * @properties_retries: Number of times left to read properties * @properties_changed_work: Work used to notify the remote domain that @@ -220,6 +225,7 @@ struct tb_xdomain { const char *device_name; bool is_unplugged; bool resume; + bool needs_uuid; u16 transmit_path; u16 transmit_ring; u16 receive_path; @@ -227,6 +233,8 @@ struct tb_xdomain { struct ida service_ids; struct tb_property_dir *properties; u32 property_block_gen; + struct delayed_work get_uuid_work; + int uuid_retries; struct delayed_work get_properties_work; int properties_retries; struct delayed_work properties_changed_work; diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h index eaa1e762bf06..0c06178e4985 100644 --- a/include/linux/vmw_vmci_defs.h +++ b/include/linux/vmw_vmci_defs.h @@ -17,6 +17,7 @@ #define _VMW_VMCI_DEF_H_ #include <linux/atomic.h> +#include <linux/bits.h> /* Register offsets. */ #define VMCI_STATUS_ADDR 0x00 @@ -33,27 +34,27 @@ #define VMCI_MAX_DEVICES 1 /* Status register bits. */ -#define VMCI_STATUS_INT_ON 0x1 +#define VMCI_STATUS_INT_ON BIT(0) /* Control register bits. */ -#define VMCI_CONTROL_RESET 0x1 -#define VMCI_CONTROL_INT_ENABLE 0x2 -#define VMCI_CONTROL_INT_DISABLE 0x4 +#define VMCI_CONTROL_RESET BIT(0) +#define VMCI_CONTROL_INT_ENABLE BIT(1) +#define VMCI_CONTROL_INT_DISABLE BIT(2) /* Capabilities register bits. */ -#define VMCI_CAPS_HYPERCALL 0x1 -#define VMCI_CAPS_GUESTCALL 0x2 -#define VMCI_CAPS_DATAGRAM 0x4 -#define VMCI_CAPS_NOTIFICATIONS 0x8 -#define VMCI_CAPS_PPN64 0x10 +#define VMCI_CAPS_HYPERCALL BIT(0) +#define VMCI_CAPS_GUESTCALL BIT(1) +#define VMCI_CAPS_DATAGRAM BIT(2) +#define VMCI_CAPS_NOTIFICATIONS BIT(3) +#define VMCI_CAPS_PPN64 BIT(4) /* Interrupt Cause register bits. */ -#define VMCI_ICR_DATAGRAM 0x1 -#define VMCI_ICR_NOTIFICATION 0x2 +#define VMCI_ICR_DATAGRAM BIT(0) +#define VMCI_ICR_NOTIFICATION BIT(1) /* Interrupt Mask register bits. 
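The get_uuid_work/uuid_retries pair documented in the tb_xdomain changes above follows the same retry pattern as the existing get_properties_work: a delayed work item re-queues itself until the remote answers or the budget runs out. A generic sketch of that pattern with hypothetical names and delay (the real handler lives in drivers/thunderbolt/xdomain.c and is not shown in this diff):

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct example_xd {
        struct delayed_work get_uuid_work;
        int uuid_retries;               /* e.g. initialised to 10 */
        bool needs_uuid;
};

/* Hypothetical query; pretend the remote has not answered yet. */
static int example_query_uuid(struct example_xd *xd)
{
        return -EAGAIN;
}

static void example_get_uuid(struct work_struct *work)
{
        struct example_xd *xd = container_of(work, struct example_xd,
                                             get_uuid_work.work);

        if (!example_query_uuid(xd)) {
                xd->needs_uuid = false;
                return;
        }

        /* Not answered yet: back off and retry until the budget is spent. */
        if (xd->uuid_retries-- > 0)
                queue_delayed_work(system_wq, &xd->get_uuid_work,
                                   msecs_to_jiffies(100));
}

The work item would be set up once with INIT_DELAYED_WORK(&xd->get_uuid_work, example_get_uuid) before the first queue_delayed_work() call.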
*/ -#define VMCI_IMR_DATAGRAM 0x1 -#define VMCI_IMR_NOTIFICATION 0x2 +#define VMCI_IMR_DATAGRAM BIT(0) +#define VMCI_IMR_NOTIFICATION BIT(1) /* Maximum MSI/MSI-X interrupt vectors in the device. */ #define VMCI_MAX_INTRS 2 @@ -463,9 +464,9 @@ struct vmci_datagram { * datagram callback is invoked in a delayed context (not interrupt context). */ #define VMCI_FLAG_DG_NONE 0 -#define VMCI_FLAG_WELLKNOWN_DG_HND 0x1 -#define VMCI_FLAG_ANYCID_DG_HND 0x2 -#define VMCI_FLAG_DG_DELAYED_CB 0x4 +#define VMCI_FLAG_WELLKNOWN_DG_HND BIT(0) +#define VMCI_FLAG_ANYCID_DG_HND BIT(1) +#define VMCI_FLAG_DG_DELAYED_CB BIT(2) /* * Maximum supported size of a VMCI datagram for routable datagrams. @@ -694,7 +695,7 @@ struct vmci_qp_detach_msg { }; /* VMCI Doorbell API. */ -#define VMCI_FLAG_DELAYED_CB 0x01 +#define VMCI_FLAG_DELAYED_CB BIT(0) typedef void (*vmci_callback) (void *client_data); diff --git a/include/uapi/linux/aspeed-p2a-ctrl.h b/include/uapi/linux/aspeed-p2a-ctrl.h new file mode 100644 index 000000000000..033355552a6e --- /dev/null +++ b/include/uapi/linux/aspeed-p2a-ctrl.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ +/* + * Copyright 2019 Google Inc + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version + * 2 of the License, or (at your option) any later version. + * + * Provides a simple driver to control the ASPEED P2A interface which allows + * the host to read and write to various regions of the BMC's memory. + */ + +#ifndef _UAPI_LINUX_ASPEED_P2A_CTRL_H +#define _UAPI_LINUX_ASPEED_P2A_CTRL_H + +#include <linux/ioctl.h> +#include <linux/types.h> + +#define ASPEED_P2A_CTRL_READ_ONLY 0 +#define ASPEED_P2A_CTRL_READWRITE 1 + +/* + * This driver provides a mechanism for enabling or disabling the read-write + * property of specific windows into the ASPEED BMC's memory. + * + * A user can map a region of the BMC's memory as read-only or read-write, with + * the caveat that once any region is mapped, all regions are unlocked for + * reading. + */ + +/* + * Unlock a region of BMC physical memory for access from the host. + * + * Also used to read back the optional memory-region configuration for the + * driver. + */ +struct aspeed_p2a_ctrl_mapping { + __u64 addr; + __u32 length; + __u32 flags; +}; + +#define __ASPEED_P2A_CTRL_IOCTL_MAGIC 0xb3 + +/* + * This IOCTL is meant to configure a region or regions of memory given a + * starting address and length to be readable by the host, or + * readable-writeable. + */ +#define ASPEED_P2A_CTRL_IOCTL_SET_WINDOW _IOW(__ASPEED_P2A_CTRL_IOCTL_MAGIC, \ + 0x00, struct aspeed_p2a_ctrl_mapping) + +/* + * This IOCTL is meant to read back to the user the base address and length of + * the memory-region specified to the driver for use with mmap. + */ +#define ASPEED_P2A_CTRL_IOCTL_GET_MEMORY_CONFIG \ + _IOWR(__ASPEED_P2A_CTRL_IOCTL_MAGIC, \ + 0x01, struct aspeed_p2a_ctrl_mapping) + +#endif /* _UAPI_LINUX_ASPEED_P2A_CTRL_H */ diff --git a/include/uapi/linux/mei.h b/include/uapi/linux/mei.h index 0f681cbd38d3..c6aec86cc5de 100644 --- a/include/uapi/linux/mei.h +++ b/include/uapi/linux/mei.h @@ -1,70 +1,9 @@ /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ -/****************************************************************************** +/* + * Copyright(c) 2003-2015 Intel Corporation. All rights reserved. 
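The new aspeed-p2a-ctrl uapi header above is the whole userspace contract for the P2A bridge: configure a window, read the resulting region back, then mmap it through the same fd. A hypothetical user of it; the device node path and the BMC address are assumptions, and the header is assumed to install as <linux/aspeed-p2a-ctrl.h>:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/aspeed-p2a-ctrl.h>

int main(void)
{
        struct aspeed_p2a_ctrl_mapping map = {
                .addr   = 0x80000000,   /* illustrative BMC physical address */
                .length = 0x1000,
                .flags  = ASPEED_P2A_CTRL_READWRITE,
        };
        int fd = open("/dev/aspeed-p2a-ctrl", O_RDWR); /* hypothetical node */

        if (fd < 0)
                return 1;

        /* Unlock the window, then read back the region to use with mmap. */
        if (ioctl(fd, ASPEED_P2A_CTRL_IOCTL_SET_WINDOW, &map) < 0 ||
            ioctl(fd, ASPEED_P2A_CTRL_IOCTL_GET_MEMORY_CONFIG, &map) < 0) {
                close(fd);
                return 1;
        }

        printf("window: addr=0x%llx len=0x%x\n",
               (unsigned long long)map.addr, map.length);
        close(fd);
        return 0;
}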
* Intel Management Engine Interface (Intel MEI) Linux driver * Intel MEI Interface Header - * - * This file is provided under a dual BSD/GPLv2 license. When using or - * redistributing this file, you may do so under either license. - * - * GPL LICENSE SUMMARY - * - * Copyright(c) 2003 - 2012 Intel Corporation. All rights reserved. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of version 2 of the GNU General Public License as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110, - * USA - * - * The full GNU General Public License is included in this distribution - * in the file called LICENSE.GPL. - * - * Contact Information: - * Intel Corporation. - * linux-mei@linux.intel.com - * http://www.intel.com - * - * BSD LICENSE - * - * Copyright(c) 2003 - 2012 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - *****************************************************************************/ - + */ #ifndef _LINUX_MEI_H #define _LINUX_MEI_H diff --git a/include/uapi/misc/habanalabs.h b/include/uapi/misc/habanalabs.h index 7fd6f633534c..8ac292cf4d00 100644 --- a/include/uapi/misc/habanalabs.h +++ b/include/uapi/misc/habanalabs.h @@ -20,8 +20,8 @@ /* * Queue Numbering * - * The external queues (DMA channels + CPU) MUST be before the internal queues - * and each group (DMA channels + CPU and internal) must be contiguous inside + * The external queues (PCI DMA channels) MUST be before the internal queues + * and each group (PCI DMA channels and internal) must be contiguous inside * itself but there can be a gap between the two groups (although not * recommended) */ @@ -33,7 +33,7 @@ enum goya_queue_id { GOYA_QUEUE_ID_DMA_3, GOYA_QUEUE_ID_DMA_4, GOYA_QUEUE_ID_CPU_PQ, - GOYA_QUEUE_ID_MME, + GOYA_QUEUE_ID_MME, /* Internal queues start here */ GOYA_QUEUE_ID_TPC0, GOYA_QUEUE_ID_TPC1, GOYA_QUEUE_ID_TPC2, @@ -45,11 +45,18 @@ enum goya_queue_id { GOYA_QUEUE_ID_SIZE }; +enum hl_device_status { + HL_DEVICE_STATUS_OPERATIONAL, + HL_DEVICE_STATUS_IN_RESET, + HL_DEVICE_STATUS_MALFUNCTION +}; + /* Opcode for management ioctl */ #define HL_INFO_HW_IP_INFO 0 #define HL_INFO_HW_EVENTS 1 #define HL_INFO_DRAM_USAGE 2 #define HL_INFO_HW_IDLE 3 +#define HL_INFO_DEVICE_STATUS 4 #define HL_INFO_VERSION_MAX_LEN 128 @@ -82,6 +89,11 @@ struct hl_info_hw_idle { __u32 pad; }; +struct hl_info_device_status { + __u32 status; + __u32 pad; +}; + struct hl_info_args { /* Location of relevant struct in userspace */ __u64 return_pointer; @@ -181,7 +193,10 @@ struct hl_cs_in { }; struct hl_cs_out { - /* this holds the sequence number of the CS to pass to wait ioctl */ + /* + * seq holds the sequence number of the CS to pass to wait ioctl. 
All + * values are valid except for 0 and ULLONG_MAX + */ __u64 seq; /* HL_CS_STATUS_* */ __u32 status; @@ -320,6 +335,110 @@ union hl_mem_args { struct hl_mem_out out; }; +#define HL_DEBUG_MAX_AUX_VALUES 10 + +struct hl_debug_params_etr { + /* Address in memory to allocate buffer */ + __u64 buffer_address; + + /* Size of buffer to allocate */ + __u64 buffer_size; + + /* Sink operation mode: SW fifo, HW fifo, Circular buffer */ + __u32 sink_mode; + __u32 pad; +}; + +struct hl_debug_params_etf { + /* Address in memory to allocate buffer */ + __u64 buffer_address; + + /* Size of buffer to allocate */ + __u64 buffer_size; + + /* Sink operation mode: SW fifo, HW fifo, Circular buffer */ + __u32 sink_mode; + __u32 pad; +}; + +struct hl_debug_params_stm { + /* Two bit masks for HW event and Stimulus Port */ + __u64 he_mask; + __u64 sp_mask; + + /* Trace source ID */ + __u32 id; + + /* Frequency for the timestamp register */ + __u32 frequency; +}; + +struct hl_debug_params_bmon { + /* Two address ranges that the user can request to filter */ + __u64 start_addr0; + __u64 addr_mask0; + + __u64 start_addr1; + __u64 addr_mask1; + + /* Capture window configuration */ + __u32 bw_win; + __u32 win_capture; + + /* Trace source ID */ + __u32 id; + __u32 pad; +}; + +struct hl_debug_params_spmu { + /* Event types selection */ + __u64 event_types[HL_DEBUG_MAX_AUX_VALUES]; + + /* Number of event types selection */ + __u32 event_types_num; + __u32 pad; +}; + +/* Opcode for ETR component */ +#define HL_DEBUG_OP_ETR 0 +/* Opcode for ETF component */ +#define HL_DEBUG_OP_ETF 1 +/* Opcode for STM component */ +#define HL_DEBUG_OP_STM 2 +/* Opcode for FUNNEL component */ +#define HL_DEBUG_OP_FUNNEL 3 +/* Opcode for BMON component */ +#define HL_DEBUG_OP_BMON 4 +/* Opcode for SPMU component */ +#define HL_DEBUG_OP_SPMU 5 +/* Opcode for timestamp */ +#define HL_DEBUG_OP_TIMESTAMP 6 + +struct hl_debug_args { + /* + * Pointer to user input structure. + * This field is relevant to specific opcodes. + */ + __u64 input_ptr; + /* Pointer to user output structure */ + __u64 output_ptr; + /* Size of user input structure */ + __u32 input_size; + /* Size of user output structure */ + __u32 output_size; + /* HL_DEBUG_OP_* */ + __u32 op; + /* + * Register index in the component, taken from the debug_regs_index enum + * in the various ASIC header files + */ + __u32 reg_idx; + /* Enable/disable */ + __u32 enable; + /* Context ID - Currently not in use */ + __u32 ctx_id; +}; + /* * Various information operations such as: * - H/W IP information @@ -361,6 +480,12 @@ union hl_mem_args { * Each JOB will be enqueued on a specific queue, according to the user's input. * There can be more than one JOB per queue. * + * The CS IOCTL will receive three sets of JOBS. One set is for "restore" phase, + * a second set is for "execution" phase and a third set is for "store" phase. + * The JOBS on the "restore" phase are enqueued only after context-switch + * (or if it's the first CS for this context). The user can also order the + * driver to run the "restore" phase explicitly. + * + * There are two types of queues - external and internal. External queues * are DMA queues which transfer data from/to the Host. All other queues are * internal. The driver will get completion notifications from the device only @@ -377,19 +502,18 @@ union hl_mem_args { * relevant queues. Therefore, the user mustn't assume the CS has been completed * or has even started to execute.
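The hl_debug_params_* structures and hl_debug_args above are consumed by the HL_IOCTL_DEBUG ioctl defined just below. A hypothetical userspace sketch that enables the ETR sink; the sink_mode and reg_idx values are illustrative, and the header is assumed to install as <misc/habanalabs.h>:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <misc/habanalabs.h>

/* Illustrative: enable the ETR with a caller-allocated trace buffer. */
static int example_enable_etr(int fd, __u64 buf_addr, __u64 buf_size)
{
        struct hl_debug_params_etr etr;
        struct hl_debug_args args;

        memset(&etr, 0, sizeof(etr));
        etr.buffer_address = buf_addr;
        etr.buffer_size = buf_size;
        etr.sink_mode = 0;              /* illustrative sink operation mode */

        memset(&args, 0, sizeof(args));
        args.input_ptr = (__u64)(uintptr_t)&etr;
        args.input_size = sizeof(etr);
        args.op = HL_DEBUG_OP_ETR;
        args.reg_idx = 0;               /* from the ASIC's debug_regs_index */
        args.enable = 1;

        return ioctl(fd, HL_IOCTL_DEBUG, &args);
}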
* - * Upon successful enqueue, the IOCTL returns an opaque handle which the user + * Upon successful enqueue, the IOCTL returns a sequence number which the user * can use with the "Wait for CS" IOCTL to check whether the handle's CS * external JOBS have been completed. Note that if the CS has internal JOBS * which can execute AFTER the external JOBS have finished, the driver might * report that the CS has finished executing BEFORE the internal JOBS have * actually finished executing. * - * The CS IOCTL will receive three sets of JOBS. One set is for "restore" phase, - * a second set is for "execution" phase and a third set is for "store" phase. - * The JOBS on the "restore" phase are enqueued only after context-switch - * (or if its the first CS for this context). The user can also order the - * driver to run the "restore" phase explicitly - + * Even though the sequence number increments per CS, the user can NOT + * automatically assume that if CS with sequence number N finished, then CS + * with sequence number N-1 also finished. The user can make this assumption if + * and only if CS N and CS N-1 are exactly the same (same CBs for the same + * queues). */ #define HL_IOCTL_CS \ _IOWR('H', 0x03, union hl_cs_args) @@ -444,7 +568,20 @@ union hl_mem_args { #define HL_IOCTL_MEMORY \ _IOWR('H', 0x05, union hl_mem_args) +/* + * Debug + * - Enable/disable the ETR/ETF/FUNNEL/STM/BMON/SPMU debug traces + * + * This IOCTL allows the user to get debug traces from the chip. + * + * The user needs to provide the register index and essential data such as + * buffer address and size. + * + */ +#define HL_IOCTL_DEBUG \ + _IOWR('H', 0x06, struct hl_debug_args) + #define HL_COMMAND_START 0x01 -#define HL_COMMAND_END 0x06 +#define HL_COMMAND_END 0x07 #endif /* HABANALABS_H_ */ diff --git a/lib/siphash.c b/lib/siphash.c index 3ae58b4edad6..c47bb6ff2149 100644 --- a/lib/siphash.c +++ b/lib/siphash.c @@ -68,11 +68,11 @@ u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key) bytemask_from_count(left))); #else switch (left) { - case 7: b |= ((u64)end[6]) << 48; - case 6: b |= ((u64)end[5]) << 40; - case 5: b |= ((u64)end[4]) << 32; + case 7: b |= ((u64)end[6]) << 48; /* fall through */ + case 6: b |= ((u64)end[5]) << 40; /* fall through */ + case 5: b |= ((u64)end[4]) << 32; /* fall through */ case 4: b |= le32_to_cpup(data); break; - case 3: b |= ((u64)end[2]) << 16; + case 3: b |= ((u64)end[2]) << 16; /* fall through */ case 2: b |= le16_to_cpup(data); break; case 1: b |= end[0]; } @@ -101,11 +101,11 @@ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key) bytemask_from_count(left))); #else switch (left) { - case 7: b |= ((u64)end[6]) << 48; - case 6: b |= ((u64)end[5]) << 40; - case 5: b |= ((u64)end[4]) << 32; + case 7: b |= ((u64)end[6]) << 48; /* fall through */ + case 6: b |= ((u64)end[5]) << 40; /* fall through */ + case 5: b |= ((u64)end[4]) << 32; /* fall through */ case 4: b |= get_unaligned_le32(end); break; - case 3: b |= ((u64)end[2]) << 16; + case 3: b |= ((u64)end[2]) << 16; /* fall through */ case 2: b |= get_unaligned_le16(end); break; case 1: b |= end[0]; } @@ -268,11 +268,11 @@ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) bytemask_from_count(left))); #else switch (left) { - case 7: b |= ((u64)end[6]) << 48; - case 6: b |= ((u64)end[5]) << 40; - case 5: b |= ((u64)end[4]) << 32; + case 7: b |= ((u64)end[6]) << 48; /* fall through */ + case 6: b |= ((u64)end[5]) << 40; /* fall through */ + case 5: b |=
((u64)end[4]) << 32; /* fall through */ case 4: b |= le32_to_cpup(data); break; - case 3: b |= ((u64)end[2]) << 16; + case 3: b |= ((u64)end[2]) << 16; /* fall through */ case 2: b |= le16_to_cpup(data); break; case 1: b |= end[0]; } @@ -301,11 +301,11 @@ u32 __hsiphash_unaligned(const void *data, size_t len, bytemask_from_count(left))); #else switch (left) { - case 7: b |= ((u64)end[6]) << 48; - case 6: b |= ((u64)end[5]) << 40; - case 5: b |= ((u64)end[4]) << 32; + case 7: b |= ((u64)end[6]) << 48; /* fall through */ + case 6: b |= ((u64)end[5]) << 40; /* fall through */ + case 5: b |= ((u64)end[4]) << 32; /* fall through */ case 4: b |= get_unaligned_le32(end); break; - case 3: b |= ((u64)end[2]) << 16; + case 3: b |= ((u64)end[2]) << 16; /* fall through */ case 2: b |= get_unaligned_le16(end); break; case 1: b |= end[0]; } @@ -431,7 +431,7 @@ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) v0 ^= m; } switch (left) { - case 3: b |= ((u32)end[2]) << 16; + case 3: b |= ((u32)end[2]) << 16; /* fall through */ case 2: b |= le16_to_cpup(data); break; case 1: b |= end[0]; } @@ -454,7 +454,7 @@ u32 __hsiphash_unaligned(const void *data, size_t len, v0 ^= m; } switch (left) { - case 3: b |= ((u32)end[2]) << 16; + case 3: b |= ((u32)end[2]) << 16; /* fall through */ case 2: b |= get_unaligned_le16(end); break; case 1: b |= end[0]; } diff --git a/tools/include/linux/coresight-pmu.h b/tools/include/linux/coresight-pmu.h index a1a959ba24ff..b0e35eec6499 100644 --- a/tools/include/linux/coresight-pmu.h +++ b/tools/include/linux/coresight-pmu.h @@ -12,11 +12,13 @@ /* ETMv3.5/PTM's ETMCR config bit */ #define ETM_OPT_CYCACC 12 +#define ETM_OPT_CTXTID 14 #define ETM_OPT_TS 28 #define ETM_OPT_RETSTK 29 /* ETMv4 CONFIGR programming bits for the ETM OPTs */ #define ETM4_CFG_BIT_CYCACC 4 +#define ETM4_CFG_BIT_CTXTID 6 #define ETM4_CFG_BIT_TS 11 #define ETM4_CFG_BIT_RETSTK 12 |
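The /* fall through */ comments added to lib/siphash.c above mark deliberate case fall-through in a form that GCC 7's -Wimplicit-fallthrough recognizes at its default level; later kernels replaced the comment convention with the fallthrough; pseudo-keyword. A standalone illustration of the idiom on the same tail-byte packing pattern (simplified from the kernel code, which reads two-byte tails with le16 helpers):

#include <stddef.h>
#include <stdint.h>

/* Pack the trailing 0-3 bytes of a buffer into a little-endian word. */
static uint32_t pack_tail(const uint8_t *end, size_t left)
{
        uint32_t b = 0;

        switch (left) {
        case 3:
                b |= ((uint32_t)end[2]) << 16;  /* fall through */
        case 2:
                b |= ((uint32_t)end[1]) << 8;   /* fall through */
        case 1:
                b |= end[0];
        }
        return b;
}

Each case deliberately adds its byte and drops into the next; without the comments (or an explicit fallthrough attribute), -Wimplicit-fallthrough would flag both transitions.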