path: root/drivers/pci/controller
author     Linus Torvalds <torvalds@linux-foundation.org>  2023-11-03 01:05:18 +0100
committer  Linus Torvalds <torvalds@linux-foundation.org>  2023-11-03 01:05:18 +0100
commit     27beb3ca347fa29fef5c23b351120239b8cf0612 (patch)
tree       cf92e40319e07a459710d2819c430ab5ee358c88 /drivers/pci/controller
parent     Merge tag '6.7-rc-ksmbd-server-fixes' of git://git.samba.org/ksmbd (diff)
parent     Merge branch 'pci/misc' (diff)
download   linux-27beb3ca347fa29fef5c23b351120239b8cf0612.tar.xz
           linux-27beb3ca347fa29fef5c23b351120239b8cf0612.zip
Merge tag 'pci-v6.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci
Pull pci updates from Bjorn Helgaas:

 "Enumeration:
   - Use acpi_evaluate_dsm_typed() instead of open-coding _DSM evaluation to learn device characteristics (Andy Shevchenko)
   - Tidy multi-function header checks using new PCI_HEADER_TYPE_MASK definition (Ilpo Järvinen)
   - Simplify config access error checking in various drivers (Ilpo Järvinen)
   - Use pcie_capability_clear_word() (not pcie_capability_clear_and_set_word()) when only clearing (Ilpo Järvinen)
   - Add pci_get_base_class() to simplify finding devices using base class only (ignoring subclass and programming interface) (Sui Jingfeng)
   - Add pci_is_vga(), which includes ancient PCI_CLASS_NOT_DEFINED_VGA devices from before the Class Code was added to PCI (Sui Jingfeng)
   - Use pci_is_vga() for vgaarb, sysfs "boot_vga", virtio, qxl to include ancient VGA devices (Sui Jingfeng)

  Resource management:
   - Make pci_assign_unassigned_resources() non-init because sparc uses it after init (Randy Dunlap)

  Driver binding:
   - Retain .remove() and .probe() callbacks (previously __init) because sysfs may cause them to be called later (Uwe Kleine-König)
   - Prevent xHCI driver from claiming AMD VanGogh USB3 DRD device, so it can be claimed by dwc3 instead (Vicki Pfau)

  PCI device hotplug:
   - Add Ampere Altra Attention Indicator extension driver for acpiphp (D Scott Phillips)

  Power management:
   - Quirk VideoPropulsion Torrent QN16e with longer delay after reset (Lukas Wunner)
   - Prevent users from overriding drivers that say we shouldn't use D3cold (Lukas Wunner)
   - Avoid PME from D3hot/D3cold for AMD Rembrandt and Phoenix USB4 because wakeup interrupts from those states don't work if amd-pmc has put the platform in a hardware sleep state (Mario Limonciello)

  IOMMU:
   - Disable ATS for Intel IPU E2000 devices with invalidation message endianness erratum (Bartosz Pawlowski)

  Error handling:
   - Factor out interrupt enable/disable into helpers (Kai-Heng Feng)

  Peer-to-peer DMA:
   - Fix flexible-array usage in struct pci_p2pdma_pagemap in case we ever use pagemaps with multiple entries (Gustavo A. R. Silva)

  ASPM:
   - Revert a change that broke when drivers disabled L1 and users later enabled an L1.x substate via sysfs, and fix a similar issue when users disabled L1 via sysfs (Heiner Kallweit)

  Endpoint framework:
   - Fix double free in __pci_epc_create() (Dan Carpenter)
   - Use IS_ERR_OR_NULL() to simplify endpoint core (Ruan Jinjie)

  Cadence PCIe controller driver:
   - Drop unused "is_rc" member (Li Chen)

  Freescale Layerscape PCIe controller driver:
   - Enable 64-bit addressing in endpoint mode (Guanhua Gao)

  Intel VMD host bridge driver:
   - Fix multi-function header check (Ilpo Järvinen)

  Microsoft Hyper-V host bridge driver:
   - Annotate struct hv_dr_state with __counted_by (Kees Cook)

  NVIDIA Tegra194 PCIe controller driver:
   - Drop setting of LNKCAP_MLW (max link width) since dw_pcie_setup() already does this via dw_pcie_link_set_max_link_width() (Yoshihiro Shimoda)

  Qualcomm PCIe controller driver:
   - Use PCIE_SPEED2MBS_ENC() to simplify encoding of link speed (Manivannan Sadhasivam)
   - Add a .write_dbi2() callback so DBI2 register writes, e.g., for setting the BAR size, work correctly (Manivannan Sadhasivam)
   - Enable ASPM for platforms that use 1.9.0 ops, because the PCI core doesn't enable ASPM states that haven't been enabled by the firmware (Manivannan Sadhasivam)

  Renesas R-Car Gen4 PCIe controller driver:
   - Add DesignWare core support (set max link width, EDMA_UNROLL flag, .pre_init(), .deinit(), etc) for use by R-Car Gen4 driver (Yoshihiro Shimoda)
   - Add driver and DT schema for DesignWare-based Renesas R-Car Gen4 controller in both host and endpoint mode (Yoshihiro Shimoda)

  Xilinx NWL PCIe controller driver:
   - Update ECAM size to support 256 buses (Thippeswamy Havalige)
   - Stop setting bridge primary/secondary/subordinate bus numbers, since PCI core does this (Thippeswamy Havalige)

  Xilinx XDMA controller driver:
   - Add driver and DT schema for Zynq UltraScale+ MPSoCs devices with Xilinx XDMA Soft IP (Thippeswamy Havalige)

  Miscellaneous:
   - Use FIELD_GET()/FIELD_PREP() to simplify and reduce use of _SHIFT macros (Ilpo Järvinen, Bjorn Helgaas)
   - Remove logic_outb(), _outw(), outl() duplicate declarations (John Sanpe)
   - Replace unnecessary UTF-8 in Kconfig help text because menuconfig doesn't render it correctly (Liu Song)"

* tag 'pci-v6.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (102 commits)
  PCI: qcom-ep: Add dedicated callback for writing to DBI2 registers
  PCI: Simplify pcie_capability_clear_and_set_word() to ..._clear_word()
  PCI: endpoint: Fix double free in __pci_epc_create()
  PCI: xilinx-xdma: Add Xilinx XDMA Root Port driver
  dt-bindings: PCI: xilinx-xdma: Add schemas for Xilinx XDMA PCIe Root Port Bridge
  PCI: xilinx-cpm: Move IRQ definitions to a common header
  PCI: xilinx-nwl: Modify ECAM size to enable support for 256 buses
  PCI: xilinx-nwl: Rename the NWL_ECAM_VALUE_DEFAULT macro
  dt-bindings: PCI: xilinx-nwl: Modify ECAM size in the DT example
  PCI: xilinx-nwl: Remove redundant code that sets Type 1 header fields
  PCI: hotplug: Add Ampere Altra Attention Indicator extension driver
  PCI/AER: Factor out interrupt toggling into helpers
  PCI: acpiphp: Allow built-in drivers for Attention Indicators
  PCI/portdrv: Use FIELD_GET()
  PCI/VC: Use FIELD_GET()
  PCI/PTM: Use FIELD_GET()
  PCI/PME: Use FIELD_GET()
  PCI/ATS: Use FIELD_GET()
  PCI/ATS: Show PASID Capability register width in bitmasks
  PCI/ASPM: Fix L1 substate handling in aspm_attr_store_common()
  ...
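A recurring cleanup in this series replaces open-coded mask-and-shift arithmetic with FIELD_GET()/FIELD_PREP() from <linux/bitfield.h>, which derive the shift from the mask at compile time. A minimal sketch of the pattern (the register field below is hypothetical, not one from the series):

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_WIDTH_MASK	GENMASK(9, 4)	/* hypothetical 6-bit register field */

static u32 example_update_width(u32 regval, u32 new_width)
{
	/* extract the current field value; the shift is implied by the mask */
	u32 cur = FIELD_GET(EXAMPLE_WIDTH_MASK, regval);

	if (cur == new_width)
		return regval;

	regval &= ~EXAMPLE_WIDTH_MASK;
	/* place the new value into the field, again without an explicit shift */
	regval |= FIELD_PREP(EXAMPLE_WIDTH_MASK, new_width);

	return regval;
}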
Diffstat (limited to 'drivers/pci/controller')
-rw-r--r--  drivers/pci/controller/Kconfig  11
-rw-r--r--  drivers/pci/controller/Makefile  1
-rw-r--r--  drivers/pci/controller/cadence/pcie-cadence-ep.c  9
-rw-r--r--  drivers/pci/controller/cadence/pcie-cadence-plat.c  5
-rw-r--r--  drivers/pci/controller/dwc/Kconfig  25
-rw-r--r--  drivers/pci/controller/dwc/Makefile  1
-rw-r--r--  drivers/pci/controller/dwc/pci-exynos.c  4
-rw-r--r--  drivers/pci/controller/dwc/pci-keystone.c  8
-rw-r--r--  drivers/pci/controller/dwc/pci-layerscape-ep.c  2
-rw-r--r--  drivers/pci/controller/dwc/pci-layerscape.c  2
-rw-r--r--  drivers/pci/controller/dwc/pcie-designware-ep.c  52
-rw-r--r--  drivers/pci/controller/dwc/pcie-designware-host.c  3
-rw-r--r--  drivers/pci/controller/dwc/pcie-designware.c  102
-rw-r--r--  drivers/pci/controller/dwc/pcie-designware.h  9
-rw-r--r--  drivers/pci/controller/dwc/pcie-kirin.c  4
-rw-r--r--  drivers/pci/controller/dwc/pcie-qcom-ep.c  48
-rw-r--r--  drivers/pci/controller/dwc/pcie-qcom.c  52
-rw-r--r--  drivers/pci/controller/dwc/pcie-rcar-gen4.c  527
-rw-r--r--  drivers/pci/controller/dwc/pcie-tegra194.c  27
-rw-r--r--  drivers/pci/controller/mobiveil/pcie-mobiveil-host.c  2
-rw-r--r--  drivers/pci/controller/pci-hyperv.c  2
-rw-r--r--  drivers/pci/controller/pci-mvebu.c  2
-rw-r--r--  drivers/pci/controller/pci-xgene.c  7
-rw-r--r--  drivers/pci/controller/pcie-iproc.c  2
-rw-r--r--  drivers/pci/controller/pcie-rcar-ep.c  2
-rw-r--r--  drivers/pci/controller/pcie-rcar-host.c  2
-rw-r--r--  drivers/pci/controller/pcie-xilinx-common.h  31
-rw-r--r--  drivers/pci/controller/pcie-xilinx-cpm.c  38
-rw-r--r--  drivers/pci/controller/pcie-xilinx-dma-pl.c  814
-rw-r--r--  drivers/pci/controller/pcie-xilinx-nwl.c  18
-rw-r--r--  drivers/pci/controller/vmd.c  10
31 files changed, 1624 insertions, 198 deletions
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
index c0c3f2824990..e534c02ee34f 100644
--- a/drivers/pci/controller/Kconfig
+++ b/drivers/pci/controller/Kconfig
@@ -324,6 +324,17 @@ config PCIE_XILINX
Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
Host Bridge driver.
+config PCIE_XILINX_DMA_PL
+ bool "Xilinx DMA PL PCIe host bridge support"
+ depends on ARCH_ZYNQMP || COMPILE_TEST
+ depends on PCI_MSI
+ select PCI_HOST_COMMON
+ help
+ Say 'Y' here if you want kernel support for the Xilinx PL DMA
+ PCIe host bridge. The controller is a Soft IP which can act as
+ Root Port. If your system provides Xilinx PCIe host controller
+ bridge DMA as Soft IP say 'Y'; if you are not sure, say 'N'.
+
config PCIE_XILINX_NWL
bool "Xilinx NWL PCIe controller"
depends on ARCH_ZYNQMP || COMPILE_TEST
diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
index 37c8663de7fe..f2b19e6174af 100644
--- a/drivers/pci/controller/Makefile
+++ b/drivers/pci/controller/Makefile
@@ -17,6 +17,7 @@ obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o
obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o
obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o
obj-$(CONFIG_PCIE_XILINX_CPM) += pcie-xilinx-cpm.o
+obj-$(CONFIG_PCIE_XILINX_DMA_PL) += pcie-xilinx-dma-pl.o
obj-$(CONFIG_PCI_V3_SEMI) += pci-v3-semi.o
obj-$(CONFIG_PCI_XGENE) += pci-xgene.o
obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o
diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
index b8b655d4047e..3142feb8ac19 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
@@ -3,6 +3,7 @@
// Cadence PCIe endpoint controller driver.
// Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com>
+#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/of.h>
@@ -262,7 +263,7 @@ static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
* Get the Multiple Message Enable bitfield from the Message Control
* register.
*/
- mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+ mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
return mme;
}
@@ -394,7 +395,7 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
return -EINVAL;
/* Get the number of enabled MSIs */
- mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+ mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
msi_count = 1 << mme;
if (!interrupt_num || interrupt_num > msi_count)
return -EINVAL;
@@ -449,7 +450,7 @@ static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn, u8 vfn,
return -EINVAL;
/* Get the number of enabled MSIs */
- mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+ mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
msi_count = 1 << mme;
if (!interrupt_num || interrupt_num > msi_count)
return -EINVAL;
@@ -506,7 +507,7 @@ static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
reg = cap + PCI_MSIX_TABLE;
tbl_offset = cdns_pcie_ep_fn_readl(pcie, fn, reg);
- bir = tbl_offset & PCI_MSIX_TABLE_BIR;
+ bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset);
tbl_offset &= PCI_MSIX_TABLE_OFFSET;
msix_tbl = epf->epf_bar[bir]->addr + tbl_offset;
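For reference, PCI_MSI_FLAGS_QSIZE covers bits 6:4 of the MSI Message Control word (the Multiple Message Enable field), and the number of enabled vectors is 2^MME, which is exactly what the converted code computes. A minimal standalone sketch of the same calculation:

#include <linux/bitfield.h>
#include <linux/pci.h>
#include <linux/types.h>

/* Return the number of MSI vectors enabled in the Message Control word. */
static unsigned int example_msi_enabled_vectors(u16 flags)
{
	unsigned int mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);

	/* MME is a power-of-two exponent: 0 -> 1 vector, 3 -> 8 vectors, 5 -> 32 */
	return 1U << mme;
}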
diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c
index 371ffc1f00f8..0456845dabb9 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-plat.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c
@@ -17,12 +17,9 @@
/**
* struct cdns_plat_pcie - private data for this PCIe platform driver
* @pcie: Cadence PCIe controller
- * @is_rc: Set to 1 indicates the PCIe controller mode is Root Complex,
- * if 0 it is in Endpoint mode.
*/
struct cdns_plat_pcie {
struct cdns_pcie *pcie;
- bool is_rc;
};
struct cdns_plat_pcie_of_data {
@@ -76,7 +73,6 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
rc->pcie.dev = dev;
rc->pcie.ops = &cdns_plat_ops;
cdns_plat_pcie->pcie = &rc->pcie;
- cdns_plat_pcie->is_rc = is_rc;
ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie);
if (ret) {
@@ -104,7 +100,6 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
ep->pcie.dev = dev;
ep->pcie.ops = &cdns_plat_ops;
cdns_plat_pcie->pcie = &ep->pcie;
- cdns_plat_pcie->is_rc = is_rc;
ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie);
if (ret) {
diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
index ab96da43e0c2..5ac021dbd46a 100644
--- a/drivers/pci/controller/dwc/Kconfig
+++ b/drivers/pci/controller/dwc/Kconfig
@@ -286,6 +286,31 @@ config PCIE_QCOM_EP
to work in endpoint mode. The PCIe controller uses the DesignWare core
plus Qualcomm-specific hardware wrappers.
+config PCIE_RCAR_GEN4
+ tristate
+
+config PCIE_RCAR_GEN4_HOST
+ tristate "Renesas R-Car Gen4 PCIe controller (host mode)"
+ depends on ARCH_RENESAS || COMPILE_TEST
+ depends on PCI_MSI
+ select PCIE_DW_HOST
+ select PCIE_RCAR_GEN4
+ help
+ Say Y here if you want PCIe controller (host mode) on R-Car Gen4 SoCs.
+ To compile this driver as a module, choose M here: the module will be
+ called pcie-rcar-gen4.ko. This uses the DesignWare core.
+
+config PCIE_RCAR_GEN4_EP
+ tristate "Renesas R-Car Gen4 PCIe controller (endpoint mode)"
+ depends on ARCH_RENESAS || COMPILE_TEST
+ depends on PCI_ENDPOINT
+ select PCIE_DW_EP
+ select PCIE_RCAR_GEN4
+ help
+ Say Y here if you want PCIe controller (endpoint mode) on R-Car Gen4
+ SoCs. To compile this driver as a module, choose M here: the module
+ will be called pcie-rcar-gen4.ko. This uses the DesignWare core.
+
config PCIE_ROCKCHIP_DW_HOST
bool "Rockchip DesignWare PCIe controller"
select PCIE_DW
diff --git a/drivers/pci/controller/dwc/Makefile b/drivers/pci/controller/dwc/Makefile
index bf5c311875a1..bac103faa523 100644
--- a/drivers/pci/controller/dwc/Makefile
+++ b/drivers/pci/controller/dwc/Makefile
@@ -26,6 +26,7 @@ obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o
obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o
obj-$(CONFIG_PCIE_VISCONTI_HOST) += pcie-visconti.o
+obj-$(CONFIG_PCIE_RCAR_GEN4) += pcie-rcar-gen4.o
# The following drivers are for devices that use the generic ACPI
# pci_root.c driver but don't support standard ECAM config access.
diff --git a/drivers/pci/controller/dwc/pci-exynos.c b/drivers/pci/controller/dwc/pci-exynos.c
index 6319082301d6..c6bede346932 100644
--- a/drivers/pci/controller/dwc/pci-exynos.c
+++ b/drivers/pci/controller/dwc/pci-exynos.c
@@ -375,7 +375,7 @@ fail_probe:
return ret;
}
-static int __exit exynos_pcie_remove(struct platform_device *pdev)
+static int exynos_pcie_remove(struct platform_device *pdev)
{
struct exynos_pcie *ep = platform_get_drvdata(pdev);
@@ -431,7 +431,7 @@ static const struct of_device_id exynos_pcie_of_match[] = {
static struct platform_driver exynos_pcie_driver = {
.probe = exynos_pcie_probe,
- .remove = __exit_p(exynos_pcie_remove),
+ .remove = exynos_pcie_remove,
.driver = {
.name = "exynos-pcie",
.of_match_table = exynos_pcie_of_match,
diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
index 49aea6ce3e87..0def919f89fa 100644
--- a/drivers/pci/controller/dwc/pci-keystone.c
+++ b/drivers/pci/controller/dwc/pci-keystone.c
@@ -1100,7 +1100,7 @@ static const struct of_device_id ks_pcie_of_match[] = {
{ },
};
-static int __init ks_pcie_probe(struct platform_device *pdev)
+static int ks_pcie_probe(struct platform_device *pdev)
{
const struct dw_pcie_host_ops *host_ops;
const struct dw_pcie_ep_ops *ep_ops;
@@ -1302,7 +1302,7 @@ err_link:
return ret;
}
-static int __exit ks_pcie_remove(struct platform_device *pdev)
+static int ks_pcie_remove(struct platform_device *pdev)
{
struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev);
struct device_link **link = ks_pcie->link;
@@ -1318,9 +1318,9 @@ static int __exit ks_pcie_remove(struct platform_device *pdev)
return 0;
}
-static struct platform_driver ks_pcie_driver __refdata = {
+static struct platform_driver ks_pcie_driver = {
.probe = ks_pcie_probe,
- .remove = __exit_p(ks_pcie_remove),
+ .remove = ks_pcie_remove,
.driver = {
.name = "keystone-pcie",
.of_match_table = ks_pcie_of_match,
diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c b/drivers/pci/controller/dwc/pci-layerscape-ep.c
index b1faf41a2fae..3d3c50ef4b6f 100644
--- a/drivers/pci/controller/dwc/pci-layerscape-ep.c
+++ b/drivers/pci/controller/dwc/pci-layerscape-ep.c
@@ -266,6 +266,8 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
pcie->big_endian = of_property_read_bool(dev->of_node, "big-endian");
+ dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+
platform_set_drvdata(pdev, pcie);
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
diff --git a/drivers/pci/controller/dwc/pci-layerscape.c b/drivers/pci/controller/dwc/pci-layerscape.c
index b931d597656f..37956e09c65b 100644
--- a/drivers/pci/controller/dwc/pci-layerscape.c
+++ b/drivers/pci/controller/dwc/pci-layerscape.c
@@ -58,7 +58,7 @@ static bool ls_pcie_is_bridge(struct ls_pcie *pcie)
u32 header_type;
header_type = ioread8(pci->dbi_base + PCI_HEADER_TYPE);
- header_type &= 0x7f;
+ header_type &= PCI_HEADER_TYPE_MASK;
return header_type == PCI_HEADER_TYPE_BRIDGE;
}
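The 0x7f being replaced here (and in the mobiveil, iproc, and rcar hunks further down) masks off bit 7 of the Header Type register, which only flags a multi-function device; bits 6:0 carry the header layout. A minimal sketch of the check using the named constant:

#include <linux/pci.h>
#include <linux/types.h>

/* PCI_HEADER_TYPE_MASK (0x7f) drops the multi-function bit before comparing */
static bool example_header_type_is_bridge(u8 header_type)
{
	return (header_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE;
}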
diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
index f9182f8d552f..f6207989fc6a 100644
--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
+++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
@@ -6,6 +6,7 @@
* Author: Kishon Vijay Abraham I <kishon@ti.com>
*/
+#include <linux/bitfield.h>
#include <linux/of.h>
#include <linux/platform_device.h>
@@ -52,21 +53,35 @@ static unsigned int dw_pcie_ep_func_select(struct dw_pcie_ep *ep, u8 func_no)
return func_offset;
}
+static unsigned int dw_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep, u8 func_no)
+{
+ unsigned int dbi2_offset = 0;
+
+ if (ep->ops->get_dbi2_offset)
+ dbi2_offset = ep->ops->get_dbi2_offset(ep, func_no);
+ else if (ep->ops->func_conf_select) /* for backward compatibility */
+ dbi2_offset = ep->ops->func_conf_select(ep, func_no);
+
+ return dbi2_offset;
+}
+
static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, u8 func_no,
enum pci_barno bar, int flags)
{
- u32 reg;
- unsigned int func_offset = 0;
+ unsigned int func_offset, dbi2_offset;
struct dw_pcie_ep *ep = &pci->ep;
+ u32 reg, reg_dbi2;
func_offset = dw_pcie_ep_func_select(ep, func_no);
+ dbi2_offset = dw_pcie_ep_get_dbi2_offset(ep, func_no);
reg = func_offset + PCI_BASE_ADDRESS_0 + (4 * bar);
+ reg_dbi2 = dbi2_offset + PCI_BASE_ADDRESS_0 + (4 * bar);
dw_pcie_dbi_ro_wr_en(pci);
- dw_pcie_writel_dbi2(pci, reg, 0x0);
+ dw_pcie_writel_dbi2(pci, reg_dbi2, 0x0);
dw_pcie_writel_dbi(pci, reg, 0x0);
if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) {
- dw_pcie_writel_dbi2(pci, reg + 4, 0x0);
+ dw_pcie_writel_dbi2(pci, reg_dbi2 + 4, 0x0);
dw_pcie_writel_dbi(pci, reg + 4, 0x0);
}
dw_pcie_dbi_ro_wr_dis(pci);
@@ -228,16 +243,18 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+ unsigned int func_offset, dbi2_offset;
enum pci_barno bar = epf_bar->barno;
size_t size = epf_bar->size;
int flags = epf_bar->flags;
- unsigned int func_offset = 0;
+ u32 reg, reg_dbi2;
int ret, type;
- u32 reg;
func_offset = dw_pcie_ep_func_select(ep, func_no);
+ dbi2_offset = dw_pcie_ep_get_dbi2_offset(ep, func_no);
reg = PCI_BASE_ADDRESS_0 + (4 * bar) + func_offset;
+ reg_dbi2 = PCI_BASE_ADDRESS_0 + (4 * bar) + dbi2_offset;
if (!(flags & PCI_BASE_ADDRESS_SPACE))
type = PCIE_ATU_TYPE_MEM;
@@ -253,11 +270,11 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
dw_pcie_dbi_ro_wr_en(pci);
- dw_pcie_writel_dbi2(pci, reg, lower_32_bits(size - 1));
+ dw_pcie_writel_dbi2(pci, reg_dbi2, lower_32_bits(size - 1));
dw_pcie_writel_dbi(pci, reg, flags);
if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) {
- dw_pcie_writel_dbi2(pci, reg + 4, upper_32_bits(size - 1));
+ dw_pcie_writel_dbi2(pci, reg_dbi2 + 4, upper_32_bits(size - 1));
dw_pcie_writel_dbi(pci, reg + 4, 0);
}
@@ -334,7 +351,7 @@ static int dw_pcie_ep_get_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
if (!(val & PCI_MSI_FLAGS_ENABLE))
return -EINVAL;
- val = (val & PCI_MSI_FLAGS_QSIZE) >> 4;
+ val = FIELD_GET(PCI_MSI_FLAGS_QSIZE, val);
return val;
}
@@ -357,7 +374,7 @@ static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
reg = ep_func->msi_cap + func_offset + PCI_MSI_FLAGS;
val = dw_pcie_readw_dbi(pci, reg);
val &= ~PCI_MSI_FLAGS_QMASK;
- val |= (interrupts << 1) & PCI_MSI_FLAGS_QMASK;
+ val |= FIELD_PREP(PCI_MSI_FLAGS_QMASK, interrupts);
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writew_dbi(pci, reg, val);
dw_pcie_dbi_ro_wr_dis(pci);
@@ -584,7 +601,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
reg = ep_func->msix_cap + func_offset + PCI_MSIX_TABLE;
tbl_offset = dw_pcie_readl_dbi(pci, reg);
- bir = (tbl_offset & PCI_MSIX_TABLE_BIR);
+ bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset);
tbl_offset &= PCI_MSIX_TABLE_OFFSET;
msix_tbl = ep->epf_bar[bir]->addr + tbl_offset;
@@ -621,7 +638,11 @@ void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
epc->mem->window.page_size);
pci_epc_mem_exit(epc);
+
+ if (ep->ops->deinit)
+ ep->ops->deinit(ep);
}
+EXPORT_SYMBOL_GPL(dw_pcie_ep_exit);
static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
{
@@ -723,6 +744,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
ep->phys_base = res->start;
ep->addr_size = resource_size(res);
+ if (ep->ops->pre_init)
+ ep->ops->pre_init(ep);
+
dw_pcie_version_detect(pci);
dw_pcie_iatu_detect(pci);
@@ -777,7 +801,7 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
ep->page_size);
if (ret < 0) {
dev_err(dev, "Failed to initialize address space\n");
- return ret;
+ goto err_ep_deinit;
}
ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys,
@@ -814,6 +838,10 @@ err_free_epc_mem:
err_exit_epc_mem:
pci_epc_mem_exit(epc);
+err_ep_deinit:
+ if (ep->ops->deinit)
+ ep->ops->deinit(ep);
+
return ret;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init);
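The new get_dbi2_offset() callback lets a DWC glue driver report a separate per-function stride for the shadow (DBI2) register space, while func_conf_select() remains the fallback when both spaces share a layout. A hypothetical glue driver would wire it up roughly as below; the stride values are illustrative (the R-Car Gen4 driver added later in this diff uses its own constants the same way):

#include "pcie-designware.h"

/* Illustrative per-function strides; real values come from the SoC manual. */
#define EXAMPLE_EP_FUNC_DBI_OFFSET	0x1000
#define EXAMPLE_EP_FUNC_DBI2_OFFSET	0x800

static unsigned int example_ep_func_conf_select(struct dw_pcie_ep *ep, u8 func_no)
{
	return func_no * EXAMPLE_EP_FUNC_DBI_OFFSET;
}

static unsigned int example_ep_get_dbi2_offset(struct dw_pcie_ep *ep, u8 func_no)
{
	return func_no * EXAMPLE_EP_FUNC_DBI2_OFFSET;
}

static const struct dw_pcie_ep_ops example_ep_ops = {
	/* ...raise_irq, get_features, etc. omitted... */
	.func_conf_select = example_ep_func_conf_select,
	.get_dbi2_offset = example_ep_get_dbi2_offset,
};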
diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index a7170fd0e847..7991f0e179b2 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -502,6 +502,9 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
if (ret)
goto err_stop_link;
+ if (pp->ops->host_post_init)
+ pp->ops->host_post_init(pp);
+
return 0;
err_stop_link:
diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 1c1c7348972b..250cf7f40b85 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -365,6 +365,7 @@ void dw_pcie_write_dbi2(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
if (ret)
dev_err(pci->dev, "write DBI address failed\n");
}
+EXPORT_SYMBOL_GPL(dw_pcie_write_dbi2);
static inline void __iomem *dw_pcie_select_atu(struct dw_pcie *pci, u32 dir,
u32 index)
@@ -732,6 +733,53 @@ static void dw_pcie_link_set_max_speed(struct dw_pcie *pci, u32 link_gen)
}
+static void dw_pcie_link_set_max_link_width(struct dw_pcie *pci, u32 num_lanes)
+{
+ u32 lnkcap, lwsc, plc;
+ u8 cap;
+
+ if (!num_lanes)
+ return;
+
+ /* Set the number of lanes */
+ plc = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL);
+ plc &= ~PORT_LINK_FAST_LINK_MODE;
+ plc &= ~PORT_LINK_MODE_MASK;
+
+ /* Set link width speed control register */
+ lwsc = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
+ lwsc &= ~PORT_LOGIC_LINK_WIDTH_MASK;
+ switch (num_lanes) {
+ case 1:
+ plc |= PORT_LINK_MODE_1_LANES;
+ lwsc |= PORT_LOGIC_LINK_WIDTH_1_LANES;
+ break;
+ case 2:
+ plc |= PORT_LINK_MODE_2_LANES;
+ lwsc |= PORT_LOGIC_LINK_WIDTH_2_LANES;
+ break;
+ case 4:
+ plc |= PORT_LINK_MODE_4_LANES;
+ lwsc |= PORT_LOGIC_LINK_WIDTH_4_LANES;
+ break;
+ case 8:
+ plc |= PORT_LINK_MODE_8_LANES;
+ lwsc |= PORT_LOGIC_LINK_WIDTH_8_LANES;
+ break;
+ default:
+ dev_err(pci->dev, "num-lanes %u: invalid value\n", num_lanes);
+ return;
+ }
+ dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, plc);
+ dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, lwsc);
+
+ cap = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+ lnkcap = dw_pcie_readl_dbi(pci, cap + PCI_EXP_LNKCAP);
+ lnkcap &= ~PCI_EXP_LNKCAP_MLW;
+ lnkcap |= FIELD_PREP(PCI_EXP_LNKCAP_MLW, num_lanes);
+ dw_pcie_writel_dbi(pci, cap + PCI_EXP_LNKCAP, lnkcap);
+}
+
void dw_pcie_iatu_detect(struct dw_pcie *pci)
{
int max_region, ob, ib;
@@ -840,8 +888,14 @@ static int dw_pcie_edma_find_chip(struct dw_pcie *pci)
* Indirect eDMA CSRs access has been completely removed since v5.40a
* thus no space is now reserved for the eDMA channels viewport and
* former DMA CTRL register is no longer fixed to FFs.
+ *
+ * Note that Renesas R-Car S4-8's PCIe controllers for unknown reason
+ * have zeros in the eDMA CTRL register even though the HW-manual
+ * explicitly states there must be FFs if the unrolled mapping is enabled.
+ * For such cases the low-level drivers are supposed to manually
+ * activate the unrolled mapping to bypass the auto-detection procedure.
*/
- if (dw_pcie_ver_is_ge(pci, 540A))
+ if (dw_pcie_ver_is_ge(pci, 540A) || dw_pcie_cap_is(pci, EDMA_UNROLL))
val = 0xFFFFFFFF;
else
val = dw_pcie_readl_dbi(pci, PCIE_DMA_VIEWPORT_BASE + PCIE_DMA_CTRL);
@@ -1013,49 +1067,5 @@ void dw_pcie_setup(struct dw_pcie *pci)
val |= PORT_LINK_DLL_LINK_EN;
dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
- if (!pci->num_lanes) {
- dev_dbg(pci->dev, "Using h/w default number of lanes\n");
- return;
- }
-
- /* Set the number of lanes */
- val &= ~PORT_LINK_FAST_LINK_MODE;
- val &= ~PORT_LINK_MODE_MASK;
- switch (pci->num_lanes) {
- case 1:
- val |= PORT_LINK_MODE_1_LANES;
- break;
- case 2:
- val |= PORT_LINK_MODE_2_LANES;
- break;
- case 4:
- val |= PORT_LINK_MODE_4_LANES;
- break;
- case 8:
- val |= PORT_LINK_MODE_8_LANES;
- break;
- default:
- dev_err(pci->dev, "num-lanes %u: invalid value\n", pci->num_lanes);
- return;
- }
- dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
-
- /* Set link width speed control register */
- val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
- val &= ~PORT_LOGIC_LINK_WIDTH_MASK;
- switch (pci->num_lanes) {
- case 1:
- val |= PORT_LOGIC_LINK_WIDTH_1_LANES;
- break;
- case 2:
- val |= PORT_LOGIC_LINK_WIDTH_2_LANES;
- break;
- case 4:
- val |= PORT_LOGIC_LINK_WIDTH_4_LANES;
- break;
- case 8:
- val |= PORT_LOGIC_LINK_WIDTH_8_LANES;
- break;
- }
- dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+ dw_pcie_link_set_max_link_width(pci, pci->num_lanes);
}
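As the comment added to dw_pcie_edma_find_chip() notes, controllers such as the R-Car S4-8 report zeros in the eDMA CTRL register, so their glue drivers must opt in to the unrolled register map themselves rather than rely on auto-detection. A minimal sketch, mirroring what the R-Car Gen4 driver later in this diff does in its alloc path:

#include "pcie-designware.h"

static void example_pcie_set_caps(struct dw_pcie *pci)
{
	/*
	 * Declare the unrolled eDMA mapping up front so the core skips the
	 * CTRL-register auto-detection that mis-fires on this hardware.
	 */
	dw_pcie_cap_set(pci, EDMA_UNROLL);
}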
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index ef0b2efa9f93..55ff76e3d384 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -51,8 +51,9 @@
/* DWC PCIe controller capabilities */
#define DW_PCIE_CAP_REQ_RES 0
-#define DW_PCIE_CAP_IATU_UNROLL 1
-#define DW_PCIE_CAP_CDM_CHECK 2
+#define DW_PCIE_CAP_EDMA_UNROLL 1
+#define DW_PCIE_CAP_IATU_UNROLL 2
+#define DW_PCIE_CAP_CDM_CHECK 3
#define dw_pcie_cap_is(_pci, _cap) \
test_bit(DW_PCIE_CAP_ ## _cap, &(_pci)->caps)
@@ -301,6 +302,7 @@ enum dw_pcie_ltssm {
struct dw_pcie_host_ops {
int (*host_init)(struct dw_pcie_rp *pp);
void (*host_deinit)(struct dw_pcie_rp *pp);
+ void (*host_post_init)(struct dw_pcie_rp *pp);
int (*msi_host_init)(struct dw_pcie_rp *pp);
void (*pme_turn_off)(struct dw_pcie_rp *pp);
};
@@ -329,7 +331,9 @@ struct dw_pcie_rp {
};
struct dw_pcie_ep_ops {
+ void (*pre_init)(struct dw_pcie_ep *ep);
void (*ep_init)(struct dw_pcie_ep *ep);
+ void (*deinit)(struct dw_pcie_ep *ep);
int (*raise_irq)(struct dw_pcie_ep *ep, u8 func_no,
enum pci_epc_irq_type type, u16 interrupt_num);
const struct pci_epc_features* (*get_features)(struct dw_pcie_ep *ep);
@@ -341,6 +345,7 @@ struct dw_pcie_ep_ops {
* driver.
*/
unsigned int (*func_conf_select)(struct dw_pcie_ep *ep, u8 func_no);
+ unsigned int (*get_dbi2_offset)(struct dw_pcie_ep *ep, u8 func_no);
};
struct dw_pcie_ep_func {
diff --git a/drivers/pci/controller/dwc/pcie-kirin.c b/drivers/pci/controller/dwc/pcie-kirin.c
index d93bc2906950..2ee146767971 100644
--- a/drivers/pci/controller/dwc/pcie-kirin.c
+++ b/drivers/pci/controller/dwc/pcie-kirin.c
@@ -741,7 +741,7 @@ err:
return ret;
}
-static int __exit kirin_pcie_remove(struct platform_device *pdev)
+static int kirin_pcie_remove(struct platform_device *pdev)
{
struct kirin_pcie *kirin_pcie = platform_get_drvdata(pdev);
@@ -818,7 +818,7 @@ static int kirin_pcie_probe(struct platform_device *pdev)
static struct platform_driver kirin_pcie_driver = {
.probe = kirin_pcie_probe,
- .remove = __exit_p(kirin_pcie_remove),
+ .remove = kirin_pcie_remove,
.driver = {
.name = "kirin-pcie",
.of_match_table = kirin_pcie_match,
diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c
index 8bd8107690a6..9e58f055199a 100644
--- a/drivers/pci/controller/dwc/pcie-qcom-ep.c
+++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c
@@ -23,6 +23,7 @@
#include <linux/reset.h>
#include <linux/module.h>
+#include "../../pci.h"
#include "pcie-designware.h"
/* PARF registers */
@@ -123,6 +124,7 @@
/* ELBI registers */
#define ELBI_SYS_STTS 0x08
+#define ELBI_CS2_ENABLE 0xa4
/* DBI registers */
#define DBI_CON_STATUS 0x44
@@ -135,10 +137,8 @@
#define CORE_RESET_TIME_US_MAX 1005
#define WAKE_DELAY_US 2000 /* 2 ms */
-#define PCIE_GEN1_BW_MBPS 250
-#define PCIE_GEN2_BW_MBPS 500
-#define PCIE_GEN3_BW_MBPS 985
-#define PCIE_GEN4_BW_MBPS 1969
+#define QCOM_PCIE_LINK_SPEED_TO_BW(speed) \
+ Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]))
#define to_pcie_ep(x) dev_get_drvdata((x)->dev)
@@ -263,10 +263,25 @@ static void qcom_pcie_dw_stop_link(struct dw_pcie *pci)
disable_irq(pcie_ep->perst_irq);
}
+static void qcom_pcie_dw_write_dbi2(struct dw_pcie *pci, void __iomem *base,
+ u32 reg, size_t size, u32 val)
+{
+ struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
+ int ret;
+
+ writel(1, pcie_ep->elbi + ELBI_CS2_ENABLE);
+
+ ret = dw_pcie_write(pci->dbi_base2 + reg, size, val);
+ if (ret)
+ dev_err(pci->dev, "Failed to write DBI2 register (0x%x): %d\n", reg, ret);
+
+ writel(0, pcie_ep->elbi + ELBI_CS2_ENABLE);
+}
+
static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep)
{
struct dw_pcie *pci = &pcie_ep->pci;
- u32 offset, status, bw;
+ u32 offset, status;
int speed, width;
int ret;
@@ -279,25 +294,7 @@ static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep)
speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
- switch (speed) {
- case 1:
- bw = MBps_to_icc(PCIE_GEN1_BW_MBPS);
- break;
- case 2:
- bw = MBps_to_icc(PCIE_GEN2_BW_MBPS);
- break;
- case 3:
- bw = MBps_to_icc(PCIE_GEN3_BW_MBPS);
- break;
- default:
- dev_warn(pci->dev, "using default GEN4 bandwidth\n");
- fallthrough;
- case 4:
- bw = MBps_to_icc(PCIE_GEN4_BW_MBPS);
- break;
- }
-
- ret = icc_set_bw(pcie_ep->icc_mem, 0, width * bw);
+ ret = icc_set_bw(pcie_ep->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
if (ret)
dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
ret);
@@ -335,7 +332,7 @@ static int qcom_pcie_enable_resources(struct qcom_pcie_ep *pcie_ep)
* Set an initial peak bandwidth corresponding to single-lane Gen 1
* for the pcie-mem path.
*/
- ret = icc_set_bw(pcie_ep->icc_mem, 0, MBps_to_icc(PCIE_GEN1_BW_MBPS));
+ ret = icc_set_bw(pcie_ep->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
if (ret) {
dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
ret);
@@ -519,6 +516,7 @@ static const struct dw_pcie_ops pci_ops = {
.link_up = qcom_pcie_dw_link_up,
.start_link = qcom_pcie_dw_start_link,
.stop_link = qcom_pcie_dw_stop_link,
+ .write_dbi2 = qcom_pcie_dw_write_dbi2,
};
static int qcom_pcie_ep_get_io_resources(struct platform_device *pdev,
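QCOM_PCIE_LINK_SPEED_TO_BW() feeds the LNKSTA current-link-speed field through pcie_link_speed[] and PCIE_SPEED2MBS_ENC(), which already accounts for line-encoding overhead. A worked expansion for a Gen1 link, using the values defined by the PCI core helpers, shows it matches the constant the old code used:

/*
 * QCOM_PCIE_LINK_SPEED_TO_BW(1)
 *   -> Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[1]))
 *   -> Mbps_to_icc(PCIE_SPEED2MBS_ENC(PCIE_SPEED_2_5GT))
 *   -> Mbps_to_icc(2000)          (2.5 GT/s minus 8b/10b encoding overhead)
 *
 * 2000 Mb/s equals the 250 MB/s that the removed MBps_to_icc(PCIE_GEN1_BW_MBPS)
 * expressed, and the per-lane figure is still scaled by the negotiated width:
 *
 *   icc_set_bw(pcie_ep->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
 */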
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index 64420ecc24d1..6902e97719d1 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -147,6 +147,9 @@
#define QCOM_PCIE_CRC8_POLYNOMIAL (BIT(2) | BIT(1) | BIT(0))
+#define QCOM_PCIE_LINK_SPEED_TO_BW(speed) \
+ Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]))
+
#define QCOM_PCIE_1_0_0_MAX_CLOCKS 4
struct qcom_pcie_resources_1_0_0 {
struct clk_bulk_data clks[QCOM_PCIE_1_0_0_MAX_CLOCKS];
@@ -218,6 +221,7 @@ struct qcom_pcie_ops {
int (*get_resources)(struct qcom_pcie *pcie);
int (*init)(struct qcom_pcie *pcie);
int (*post_init)(struct qcom_pcie *pcie);
+ void (*host_post_init)(struct qcom_pcie *pcie);
void (*deinit)(struct qcom_pcie *pcie);
void (*ltssm_enable)(struct qcom_pcie *pcie);
int (*config_sid)(struct qcom_pcie *pcie);
@@ -962,6 +966,22 @@ static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
return 0;
}
+static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata)
+{
+ /* Downstream devices need to be in D0 state before enabling PCI PM substates */
+ pci_set_power_state(pdev, PCI_D0);
+ pci_enable_link_state(pdev, PCIE_LINK_STATE_ALL);
+
+ return 0;
+}
+
+static void qcom_pcie_host_post_init_2_7_0(struct qcom_pcie *pcie)
+{
+ struct dw_pcie_rp *pp = &pcie->pci->pp;
+
+ pci_walk_bus(pp->bridge->bus, qcom_pcie_enable_aspm, NULL);
+}
+
static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
@@ -1214,9 +1234,19 @@ static void qcom_pcie_host_deinit(struct dw_pcie_rp *pp)
pcie->cfg->ops->deinit(pcie);
}
+static void qcom_pcie_host_post_init(struct dw_pcie_rp *pp)
+{
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ struct qcom_pcie *pcie = to_qcom_pcie(pci);
+
+ if (pcie->cfg->ops->host_post_init)
+ pcie->cfg->ops->host_post_init(pcie);
+}
+
static const struct dw_pcie_host_ops qcom_pcie_dw_ops = {
.host_init = qcom_pcie_host_init,
.host_deinit = qcom_pcie_host_deinit,
+ .host_post_init = qcom_pcie_host_post_init,
};
/* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */
@@ -1278,6 +1308,7 @@ static const struct qcom_pcie_ops ops_1_9_0 = {
.get_resources = qcom_pcie_get_resources_2_7_0,
.init = qcom_pcie_init_2_7_0,
.post_init = qcom_pcie_post_init_2_7_0,
+ .host_post_init = qcom_pcie_host_post_init_2_7_0,
.deinit = qcom_pcie_deinit_2_7_0,
.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
.config_sid = qcom_pcie_config_sid_1_9_0,
@@ -1345,7 +1376,7 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
* Set an initial peak bandwidth corresponding to single-lane Gen 1
* for the pcie-mem path.
*/
- ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
+ ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
if (ret) {
dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
ret);
@@ -1358,7 +1389,7 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
{
struct dw_pcie *pci = pcie->pci;
- u32 offset, status, bw;
+ u32 offset, status;
int speed, width;
int ret;
@@ -1375,22 +1406,7 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
- switch (speed) {
- case 1:
- bw = MBps_to_icc(250);
- break;
- case 2:
- bw = MBps_to_icc(500);
- break;
- default:
- WARN_ON_ONCE(1);
- fallthrough;
- case 3:
- bw = MBps_to_icc(985);
- break;
- }
-
- ret = icc_set_bw(pcie->icc_mem, 0, width * bw);
+ ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
if (ret) {
dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
ret);
diff --git a/drivers/pci/controller/dwc/pcie-rcar-gen4.c b/drivers/pci/controller/dwc/pcie-rcar-gen4.c
new file mode 100644
index 000000000000..3bc45e513b3d
--- /dev/null
+++ b/drivers/pci/controller/dwc/pcie-rcar-gen4.c
@@ -0,0 +1,527 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * PCIe controller driver for Renesas R-Car Gen4 Series SoCs
+ * Copyright (C) 2022-2023 Renesas Electronics Corporation
+ */
+
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/reset.h>
+
+#include "../../pci.h"
+#include "pcie-designware.h"
+
+/* Renesas-specific */
+/* PCIe Mode Setting Register 0 */
+#define PCIEMSR0 0x0000
+#define BIFUR_MOD_SET_ON BIT(0)
+#define DEVICE_TYPE_EP 0
+#define DEVICE_TYPE_RC BIT(4)
+
+/* PCIe Interrupt Status 0 */
+#define PCIEINTSTS0 0x0084
+
+/* PCIe Interrupt Status 0 Enable */
+#define PCIEINTSTS0EN 0x0310
+#define MSI_CTRL_INT BIT(26)
+#define SMLH_LINK_UP BIT(7)
+#define RDLH_LINK_UP BIT(6)
+
+/* PCIe DMA Interrupt Status Enable */
+#define PCIEDMAINTSTSEN 0x0314
+#define PCIEDMAINTSTSEN_INIT GENMASK(15, 0)
+
+/* PCIe Reset Control Register 1 */
+#define PCIERSTCTRL1 0x0014
+#define APP_HOLD_PHY_RST BIT(16)
+#define APP_LTSSM_ENABLE BIT(0)
+
+#define RCAR_NUM_SPEED_CHANGE_RETRIES 10
+#define RCAR_MAX_LINK_SPEED 4
+
+#define RCAR_GEN4_PCIE_EP_FUNC_DBI_OFFSET 0x1000
+#define RCAR_GEN4_PCIE_EP_FUNC_DBI2_OFFSET 0x800
+
+struct rcar_gen4_pcie {
+ struct dw_pcie dw;
+ void __iomem *base;
+ struct platform_device *pdev;
+ enum dw_pcie_device_mode mode;
+};
+#define to_rcar_gen4_pcie(_dw) container_of(_dw, struct rcar_gen4_pcie, dw)
+
+/* Common */
+static void rcar_gen4_pcie_ltssm_enable(struct rcar_gen4_pcie *rcar,
+ bool enable)
+{
+ u32 val;
+
+ val = readl(rcar->base + PCIERSTCTRL1);
+ if (enable) {
+ val |= APP_LTSSM_ENABLE;
+ val &= ~APP_HOLD_PHY_RST;
+ } else {
+ /*
+ * Since the datasheet of R-Car doesn't mention how to assert
+ * the APP_HOLD_PHY_RST, don't assert it again. Otherwise,
+ * hang-up issue happened in the dw_edma_core_off() when
+ * the controller didn't detect a PCI device.
+ */
+ val &= ~APP_LTSSM_ENABLE;
+ }
+ writel(val, rcar->base + PCIERSTCTRL1);
+}
+
+static int rcar_gen4_pcie_link_up(struct dw_pcie *dw)
+{
+ struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
+ u32 val, mask;
+
+ val = readl(rcar->base + PCIEINTSTS0);
+ mask = RDLH_LINK_UP | SMLH_LINK_UP;
+
+ return (val & mask) == mask;
+}
+
+/*
+ * Manually initiate the speed change. Return 0 if change succeeded; otherwise
+ * -ETIMEDOUT.
+ */
+static int rcar_gen4_pcie_speed_change(struct dw_pcie *dw)
+{
+ u32 val;
+ int i;
+
+ val = dw_pcie_readl_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL);
+ val &= ~PORT_LOGIC_SPEED_CHANGE;
+ dw_pcie_writel_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+
+ val = dw_pcie_readl_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL);
+ val |= PORT_LOGIC_SPEED_CHANGE;
+ dw_pcie_writel_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+
+ for (i = 0; i < RCAR_NUM_SPEED_CHANGE_RETRIES; i++) {
+ val = dw_pcie_readl_dbi(dw, PCIE_LINK_WIDTH_SPEED_CONTROL);
+ if (!(val & PORT_LOGIC_SPEED_CHANGE))
+ return 0;
+ usleep_range(10000, 11000);
+ }
+
+ return -ETIMEDOUT;
+}
+
+/*
+ * Enable LTSSM of this controller and manually initiate the speed change.
+ * Always return 0.
+ */
+static int rcar_gen4_pcie_start_link(struct dw_pcie *dw)
+{
+ struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
+ int i, changes;
+
+ rcar_gen4_pcie_ltssm_enable(rcar, true);
+
+ /*
+ * Require direct speed change with retrying here if the link_gen is
+ * PCIe Gen2 or higher.
+ */
+ changes = min_not_zero(dw->link_gen, RCAR_MAX_LINK_SPEED) - 1;
+
+ /*
+ * Since dw_pcie_setup_rc() sets it once, PCIe Gen2 will be trained.
+ * So, this needs remaining times for up to PCIe Gen4 if RC mode.
+ */
+ if (changes && rcar->mode == DW_PCIE_RC_TYPE)
+ changes--;
+
+ for (i = 0; i < changes; i++) {
+ /* It may not be connected in EP mode yet. So, break the loop */
+ if (rcar_gen4_pcie_speed_change(dw))
+ break;
+ }
+
+ return 0;
+}
+
+static void rcar_gen4_pcie_stop_link(struct dw_pcie *dw)
+{
+ struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
+
+ rcar_gen4_pcie_ltssm_enable(rcar, false);
+}
+
+static int rcar_gen4_pcie_common_init(struct rcar_gen4_pcie *rcar)
+{
+ struct dw_pcie *dw = &rcar->dw;
+ u32 val;
+ int ret;
+
+ ret = clk_bulk_prepare_enable(DW_PCIE_NUM_CORE_CLKS, dw->core_clks);
+ if (ret) {
+ dev_err(dw->dev, "Enabling core clocks failed\n");
+ return ret;
+ }
+
+ if (!reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc))
+ reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc);
+
+ val = readl(rcar->base + PCIEMSR0);
+ if (rcar->mode == DW_PCIE_RC_TYPE) {
+ val |= DEVICE_TYPE_RC;
+ } else if (rcar->mode == DW_PCIE_EP_TYPE) {
+ val |= DEVICE_TYPE_EP;
+ } else {
+ ret = -EINVAL;
+ goto err_unprepare;
+ }
+
+ if (dw->num_lanes < 4)
+ val |= BIFUR_MOD_SET_ON;
+
+ writel(val, rcar->base + PCIEMSR0);
+
+ ret = reset_control_deassert(dw->core_rsts[DW_PCIE_PWR_RST].rstc);
+ if (ret)
+ goto err_unprepare;
+
+ return 0;
+
+err_unprepare:
+ clk_bulk_disable_unprepare(DW_PCIE_NUM_CORE_CLKS, dw->core_clks);
+
+ return ret;
+}
+
+static void rcar_gen4_pcie_common_deinit(struct rcar_gen4_pcie *rcar)
+{
+ struct dw_pcie *dw = &rcar->dw;
+
+ reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc);
+ clk_bulk_disable_unprepare(DW_PCIE_NUM_CORE_CLKS, dw->core_clks);
+}
+
+static int rcar_gen4_pcie_prepare(struct rcar_gen4_pcie *rcar)
+{
+ struct device *dev = rcar->dw.dev;
+ int err;
+
+ pm_runtime_enable(dev);
+ err = pm_runtime_resume_and_get(dev);
+ if (err < 0) {
+ dev_err(dev, "Runtime resume failed\n");
+ pm_runtime_disable(dev);
+ }
+
+ return err;
+}
+
+static void rcar_gen4_pcie_unprepare(struct rcar_gen4_pcie *rcar)
+{
+ struct device *dev = rcar->dw.dev;
+
+ pm_runtime_put(dev);
+ pm_runtime_disable(dev);
+}
+
+static int rcar_gen4_pcie_get_resources(struct rcar_gen4_pcie *rcar)
+{
+ /* Renesas-specific registers */
+ rcar->base = devm_platform_ioremap_resource_byname(rcar->pdev, "app");
+
+ return PTR_ERR_OR_ZERO(rcar->base);
+}
+
+static const struct dw_pcie_ops dw_pcie_ops = {
+ .start_link = rcar_gen4_pcie_start_link,
+ .stop_link = rcar_gen4_pcie_stop_link,
+ .link_up = rcar_gen4_pcie_link_up,
+};
+
+static struct rcar_gen4_pcie *rcar_gen4_pcie_alloc(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct rcar_gen4_pcie *rcar;
+
+ rcar = devm_kzalloc(dev, sizeof(*rcar), GFP_KERNEL);
+ if (!rcar)
+ return ERR_PTR(-ENOMEM);
+
+ rcar->dw.ops = &dw_pcie_ops;
+ rcar->dw.dev = dev;
+ rcar->pdev = pdev;
+ dw_pcie_cap_set(&rcar->dw, EDMA_UNROLL);
+ dw_pcie_cap_set(&rcar->dw, REQ_RES);
+ platform_set_drvdata(pdev, rcar);
+
+ return rcar;
+}
+
+/* Host mode */
+static int rcar_gen4_pcie_host_init(struct dw_pcie_rp *pp)
+{
+ struct dw_pcie *dw = to_dw_pcie_from_pp(pp);
+ struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
+ int ret;
+ u32 val;
+
+ gpiod_set_value_cansleep(dw->pe_rst, 1);
+
+ ret = rcar_gen4_pcie_common_init(rcar);
+ if (ret)
+ return ret;
+
+ /*
+ * According to the section 3.5.7.2 "RC Mode" in DWC PCIe Dual Mode
+ * Rev.5.20a and 3.5.6.1 "RC mode" in DWC PCIe RC databook v5.20a, we
+ * should disable two BARs to avoid unnecessary memory assignment
+ * during device enumeration.
+ */
+ dw_pcie_writel_dbi2(dw, PCI_BASE_ADDRESS_0, 0x0);
+ dw_pcie_writel_dbi2(dw, PCI_BASE_ADDRESS_1, 0x0);
+
+ /* Enable MSI interrupt signal */
+ val = readl(rcar->base + PCIEINTSTS0EN);
+ val |= MSI_CTRL_INT;
+ writel(val, rcar->base + PCIEINTSTS0EN);
+
+ msleep(PCIE_T_PVPERL_MS); /* pe_rst requires 100msec delay */
+
+ gpiod_set_value_cansleep(dw->pe_rst, 0);
+
+ return 0;
+}
+
+static void rcar_gen4_pcie_host_deinit(struct dw_pcie_rp *pp)
+{
+ struct dw_pcie *dw = to_dw_pcie_from_pp(pp);
+ struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
+
+ gpiod_set_value_cansleep(dw->pe_rst, 1);
+ rcar_gen4_pcie_common_deinit(rcar);
+}
+
+static const struct dw_pcie_host_ops rcar_gen4_pcie_host_ops = {
+ .host_init = rcar_gen4_pcie_host_init,
+ .host_deinit = rcar_gen4_pcie_host_deinit,
+};
+
+static int rcar_gen4_add_dw_pcie_rp(struct rcar_gen4_pcie *rcar)
+{
+ struct dw_pcie_rp *pp = &rcar->dw.pp;
+
+ if (!IS_ENABLED(CONFIG_PCIE_RCAR_GEN4_HOST))
+ return -ENODEV;
+
+ pp->num_vectors = MAX_MSI_IRQS;
+ pp->ops = &rcar_gen4_pcie_host_ops;
+
+ return dw_pcie_host_init(pp);
+}
+
+static void rcar_gen4_remove_dw_pcie_rp(struct rcar_gen4_pcie *rcar)
+{
+ dw_pcie_host_deinit(&rcar->dw.pp);
+}
+
+/* Endpoint mode */
+static void rcar_gen4_pcie_ep_pre_init(struct dw_pcie_ep *ep)
+{
+ struct dw_pcie *dw = to_dw_pcie_from_ep(ep);
+ struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
+ int ret;
+
+ ret = rcar_gen4_pcie_common_init(rcar);
+ if (ret)
+ return;
+
+ writel(PCIEDMAINTSTSEN_INIT, rcar->base + PCIEDMAINTSTSEN);
+}
+
+static void rcar_gen4_pcie_ep_init(struct dw_pcie_ep *ep)
+{
+ struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+ enum pci_barno bar;
+
+ for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
+ dw_pcie_ep_reset_bar(pci, bar);
+}
+
+static void rcar_gen4_pcie_ep_deinit(struct dw_pcie_ep *ep)
+{
+ struct dw_pcie *dw = to_dw_pcie_from_ep(ep);
+ struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
+
+ writel(0, rcar->base + PCIEDMAINTSTSEN);
+ rcar_gen4_pcie_common_deinit(rcar);
+}
+
+static int rcar_gen4_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
+ enum pci_epc_irq_type type,
+ u16 interrupt_num)
+{
+ struct dw_pcie *dw = to_dw_pcie_from_ep(ep);
+
+ switch (type) {
+ case PCI_EPC_IRQ_LEGACY:
+ return dw_pcie_ep_raise_legacy_irq(ep, func_no);
+ case PCI_EPC_IRQ_MSI:
+ return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
+ default:
+ dev_err(dw->dev, "Unknown IRQ type\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static const struct pci_epc_features rcar_gen4_pcie_epc_features = {
+ .linkup_notifier = false,
+ .msi_capable = true,
+ .msix_capable = false,
+ .reserved_bar = 1 << BAR_1 | 1 << BAR_3 | 1 << BAR_5,
+ .align = SZ_1M,
+};
+
+static const struct pci_epc_features*
+rcar_gen4_pcie_ep_get_features(struct dw_pcie_ep *ep)
+{
+ return &rcar_gen4_pcie_epc_features;
+}
+
+static unsigned int rcar_gen4_pcie_ep_func_conf_select(struct dw_pcie_ep *ep,
+ u8 func_no)
+{
+ return func_no * RCAR_GEN4_PCIE_EP_FUNC_DBI_OFFSET;
+}
+
+static unsigned int rcar_gen4_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep,
+ u8 func_no)
+{
+ return func_no * RCAR_GEN4_PCIE_EP_FUNC_DBI2_OFFSET;
+}
+
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
+ .pre_init = rcar_gen4_pcie_ep_pre_init,
+ .ep_init = rcar_gen4_pcie_ep_init,
+ .deinit = rcar_gen4_pcie_ep_deinit,
+ .raise_irq = rcar_gen4_pcie_ep_raise_irq,
+ .get_features = rcar_gen4_pcie_ep_get_features,
+ .func_conf_select = rcar_gen4_pcie_ep_func_conf_select,
+ .get_dbi2_offset = rcar_gen4_pcie_ep_get_dbi2_offset,
+};
+
+static int rcar_gen4_add_dw_pcie_ep(struct rcar_gen4_pcie *rcar)
+{
+ struct dw_pcie_ep *ep = &rcar->dw.ep;
+
+ if (!IS_ENABLED(CONFIG_PCIE_RCAR_GEN4_EP))
+ return -ENODEV;
+
+ ep->ops = &pcie_ep_ops;
+
+ return dw_pcie_ep_init(ep);
+}
+
+static void rcar_gen4_remove_dw_pcie_ep(struct rcar_gen4_pcie *rcar)
+{
+ dw_pcie_ep_exit(&rcar->dw.ep);
+}
+
+/* Common */
+static int rcar_gen4_add_dw_pcie(struct rcar_gen4_pcie *rcar)
+{
+ rcar->mode = (enum dw_pcie_device_mode)of_device_get_match_data(&rcar->pdev->dev);
+
+ switch (rcar->mode) {
+ case DW_PCIE_RC_TYPE:
+ return rcar_gen4_add_dw_pcie_rp(rcar);
+ case DW_PCIE_EP_TYPE:
+ return rcar_gen4_add_dw_pcie_ep(rcar);
+ default:
+ return -EINVAL;
+ }
+}
+
+static int rcar_gen4_pcie_probe(struct platform_device *pdev)
+{
+ struct rcar_gen4_pcie *rcar;
+ int err;
+
+ rcar = rcar_gen4_pcie_alloc(pdev);
+ if (IS_ERR(rcar))
+ return PTR_ERR(rcar);
+
+ err = rcar_gen4_pcie_get_resources(rcar);
+ if (err)
+ return err;
+
+ err = rcar_gen4_pcie_prepare(rcar);
+ if (err)
+ return err;
+
+ err = rcar_gen4_add_dw_pcie(rcar);
+ if (err)
+ goto err_unprepare;
+
+ return 0;
+
+err_unprepare:
+ rcar_gen4_pcie_unprepare(rcar);
+
+ return err;
+}
+
+static void rcar_gen4_remove_dw_pcie(struct rcar_gen4_pcie *rcar)
+{
+ switch (rcar->mode) {
+ case DW_PCIE_RC_TYPE:
+ rcar_gen4_remove_dw_pcie_rp(rcar);
+ break;
+ case DW_PCIE_EP_TYPE:
+ rcar_gen4_remove_dw_pcie_ep(rcar);
+ break;
+ default:
+ break;
+ }
+}
+
+static void rcar_gen4_pcie_remove(struct platform_device *pdev)
+{
+ struct rcar_gen4_pcie *rcar = platform_get_drvdata(pdev);
+
+ rcar_gen4_remove_dw_pcie(rcar);
+ rcar_gen4_pcie_unprepare(rcar);
+}
+
+static const struct of_device_id rcar_gen4_pcie_of_match[] = {
+ {
+ .compatible = "renesas,rcar-gen4-pcie",
+ .data = (void *)DW_PCIE_RC_TYPE,
+ },
+ {
+ .compatible = "renesas,rcar-gen4-pcie-ep",
+ .data = (void *)DW_PCIE_EP_TYPE,
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, rcar_gen4_pcie_of_match);
+
+static struct platform_driver rcar_gen4_pcie_driver = {
+ .driver = {
+ .name = "pcie-rcar-gen4",
+ .of_match_table = rcar_gen4_pcie_of_match,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
+ },
+ .probe = rcar_gen4_pcie_probe,
+ .remove_new = rcar_gen4_pcie_remove,
+};
+module_platform_driver(rcar_gen4_pcie_driver);
+
+MODULE_DESCRIPTION("Renesas R-Car Gen4 PCIe controller driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
index 4bba31502ce1..0fe113598ebb 100644
--- a/drivers/pci/controller/dwc/pcie-tegra194.c
+++ b/drivers/pci/controller/dwc/pcie-tegra194.c
@@ -9,6 +9,7 @@
* Author: Vidya Sagar <vidyas@nvidia.com>
*/
+#include <linux/bitfield.h>
#include <linux/clk.h>
#include <linux/debugfs.h>
#include <linux/delay.h>
@@ -125,7 +126,7 @@
#define APPL_LTR_MSG_1 0xC4
#define LTR_MSG_REQ BIT(15)
-#define LTR_MST_NO_SNOOP_SHIFT 16
+#define LTR_NOSNOOP_MSG_REQ BIT(31)
#define APPL_LTR_MSG_2 0xC8
#define APPL_LTR_MSG_2_LTR_MSG_REQ_STATE BIT(3)
@@ -321,9 +322,9 @@ static void tegra_pcie_icc_set(struct tegra_pcie_dw *pcie)
speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, val);
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val);
- val = width * (PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]) / BITS_PER_BYTE);
+ val = width * PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]);
- if (icc_set_bw(pcie->icc_path, MBps_to_icc(val), 0))
+ if (icc_set_bw(pcie->icc_path, Mbps_to_icc(val), 0))
dev_err(pcie->dev, "can't set bw[%u]\n", val);
if (speed >= ARRAY_SIZE(pcie_gen_freq))
@@ -346,8 +347,7 @@ static void apply_bad_link_workaround(struct dw_pcie_rp *pp)
*/
val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
if (val & PCI_EXP_LNKSTA_LBMS) {
- current_link_width = (val & PCI_EXP_LNKSTA_NLW) >>
- PCI_EXP_LNKSTA_NLW_SHIFT;
+ current_link_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val);
if (pcie->init_link_width > current_link_width) {
dev_warn(pci->dev, "PCIe link is bad, width reduced\n");
val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
@@ -496,8 +496,12 @@ static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg)
ktime_t timeout;
/* 110us for both snoop and no-snoop */
- val = 110 | (2 << PCI_LTR_SCALE_SHIFT) | LTR_MSG_REQ;
- val |= (val << LTR_MST_NO_SNOOP_SHIFT);
+ val = FIELD_PREP(PCI_LTR_VALUE_MASK, 110) |
+ FIELD_PREP(PCI_LTR_SCALE_MASK, 2) |
+ LTR_MSG_REQ |
+ FIELD_PREP(PCI_LTR_NOSNOOP_VALUE, 110) |
+ FIELD_PREP(PCI_LTR_NOSNOOP_SCALE, 2) |
+ LTR_NOSNOOP_MSG_REQ;
appl_writel(pcie, val, APPL_LTR_MSG_1);
/* Send LTR upstream */
@@ -760,8 +764,7 @@ static void tegra_pcie_enable_system_interrupts(struct dw_pcie_rp *pp)
val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
PCI_EXP_LNKSTA);
- pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >>
- PCI_EXP_LNKSTA_NLW_SHIFT;
+ pcie->init_link_width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val_w);
val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
PCI_EXP_LNKCTL);
@@ -917,12 +920,6 @@ static int tegra_pcie_dw_host_init(struct dw_pcie_rp *pp)
AMBA_ERROR_RESPONSE_CRS_SHIFT);
dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val);
- /* Configure Max lane width from DT */
- val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
- val &= ~PCI_EXP_LNKCAP_MLW;
- val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
- dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
-
/* Clear Slot Clock Configuration bit if SRNS configuration */
if (pcie->enable_srns) {
val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
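For the LTR rewrite above: per the PCIe LTR message format, the latency is value * 2^(5 * scale) ns, so a scale of 2 selects 1024 ns units and a value of 110 encodes roughly 110 us (110 * 1024 ns, about 112.6 us), matching the "110us for both snoop and no-snoop" driver comment. A sketch of how the fields compose, using the mask names from the hunk:

/*
 * APPL_LTR_MSG_1 as programmed above, field by field:
 *
 *   FIELD_PREP(PCI_LTR_VALUE_MASK, 110)     snoop latency value
 *   FIELD_PREP(PCI_LTR_SCALE_MASK, 2)       snoop scale: 2 -> 1024 ns units
 *   LTR_MSG_REQ                             snoop requirement bit
 *   FIELD_PREP(PCI_LTR_NOSNOOP_VALUE, 110)  no-snoop latency value
 *   FIELD_PREP(PCI_LTR_NOSNOOP_SCALE, 2)    no-snoop scale
 *   LTR_NOSNOOP_MSG_REQ                     no-snoop requirement bit
 */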
diff --git a/drivers/pci/controller/mobiveil/pcie-mobiveil-host.c b/drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
index 45b97a4b14db..32951f7d6d6d 100644
--- a/drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
+++ b/drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
@@ -539,7 +539,7 @@ static bool mobiveil_pcie_is_bridge(struct mobiveil_pcie *pcie)
u32 header_type;
header_type = mobiveil_csr_readb(pcie, PCI_HEADER_TYPE);
- header_type &= 0x7f;
+ header_type &= PCI_HEADER_TYPE_MASK;
return header_type == PCI_HEADER_TYPE_BRIDGE;
}
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index bed3cefdaf19..30c7dfeccb16 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -545,7 +545,7 @@ struct hv_pcidev_description {
struct hv_dr_state {
struct list_head list_entry;
u32 device_count;
- struct hv_pcidev_description func[];
+ struct hv_pcidev_description func[] __counted_by(device_count);
};
struct hv_pci_dev {
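__counted_by(device_count) tells the compiler (and the FORTIFY/UBSAN bounds checkers) which member bounds the flexible array, so out-of-range accesses to func[] can be flagged at run time. A minimal sketch of the allocation pattern the annotation pairs with, using struct_size(); the example names are illustrative:

#include <linux/overflow.h>
#include <linux/slab.h>

struct example_dr_state {
	u32 device_count;
	struct hv_pcidev_description func[] __counted_by(device_count);
};

static struct example_dr_state *example_alloc_dr_state(u32 count)
{
	struct example_dr_state *dr;

	/* struct_size() = sizeof(*dr) + count * sizeof(dr->func[0]), overflow-checked */
	dr = kzalloc(struct_size(dr, func, count), GFP_KERNEL);
	if (!dr)
		return NULL;

	/* set the counter before touching func[] so bounds checks see it */
	dr->device_count = count;
	return dr;
}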
diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
index 60810a1fbfb7..29fe09c99e7d 100644
--- a/drivers/pci/controller/pci-mvebu.c
+++ b/drivers/pci/controller/pci-mvebu.c
@@ -264,7 +264,7 @@ static void mvebu_pcie_setup_hw(struct mvebu_pcie_port *port)
*/
lnkcap = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP);
lnkcap &= ~PCI_EXP_LNKCAP_MLW;
- lnkcap |= (port->is_x4 ? 4 : 1) << 4;
+ lnkcap |= FIELD_PREP(PCI_EXP_LNKCAP_MLW, port->is_x4 ? 4 : 1);
mvebu_writel(port, lnkcap, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP);
/* Disable Root Bridge I/O space, memory space and bus mastering. */
diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
index 887b4941ff32..8e457fa450a2 100644
--- a/drivers/pci/controller/pci-xgene.c
+++ b/drivers/pci/controller/pci-xgene.c
@@ -163,10 +163,11 @@ static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
int where, int size, u32 *val)
{
struct xgene_pcie *port = pcie_bus_to_port(bus);
+ int ret;
- if (pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val) !=
- PCIBIOS_SUCCESSFUL)
- return PCIBIOS_DEVICE_NOT_FOUND;
+ ret = pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val);
+ if (ret != PCIBIOS_SUCCESSFUL)
+ return ret;
/*
* The v1 controller has a bug in its Configuration Request Retry
diff --git a/drivers/pci/controller/pcie-iproc.c b/drivers/pci/controller/pcie-iproc.c
index bd1c98b68851..97f739a2c9f8 100644
--- a/drivers/pci/controller/pcie-iproc.c
+++ b/drivers/pci/controller/pcie-iproc.c
@@ -783,7 +783,7 @@ static int iproc_pcie_check_link(struct iproc_pcie *pcie)
/* make sure we are not in EP mode */
iproc_pci_raw_config_read32(pcie, 0, PCI_HEADER_TYPE, 1, &hdr_type);
- if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE) {
+ if ((hdr_type & PCI_HEADER_TYPE_MASK) != PCI_HEADER_TYPE_BRIDGE) {
dev_err(dev, "in EP mode, hdr=%#02x\n", hdr_type);
return -EFAULT;
}
diff --git a/drivers/pci/controller/pcie-rcar-ep.c b/drivers/pci/controller/pcie-rcar-ep.c
index f9682df1da61..7034c0ff23d0 100644
--- a/drivers/pci/controller/pcie-rcar-ep.c
+++ b/drivers/pci/controller/pcie-rcar-ep.c
@@ -43,7 +43,7 @@ static void rcar_pcie_ep_hw_init(struct rcar_pcie *pcie)
rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP);
rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS),
PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ENDPOINT << 4);
- rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f,
+ rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), PCI_HEADER_TYPE_MASK,
PCI_HEADER_TYPE_NORMAL);
/* Write out the physical slot number = 0 */
diff --git a/drivers/pci/controller/pcie-rcar-host.c b/drivers/pci/controller/pcie-rcar-host.c
index 88975e40ee2f..bf7cc0b6a695 100644
--- a/drivers/pci/controller/pcie-rcar-host.c
+++ b/drivers/pci/controller/pcie-rcar-host.c
@@ -460,7 +460,7 @@ static int rcar_pcie_hw_init(struct rcar_pcie *pcie)
rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP);
rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS),
PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ROOT_PORT << 4);
- rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f,
+ rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), PCI_HEADER_TYPE_MASK,
PCI_HEADER_TYPE_BRIDGE);
/* Enable data link layer active state reporting */
diff --git a/drivers/pci/controller/pcie-xilinx-common.h b/drivers/pci/controller/pcie-xilinx-common.h
new file mode 100644
index 000000000000..1832770f3308
--- /dev/null
+++ b/drivers/pci/controller/pcie-xilinx-common.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * (C) Copyright 2023, Xilinx, Inc.
+ */
+
+#include <linux/pci.h>
+#include <linux/pci-ecam.h>
+#include <linux/platform_device.h>
+
+/* Interrupt registers definitions */
+#define XILINX_PCIE_INTR_LINK_DOWN 0
+#define XILINX_PCIE_INTR_HOT_RESET 3
+#define XILINX_PCIE_INTR_CFG_PCIE_TIMEOUT 4
+#define XILINX_PCIE_INTR_CFG_TIMEOUT 8
+#define XILINX_PCIE_INTR_CORRECTABLE 9
+#define XILINX_PCIE_INTR_NONFATAL 10
+#define XILINX_PCIE_INTR_FATAL 11
+#define XILINX_PCIE_INTR_CFG_ERR_POISON 12
+#define XILINX_PCIE_INTR_PME_TO_ACK_RCVD 15
+#define XILINX_PCIE_INTR_INTX 16
+#define XILINX_PCIE_INTR_PM_PME_RCVD 17
+#define XILINX_PCIE_INTR_MSI 17
+#define XILINX_PCIE_INTR_SLV_UNSUPP 20
+#define XILINX_PCIE_INTR_SLV_UNEXP 21
+#define XILINX_PCIE_INTR_SLV_COMPL 22
+#define XILINX_PCIE_INTR_SLV_ERRP 23
+#define XILINX_PCIE_INTR_SLV_CMPABT 24
+#define XILINX_PCIE_INTR_SLV_ILLBUR 25
+#define XILINX_PCIE_INTR_MST_DECERR 26
+#define XILINX_PCIE_INTR_MST_SLVERR 27
+#define XILINX_PCIE_INTR_SLV_PCIE_TIMEOUT 28
diff --git a/drivers/pci/controller/pcie-xilinx-cpm.c b/drivers/pci/controller/pcie-xilinx-cpm.c
index 4a787a941674..a0f5e1d67b04 100644
--- a/drivers/pci/controller/pcie-xilinx-cpm.c
+++ b/drivers/pci/controller/pcie-xilinx-cpm.c
@@ -16,11 +16,9 @@
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
-#include <linux/pci.h>
-#include <linux/platform_device.h>
-#include <linux/pci-ecam.h>
#include "../pci.h"
+#include "pcie-xilinx-common.h"
/* Register definitions */
#define XILINX_CPM_PCIE_REG_IDR 0x00000E10
@@ -38,29 +36,7 @@
#define XILINX_CPM_PCIE_IR_ENABLE 0x000002A8
#define XILINX_CPM_PCIE_IR_LOCAL BIT(0)
-/* Interrupt registers definitions */
-#define XILINX_CPM_PCIE_INTR_LINK_DOWN 0
-#define XILINX_CPM_PCIE_INTR_HOT_RESET 3
-#define XILINX_CPM_PCIE_INTR_CFG_PCIE_TIMEOUT 4
-#define XILINX_CPM_PCIE_INTR_CFG_TIMEOUT 8
-#define XILINX_CPM_PCIE_INTR_CORRECTABLE 9
-#define XILINX_CPM_PCIE_INTR_NONFATAL 10
-#define XILINX_CPM_PCIE_INTR_FATAL 11
-#define XILINX_CPM_PCIE_INTR_CFG_ERR_POISON 12
-#define XILINX_CPM_PCIE_INTR_PME_TO_ACK_RCVD 15
-#define XILINX_CPM_PCIE_INTR_INTX 16
-#define XILINX_CPM_PCIE_INTR_PM_PME_RCVD 17
-#define XILINX_CPM_PCIE_INTR_SLV_UNSUPP 20
-#define XILINX_CPM_PCIE_INTR_SLV_UNEXP 21
-#define XILINX_CPM_PCIE_INTR_SLV_COMPL 22
-#define XILINX_CPM_PCIE_INTR_SLV_ERRP 23
-#define XILINX_CPM_PCIE_INTR_SLV_CMPABT 24
-#define XILINX_CPM_PCIE_INTR_SLV_ILLBUR 25
-#define XILINX_CPM_PCIE_INTR_MST_DECERR 26
-#define XILINX_CPM_PCIE_INTR_MST_SLVERR 27
-#define XILINX_CPM_PCIE_INTR_SLV_PCIE_TIMEOUT 28
-
-#define IMR(x) BIT(XILINX_CPM_PCIE_INTR_ ##x)
+#define IMR(x) BIT(XILINX_PCIE_INTR_ ##x)
#define XILINX_CPM_PCIE_IMR_ALL_MASK \
( \
@@ -323,7 +299,7 @@ static void xilinx_cpm_pcie_event_flow(struct irq_desc *desc)
}
#define _IC(x, s) \
- [XILINX_CPM_PCIE_INTR_ ## x] = { __stringify(x), s }
+ [XILINX_PCIE_INTR_ ## x] = { __stringify(x), s }
static const struct {
const char *sym;
@@ -359,9 +335,9 @@ static irqreturn_t xilinx_cpm_pcie_intr_handler(int irq, void *dev_id)
d = irq_domain_get_irq_data(port->cpm_domain, irq);
switch (d->hwirq) {
- case XILINX_CPM_PCIE_INTR_CORRECTABLE:
- case XILINX_CPM_PCIE_INTR_NONFATAL:
- case XILINX_CPM_PCIE_INTR_FATAL:
+ case XILINX_PCIE_INTR_CORRECTABLE:
+ case XILINX_PCIE_INTR_NONFATAL:
+ case XILINX_PCIE_INTR_FATAL:
cpm_pcie_clear_err_interrupts(port);
fallthrough;
@@ -466,7 +442,7 @@ static int xilinx_cpm_setup_irq(struct xilinx_cpm_pcie *port)
}
port->intx_irq = irq_create_mapping(port->cpm_domain,
- XILINX_CPM_PCIE_INTR_INTX);
+ XILINX_PCIE_INTR_INTX);
if (!port->intx_irq) {
dev_err(dev, "Failed to map INTx interrupt\n");
return -ENXIO;
diff --git a/drivers/pci/controller/pcie-xilinx-dma-pl.c b/drivers/pci/controller/pcie-xilinx-dma-pl.c
new file mode 100644
index 000000000000..2f7d676c683c
--- /dev/null
+++ b/drivers/pci/controller/pcie-xilinx-dma-pl.c
@@ -0,0 +1,814 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * PCIe host controller driver for Xilinx XDMA PCIe Bridge
+ *
+ * Copyright (C) 2023 Xilinx, Inc. All rights reserved.
+ */
+#include <linux/bitfield.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/of_address.h>
+#include <linux/of_pci.h>
+
+#include "../pci.h"
+#include "pcie-xilinx-common.h"
+
+/* Register definitions */
+#define XILINX_PCIE_DMA_REG_IDR 0x00000138
+#define XILINX_PCIE_DMA_REG_IMR 0x0000013c
+#define XILINX_PCIE_DMA_REG_PSCR 0x00000144
+#define XILINX_PCIE_DMA_REG_RPSC 0x00000148
+#define XILINX_PCIE_DMA_REG_MSIBASE1 0x0000014c
+#define XILINX_PCIE_DMA_REG_MSIBASE2 0x00000150
+#define XILINX_PCIE_DMA_REG_RPEFR 0x00000154
+#define XILINX_PCIE_DMA_REG_IDRN 0x00000160
+#define XILINX_PCIE_DMA_REG_IDRN_MASK 0x00000164
+#define XILINX_PCIE_DMA_REG_MSI_LOW 0x00000170
+#define XILINX_PCIE_DMA_REG_MSI_HI 0x00000174
+#define XILINX_PCIE_DMA_REG_MSI_LOW_MASK 0x00000178
+#define XILINX_PCIE_DMA_REG_MSI_HI_MASK 0x0000017c
+
+#define IMR(x) BIT(XILINX_PCIE_INTR_ ##x)
+
+#define XILINX_PCIE_INTR_IMR_ALL_MASK \
+ ( \
+ IMR(LINK_DOWN) | \
+ IMR(HOT_RESET) | \
+ IMR(CFG_TIMEOUT) | \
+ IMR(CORRECTABLE) | \
+ IMR(NONFATAL) | \
+ IMR(FATAL) | \
+ IMR(INTX) | \
+ IMR(MSI) | \
+ IMR(SLV_UNSUPP) | \
+ IMR(SLV_UNEXP) | \
+ IMR(SLV_COMPL) | \
+ IMR(SLV_ERRP) | \
+ IMR(SLV_CMPABT) | \
+ IMR(SLV_ILLBUR) | \
+ IMR(MST_DECERR) | \
+ IMR(MST_SLVERR) \
+ )
+
+#define XILINX_PCIE_DMA_IMR_ALL_MASK 0x0ff30fe9
+#define XILINX_PCIE_DMA_IDR_ALL_MASK 0xffffffff
+#define XILINX_PCIE_DMA_IDRN_MASK GENMASK(19, 16)
+
+/* Root Port Error Register definitions */
+#define XILINX_PCIE_DMA_RPEFR_ERR_VALID BIT(18)
+#define XILINX_PCIE_DMA_RPEFR_REQ_ID GENMASK(15, 0)
+#define XILINX_PCIE_DMA_RPEFR_ALL_MASK 0xffffffff
+
+/* Root Port Interrupt Register definitions */
+#define XILINX_PCIE_DMA_IDRN_SHIFT 16
+
+/* Root Port Status/control Register definitions */
+#define XILINX_PCIE_DMA_REG_RPSC_BEN BIT(0)
+
+/* Phy Status/Control Register definitions */
+#define XILINX_PCIE_DMA_REG_PSCR_LNKUP BIT(11)
+
+/* Number of MSI IRQs */
+#define XILINX_NUM_MSI_IRQS 64
+
+struct xilinx_msi {
+ struct irq_domain *msi_domain;
+ unsigned long *bitmap;
+ struct irq_domain *dev_domain;
+ struct mutex lock; /* Protect bitmap variable */
+ int irq_msi0;
+ int irq_msi1;
+};
+
+/**
+ * struct pl_dma_pcie - PCIe port information
+ * @dev: Device pointer
+ * @reg_base: IO Mapped Register Base
+ * @irq: Interrupt number
+ * @cfg: Holds mappings of config space window
+ * @phys_reg_base: Physical address of reg base
+ * @intx_domain: Legacy IRQ domain pointer
+ * @pldma_domain: PL DMA IRQ domain pointer
+ * @resources: Bus Resources
+ * @msi: MSI information
+ * @intx_irq: INTx error interrupt number
+ * @lock: Lock protecting shared register access
+ */
+struct pl_dma_pcie {
+ struct device *dev;
+ void __iomem *reg_base;
+ int irq;
+ struct pci_config_window *cfg;
+ phys_addr_t phys_reg_base;
+ struct irq_domain *intx_domain;
+ struct irq_domain *pldma_domain;
+ struct list_head resources;
+ struct xilinx_msi msi;
+ int intx_irq;
+ raw_spinlock_t lock;
+};
+
+static inline u32 pcie_read(struct pl_dma_pcie *port, u32 reg)
+{
+ return readl(port->reg_base + reg);
+}
+
+static inline void pcie_write(struct pl_dma_pcie *port, u32 val, u32 reg)
+{
+ writel(val, port->reg_base + reg);
+}
+
+static inline bool xilinx_pl_dma_pcie_link_up(struct pl_dma_pcie *port)
+{
+ return (pcie_read(port, XILINX_PCIE_DMA_REG_PSCR) &
+ XILINX_PCIE_DMA_REG_PSCR_LNKUP) ? true : false;
+}
+
+static void xilinx_pl_dma_pcie_clear_err_interrupts(struct pl_dma_pcie *port)
+{
+ unsigned long val = pcie_read(port, XILINX_PCIE_DMA_REG_RPEFR);
+
+ if (val & XILINX_PCIE_DMA_RPEFR_ERR_VALID) {
+ dev_dbg(port->dev, "Requester ID %lu\n",
+ val & XILINX_PCIE_DMA_RPEFR_REQ_ID);
+ pcie_write(port, XILINX_PCIE_DMA_RPEFR_ALL_MASK,
+ XILINX_PCIE_DMA_REG_RPEFR);
+ }
+}
+
+static bool xilinx_pl_dma_pcie_valid_device(struct pci_bus *bus,
+ unsigned int devfn)
+{
+ struct pl_dma_pcie *port = bus->sysdata;
+
+ if (!pci_is_root_bus(bus)) {
+ /*
+ * Checking whether the link is up is the last line of
+ * defense, and this check is inherently racy by definition.
+ * Sending a PIO request to a downstream device when the link is
+ * down causes an unrecoverable error, and a reset of the entire
+ * PCIe controller will be needed. We can reduce the likelihood
+ * of that unrecoverable error by checking whether the link is
+ * up, but we can't completely prevent it because the link may
+ * go down between the link-up check and the PIO request.
+ */
+ if (!xilinx_pl_dma_pcie_link_up(port))
+ return false;
+ } else if (devfn > 0)
+ /* Only one device down on each root port */
+ return false;
+
+ return true;
+}
+
+static void __iomem *xilinx_pl_dma_pcie_map_bus(struct pci_bus *bus,
+ unsigned int devfn, int where)
+{
+ struct pl_dma_pcie *port = bus->sysdata;
+
+ if (!xilinx_pl_dma_pcie_valid_device(bus, devfn))
+ return NULL;
+
+ return port->reg_base + PCIE_ECAM_OFFSET(bus->number, devfn, where);
+}
+
+/* PCIe operations */
+static struct pci_ecam_ops xilinx_pl_dma_pcie_ops = {
+ .pci_ops = {
+ .map_bus = xilinx_pl_dma_pcie_map_bus,
+ .read = pci_generic_config_read,
+ .write = pci_generic_config_write,
+ }
+};
+
+static void xilinx_pl_dma_pcie_enable_msi(struct pl_dma_pcie *port)
+{
+ phys_addr_t msi_addr = port->phys_reg_base;
+
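+ /* Tell the bridge which address endpoint MSI writes will target */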
+ pcie_write(port, upper_32_bits(msi_addr), XILINX_PCIE_DMA_REG_MSIBASE1);
+ pcie_write(port, lower_32_bits(msi_addr), XILINX_PCIE_DMA_REG_MSIBASE2);
+}
+
+static void xilinx_mask_intx_irq(struct irq_data *data)
+{
+ struct pl_dma_pcie *port = irq_data_get_irq_chip_data(data);
+ unsigned long flags;
+ u32 mask, val;
+
+ mask = BIT(data->hwirq + XILINX_PCIE_DMA_IDRN_SHIFT);
+ raw_spin_lock_irqsave(&port->lock, flags);
+ val = pcie_read(port, XILINX_PCIE_DMA_REG_IDRN_MASK);
+ pcie_write(port, (val & (~mask)), XILINX_PCIE_DMA_REG_IDRN_MASK);
+ raw_spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static void xilinx_unmask_intx_irq(struct irq_data *data)
+{
+ struct pl_dma_pcie *port = irq_data_get_irq_chip_data(data);
+ unsigned long flags;
+ u32 mask, val;
+
+ mask = BIT(data->hwirq + XILINX_PCIE_DMA_IDRN_SHIFT);
+ raw_spin_lock_irqsave(&port->lock, flags);
+ val = pcie_read(port, XILINX_PCIE_DMA_REG_IDRN_MASK);
+ pcie_write(port, (val | mask), XILINX_PCIE_DMA_REG_IDRN_MASK);
+ raw_spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static struct irq_chip xilinx_leg_irq_chip = {
+ .name = "pl_dma:INTx",
+ .irq_mask = xilinx_mask_intx_irq,
+ .irq_unmask = xilinx_unmask_intx_irq,
+};
+
+static int xilinx_pl_dma_pcie_intx_map(struct irq_domain *domain,
+ unsigned int irq, irq_hw_number_t hwirq)
+{
+ irq_set_chip_and_handler(irq, &xilinx_leg_irq_chip, handle_level_irq);
+ irq_set_chip_data(irq, domain->host_data);
+ irq_set_status_flags(irq, IRQ_LEVEL);
+
+ return 0;
+}
+
+/* INTx IRQ Domain operations */
+static const struct irq_domain_ops intx_domain_ops = {
+ .map = xilinx_pl_dma_pcie_intx_map,
+};
+
+static irqreturn_t xilinx_pl_dma_pcie_msi_handler_high(int irq, void *args)
+{
+ struct xilinx_msi *msi;
+ unsigned long status;
+ u32 bit, virq;
+ struct pl_dma_pcie *port = args;
+
+ msi = &port->msi;
+
+ while ((status = pcie_read(port, XILINX_PCIE_DMA_REG_MSI_HI)) != 0) {
+ for_each_set_bit(bit, &status, 32) {
+ pcie_write(port, 1 << bit, XILINX_PCIE_DMA_REG_MSI_HI);
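+ /* MSI_HI reports vectors 32-63, so offset the bit before looking up the mapping */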
+ bit = bit + 32;
+ virq = irq_find_mapping(msi->dev_domain, bit);
+ if (virq)
+ generic_handle_irq(virq);
+ }
+ }
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t xilinx_pl_dma_pcie_msi_handler_low(int irq, void *args)
+{
+ struct pl_dma_pcie *port = args;
+ struct xilinx_msi *msi;
+ unsigned long status;
+ u32 bit, virq;
+
+ msi = &port->msi;
+
+ while ((status = pcie_read(port, XILINX_PCIE_DMA_REG_MSI_LOW)) != 0) {
+ for_each_set_bit(bit, &status, 32) {
+ pcie_write(port, 1 << bit, XILINX_PCIE_DMA_REG_MSI_LOW);
+ virq = irq_find_mapping(msi->dev_domain, bit);
+ if (virq)
+ generic_handle_irq(virq);
+ }
+ }
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t xilinx_pl_dma_pcie_event_flow(int irq, void *args)
+{
+ struct pl_dma_pcie *port = args;
+ unsigned long val;
+ int i;
+
+ val = pcie_read(port, XILINX_PCIE_DMA_REG_IDR);
+ val &= pcie_read(port, XILINX_PCIE_DMA_REG_IMR);
+ for_each_set_bit(i, &val, 32)
+ generic_handle_domain_irq(port->pldma_domain, i);
+
+ pcie_write(port, val, XILINX_PCIE_DMA_REG_IDR);
+
+ return IRQ_HANDLED;
+}
+
+#define _IC(x, s) \
+ [XILINX_PCIE_INTR_ ## x] = { __stringify(x), s }
+
+static const struct {
+ const char *sym;
+ const char *str;
+} intr_cause[32] = {
+ _IC(LINK_DOWN, "Link Down"),
+ _IC(HOT_RESET, "Hot reset"),
+ _IC(CFG_TIMEOUT, "ECAM access timeout"),
+ _IC(CORRECTABLE, "Correctable error message"),
+ _IC(NONFATAL, "Non fatal error message"),
+ _IC(FATAL, "Fatal error message"),
+ _IC(SLV_UNSUPP, "Slave unsupported request"),
+ _IC(SLV_UNEXP, "Slave unexpected completion"),
+ _IC(SLV_COMPL, "Slave completion timeout"),
+ _IC(SLV_ERRP, "Slave Error Poison"),
+ _IC(SLV_CMPABT, "Slave Completer Abort"),
+ _IC(SLV_ILLBUR, "Slave Illegal Burst"),
+ _IC(MST_DECERR, "Master decode error"),
+ _IC(MST_SLVERR, "Master slave error"),
+};
+
+static irqreturn_t xilinx_pl_dma_pcie_intr_handler(int irq, void *dev_id)
+{
+ struct pl_dma_pcie *port = (struct pl_dma_pcie *)dev_id;
+ struct device *dev = port->dev;
+ struct irq_data *d;
+
+ d = irq_domain_get_irq_data(port->pldma_domain, irq);
+ switch (d->hwirq) {
+ case XILINX_PCIE_INTR_CORRECTABLE:
+ case XILINX_PCIE_INTR_NONFATAL:
+ case XILINX_PCIE_INTR_FATAL:
+ xilinx_pl_dma_pcie_clear_err_interrupts(port);
+ fallthrough;
+
+ default:
+ if (intr_cause[d->hwirq].str)
+ dev_warn(dev, "%s\n", intr_cause[d->hwirq].str);
+ else
+ dev_warn(dev, "Unknown IRQ %ld\n", d->hwirq);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static struct irq_chip xilinx_msi_irq_chip = {
+ .name = "pl_dma:PCIe MSI",
+ .irq_enable = pci_msi_unmask_irq,
+ .irq_disable = pci_msi_mask_irq,
+ .irq_mask = pci_msi_mask_irq,
+ .irq_unmask = pci_msi_unmask_irq,
+};
+
+static struct msi_domain_info xilinx_msi_domain_info = {
+ .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+ MSI_FLAG_MULTI_PCI_MSI),
+ .chip = &xilinx_msi_irq_chip,
+};
+
+static void xilinx_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+{
+ struct pl_dma_pcie *pcie = irq_data_get_irq_chip_data(data);
+ phys_addr_t msi_addr = pcie->phys_reg_base;
+
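+ /* The MSI doorbell is the bridge's own register base; the data is the vector number */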
+ msg->address_lo = lower_32_bits(msi_addr);
+ msg->address_hi = upper_32_bits(msi_addr);
+ msg->data = data->hwirq;
+}
+
+static int xilinx_msi_set_affinity(struct irq_data *irq_data,
+ const struct cpumask *mask, bool force)
+{
+ return -EINVAL;
+}
+
+static struct irq_chip xilinx_irq_chip = {
+ .name = "pl_dma:MSI",
+ .irq_compose_msi_msg = xilinx_compose_msi_msg,
+ .irq_set_affinity = xilinx_msi_set_affinity,
+};
+
+static int xilinx_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs, void *args)
+{
+ struct pl_dma_pcie *pcie = domain->host_data;
+ struct xilinx_msi *msi = &pcie->msi;
+ int bit, i;
+
+ mutex_lock(&msi->lock);
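+ /* Reserve a contiguous, power-of-two sized block of MSI vectors */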
+ bit = bitmap_find_free_region(msi->bitmap, XILINX_NUM_MSI_IRQS,
+ get_count_order(nr_irqs));
+ if (bit < 0) {
+ mutex_unlock(&msi->lock);
+ return -ENOSPC;
+ }
+
+ for (i = 0; i < nr_irqs; i++) {
+ irq_domain_set_info(domain, virq + i, bit + i, &xilinx_irq_chip,
+ domain->host_data, handle_simple_irq,
+ NULL, NULL);
+ }
+ mutex_unlock(&msi->lock);
+
+ return 0;
+}
+
+static void xilinx_irq_domain_free(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs)
+{
+ struct irq_data *data = irq_domain_get_irq_data(domain, virq);
+ struct pl_dma_pcie *pcie = irq_data_get_irq_chip_data(data);
+ struct xilinx_msi *msi = &pcie->msi;
+
+ mutex_lock(&msi->lock);
+ bitmap_release_region(msi->bitmap, data->hwirq,
+ get_count_order(nr_irqs));
+ mutex_unlock(&msi->lock);
+}
+
+static const struct irq_domain_ops dev_msi_domain_ops = {
+ .alloc = xilinx_irq_domain_alloc,
+ .free = xilinx_irq_domain_free,
+};
+
+static void xilinx_pl_dma_pcie_free_irq_domains(struct pl_dma_pcie *port)
+{
+ struct xilinx_msi *msi = &port->msi;
+
+ if (port->intx_domain) {
+ irq_domain_remove(port->intx_domain);
+ port->intx_domain = NULL;
+ }
+
+ if (msi->dev_domain) {
+ irq_domain_remove(msi->dev_domain);
+ msi->dev_domain = NULL;
+ }
+
+ if (msi->msi_domain) {
+ irq_domain_remove(msi->msi_domain);
+ msi->msi_domain = NULL;
+ }
+}
+
+static int xilinx_pl_dma_pcie_init_msi_irq_domain(struct pl_dma_pcie *port)
+{
+ struct device *dev = port->dev;
+ struct xilinx_msi *msi = &port->msi;
+ int size = BITS_TO_LONGS(XILINX_NUM_MSI_IRQS) * sizeof(long);
+ struct fwnode_handle *fwnode = of_node_to_fwnode(port->dev->of_node);
+
+ msi->dev_domain = irq_domain_add_linear(NULL, XILINX_NUM_MSI_IRQS,
+ &dev_msi_domain_ops, port);
+ if (!msi->dev_domain)
+ goto out;
+
+ msi->msi_domain = pci_msi_create_irq_domain(fwnode,
+ &xilinx_msi_domain_info,
+ msi->dev_domain);
+ if (!msi->msi_domain)
+ goto out;
+
+ mutex_init(&msi->lock);
+ msi->bitmap = kzalloc(size, GFP_KERNEL);
+ if (!msi->bitmap)
+ goto out;
+
+ raw_spin_lock_init(&port->lock);
+ xilinx_pl_dma_pcie_enable_msi(port);
+
+ return 0;
+
+out:
+ xilinx_pl_dma_pcie_free_irq_domains(port);
+ dev_err(dev, "Failed to allocate MSI IRQ domains\n");
+
+ return -ENOMEM;
+}
+
+/*
+ * INTx error interrupts are Xilinx controller-specific interrupts, used to
+ * notify the user about errors such as cfg timeouts, slave unsupported
+ * requests, and fatal and non-fatal errors.
+ */
+
+static irqreturn_t xilinx_pl_dma_pcie_intx_flow(int irq, void *args)
+{
+ unsigned long val;
+ int i;
+ struct pl_dma_pcie *port = args;
+
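+ /* IDRN bits 19:16 report pending INTA-INTD; forward each to the INTx domain */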
+ val = FIELD_GET(XILINX_PCIE_DMA_IDRN_MASK,
+ pcie_read(port, XILINX_PCIE_DMA_REG_IDRN));
+
+ for_each_set_bit(i, &val, PCI_NUM_INTX)
+ generic_handle_domain_irq(port->intx_domain, i);
+ return IRQ_HANDLED;
+}
+
+static void xilinx_pl_dma_pcie_mask_event_irq(struct irq_data *d)
+{
+ struct pl_dma_pcie *port = irq_data_get_irq_chip_data(d);
+ u32 val;
+
+ raw_spin_lock(&port->lock);
+ val = pcie_read(port, XILINX_PCIE_DMA_REG_IMR);
+ val &= ~BIT(d->hwirq);
+ pcie_write(port, val, XILINX_PCIE_DMA_REG_IMR);
+ raw_spin_unlock(&port->lock);
+}
+
+static void xilinx_pl_dma_pcie_unmask_event_irq(struct irq_data *d)
+{
+ struct pl_dma_pcie *port = irq_data_get_irq_chip_data(d);
+ u32 val;
+
+ raw_spin_lock(&port->lock);
+ val = pcie_read(port, XILINX_PCIE_DMA_REG_IMR);
+ val |= BIT(d->hwirq);
+ pcie_write(port, val, XILINX_PCIE_DMA_REG_IMR);
+ raw_spin_unlock(&port->lock);
+}
+
+static struct irq_chip xilinx_pl_dma_pcie_event_irq_chip = {
+ .name = "pl_dma:RC-Event",
+ .irq_mask = xilinx_pl_dma_pcie_mask_event_irq,
+ .irq_unmask = xilinx_pl_dma_pcie_unmask_event_irq,
+};
+
+static int xilinx_pl_dma_pcie_event_map(struct irq_domain *domain,
+ unsigned int irq, irq_hw_number_t hwirq)
+{
+ irq_set_chip_and_handler(irq, &xilinx_pl_dma_pcie_event_irq_chip,
+ handle_level_irq);
+ irq_set_chip_data(irq, domain->host_data);
+ irq_set_status_flags(irq, IRQ_LEVEL);
+
+ return 0;
+}
+
+static const struct irq_domain_ops event_domain_ops = {
+ .map = xilinx_pl_dma_pcie_event_map,
+};
+
+/**
+ * xilinx_pl_dma_pcie_init_irq_domain - Initialize IRQ domain
+ * @port: PCIe port information
+ *
+ * Return: '0' on success and error value on failure.
+ */
+static int xilinx_pl_dma_pcie_init_irq_domain(struct pl_dma_pcie *port)
+{
+ struct device *dev = port->dev;
+ struct device_node *node = dev->of_node;
+ struct device_node *pcie_intc_node;
+ int ret;
+
+ /* Setup INTx */
+ pcie_intc_node = of_get_child_by_name(node, "interrupt-controller");
+ if (!pcie_intc_node) {
+ dev_err(dev, "No PCIe Intc node found\n");
+ return -EINVAL;
+ }
+
+ port->pldma_domain = irq_domain_add_linear(pcie_intc_node, 32,
+ &event_domain_ops, port);
+ if (!port->pldma_domain)
+ return -ENOMEM;
+
+ irq_domain_update_bus_token(port->pldma_domain, DOMAIN_BUS_NEXUS);
+
+ port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
+ &intx_domain_ops, port);
+ if (!port->intx_domain) {
+ dev_err(dev, "Failed to get an INTx IRQ domain\n");
+ return -ENOMEM;
+ }
+
+ irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
+
+ ret = xilinx_pl_dma_pcie_init_msi_irq_domain(port);
+ if (ret != 0) {
+ irq_domain_remove(port->intx_domain);
+ return -ENOMEM;
+ }
+
+ of_node_put(pcie_intc_node);
+ raw_spin_lock_init(&port->lock);
+
+ return 0;
+}
+
+static int xilinx_pl_dma_pcie_setup_irq(struct pl_dma_pcie *port)
+{
+ struct device *dev = port->dev;
+ struct platform_device *pdev = to_platform_device(dev);
+ int i, irq, err;
+
+ port->irq = platform_get_irq(pdev, 0);
+ if (port->irq < 0)
+ return port->irq;
+
+ for (i = 0; i < ARRAY_SIZE(intr_cause); i++) {
+ int err;
+
+ if (!intr_cause[i].str)
+ continue;
+
+ irq = irq_create_mapping(port->pldma_domain, i);
+ if (!irq) {
+ dev_err(dev, "Failed to map interrupt\n");
+ return -ENXIO;
+ }
+
+ err = devm_request_irq(dev, irq,
+ xilinx_pl_dma_pcie_intr_handler,
+ IRQF_SHARED | IRQF_NO_THREAD,
+ intr_cause[i].sym, port);
+ if (err) {
+ dev_err(dev, "Failed to request IRQ %d\n", irq);
+ return err;
+ }
+ }
+
+ port->intx_irq = irq_create_mapping(port->pldma_domain,
+ XILINX_PCIE_INTR_INTX);
+ if (!port->intx_irq) {
+ dev_err(dev, "Failed to map INTx interrupt\n");
+ return -ENXIO;
+ }
+
+ err = devm_request_irq(dev, port->intx_irq, xilinx_pl_dma_pcie_intx_flow,
+ IRQF_SHARED | IRQF_NO_THREAD, NULL, port);
+ if (err) {
+ dev_err(dev, "Failed to request INTx IRQ %d\n", irq);
+ return err;
+ }
+
+ err = devm_request_irq(dev, port->irq, xilinx_pl_dma_pcie_event_flow,
+ IRQF_SHARED | IRQF_NO_THREAD, NULL, port);
+ if (err) {
+ dev_err(dev, "Failed to request event IRQ %d\n", irq);
+ return err;
+ }
+
+ return 0;
+}
+
+static void xilinx_pl_dma_pcie_init_port(struct pl_dma_pcie *port)
+{
+ if (xilinx_pl_dma_pcie_link_up(port))
+ dev_info(port->dev, "PCIe Link is UP\n");
+ else
+ dev_info(port->dev, "PCIe Link is DOWN\n");
+
+ /* Disable all interrupts */
+ pcie_write(port, ~XILINX_PCIE_DMA_IDR_ALL_MASK,
+ XILINX_PCIE_DMA_REG_IMR);
+
+ /* Clear pending interrupts */
+ pcie_write(port, pcie_read(port, XILINX_PCIE_DMA_REG_IDR) &
+ XILINX_PCIE_DMA_IMR_ALL_MASK,
+ XILINX_PCIE_DMA_REG_IDR);
+
+ /* Needed for MSI DECODE MODE */
+ pcie_write(port, XILINX_PCIE_DMA_IDR_ALL_MASK,
+ XILINX_PCIE_DMA_REG_MSI_LOW_MASK);
+ pcie_write(port, XILINX_PCIE_DMA_IDR_ALL_MASK,
+ XILINX_PCIE_DMA_REG_MSI_HI_MASK);
+
+ /* Set the Bridge enable bit */
+ pcie_write(port, pcie_read(port, XILINX_PCIE_DMA_REG_RPSC) |
+ XILINX_PCIE_DMA_REG_RPSC_BEN,
+ XILINX_PCIE_DMA_REG_RPSC);
+}
+
+static int xilinx_request_msi_irq(struct pl_dma_pcie *port)
+{
+ struct device *dev = port->dev;
+ struct platform_device *pdev = to_platform_device(dev);
+ int ret;
+
+ port->msi.irq_msi0 = platform_get_irq_byname(pdev, "msi0");
+ if (port->msi.irq_msi0 <= 0) {
+ dev_err(dev, "Unable to find msi0 IRQ line\n");
+ return port->msi.irq_msi0;
+ }
+
+ ret = devm_request_irq(dev, port->msi.irq_msi0, xilinx_pl_dma_pcie_msi_handler_low,
+ IRQF_SHARED | IRQF_NO_THREAD, "xlnx-pcie-dma-pl",
+ port);
+ if (ret) {
+ dev_err(dev, "Failed to register interrupt\n");
+ return ret;
+ }
+
+ port->msi.irq_msi1 = platform_get_irq_byname(pdev, "msi1");
+ if (port->msi.irq_msi1 <= 0) {
+ dev_err(dev, "Unable to find msi1 IRQ line\n");
+ return port->msi.irq_msi1;
+ }
+
+ ret = devm_request_irq(dev, port->msi.irq_msi1, xilinx_pl_dma_pcie_msi_handler_high,
+ IRQF_SHARED | IRQF_NO_THREAD, "xlnx-pcie-dma-pl",
+ port);
+ if (ret) {
+ dev_err(dev, "Failed to register interrupt\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int xilinx_pl_dma_pcie_parse_dt(struct pl_dma_pcie *port,
+ struct resource *bus_range)
+{
+ struct device *dev = port->dev;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct resource *res;
+ int err;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_err(dev, "Missing \"reg\" property\n");
+ return -ENXIO;
+ }
+ port->phys_reg_base = res->start;
+
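+ /* The bridge control registers are accessed through the same mapping as the ECAM window */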
+ port->cfg = pci_ecam_create(dev, res, bus_range, &xilinx_pl_dma_pcie_ops);
+ if (IS_ERR(port->cfg))
+ return PTR_ERR(port->cfg);
+
+ port->reg_base = port->cfg->win;
+
+ err = xilinx_request_msi_irq(port);
+ if (err) {
+ pci_ecam_free(port->cfg);
+ return err;
+ }
+
+ return 0;
+}
+
+static int xilinx_pl_dma_pcie_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct pl_dma_pcie *port;
+ struct pci_host_bridge *bridge;
+ struct resource_entry *bus;
+ int err;
+
+ bridge = devm_pci_alloc_host_bridge(dev, sizeof(*port));
+ if (!bridge)
+ return -ENODEV;
+
+ port = pci_host_bridge_priv(bridge);
+
+ port->dev = dev;
+
+ bus = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
+ if (!bus)
+ return -ENODEV;
+
+ err = xilinx_pl_dma_pcie_parse_dt(port, bus->res);
+ if (err) {
+ dev_err(dev, "Parsing DT failed\n");
+ return err;
+ }
+
+ xilinx_pl_dma_pcie_init_port(port);
+
+ err = xilinx_pl_dma_pcie_init_irq_domain(port);
+ if (err)
+ goto err_irq_domain;
+
+ err = xilinx_pl_dma_pcie_setup_irq(port);
+ if (err)
+ goto err_host_bridge;
+
+ bridge->sysdata = port;
+ bridge->ops = &xilinx_pl_dma_pcie_ops.pci_ops;
+
+ err = pci_host_probe(bridge);
+ if (err < 0)
+ goto err_host_bridge;
+
+ return 0;
+
+err_host_bridge:
+ xilinx_pl_dma_pcie_free_irq_domains(port);
+
+err_irq_domain:
+ pci_ecam_free(port->cfg);
+ return err;
+}
+
+static const struct of_device_id xilinx_pl_dma_pcie_of_match[] = {
+ {
+ .compatible = "xlnx,xdma-host-3.00",
+ },
+ {}
+};
+
+static struct platform_driver xilinx_pl_dma_pcie_driver = {
+ .driver = {
+ .name = "xilinx-xdma-pcie",
+ .of_match_table = xilinx_pl_dma_pcie_of_match,
+ .suppress_bind_attrs = true,
+ },
+ .probe = xilinx_pl_dma_pcie_probe,
+};
+
+builtin_platform_driver(xilinx_pl_dma_pcie_driver);
diff --git a/drivers/pci/controller/pcie-xilinx-nwl.c b/drivers/pci/controller/pcie-xilinx-nwl.c
index 176686bdb15c..e307aceba5c9 100644
--- a/drivers/pci/controller/pcie-xilinx-nwl.c
+++ b/drivers/pci/controller/pcie-xilinx-nwl.c
@@ -126,7 +126,7 @@
#define E_ECAM_CR_ENABLE BIT(0)
#define E_ECAM_SIZE_LOC GENMASK(20, 16)
#define E_ECAM_SIZE_SHIFT 16
-#define NWL_ECAM_VALUE_DEFAULT 12
+#define NWL_ECAM_MAX_SIZE 16
#define CFG_DMA_REG_BAR GENMASK(2, 0)
#define CFG_PCIE_CACHE GENMASK(7, 0)
@@ -165,8 +165,6 @@ struct nwl_pcie {
u32 ecam_size;
int irq_intx;
int irq_misc;
- u32 ecam_value;
- u8 last_busno;
struct nwl_msi msi;
struct irq_domain *legacy_irq_domain;
struct clk *clk;
@@ -625,7 +623,7 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
{
struct device *dev = pcie->dev;
struct platform_device *pdev = to_platform_device(dev);
- u32 breg_val, ecam_val, first_busno = 0;
+ u32 breg_val, ecam_val;
int err;
breg_val = nwl_bridge_readl(pcie, E_BREG_CAPABILITIES) & BREG_PRESENT;
@@ -675,7 +673,7 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
E_ECAM_CR_ENABLE, E_ECAM_CONTROL);
nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) |
- (pcie->ecam_value << E_ECAM_SIZE_SHIFT),
+ (NWL_ECAM_MAX_SIZE << E_ECAM_SIZE_SHIFT),
E_ECAM_CONTROL);
nwl_bridge_writel(pcie, lower_32_bits(pcie->phys_ecam_base),
@@ -683,15 +681,6 @@ static int nwl_pcie_bridge_init(struct nwl_pcie *pcie)
nwl_bridge_writel(pcie, upper_32_bits(pcie->phys_ecam_base),
E_ECAM_BASE_HI);
- /* Get bus range */
- ecam_val = nwl_bridge_readl(pcie, E_ECAM_CONTROL);
- pcie->last_busno = (ecam_val & E_ECAM_SIZE_LOC) >> E_ECAM_SIZE_SHIFT;
- /* Write primary, secondary and subordinate bus numbers */
- ecam_val = first_busno;
- ecam_val |= (first_busno + 1) << 8;
- ecam_val |= (pcie->last_busno << E_ECAM_SIZE_SHIFT);
- writel(ecam_val, (pcie->ecam_base + PCI_PRIMARY_BUS));
-
if (nwl_pcie_link_up(pcie))
dev_info(dev, "Link is UP\n");
else
@@ -792,7 +781,6 @@ static int nwl_pcie_probe(struct platform_device *pdev)
pcie = pci_host_bridge_priv(bridge);
pcie->dev = dev;
- pcie->ecam_value = NWL_ECAM_VALUE_DEFAULT;
err = nwl_pcie_parse_dt(pcie, pdev);
if (err) {
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index ad56df98b8e6..94ba61fe1c44 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -525,10 +525,9 @@ static void vmd_domain_reset(struct vmd_dev *vmd)
base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
PCI_DEVFN(dev, 0), 0);
- hdr_type = readb(base + PCI_HEADER_TYPE) &
- PCI_HEADER_TYPE_MASK;
+ hdr_type = readb(base + PCI_HEADER_TYPE);
- functions = (hdr_type & 0x80) ? 8 : 1;
+ functions = (hdr_type & PCI_HEADER_TYPE_MFD) ? 8 : 1;
for (fn = 0; fn < functions; fn++) {
base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
PCI_DEVFN(dev, fn), 0);
@@ -1078,10 +1077,7 @@ static int vmd_resume(struct device *dev)
struct vmd_dev *vmd = pci_get_drvdata(pdev);
int err, i;
- if (vmd->irq_domain)
- vmd_set_msi_remapping(vmd, true);
- else
- vmd_set_msi_remapping(vmd, false);
+ vmd_set_msi_remapping(vmd, !!vmd->irq_domain);
for (i = 0; i < vmd->msix_count; i++) {
err = devm_request_irq(dev, vmd->irqs[i].virq,