author | Linus Torvalds <torvalds@linux-foundation.org> | 2019-12-03 22:58:22 +0100
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2019-12-03 22:58:22 +0100
commit | c3bed3b20e40ab44b98ac5f0471a5bd92a802f5a (patch)
tree | 73813bd3b2a851917f6f6a4dcbed22c305bef3b4 /drivers
parent | Merge tag 'rtc-5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/abelloni... (diff)
parent | Merge branch 'pci/trivial' (diff)
download | linux-c3bed3b20e40ab44b98ac5f0471a5bd92a802f5a.tar.xz, linux-c3bed3b20e40ab44b98ac5f0471a5bd92a802f5a.zip
Merge tag 'pci-v5.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Warn if a host bridge has no NUMA info (Yunsheng Lin)
- Add PCI_STD_NUM_BARS for the number of standard BARs (Denis
Efremov)
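
As context for the PCI_STD_NUM_BARS item above: the constant simply names the six standard BARs, so driver loops stop hard-coding 6 or misusing PCI_ROM_RESOURCE as a bound. A minimal sketch of the idiom, assuming a hypothetical helper (the conversions in the diff below follow the same pattern):

```c
#include <linux/pci.h>

/* Hypothetical helper: walk a device's standard BARs. */
static void report_std_bars(struct pci_dev *pdev)
{
	int bar;

	/* PCI_STD_NUM_BARS (6) replaces hard-coded bounds such as
	 * "bar < 6" or off-by-one uses of PCI_ROM_RESOURCE.
	 */
	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
		if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
			continue;
		dev_info(&pdev->dev, "BAR %d: %pR\n", bar,
			 &pdev->resource[bar]);
	}
}
```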
Resource management:
- Fix boot-time Embedded Controller GPE storm caused by incorrect
resource assignment after ACPI Bus Check Notification (Mika
Westerberg)
- Protect pci_reassign_bridge_resources() against concurrent
addition/removal (Benjamin Herrenschmidt)
- Fix bridge dma_ranges resource list cleanup (Rob Herring)
- Add "pci=hpmmiosize" and "pci=hpmmioprefsize" parameters to control
the MMIO and prefetchable MMIO window sizes of hotplug bridges
independently (Nicholas Johnson)
- Fix MMIO/MMIO_PREF window assignment that assigned more space than
desired (Nicholas Johnson)
- Only enforce bus numbers from bridge EA if the bridge has EA
devices downstream (Subbaraya Sundeep)
- Consolidate DT "dma-ranges" parsing and convert all host drivers to
use shared parsing (Rob Herring)
Error reporting:
- Restore AER capability after resume (Mayurkumar Patel)
- Add PoisonTLPBlocked AER counter (Rajat Jain)
- Use for_each_set_bit() to simplify AER code (Andy Shevchenko)
- Fix AER kernel-doc (Andy Shevchenko)
- Add "pcie_ports=dpc-native" parameter to allow native use of DPC
even if platform didn't grant control over AER (Olof Johansson)
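
For the for_each_set_bit() item above, the simplification is the usual one of visiting only the set bits when decoding a status register. A rough, self-contained sketch; the table entries and helper are made up for illustration, not the actual AER strings:

```c
#include <linux/bitops.h>
#include <linux/printk.h>

static const char *const err_strings[32] = {
	[0] = "Undefined",
	[4] = "Data Link Protocol",	/* illustrative entries only */
};

static void decode_error_status(u32 status)
{
	unsigned long bits = status;
	int pos;

	/* Visit only the set bits instead of testing all 32 by hand. */
	for_each_set_bit(pos, &bits, 32)
		pr_info("error bit %d: %s\n", pos,
			err_strings[pos] ? err_strings[pos] : "Reserved");
}
```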
Hotplug:
- Avoid returning prematurely from sysfs requests to enable or
disable a PCIe hotplug slot (Lukas Wunner)
- Don't disable interrupts twice when suspending hotplug ports (Mika
Westerberg)
- Fix deadlocks when PCIe ports are hot-removed while suspended (Mika
Westerberg)
Power management:
- Remove unnecessary ASPM locking (Bjorn Helgaas)
- Add support for disabling L1 PM Substates (Heiner Kallweit)
- Allow re-enabling Clock PM after it has been disabled (Heiner
Kallweit)
- Add sysfs attributes for controlling ASPM link states (Heiner
Kallweit)
- Remove CONFIG_PCIEASPM_DEBUG, including "link_state" and "clk_ctl"
sysfs files (Heiner Kallweit)
- Avoid AMD FCH XHCI USB PME# from D0 defect that prevents wakeup on
USB 2.0 or 1.1 connect events (Kai-Heng Feng)
- Move power state check out of pci_msi_supported() (Bjorn Helgaas)
- Fix incorrect MSI-X masking on resume and revert related nvme quirk
for Kingston NVME SSD running FW E8FK11.T (Jian-Hong Pan)
- Always return devices to D0 when thawing to fix hibernation with
drivers like mlx4 that used legacy power management (previously we
only did it for drivers with new power management ops) (Dexuan Cui)
- Clear PCIe PME Status even for legacy power management (Bjorn
Helgaas)
- Fix PCI PM documentation errors (Bjorn Helgaas)
- Use dev_printk() for more power management messages (Bjorn Helgaas)
- Apply D2 delay as milliseconds, not microseconds (Bjorn Helgaas)
- Convert xen-platform from legacy to generic power management (Bjorn
Helgaas)
- Removed unused .resume_early() and .suspend_late() legacy power
management hooks (Bjorn Helgaas)
- Rearrange power management code for clarity (Rafael J. Wysocki)
- Decode power states more clearly ("4" or "D4" really refers to
"D3cold") (Bjorn Helgaas)
- Notice when reading PM Control register returns an error (~0)
instead of interpreting it as being in D3hot (Bjorn Helgaas)
- Add missing link delays required by the PCIe spec (Mika Westerberg)
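
The two items just above about error returns and power-state decoding come down to one detail: a failed config read returns all ones, which must not be read as a PowerState field of 3 (D3hot). A sketch of the check, assuming the device's PM capability offset is already known (the helper itself is illustrative, not the exact upstream code):

```c
#include <linux/pci.h>

static pci_power_t read_pm_state(struct pci_dev *pdev, int pm_cap)
{
	u16 pmcsr;

	pci_read_config_word(pdev, pm_cap + PCI_PM_CTRL, &pmcsr);

	/* A config read that fails returns ~0; treat that as "unknown"
	 * rather than as PowerState == 3 (D3hot).
	 */
	if (pmcsr == (u16)~0)
		return PCI_UNKNOWN;

	return pmcsr & PCI_PM_CTRL_STATE_MASK;
}
```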
Virtualization:
- Move pci_prg_resp_pasid_required() to CONFIG_PCI_PRI (Bjorn
Helgaas)
- Allow VFs to use PRI (the PF PRI is shared by the VFs, but the code
previously didn't recognize that) (Kuppuswamy Sathyanarayanan); a
distilled sketch of the VF handling follows this list
- Allow VFs to use PASID (the PF PASID capability is shared by the
VFs, but the code previously didn't recognize that) (Kuppuswamy
Sathyanarayanan)
- Disconnect PF and VF ATS enablement, since ATS in PFs and
associated VFs can be enabled independently (Kuppuswamy
Sathyanarayanan)
- Cache PRI and PASID capability offsets (Kuppuswamy Sathyanarayanan)
- Cache the PRI PRG Response PASID Required bit (Bjorn Helgaas)
- Consolidate ATS declarations in linux/pci-ats.h (Krzysztof
Wilczynski)
- Remove unused PRI and PASID stubs (Bjorn Helgaas)
- Removed unnecessary EXPORT_SYMBOL_GPL() from ATS, PRI, and PASID
interfaces that are only used by built-in IOMMU drivers (Bjorn
Helgaas)
- Hide PRI and PASID state restoration functions used only inside the
PCI core (Bjorn Helgaas)
- Add a DMA alias quirk for the Intel VCA NTB (Slawomir Pawlowski)
- Serialize sysfs sriov_numvfs reads vs writes (Pierre Crégut)
- Update Cavium ACS quirk for ThunderX2 and ThunderX3 (George
Cherian)
- Fix the UPDCR register address in the Intel ACS quirk (Steffen
Liebergeld)
- Unify ACS quirk implementations (Bjorn Helgaas)
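
The VF PRI/PASID items above (see also the ats.c hunks further down the diff) share one shape: a VF carries no PRI/PASID capability of its own, so enable/disable on a VF defers to the PF's shared state. A distilled sketch of the VF branch, assuming CONFIG_PCI_PRI; the helper name is hypothetical, the real change lives inside pci_enable_pri():

```c
#include <linux/errno.h>
#include <linux/pci.h>

/* Hypothetical helper showing the VF handling added to the PRI
 * enable path: the PF owns the only PRI capability, so a VF enable
 * succeeds only if the PF's PRI is already enabled.
 */
static int vf_pri_enable(struct pci_dev *pdev)
{
	if (!pdev->is_virtfn)
		return -EINVAL;		/* only meaningful for a VF */

	return pci_physfn(pdev)->pri_enabled ? 0 : -EINVAL;
}
```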
Amlogic Meson host bridge driver:
- Fix meson PERST# GPIO polarity problem (Remi Pommarel)
- Add DT bindings for Amlogic Meson G12A (Neil Armstrong)
- Fix meson clock names to match DT bindings (Neil Armstrong)
- Add meson support for Amlogic G12A SoC with separate shared PHY
(Neil Armstrong)
- Add meson extended PCIe PHY functions for Amlogic G12A USB3+PCIe
combo PHY (Neil Armstrong)
- Add arm64 DT for Amlogic G12A PCIe controller node (Neil Armstrong)
- Add commented-out description of VIM3 USB3/PCIe mux in arm64 DT
(Neil Armstrong)
Broadcom iProc host bridge driver:
- Invalidate iProc PAXB address mapping before programming it
(Abhishek Shah)
- Fix iproc-msi and mvebu __iomem annotations (Ben Dooks)
Cadence host bridge driver:
- Refactor Cadence PCIe host controller to use as a library for both
host and endpoint (Tom Joseph)
Freescale Layerscape host bridge driver:
- Add layerscape LS1028a support (Xiaowei Bao)
Intel VMD host bridge driver:
- Add VMD bus 224-255 restriction decode (Jon Derrick)
- Add VMD 8086:9A0B device ID (Jon Derrick)
- Remove Keith from VMD maintainer list (Keith Busch)
Marvell ARMADA 3700 / Aardvark host bridge driver:
- Use LTSSM state to build link training flag since Aardvark doesn't
implement the Link Training bit (Remi Pommarel)
- Delay before training Aardvark link in case PERST# was asserted
before the driver probe (Remi Pommarel)
- Fix Aardvark issues with Root Control reads and writes (Remi
Pommarel)
- Don't rely on jiffies in Aardvark config access path since
interrupts may be disabled (Remi Pommarel)
- Fix Aardvark big-endian support (Grzegorz Jaszczyk)
Marvell ARMADA 370 / XP host bridge driver:
- Make mvebu_pci_bridge_emul_ops static (Ben Dooks)
Microsoft Hyper-V host bridge driver:
- Add hibernation support for Hyper-V virtual PCI devices (Dexuan
Cui)
- Track Hyper-V pci_protocol_version per-hbus, not globally (Dexuan
Cui)
- Avoid kmemleak false positive on hv hbus buffer (Dexuan Cui)
Mobiveil host bridge driver:
- Change mobiveil csr_read()/write() function names that conflict
with riscv arch functions (Kefeng Wang)
NVIDIA Tegra host bridge driver:
- Fix Tegra CLKREQ dependency programming (Vidya Sagar)
Renesas R-Car host bridge driver:
- Remove unnecessary header include from rcar (Andrew Murray)
- Tighten register index checking for rcar inbound range programming
(Marek Vasut)
- Fix rcar inbound range alignment calculation to improve packing of
multiple entries (Marek Vasut)
- Update rcar MACCTLR setting to match documentation (Yoshihiro
Shimoda)
- Clear bit 0 of MACCTLR before PCIETCTLR.CFINIT per manual
(Yoshihiro Shimoda)
- Add Marek Vasut and Yoshihiro Shimoda as R-Car maintainers (Simon
Horman)
Rockchip host bridge driver:
- Make rockchip 0V9 and 1V8 power regulators non-optional (Robin
Murphy)
Socionext UniPhier host bridge driver:
- Set uniphier to host (RC) mode always (Kunihiko Hayashi)
Endpoint drivers:
- Fix endpoint driver sign extension problem when shifting page
number to phys_addr_t (Alan Mikhak)
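
The endpoint fix above addresses a classic C pitfall: the page number is a 32-bit int, so shifting it first and widening to phys_addr_t afterwards can overflow and sign-extend on 64-bit systems. A self-contained before/after illustration (function and parameter names are made up):

```c
#include <linux/types.h>

/* Buggy: the shift is evaluated in 32-bit int, so a large page number
 * overflows, and the (negative) result sign-extends when widened to
 * the 64-bit phys_addr_t return type.
 */
static phys_addr_t page_to_offset_buggy(int pageno, unsigned int page_shift)
{
	return pageno << page_shift;
}

/* Fixed: widen to phys_addr_t before shifting. */
static phys_addr_t page_to_offset_fixed(int pageno, unsigned int page_shift)
{
	return (phys_addr_t)pageno << page_shift;
}
```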
Misc:
- Add NumaChip SPDX header (Krzysztof Wilczynski)
- Replace EXTRA_CFLAGS with ccflags-y (Krzysztof Wilczynski)
- Remove unused includes (Krzysztof Wilczynski)
- Removed unused sysfs attribute groups (Ben Dooks)
- Remove PTM and ASPM dependencies on PCIEPORTBUS (Bjorn Helgaas)
- Add PCIe Link Control 2 register field definitions to replace magic
numbers in AMDGPU and Radeon CIK/SI (Bjorn Helgaas)
- Fix incorrect Link Control 2 Transmit Margin usage in AMDGPU and
Radeon CIK/SI PCIe Gen3 link training (Bjorn Helgaas)
- Use pcie_capability_read_word() instead of pci_read_config_word()
in AMDGPU and Radeon CIK/SI (Frederick Lawler); a condensed example
follows the list
- Remove unused pci_irq_get_node() (Greg Kroah-Hartman)
- Make asm/msi.h mandatory and simplify PCI_MSI_IRQ_DOMAIN Kconfig
(Palmer Dabbelt, Michal Simek)
- Read all 64 bits of Switchtec part_event_bitmap (Logan Gunthorpe)
- Fix erroneous intel-iommu dependency on CONFIG_AMD_IOMMU (Bjorn
Helgaas)
- Fix bridge emulation big-endian support (Grzegorz Jaszczyk)
- Fix dwc find_next_bit() usage (Niklas Cassel)
- Fix pcitest.c fd leak (Hewenliang)
- Fix typos and comments (Bjorn Helgaas)
- Fix Kconfig whitespace errors (Krzysztof Kozlowski)"
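
Two of the Misc items, the PCI_EXP_LNKCTL2 field definitions and the switch to pcie_capability_read_word()/pcie_capability_write_word(), recur throughout the AMDGPU and Radeon hunks below. Condensed into one illustrative helper (the gen3 choice is hard-coded here purely for brevity):

```c
#include <linux/pci.h>

/* Set the target link speed to 8 GT/s (gen3), using the named
 * Link Control 2 fields instead of magic masks like (1 << 4) | (7 << 9),
 * and the pcie_capability_*() accessors instead of pci_pcie_cap() plus
 * offset arithmetic with pci_read/write_config_word().
 */
static void set_target_speed_gen3(struct pci_dev *pdev)
{
	u16 lnkctl2;

	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL2, &lnkctl2);
	lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS;
	lnkctl2 |= PCI_EXP_LNKCTL2_TLS_8_0GT;
	pcie_capability_write_word(pdev, PCI_EXP_LNKCTL2, lnkctl2);
}
```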
* tag 'pci-v5.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (160 commits)
PCI: Remove PCI_MSI_IRQ_DOMAIN architecture whitelist
asm-generic: Make msi.h a mandatory include/asm header
Revert "nvme: Add quirk for Kingston NVME SSD running FW E8FK11.T"
PCI/MSI: Fix incorrect MSI-X masking on resume
PCI/MSI: Move power state check out of pci_msi_supported()
PCI/MSI: Remove unused pci_irq_get_node()
PCI: hv: Avoid a kmemleak false positive caused by the hbus buffer
PCI: hv: Change pci_protocol_version to per-hbus
PCI: hv: Add hibernation support
PCI: hv: Reorganize the code in preparation of hibernation
MAINTAINERS: Remove Keith from VMD maintainer
PCI/ASPM: Remove PCIEASPM_DEBUG Kconfig option and related code
PCI/ASPM: Add sysfs attributes for controlling ASPM link states
PCI: Fix indentation
drm/radeon: Prefer pcie_capability_read_word()
drm/radeon: Replace numbers with PCI_EXP_LNKCTL2 definitions
drm/radeon: Correct Transmit Margin masks
drm/amdgpu: Prefer pcie_capability_read_word()
PCI: uniphier: Set mode register to host mode
drm/amdgpu: Replace numbers with PCI_EXP_LNKCTL2 definitions
...
Diffstat (limited to 'drivers')
112 files changed, 2593 insertions, 1979 deletions
diff --git a/drivers/ata/pata_atp867x.c b/drivers/ata/pata_atp867x.c index cfd0cf2cbca6..e01a3a6e4d46 100644 --- a/drivers/ata/pata_atp867x.c +++ b/drivers/ata/pata_atp867x.c @@ -422,7 +422,7 @@ static int atp867x_ata_pci_sff_init_host(struct ata_host *host) #ifdef ATP867X_DEBUG atp867x_check_res(pdev); - for (i = 0; i < PCI_ROM_RESOURCE; i++) + for (i = 0; i < PCI_STD_NUM_BARS; i++) printk(KERN_DEBUG "ATP867X: iomap[%d]=0x%llx\n", i, (unsigned long long)(host->iomap[i])); #endif diff --git a/drivers/ata/sata_nv.c b/drivers/ata/sata_nv.c index 65ec8dff1c51..f3e62f5528bd 100644 --- a/drivers/ata/sata_nv.c +++ b/drivers/ata/sata_nv.c @@ -2329,7 +2329,7 @@ static int nv_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) // Make sure this is a SATA controller by counting the number of bars // (NVIDIA SATA controllers will always have six bars). Otherwise, // it's an IDE controller and we ignore it. - for (bar = 0; bar < 6; bar++) + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) if (pci_resource_start(pdev, bar) == 0) return -ENODEV; diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c index 2d64d270725d..7a43993544c1 100644 --- a/drivers/gpu/drm/amd/amdgpu/cik.c +++ b/drivers/gpu/drm/amd/amdgpu/cik.c @@ -1441,7 +1441,6 @@ static int cik_set_vce_clocks(struct amdgpu_device *adev, u32 evclk, u32 ecclk) static void cik_pcie_gen3_enable(struct amdgpu_device *adev) { struct pci_dev *root = adev->pdev->bus->self; - int bridge_pos, gpu_pos; u32 speed_cntl, current_data_rate; int i; u16 tmp16; @@ -1476,12 +1475,7 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev) DRM_INFO("enabling PCIE gen 2 link speeds, disable with amdgpu.pcie_gen2=0\n"); } - bridge_pos = pci_pcie_cap(root); - if (!bridge_pos) - return; - - gpu_pos = pci_pcie_cap(adev->pdev); - if (!gpu_pos) + if (!pci_is_pcie(root) || !pci_is_pcie(adev->pdev)) return; if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) { @@ -1491,14 +1485,17 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev) u16 bridge_cfg2, gpu_cfg2; u32 max_lw, current_lw, tmp; - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &bridge_cfg); - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL, &gpu_cfg); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &bridge_cfg); + pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL, + &gpu_cfg); tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; - pci_write_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL, + tmp16); tmp = RREG32_PCIE(ixPCIE_LC_STATUS1); max_lw = (tmp & PCIE_LC_STATUS1__LC_DETECTED_LINK_WIDTH_MASK) >> @@ -1522,15 +1519,23 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev) for (i = 0; i < 10; i++) { /* check status */ - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_DEVSTA, &tmp16); + pcie_capability_read_word(adev->pdev, + PCI_EXP_DEVSTA, + &tmp16); if (tmp16 & PCI_EXP_DEVSTA_TRPND) break; - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &bridge_cfg); - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL, &gpu_cfg); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &bridge_cfg); + pcie_capability_read_word(adev->pdev, + PCI_EXP_LNKCTL, + &gpu_cfg); - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, &bridge_cfg2); - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, 
&gpu_cfg2); + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, + &bridge_cfg2); + pcie_capability_read_word(adev->pdev, + PCI_EXP_LNKCTL2, + &gpu_cfg2); tmp = RREG32_PCIE(ixPCIE_LC_CNTL4); tmp |= PCIE_LC_CNTL4__LC_SET_QUIESCE_MASK; @@ -1543,26 +1548,45 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev) msleep(100); /* linkctl */ - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &tmp16); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &tmp16); tmp16 &= ~PCI_EXP_LNKCTL_HAWD; tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(root, PCI_EXP_LNKCTL, + tmp16); - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL, &tmp16); + pcie_capability_read_word(adev->pdev, + PCI_EXP_LNKCTL, + &tmp16); tmp16 &= ~PCI_EXP_LNKCTL_HAWD; tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); - pci_write_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(adev->pdev, + PCI_EXP_LNKCTL, + tmp16); /* linkctl2 */ - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~((1 << 4) | (7 << 9)); - tmp16 |= (bridge_cfg2 & ((1 << 4) | (7 << 9))); - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, tmp16); - - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~((1 << 4) | (7 << 9)); - tmp16 |= (gpu_cfg2 & ((1 << 4) | (7 << 9))); - pci_write_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, tmp16); + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, + &tmp16); + tmp16 &= ~(PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN); + tmp16 |= (bridge_cfg2 & + (PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN)); + pcie_capability_write_word(root, + PCI_EXP_LNKCTL2, + tmp16); + + pcie_capability_read_word(adev->pdev, + PCI_EXP_LNKCTL2, + &tmp16); + tmp16 &= ~(PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN); + tmp16 |= (gpu_cfg2 & + (PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN)); + pcie_capability_write_word(adev->pdev, + PCI_EXP_LNKCTL2, + tmp16); tmp = RREG32_PCIE(ixPCIE_LC_CNTL4); tmp &= ~PCIE_LC_CNTL4__LC_SET_QUIESCE_MASK; @@ -1577,15 +1601,16 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev) speed_cntl &= ~PCIE_LC_SPEED_CNTL__LC_FORCE_DIS_SW_SPEED_CHANGE_MASK; WREG32_PCIE(ixPCIE_LC_SPEED_CNTL, speed_cntl); - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~0xf; + pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL2, &tmp16); + tmp16 &= ~PCI_EXP_LNKCTL2_TLS; + if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) - tmp16 |= 3; /* gen3 */ + tmp16 |= PCI_EXP_LNKCTL2_TLS_8_0GT; /* gen3 */ else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) - tmp16 |= 2; /* gen2 */ + tmp16 |= PCI_EXP_LNKCTL2_TLS_5_0GT; /* gen2 */ else - tmp16 |= 1; /* gen1 */ - pci_write_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, tmp16); + tmp16 |= PCI_EXP_LNKCTL2_TLS_2_5GT; /* gen1 */ + pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL2, tmp16); speed_cntl = RREG32_PCIE(ixPCIE_LC_SPEED_CNTL); speed_cntl |= PCIE_LC_SPEED_CNTL__LC_INITIATE_LINK_SPEED_CHANGE_MASK; diff --git a/drivers/gpu/drm/amd/amdgpu/si.c b/drivers/gpu/drm/amd/amdgpu/si.c index 29024e64c886..f2d70a47a3af 100644 --- a/drivers/gpu/drm/amd/amdgpu/si.c +++ b/drivers/gpu/drm/amd/amdgpu/si.c @@ -1644,7 +1644,6 @@ static void si_init_golden_registers(struct amdgpu_device *adev) static void si_pcie_gen3_enable(struct amdgpu_device *adev) { struct pci_dev *root = adev->pdev->bus->self; - 
int bridge_pos, gpu_pos; u32 speed_cntl, current_data_rate; int i; u16 tmp16; @@ -1679,12 +1678,7 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev) DRM_INFO("enabling PCIE gen 2 link speeds, disable with amdgpu.pcie_gen2=0\n"); } - bridge_pos = pci_pcie_cap(root); - if (!bridge_pos) - return; - - gpu_pos = pci_pcie_cap(adev->pdev); - if (!gpu_pos) + if (!pci_is_pcie(root) || !pci_is_pcie(adev->pdev)) return; if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) { @@ -1693,14 +1687,17 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev) u16 bridge_cfg2, gpu_cfg2; u32 max_lw, current_lw, tmp; - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &bridge_cfg); - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL, &gpu_cfg); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &bridge_cfg); + pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL, + &gpu_cfg); tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; - pci_write_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL, + tmp16); tmp = RREG32_PCIE(PCIE_LC_STATUS1); max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT; @@ -1717,15 +1714,23 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev) } for (i = 0; i < 10; i++) { - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_DEVSTA, &tmp16); + pcie_capability_read_word(adev->pdev, + PCI_EXP_DEVSTA, + &tmp16); if (tmp16 & PCI_EXP_DEVSTA_TRPND) break; - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &bridge_cfg); - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL, &gpu_cfg); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &bridge_cfg); + pcie_capability_read_word(adev->pdev, + PCI_EXP_LNKCTL, + &gpu_cfg); - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, &bridge_cfg2); - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &gpu_cfg2); + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, + &bridge_cfg2); + pcie_capability_read_word(adev->pdev, + PCI_EXP_LNKCTL2, + &gpu_cfg2); tmp = RREG32_PCIE_PORT(PCIE_LC_CNTL4); tmp |= LC_SET_QUIESCE; @@ -1737,25 +1742,44 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev) mdelay(100); - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &tmp16); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &tmp16); tmp16 &= ~PCI_EXP_LNKCTL_HAWD; tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(root, PCI_EXP_LNKCTL, + tmp16); - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL, &tmp16); + pcie_capability_read_word(adev->pdev, + PCI_EXP_LNKCTL, + &tmp16); tmp16 &= ~PCI_EXP_LNKCTL_HAWD; tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); - pci_write_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL, tmp16); - - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~((1 << 4) | (7 << 9)); - tmp16 |= (bridge_cfg2 & ((1 << 4) | (7 << 9))); - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, tmp16); - - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~((1 << 4) | (7 << 9)); - tmp16 |= (gpu_cfg2 & ((1 << 4) | (7 << 9))); - pci_write_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, tmp16); + pcie_capability_write_word(adev->pdev, + PCI_EXP_LNKCTL, + tmp16); + + 
pcie_capability_read_word(root, PCI_EXP_LNKCTL2, + &tmp16); + tmp16 &= ~(PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN); + tmp16 |= (bridge_cfg2 & + (PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN)); + pcie_capability_write_word(root, + PCI_EXP_LNKCTL2, + tmp16); + + pcie_capability_read_word(adev->pdev, + PCI_EXP_LNKCTL2, + &tmp16); + tmp16 &= ~(PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN); + tmp16 |= (gpu_cfg2 & + (PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN)); + pcie_capability_write_word(adev->pdev, + PCI_EXP_LNKCTL2, + tmp16); tmp = RREG32_PCIE_PORT(PCIE_LC_CNTL4); tmp &= ~LC_SET_QUIESCE; @@ -1768,15 +1792,16 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev) speed_cntl &= ~LC_FORCE_DIS_SW_SPEED_CHANGE; WREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL, speed_cntl); - pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~0xf; + pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL2, &tmp16); + tmp16 &= ~PCI_EXP_LNKCTL2_TLS; + if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) - tmp16 |= 3; + tmp16 |= PCI_EXP_LNKCTL2_TLS_8_0GT; /* gen3 */ else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) - tmp16 |= 2; + tmp16 |= PCI_EXP_LNKCTL2_TLS_5_0GT; /* gen2 */ else - tmp16 |= 1; - pci_write_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, tmp16); + tmp16 |= PCI_EXP_LNKCTL2_TLS_2_5GT; /* gen1 */ + pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL2, tmp16); speed_cntl = RREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL); speed_cntl |= LC_INITIATE_LINK_SPEED_CHANGE; diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c index acabeaf28732..40a7e702c2a9 100644 --- a/drivers/gpu/drm/radeon/cik.c +++ b/drivers/gpu/drm/radeon/cik.c @@ -9500,7 +9500,6 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev) { struct pci_dev *root = rdev->pdev->bus->self; enum pci_bus_speed speed_cap; - int bridge_pos, gpu_pos; u32 speed_cntl, current_data_rate; int i; u16 tmp16; @@ -9542,12 +9541,7 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev) DRM_INFO("enabling PCIE gen 2 link speeds, disable with radeon.pcie_gen2=0\n"); } - bridge_pos = pci_pcie_cap(root); - if (!bridge_pos) - return; - - gpu_pos = pci_pcie_cap(rdev->pdev); - if (!gpu_pos) + if (!pci_is_pcie(root) || !pci_is_pcie(rdev->pdev)) return; if (speed_cap == PCIE_SPEED_8_0GT) { @@ -9557,14 +9551,17 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev) u16 bridge_cfg2, gpu_cfg2; u32 max_lw, current_lw, tmp; - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &bridge_cfg); - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL, &gpu_cfg); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &bridge_cfg); + pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL, + &gpu_cfg); tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; - pci_write_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL, + tmp16); tmp = RREG32_PCIE_PORT(PCIE_LC_STATUS1); max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT; @@ -9582,15 +9579,23 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev) for (i = 0; i < 10; i++) { /* check status */ - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_DEVSTA, &tmp16); + pcie_capability_read_word(rdev->pdev, + PCI_EXP_DEVSTA, + &tmp16); if (tmp16 & 
PCI_EXP_DEVSTA_TRPND) break; - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &bridge_cfg); - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL, &gpu_cfg); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &bridge_cfg); + pcie_capability_read_word(rdev->pdev, + PCI_EXP_LNKCTL, + &gpu_cfg); - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, &bridge_cfg2); - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &gpu_cfg2); + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, + &bridge_cfg2); + pcie_capability_read_word(rdev->pdev, + PCI_EXP_LNKCTL2, + &gpu_cfg2); tmp = RREG32_PCIE_PORT(PCIE_LC_CNTL4); tmp |= LC_SET_QUIESCE; @@ -9603,26 +9608,45 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev) msleep(100); /* linkctl */ - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &tmp16); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &tmp16); tmp16 &= ~PCI_EXP_LNKCTL_HAWD; tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(root, PCI_EXP_LNKCTL, + tmp16); - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL, &tmp16); + pcie_capability_read_word(rdev->pdev, + PCI_EXP_LNKCTL, + &tmp16); tmp16 &= ~PCI_EXP_LNKCTL_HAWD; tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); - pci_write_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(rdev->pdev, + PCI_EXP_LNKCTL, + tmp16); /* linkctl2 */ - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~((1 << 4) | (7 << 9)); - tmp16 |= (bridge_cfg2 & ((1 << 4) | (7 << 9))); - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, tmp16); - - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~((1 << 4) | (7 << 9)); - tmp16 |= (gpu_cfg2 & ((1 << 4) | (7 << 9))); - pci_write_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL2, tmp16); + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, + &tmp16); + tmp16 &= ~(PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN); + tmp16 |= (bridge_cfg2 & + (PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN)); + pcie_capability_write_word(root, + PCI_EXP_LNKCTL2, + tmp16); + + pcie_capability_read_word(rdev->pdev, + PCI_EXP_LNKCTL2, + &tmp16); + tmp16 &= ~(PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN); + tmp16 |= (gpu_cfg2 & + (PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN)); + pcie_capability_write_word(rdev->pdev, + PCI_EXP_LNKCTL2, + tmp16); tmp = RREG32_PCIE_PORT(PCIE_LC_CNTL4); tmp &= ~LC_SET_QUIESCE; @@ -9636,15 +9660,15 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev) speed_cntl &= ~LC_FORCE_DIS_SW_SPEED_CHANGE; WREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL, speed_cntl); - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~0xf; + pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL2, &tmp16); + tmp16 &= ~PCI_EXP_LNKCTL2_TLS; if (speed_cap == PCIE_SPEED_8_0GT) - tmp16 |= 3; /* gen3 */ + tmp16 |= PCI_EXP_LNKCTL2_TLS_8_0GT; /* gen3 */ else if (speed_cap == PCIE_SPEED_5_0GT) - tmp16 |= 2; /* gen2 */ + tmp16 |= PCI_EXP_LNKCTL2_TLS_5_0GT; /* gen2 */ else - tmp16 |= 1; /* gen1 */ - pci_write_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL2, tmp16); + tmp16 |= PCI_EXP_LNKCTL2_TLS_2_5GT; /* gen1 */ + pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL2, tmp16); speed_cntl = RREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL); speed_cntl |= LC_INITIATE_LINK_SPEED_CHANGE; diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c index 
1d8efb0eefdb..d7eea75b2c27 100644 --- a/drivers/gpu/drm/radeon/si.c +++ b/drivers/gpu/drm/radeon/si.c @@ -3257,7 +3257,7 @@ static void si_gpu_init(struct radeon_device *rdev) /* XXX what about 12? */ rdev->config.si.tile_config |= (3 << 0); break; - } + } switch ((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT) { case 0: /* four banks */ rdev->config.si.tile_config |= 0 << 4; @@ -7087,7 +7087,6 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev) { struct pci_dev *root = rdev->pdev->bus->self; enum pci_bus_speed speed_cap; - int bridge_pos, gpu_pos; u32 speed_cntl, current_data_rate; int i; u16 tmp16; @@ -7129,12 +7128,7 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev) DRM_INFO("enabling PCIE gen 2 link speeds, disable with radeon.pcie_gen2=0\n"); } - bridge_pos = pci_pcie_cap(root); - if (!bridge_pos) - return; - - gpu_pos = pci_pcie_cap(rdev->pdev); - if (!gpu_pos) + if (!pci_is_pcie(root) || !pci_is_pcie(rdev->pdev)) return; if (speed_cap == PCIE_SPEED_8_0GT) { @@ -7144,14 +7138,17 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev) u16 bridge_cfg2, gpu_cfg2; u32 max_lw, current_lw, tmp; - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &bridge_cfg); - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL, &gpu_cfg); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &bridge_cfg); + pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL, + &gpu_cfg); tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; - pci_write_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL, + tmp16); tmp = RREG32_PCIE(PCIE_LC_STATUS1); max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT; @@ -7169,15 +7166,23 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev) for (i = 0; i < 10; i++) { /* check status */ - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_DEVSTA, &tmp16); + pcie_capability_read_word(rdev->pdev, + PCI_EXP_DEVSTA, + &tmp16); if (tmp16 & PCI_EXP_DEVSTA_TRPND) break; - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &bridge_cfg); - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL, &gpu_cfg); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &bridge_cfg); + pcie_capability_read_word(rdev->pdev, + PCI_EXP_LNKCTL, + &gpu_cfg); - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, &bridge_cfg2); - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &gpu_cfg2); + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, + &bridge_cfg2); + pcie_capability_read_word(rdev->pdev, + PCI_EXP_LNKCTL2, + &gpu_cfg2); tmp = RREG32_PCIE_PORT(PCIE_LC_CNTL4); tmp |= LC_SET_QUIESCE; @@ -7190,26 +7195,46 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev) msleep(100); /* linkctl */ - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL, &tmp16); + pcie_capability_read_word(root, PCI_EXP_LNKCTL, + &tmp16); tmp16 &= ~PCI_EXP_LNKCTL_HAWD; tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL, tmp16); + pcie_capability_write_word(root, + PCI_EXP_LNKCTL, + tmp16); - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL, &tmp16); + pcie_capability_read_word(rdev->pdev, + PCI_EXP_LNKCTL, + &tmp16); tmp16 &= ~PCI_EXP_LNKCTL_HAWD; tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); - pci_write_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL, tmp16); + 
pcie_capability_write_word(rdev->pdev, + PCI_EXP_LNKCTL, + tmp16); /* linkctl2 */ - pci_read_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~((1 << 4) | (7 << 9)); - tmp16 |= (bridge_cfg2 & ((1 << 4) | (7 << 9))); - pci_write_config_word(root, bridge_pos + PCI_EXP_LNKCTL2, tmp16); - - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~((1 << 4) | (7 << 9)); - tmp16 |= (gpu_cfg2 & ((1 << 4) | (7 << 9))); - pci_write_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL2, tmp16); + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, + &tmp16); + tmp16 &= ~(PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN); + tmp16 |= (bridge_cfg2 & + (PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN)); + pcie_capability_write_word(root, + PCI_EXP_LNKCTL2, + tmp16); + + pcie_capability_read_word(rdev->pdev, + PCI_EXP_LNKCTL2, + &tmp16); + tmp16 &= ~(PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN); + tmp16 |= (gpu_cfg2 & + (PCI_EXP_LNKCTL2_ENTER_COMP | + PCI_EXP_LNKCTL2_TX_MARGIN)); + pcie_capability_write_word(rdev->pdev, + PCI_EXP_LNKCTL2, + tmp16); tmp = RREG32_PCIE_PORT(PCIE_LC_CNTL4); tmp &= ~LC_SET_QUIESCE; @@ -7223,15 +7248,15 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev) speed_cntl &= ~LC_FORCE_DIS_SW_SPEED_CHANGE; WREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL, speed_cntl); - pci_read_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16); - tmp16 &= ~0xf; + pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL2, &tmp16); + tmp16 &= ~PCI_EXP_LNKCTL2_TLS; if (speed_cap == PCIE_SPEED_8_0GT) - tmp16 |= 3; /* gen3 */ + tmp16 |= PCI_EXP_LNKCTL2_TLS_8_0GT; /* gen3 */ else if (speed_cap == PCIE_SPEED_5_0GT) - tmp16 |= 2; /* gen2 */ + tmp16 |= PCI_EXP_LNKCTL2_TLS_5_0GT; /* gen2 */ else - tmp16 |= 1; /* gen1 */ - pci_write_config_word(rdev->pdev, gpu_pos + PCI_EXP_LNKCTL2, tmp16); + tmp16 |= PCI_EXP_LNKCTL2_TLS_2_5GT; /* gen1 */ + pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL2, tmp16); speed_cntl = RREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL); speed_cntl |= LC_INITIATE_LINK_SPEED_CHANGE; diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index c0b871229d81..0b9d78a0f3ac 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -212,6 +212,7 @@ config INTEL_IOMMU_SVM bool "Support for Shared Virtual Memory with Intel IOMMU" depends on INTEL_IOMMU && X86 select PCI_PASID + select PCI_PRI select MMU_NOTIFIER help Shared Virtual Memory (SVM) provides a facility for devices diff --git a/drivers/iommu/of_iommu.c b/drivers/iommu/of_iommu.c index 614a93aa5305..026ad2b29dcd 100644 --- a/drivers/iommu/of_iommu.c +++ b/drivers/iommu/of_iommu.c @@ -8,6 +8,8 @@ #include <linux/export.h> #include <linux/iommu.h> #include <linux/limits.h> +#include <linux/pci.h> +#include <linux/msi.h> #include <linux/of.h> #include <linux/of_iommu.h> #include <linux/of_pci.h> diff --git a/drivers/irqchip/irq-gic-v2m.c b/drivers/irqchip/irq-gic-v2m.c index e88e75c22b6a..fbec07d634ad 100644 --- a/drivers/irqchip/irq-gic-v2m.c +++ b/drivers/irqchip/irq-gic-v2m.c @@ -17,6 +17,7 @@ #include <linux/irq.h> #include <linux/irqdomain.h> #include <linux/kernel.h> +#include <linux/pci.h> #include <linux/msi.h> #include <linux/of_address.h> #include <linux/of_pci.h> diff --git a/drivers/irqchip/irq-gic-v3-its-pci-msi.c b/drivers/irqchip/irq-gic-v3-its-pci-msi.c index 229d586c3d7a..87711e0f8014 100644 --- a/drivers/irqchip/irq-gic-v3-its-pci-msi.c +++ b/drivers/irqchip/irq-gic-v3-its-pci-msi.c @@ -5,6 +5,7 @@ */ #include <linux/acpi_iort.h> +#include 
<linux/pci.h> #include <linux/msi.h> #include <linux/of.h> #include <linux/of_irq.h> diff --git a/drivers/memstick/host/jmb38x_ms.c b/drivers/memstick/host/jmb38x_ms.c index b36142df0295..0a9c5ddf2f59 100644 --- a/drivers/memstick/host/jmb38x_ms.c +++ b/drivers/memstick/host/jmb38x_ms.c @@ -848,7 +848,7 @@ static int jmb38x_ms_count_slots(struct pci_dev *pdev) { int cnt, rc = 0; - for (cnt = 0; cnt < PCI_ROM_RESOURCE; ++cnt) { + for (cnt = 0; cnt < PCI_STD_NUM_BARS; ++cnt) { if (!(IORESOURCE_MEM & pci_resource_flags(pdev, cnt))) break; diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c index 6e208a060a58..a5e317073d95 100644 --- a/drivers/misc/pci_endpoint_test.c +++ b/drivers/misc/pci_endpoint_test.c @@ -94,7 +94,7 @@ enum pci_barno { struct pci_endpoint_test { struct pci_dev *pdev; void __iomem *base; - void __iomem *bar[6]; + void __iomem *bar[PCI_STD_NUM_BARS]; struct completion irq_raised; int last_irq; int num_irqs; @@ -687,7 +687,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev, if (!pci_endpoint_test_request_irq(test)) goto err_disable_irq; - for (bar = BAR_0; bar <= BAR_5; bar++) { + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) { base = pci_ioremap_bar(pdev, bar); if (!base) { @@ -740,7 +740,7 @@ err_ida_remove: ida_simple_remove(&pci_endpoint_test_ida, id); err_iounmap: - for (bar = BAR_0; bar <= BAR_5; bar++) { + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { if (test->bar[bar]) pci_iounmap(pdev, test->bar[bar]); } @@ -771,7 +771,7 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev) misc_deregister(&test->miscdev); kfree(misc_device->name); ida_simple_remove(&pci_endpoint_test_ida, id); - for (bar = BAR_0; bar <= BAR_5; bar++) { + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { if (test->bar[bar]) pci_iounmap(pdev, test->bar[bar]); } diff --git a/drivers/net/ethernet/intel/e1000/e1000.h b/drivers/net/ethernet/intel/e1000/e1000.h index c40729b2c184..7fad2f24dcad 100644 --- a/drivers/net/ethernet/intel/e1000/e1000.h +++ b/drivers/net/ethernet/intel/e1000/e1000.h @@ -45,7 +45,6 @@ #define BAR_0 0 #define BAR_1 1 -#define BAR_5 5 #define INTEL_E1000_ETHERNET_DEVICE(device_id) {\ PCI_DEVICE(PCI_VENDOR_ID_INTEL, device_id)} diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c index 416da9619928..aca97b084003 100644 --- a/drivers/net/ethernet/intel/e1000/e1000_main.c +++ b/drivers/net/ethernet/intel/e1000/e1000_main.c @@ -977,7 +977,7 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto err_ioremap; if (adapter->need_ioport) { - for (i = BAR_1; i <= BAR_5; i++) { + for (i = BAR_1; i < PCI_STD_NUM_BARS; i++) { if (pci_resource_len(pdev, i) == 0) continue; if (pci_resource_flags(pdev, i) & IORESOURCE_IO) { diff --git a/drivers/net/ethernet/intel/ixgb/ixgb.h b/drivers/net/ethernet/intel/ixgb/ixgb.h index e85271b68410..681d44cc9784 100644 --- a/drivers/net/ethernet/intel/ixgb/ixgb.h +++ b/drivers/net/ethernet/intel/ixgb/ixgb.h @@ -42,7 +42,6 @@ #define BAR_0 0 #define BAR_1 1 -#define BAR_5 5 struct ixgb_adapter; #include "ixgb_hw.h" diff --git a/drivers/net/ethernet/intel/ixgb/ixgb_main.c b/drivers/net/ethernet/intel/ixgb/ixgb_main.c index 0940a0da16f2..3d8c051dd327 100644 --- a/drivers/net/ethernet/intel/ixgb/ixgb_main.c +++ b/drivers/net/ethernet/intel/ixgb/ixgb_main.c @@ -412,7 +412,7 @@ ixgb_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto err_ioremap; } - for (i = BAR_1; i <= 
BAR_5; i++) { + for (i = BAR_1; i < PCI_STD_NUM_BARS; i++) { if (pci_resource_len(pdev, i) == 0) continue; if (pci_resource_flags(pdev, i) & IORESOURCE_IO) { diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c index 292045f4581f..8237dbc3e991 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c @@ -489,7 +489,7 @@ static int stmmac_pci_probe(struct pci_dev *pdev, } /* Get the base address of device */ - for (i = 0; i <= PCI_STD_RESOURCE_END; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { if (pci_resource_len(pdev, i) == 0) continue; ret = pcim_iomap_regions(pdev, BIT(i), pci_name(pdev)); @@ -532,7 +532,7 @@ static void stmmac_pci_remove(struct pci_dev *pdev) if (priv->plat->stmmac_clk) clk_unregister_fixed_rate(priv->plat->stmmac_clk); - for (i = 0; i <= PCI_STD_RESOURCE_END; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { if (pci_resource_len(pdev, i) == 0) continue; pcim_iounmap_regions(pdev, BIT(i)); diff --git a/drivers/net/ethernet/synopsys/dwc-xlgmac-pci.c b/drivers/net/ethernet/synopsys/dwc-xlgmac-pci.c index 386bafe74c3f..fa8604d7b797 100644 --- a/drivers/net/ethernet/synopsys/dwc-xlgmac-pci.c +++ b/drivers/net/ethernet/synopsys/dwc-xlgmac-pci.c @@ -34,7 +34,7 @@ static int xlgmac_probe(struct pci_dev *pcidev, const struct pci_device_id *id) return ret; } - for (i = 0; i <= PCI_STD_RESOURCE_END; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { if (pci_resource_len(pcidev, i) == 0) continue; ret = pcim_iomap_regions(pcidev, BIT(i), XLGMAC_DRV_NAME); diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 6ec589268b9d..dfe37a525f3a 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -2412,16 +2412,6 @@ static const struct nvme_core_quirk_entry core_quirks[] = { .vid = 0x14a4, .fr = "22301111", .quirks = NVME_QUIRK_SIMPLE_SUSPEND, - }, - { - /* - * This Kingston E8FK11.T firmware version has no interrupt - * after resume with actions related to suspend to idle - * https://bugzilla.kernel.org/show_bug.cgi?id=204887 - */ - .vid = 0x2646, - .fr = "E8FK11.T", - .quirks = NVME_QUIRK_SIMPLE_SUSPEND, } }; diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig index a304f5ea11b9..4bef5c2bae9f 100644 --- a/drivers/pci/Kconfig +++ b/drivers/pci/Kconfig @@ -52,7 +52,7 @@ config PCI_MSI If you don't know what to do here, say Y. config PCI_MSI_IRQ_DOMAIN - def_bool ARC || ARM || ARM64 || X86 || RISCV + def_bool y depends on PCI_MSI select GENERIC_MSI_IRQ_DOMAIN @@ -106,14 +106,14 @@ config PCI_PF_STUB When in doubt, say N. config XEN_PCIDEV_FRONTEND - tristate "Xen PCI Frontend" - depends on X86 && XEN - select PCI_XEN + tristate "Xen PCI Frontend" + depends on X86 && XEN + select PCI_XEN select XEN_XENBUS_FRONTEND - default y - help - The PCI device frontend driver allows the kernel to import arbitrary - PCI devices from a PCI backend to support PCI driver domains. + default y + help + The PCI device frontend driver allows the kernel to import arbitrary + PCI devices from a PCI backend to support PCI driver domains. 
config PCI_ATS bool @@ -180,12 +180,12 @@ config PCI_LABEL select NLS config PCI_HYPERV - tristate "Hyper-V PCI Frontend" - depends on X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS + tristate "Hyper-V PCI Frontend" + depends on X86_64 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS select PCI_HYPERV_INTERFACE - help - The PCI device frontend driver allows the kernel to import arbitrary - PCI devices from a PCI backend to support PCI driver domains. + help + The PCI device frontend driver allows the kernel to import arbitrary + PCI devices from a PCI backend to support PCI driver domains. source "drivers/pci/hotplug/Kconfig" source "drivers/pci/controller/Kconfig" diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile index 28cdd8c0213a..522d2b974e91 100644 --- a/drivers/pci/Makefile +++ b/drivers/pci/Makefile @@ -7,6 +7,8 @@ obj-$(CONFIG_PCI) += access.o bus.o probe.o host-bridge.o \ pci-sysfs.o rom.o setup-res.o irq.o vpd.o \ setup-bus.o vc.o mmap.o setup-irq.o +obj-$(CONFIG_PCI) += pcie/ + ifdef CONFIG_PCI obj-$(CONFIG_PROC_FS) += proc.o obj-$(CONFIG_SYSFS) += slot.o @@ -15,7 +17,6 @@ endif obj-$(CONFIG_OF) += of.o obj-$(CONFIG_PCI_QUIRKS) += quirks.o -obj-$(CONFIG_PCIEPORTBUS) += pcie/ obj-$(CONFIG_HOTPLUG_PCI) += hotplug/ obj-$(CONFIG_PCI_MSI) += msi.o obj-$(CONFIG_PCI_ATS) += ats.o diff --git a/drivers/pci/access.c b/drivers/pci/access.c index 2fccb5762c76..79c4a2ef269a 100644 --- a/drivers/pci/access.c +++ b/drivers/pci/access.c @@ -355,7 +355,7 @@ static inline bool pcie_cap_has_sltctl(const struct pci_dev *dev) pcie_caps_reg(dev) & PCI_EXP_FLAGS_SLOT; } -static inline bool pcie_cap_has_rtctl(const struct pci_dev *dev) +bool pcie_cap_has_rtctl(const struct pci_dev *dev) { int type = pci_pcie_type(dev); diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c index e18499243f84..982b46f0a54d 100644 --- a/drivers/pci/ats.c +++ b/drivers/pci/ats.c @@ -60,8 +60,6 @@ int pci_enable_ats(struct pci_dev *dev, int ps) pdev = pci_physfn(dev); if (pdev->ats_stu != ps) return -EINVAL; - - atomic_inc(&pdev->ats_ref_cnt); /* count enabled VFs */ } else { dev->ats_stu = ps; ctrl |= PCI_ATS_CTRL_STU(dev->ats_stu - PCI_ATS_MIN_STU); @@ -71,7 +69,6 @@ int pci_enable_ats(struct pci_dev *dev, int ps) dev->ats_enabled = 1; return 0; } -EXPORT_SYMBOL_GPL(pci_enable_ats); /** * pci_disable_ats - disable the ATS capability @@ -79,27 +76,17 @@ EXPORT_SYMBOL_GPL(pci_enable_ats); */ void pci_disable_ats(struct pci_dev *dev) { - struct pci_dev *pdev; u16 ctrl; if (WARN_ON(!dev->ats_enabled)) return; - if (atomic_read(&dev->ats_ref_cnt)) - return; /* VFs still enabled */ - - if (dev->is_virtfn) { - pdev = pci_physfn(dev); - atomic_dec(&pdev->ats_ref_cnt); - } - pci_read_config_word(dev, dev->ats_cap + PCI_ATS_CTRL, &ctrl); ctrl &= ~PCI_ATS_CTRL_ENABLE; pci_write_config_word(dev, dev->ats_cap + PCI_ATS_CTRL, ctrl); dev->ats_enabled = 0; } -EXPORT_SYMBOL_GPL(pci_disable_ats); void pci_restore_ats_state(struct pci_dev *dev) { @@ -113,7 +100,6 @@ void pci_restore_ats_state(struct pci_dev *dev) ctrl |= PCI_ATS_CTRL_STU(dev->ats_stu - PCI_ATS_MIN_STU); pci_write_config_word(dev, dev->ats_cap + PCI_ATS_CTRL, ctrl); } -EXPORT_SYMBOL_GPL(pci_restore_ats_state); /** * pci_ats_queue_depth - query the ATS Invalidate Queue Depth @@ -140,7 +126,6 @@ int pci_ats_queue_depth(struct pci_dev *dev) pci_read_config_word(dev, dev->ats_cap + PCI_ATS_CAP, &cap); return PCI_ATS_CAP_QDEP(cap) ? 
PCI_ATS_CAP_QDEP(cap) : PCI_ATS_MAX_QDEP; } -EXPORT_SYMBOL_GPL(pci_ats_queue_depth); /** * pci_ats_page_aligned - Return Page Aligned Request bit status. @@ -167,9 +152,22 @@ int pci_ats_page_aligned(struct pci_dev *pdev) return 0; } -EXPORT_SYMBOL_GPL(pci_ats_page_aligned); #ifdef CONFIG_PCI_PRI +void pci_pri_init(struct pci_dev *pdev) +{ + u16 status; + + pdev->pri_cap = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); + + if (!pdev->pri_cap) + return; + + pci_read_config_word(pdev, pdev->pri_cap + PCI_PRI_STATUS, &status); + if (status & PCI_PRI_STATUS_PASID) + pdev->pasid_required = 1; +} + /** * pci_enable_pri - Enable PRI capability * @ pdev: PCI device structure @@ -180,32 +178,41 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs) { u16 control, status; u32 max_requests; - int pos; + int pri = pdev->pri_cap; + + /* + * VFs must not implement the PRI Capability. If their PF + * implements PRI, it is shared by the VFs, so if the PF PRI is + * enabled, it is also enabled for the VF. + */ + if (pdev->is_virtfn) { + if (pci_physfn(pdev)->pri_enabled) + return 0; + return -EINVAL; + } if (WARN_ON(pdev->pri_enabled)) return -EBUSY; - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); - if (!pos) + if (!pri) return -EINVAL; - pci_read_config_word(pdev, pos + PCI_PRI_STATUS, &status); + pci_read_config_word(pdev, pri + PCI_PRI_STATUS, &status); if (!(status & PCI_PRI_STATUS_STOPPED)) return -EBUSY; - pci_read_config_dword(pdev, pos + PCI_PRI_MAX_REQ, &max_requests); + pci_read_config_dword(pdev, pri + PCI_PRI_MAX_REQ, &max_requests); reqs = min(max_requests, reqs); pdev->pri_reqs_alloc = reqs; - pci_write_config_dword(pdev, pos + PCI_PRI_ALLOC_REQ, reqs); + pci_write_config_dword(pdev, pri + PCI_PRI_ALLOC_REQ, reqs); control = PCI_PRI_CTRL_ENABLE; - pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control); + pci_write_config_word(pdev, pri + PCI_PRI_CTRL, control); pdev->pri_enabled = 1; return 0; } -EXPORT_SYMBOL_GPL(pci_enable_pri); /** * pci_disable_pri - Disable PRI capability @@ -216,18 +223,21 @@ EXPORT_SYMBOL_GPL(pci_enable_pri); void pci_disable_pri(struct pci_dev *pdev) { u16 control; - int pos; + int pri = pdev->pri_cap; + + /* VFs share the PF PRI */ + if (pdev->is_virtfn) + return; if (WARN_ON(!pdev->pri_enabled)) return; - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); - if (!pos) + if (!pri) return; - pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control); + pci_read_config_word(pdev, pri + PCI_PRI_CTRL, &control); control &= ~PCI_PRI_CTRL_ENABLE; - pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control); + pci_write_config_word(pdev, pri + PCI_PRI_CTRL, control); pdev->pri_enabled = 0; } @@ -241,19 +251,20 @@ void pci_restore_pri_state(struct pci_dev *pdev) { u16 control = PCI_PRI_CTRL_ENABLE; u32 reqs = pdev->pri_reqs_alloc; - int pos; + int pri = pdev->pri_cap; + + if (pdev->is_virtfn) + return; if (!pdev->pri_enabled) return; - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); - if (!pos) + if (!pri) return; - pci_write_config_dword(pdev, pos + PCI_PRI_ALLOC_REQ, reqs); - pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control); + pci_write_config_dword(pdev, pri + PCI_PRI_ALLOC_REQ, reqs); + pci_write_config_word(pdev, pri + PCI_PRI_CTRL, control); } -EXPORT_SYMBOL_GPL(pci_restore_pri_state); /** * pci_reset_pri - Resets device's PRI state @@ -265,24 +276,45 @@ EXPORT_SYMBOL_GPL(pci_restore_pri_state); int pci_reset_pri(struct pci_dev *pdev) { u16 control; - int pos; + int pri = pdev->pri_cap; + + if (pdev->is_virtfn) + return 0; if 
(WARN_ON(pdev->pri_enabled)) return -EBUSY; - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); - if (!pos) + if (!pri) return -EINVAL; control = PCI_PRI_CTRL_RESET; - pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control); + pci_write_config_word(pdev, pri + PCI_PRI_CTRL, control); return 0; } -EXPORT_SYMBOL_GPL(pci_reset_pri); + +/** + * pci_prg_resp_pasid_required - Return PRG Response PASID Required bit + * status. + * @pdev: PCI device structure + * + * Returns 1 if PASID is required in PRG Response Message, 0 otherwise. + */ +int pci_prg_resp_pasid_required(struct pci_dev *pdev) +{ + if (pdev->is_virtfn) + pdev = pci_physfn(pdev); + + return pdev->pasid_required; +} #endif /* CONFIG_PCI_PRI */ #ifdef CONFIG_PCI_PASID +void pci_pasid_init(struct pci_dev *pdev) +{ + pdev->pasid_cap = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID); +} + /** * pci_enable_pasid - Enable the PASID capability * @pdev: PCI device structure @@ -295,7 +327,17 @@ EXPORT_SYMBOL_GPL(pci_reset_pri); int pci_enable_pasid(struct pci_dev *pdev, int features) { u16 control, supported; - int pos; + int pasid = pdev->pasid_cap; + + /* + * VFs must not implement the PASID Capability, but if a PF + * supports PASID, its VFs share the PF PASID configuration. + */ + if (pdev->is_virtfn) { + if (pci_physfn(pdev)->pasid_enabled) + return 0; + return -EINVAL; + } if (WARN_ON(pdev->pasid_enabled)) return -EBUSY; @@ -303,11 +345,10 @@ int pci_enable_pasid(struct pci_dev *pdev, int features) if (!pdev->eetlp_prefix_path) return -EINVAL; - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID); - if (!pos) + if (!pasid) return -EINVAL; - pci_read_config_word(pdev, pos + PCI_PASID_CAP, &supported); + pci_read_config_word(pdev, pasid + PCI_PASID_CAP, &supported); supported &= PCI_PASID_CAP_EXEC | PCI_PASID_CAP_PRIV; /* User wants to enable anything unsupported? 
*/ @@ -317,13 +358,12 @@ int pci_enable_pasid(struct pci_dev *pdev, int features) control = PCI_PASID_CTRL_ENABLE | features; pdev->pasid_features = features; - pci_write_config_word(pdev, pos + PCI_PASID_CTRL, control); + pci_write_config_word(pdev, pasid + PCI_PASID_CTRL, control); pdev->pasid_enabled = 1; return 0; } -EXPORT_SYMBOL_GPL(pci_enable_pasid); /** * pci_disable_pasid - Disable the PASID capability @@ -332,20 +372,22 @@ EXPORT_SYMBOL_GPL(pci_enable_pasid); void pci_disable_pasid(struct pci_dev *pdev) { u16 control = 0; - int pos; + int pasid = pdev->pasid_cap; + + /* VFs share the PF PASID configuration */ + if (pdev->is_virtfn) + return; if (WARN_ON(!pdev->pasid_enabled)) return; - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID); - if (!pos) + if (!pasid) return; - pci_write_config_word(pdev, pos + PCI_PASID_CTRL, control); + pci_write_config_word(pdev, pasid + PCI_PASID_CTRL, control); pdev->pasid_enabled = 0; } -EXPORT_SYMBOL_GPL(pci_disable_pasid); /** * pci_restore_pasid_state - Restore PASID capabilities @@ -354,19 +396,20 @@ EXPORT_SYMBOL_GPL(pci_disable_pasid); void pci_restore_pasid_state(struct pci_dev *pdev) { u16 control; - int pos; + int pasid = pdev->pasid_cap; + + if (pdev->is_virtfn) + return; if (!pdev->pasid_enabled) return; - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID); - if (!pos) + if (!pasid) return; control = PCI_PASID_CTRL_ENABLE | pdev->pasid_features; - pci_write_config_word(pdev, pos + PCI_PASID_CTRL, control); + pci_write_config_word(pdev, pasid + PCI_PASID_CTRL, control); } -EXPORT_SYMBOL_GPL(pci_restore_pasid_state); /** * pci_pasid_features - Check which PASID features are supported @@ -381,49 +424,20 @@ EXPORT_SYMBOL_GPL(pci_restore_pasid_state); int pci_pasid_features(struct pci_dev *pdev) { u16 supported; - int pos; + int pasid = pdev->pasid_cap; - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID); - if (!pos) + if (pdev->is_virtfn) + pdev = pci_physfn(pdev); + + if (!pasid) return -EINVAL; - pci_read_config_word(pdev, pos + PCI_PASID_CAP, &supported); + pci_read_config_word(pdev, pasid + PCI_PASID_CAP, &supported); supported &= PCI_PASID_CAP_EXEC | PCI_PASID_CAP_PRIV; return supported; } -EXPORT_SYMBOL_GPL(pci_pasid_features); - -/** - * pci_prg_resp_pasid_required - Return PRG Response PASID Required bit - * status. - * @pdev: PCI device structure - * - * Returns 1 if PASID is required in PRG Response Message, 0 otherwise. - * - * Even though the PRG response PASID status is read from PRI Status - * Register, since this API will mainly be used by PASID users, this - * function is defined within #ifdef CONFIG_PCI_PASID instead of - * CONFIG_PCI_PRI. 
- */ -int pci_prg_resp_pasid_required(struct pci_dev *pdev) -{ - u16 status; - int pos; - - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); - if (!pos) - return 0; - - pci_read_config_word(pdev, pos + PCI_PRI_STATUS, &status); - - if (status & PCI_PRI_STATUS_PASID) - return 1; - - return 0; -} -EXPORT_SYMBOL_GPL(pci_prg_resp_pasid_required); #define PASID_NUMBER_SHIFT 8 #define PASID_NUMBER_MASK (0x1f << PASID_NUMBER_SHIFT) @@ -437,17 +451,18 @@ EXPORT_SYMBOL_GPL(pci_prg_resp_pasid_required); int pci_max_pasids(struct pci_dev *pdev) { u16 supported; - int pos; + int pasid = pdev->pasid_cap; - pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID); - if (!pos) + if (pdev->is_virtfn) + pdev = pci_physfn(pdev); + + if (!pasid) return -EINVAL; - pci_read_config_word(pdev, pos + PCI_PASID_CAP, &supported); + pci_read_config_word(pdev, pasid + PCI_PASID_CAP, &supported); supported = (supported & PASID_NUMBER_MASK) >> PASID_NUMBER_SHIFT; return (1 << supported); } -EXPORT_SYMBOL_GPL(pci_max_pasids); #endif /* CONFIG_PCI_PASID */ diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig index 70e078238899..c77069c8ee5d 100644 --- a/drivers/pci/controller/Kconfig +++ b/drivers/pci/controller/Kconfig @@ -22,34 +22,6 @@ config PCI_AARDVARK controller is part of the South Bridge of the Marvel Armada 3700 SoC. -menu "Cadence PCIe controllers support" - -config PCIE_CADENCE - bool - -config PCIE_CADENCE_HOST - bool "Cadence PCIe host controller" - depends on OF - depends on PCI - select IRQ_DOMAIN - select PCIE_CADENCE - help - Say Y here if you want to support the Cadence PCIe controller in host - mode. This PCIe controller may be embedded into many different vendors - SoCs. - -config PCIE_CADENCE_EP - bool "Cadence PCIe endpoint controller" - depends on OF - depends on PCI_ENDPOINT - select PCIE_CADENCE - help - Say Y here if you want to support the Cadence PCIe controller in - endpoint mode. This PCIe controller may be embedded into many - different vendors SoCs. - -endmenu - config PCIE_XILINX_NWL bool "NWL PCIe Core" depends on ARCH_ZYNQMP || COMPILE_TEST @@ -135,7 +107,7 @@ config PCI_V3_SEMI config PCI_VERSATILE bool "ARM Versatile PB PCI controller" - depends on ARCH_VERSATILE + depends on ARCH_VERSATILE || COMPILE_TEST config PCIE_IPROC tristate @@ -289,4 +261,5 @@ config PCI_HYPERV_INTERFACE have a common interface with the Hyper-V PCI frontend driver. 
source "drivers/pci/controller/dwc/Kconfig" +source "drivers/pci/controller/cadence/Kconfig" endmenu diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile index a2a22c9d91af..3d4f597f15ce 100644 --- a/drivers/pci/controller/Makefile +++ b/drivers/pci/controller/Makefile @@ -1,7 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o -obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o -obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o +obj-$(CONFIG_PCIE_CADENCE) += cadence/ obj-$(CONFIG_PCI_FTPCI100) += pci-ftpci100.o obj-$(CONFIG_PCI_HYPERV) += pci-hyperv.o obj-$(CONFIG_PCI_HYPERV_INTERFACE) += pci-hyperv-intf.o diff --git a/drivers/pci/controller/cadence/Kconfig b/drivers/pci/controller/cadence/Kconfig new file mode 100644 index 000000000000..b76b3cf55ce5 --- /dev/null +++ b/drivers/pci/controller/cadence/Kconfig @@ -0,0 +1,45 @@ +# SPDX-License-Identifier: GPL-2.0 + +menu "Cadence PCIe controllers support" + depends on PCI + +config PCIE_CADENCE + bool + +config PCIE_CADENCE_HOST + bool + depends on OF + select IRQ_DOMAIN + select PCIE_CADENCE + +config PCIE_CADENCE_EP + bool + depends on OF + depends on PCI_ENDPOINT + select PCIE_CADENCE + +config PCIE_CADENCE_PLAT + bool + +config PCIE_CADENCE_PLAT_HOST + bool "Cadence PCIe platform host controller" + depends on OF + select PCIE_CADENCE_HOST + select PCIE_CADENCE_PLAT + help + Say Y here if you want to support the Cadence PCIe platform controller in + host mode. This PCIe controller may be embedded into many different + vendors SoCs. + +config PCIE_CADENCE_PLAT_EP + bool "Cadence PCIe platform endpoint controller" + depends on OF + depends on PCI_ENDPOINT + select PCIE_CADENCE_EP + select PCIE_CADENCE_PLAT + help + Say Y here if you want to support the Cadence PCIe platform controller in + endpoint mode. This PCIe controller may be embedded into many + different vendors SoCs. + +endmenu diff --git a/drivers/pci/controller/cadence/Makefile b/drivers/pci/controller/cadence/Makefile new file mode 100644 index 000000000000..232a3f20876a --- /dev/null +++ b/drivers/pci/controller/cadence/Makefile @@ -0,0 +1,5 @@ +# SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o +obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o +obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o +obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o diff --git a/drivers/pci/controller/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c index def7820cb824..1c173dad67d1 100644 --- a/drivers/pci/controller/pcie-cadence-ep.c +++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c @@ -17,35 +17,6 @@ #define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE 0x1 #define CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY 0x3 -/** - * struct cdns_pcie_ep - private data for this PCIe endpoint controller driver - * @pcie: Cadence PCIe controller - * @max_regions: maximum number of regions supported by hardware - * @ob_region_map: bitmask of mapped outbound regions - * @ob_addr: base addresses in the AXI bus where the outbound regions start - * @irq_phys_addr: base address on the AXI bus where the MSI/legacy IRQ - * dedicated outbound regions is mapped. - * @irq_cpu_addr: base address in the CPU space where a write access triggers - * the sending of a memory write (MSI) / normal message (legacy - * IRQ) TLP through the PCIe bus. - * @irq_pci_addr: used to save the current mapping of the MSI/legacy IRQ - * dedicated outbound region. 
- * @irq_pci_fn: the latest PCI function that has updated the mapping of - * the MSI/legacy IRQ dedicated outbound region. - * @irq_pending: bitmask of asserted legacy IRQs. - */ -struct cdns_pcie_ep { - struct cdns_pcie pcie; - u32 max_regions; - unsigned long ob_region_map; - phys_addr_t *ob_addr; - phys_addr_t irq_phys_addr; - void __iomem *irq_cpu_addr; - u64 irq_pci_addr; - u8 irq_pci_fn; - u8 irq_pending; -}; - static int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn, struct pci_epf_header *hdr) { @@ -424,28 +395,17 @@ static const struct pci_epc_ops cdns_pcie_epc_ops = { .get_features = cdns_pcie_ep_get_features, }; -static const struct of_device_id cdns_pcie_ep_of_match[] = { - { .compatible = "cdns,cdns-pcie-ep" }, - - { }, -}; -static int cdns_pcie_ep_probe(struct platform_device *pdev) +int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) { - struct device *dev = &pdev->dev; + struct device *dev = ep->pcie.dev; + struct platform_device *pdev = to_platform_device(dev); struct device_node *np = dev->of_node; - struct cdns_pcie_ep *ep; - struct cdns_pcie *pcie; - struct pci_epc *epc; + struct cdns_pcie *pcie = &ep->pcie; struct resource *res; + struct pci_epc *epc; int ret; - int phy_count; - - ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL); - if (!ep) - return -ENOMEM; - pcie = &ep->pcie; pcie->is_rc = false; res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg"); @@ -474,19 +434,6 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev) if (!ep->ob_addr) return -ENOMEM; - ret = cdns_pcie_init_phy(dev, pcie); - if (ret) { - dev_err(dev, "failed to init phy\n"); - return ret; - } - platform_set_drvdata(pdev, pcie); - pm_runtime_enable(dev); - ret = pm_runtime_get_sync(dev); - if (ret < 0) { - dev_err(dev, "pm_runtime_get_sync() failed\n"); - goto err_get_sync; - } - /* Disable all but function 0 (anyway BIT(0) is hardwired to 1). 
*/ cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0)); @@ -528,38 +475,5 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev) err_init: pm_runtime_put_sync(dev); - err_get_sync: - pm_runtime_disable(dev); - cdns_pcie_disable_phy(pcie); - phy_count = pcie->phy_count; - while (phy_count--) - device_link_del(pcie->link[phy_count]); - return ret; } - -static void cdns_pcie_ep_shutdown(struct platform_device *pdev) -{ - struct device *dev = &pdev->dev; - struct cdns_pcie *pcie = dev_get_drvdata(dev); - int ret; - - ret = pm_runtime_put_sync(dev); - if (ret < 0) - dev_dbg(dev, "pm_runtime_put_sync failed\n"); - - pm_runtime_disable(dev); - - cdns_pcie_disable_phy(pcie); -} - -static struct platform_driver cdns_pcie_ep_driver = { - .driver = { - .name = "cdns-pcie-ep", - .of_match_table = cdns_pcie_ep_of_match, - .pm = &cdns_pcie_pm_ops, - }, - .probe = cdns_pcie_ep_probe, - .shutdown = cdns_pcie_ep_shutdown, -}; -builtin_platform_driver(cdns_pcie_ep_driver); diff --git a/drivers/pci/controller/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c index 97e251090b4f..9b1c3966414b 100644 --- a/drivers/pci/controller/pcie-cadence-host.c +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c @@ -11,33 +11,6 @@ #include "pcie-cadence.h" -/** - * struct cdns_pcie_rc - private data for this PCIe Root Complex driver - * @pcie: Cadence PCIe controller - * @dev: pointer to PCIe device - * @cfg_res: start/end offsets in the physical system memory to map PCI - * configuration space accesses - * @bus_range: first/last buses behind the PCIe host controller - * @cfg_base: IO mapped window to access the PCI configuration space of a - * single function at a time - * @max_regions: maximum number of regions supported by the hardware - * @no_bar_nbits: Number of bits to keep for inbound (PCIe -> CPU) address - * translation (nbits sets into the "no BAR match" register) - * @vendor_id: PCI vendor ID - * @device_id: PCI device ID - */ -struct cdns_pcie_rc { - struct cdns_pcie pcie; - struct device *dev; - struct resource *cfg_res; - struct resource *bus_range; - void __iomem *cfg_base; - u32 max_regions; - u32 no_bar_nbits; - u16 vendor_id; - u16 device_id; -}; - static void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where) { @@ -92,11 +65,6 @@ static struct pci_ops cdns_pcie_host_ops = { .write = pci_generic_config_write, }; -static const struct of_device_id cdns_pcie_host_of_match[] = { - { .compatible = "cdns,cdns-pcie-host" }, - - { }, -}; static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc) { @@ -136,10 +104,10 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc) static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc) { struct cdns_pcie *pcie = &rc->pcie; - struct resource *cfg_res = rc->cfg_res; struct resource *mem_res = pcie->mem_res; struct resource *bus_range = rc->bus_range; - struct device *dev = rc->dev; + struct resource *cfg_res = rc->cfg_res; + struct device *dev = pcie->dev; struct device_node *np = dev->of_node; struct of_pci_range_parser parser; struct of_pci_range range; @@ -211,7 +179,7 @@ static int cdns_pcie_host_init(struct device *dev, int err; /* Parse our PCI ranges and request their resources */ - err = pci_parse_request_of_pci_ranges(dev, resources, &bus_range); + err = pci_parse_request_of_pci_ranges(dev, resources, NULL, &bus_range); if (err) return err; @@ -233,25 +201,21 @@ static int cdns_pcie_host_init(struct device *dev, return err; } -static int 
cdns_pcie_host_probe(struct platform_device *pdev) +int cdns_pcie_host_setup(struct cdns_pcie_rc *rc) { - struct device *dev = &pdev->dev; + struct device *dev = rc->pcie.dev; + struct platform_device *pdev = to_platform_device(dev); struct device_node *np = dev->of_node; struct pci_host_bridge *bridge; struct list_head resources; - struct cdns_pcie_rc *rc; struct cdns_pcie *pcie; struct resource *res; int ret; - int phy_count; - bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc)); + bridge = pci_host_bridge_from_priv(rc); if (!bridge) return -ENOMEM; - rc = pci_host_bridge_priv(bridge); - rc->dev = dev; - pcie = &rc->pcie; pcie->is_rc = true; @@ -287,21 +251,8 @@ static int cdns_pcie_host_probe(struct platform_device *pdev) dev_err(dev, "missing \"mem\"\n"); return -EINVAL; } - pcie->mem_res = res; - ret = cdns_pcie_init_phy(dev, pcie); - if (ret) { - dev_err(dev, "failed to init phy\n"); - return ret; - } - platform_set_drvdata(pdev, pcie); - - pm_runtime_enable(dev); - ret = pm_runtime_get_sync(dev); - if (ret < 0) { - dev_err(dev, "pm_runtime_get_sync() failed\n"); - goto err_get_sync; - } + pcie->mem_res = res; ret = cdns_pcie_host_init(dev, &resources, rc); if (ret) @@ -326,37 +277,5 @@ static int cdns_pcie_host_probe(struct platform_device *pdev) err_init: pm_runtime_put_sync(dev); - err_get_sync: - pm_runtime_disable(dev); - cdns_pcie_disable_phy(pcie); - phy_count = pcie->phy_count; - while (phy_count--) - device_link_del(pcie->link[phy_count]); - return ret; } - -static void cdns_pcie_shutdown(struct platform_device *pdev) -{ - struct device *dev = &pdev->dev; - struct cdns_pcie *pcie = dev_get_drvdata(dev); - int ret; - - ret = pm_runtime_put_sync(dev); - if (ret < 0) - dev_dbg(dev, "pm_runtime_put_sync failed\n"); - - pm_runtime_disable(dev); - cdns_pcie_disable_phy(pcie); -} - -static struct platform_driver cdns_pcie_host_driver = { - .driver = { - .name = "cdns-pcie-host", - .of_match_table = cdns_pcie_host_of_match, - .pm = &cdns_pcie_pm_ops, - }, - .probe = cdns_pcie_host_probe, - .shutdown = cdns_pcie_shutdown, -}; -builtin_platform_driver(cdns_pcie_host_driver); diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c new file mode 100644 index 000000000000..f5c6bf6dfcb8 --- /dev/null +++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c @@ -0,0 +1,174 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Cadence PCIe platform driver. + * + * Copyright (c) 2019, Cadence Design Systems + * Author: Tom Joseph <tjoseph@cadence.com> + */ +#include <linux/kernel.h> +#include <linux/of_address.h> +#include <linux/of_pci.h> +#include <linux/platform_device.h> +#include <linux/pm_runtime.h> +#include <linux/of_device.h> +#include "pcie-cadence.h" + +/** + * struct cdns_plat_pcie - private data for this PCIe platform driver + * @pcie: Cadence PCIe controller + * @is_rc: Set to 1 indicates the PCIe controller mode is Root Complex, + * if 0 it is in Endpoint mode. 
+ */ +struct cdns_plat_pcie { + struct cdns_pcie *pcie; + bool is_rc; +}; + +struct cdns_plat_pcie_of_data { + bool is_rc; +}; + +static const struct of_device_id cdns_plat_pcie_of_match[]; + +static int cdns_plat_pcie_probe(struct platform_device *pdev) +{ + const struct cdns_plat_pcie_of_data *data; + struct cdns_plat_pcie *cdns_plat_pcie; + const struct of_device_id *match; + struct device *dev = &pdev->dev; + struct pci_host_bridge *bridge; + struct cdns_pcie_ep *ep; + struct cdns_pcie_rc *rc; + int phy_count; + bool is_rc; + int ret; + + match = of_match_device(cdns_plat_pcie_of_match, dev); + if (!match) + return -EINVAL; + + data = (struct cdns_plat_pcie_of_data *)match->data; + is_rc = data->is_rc; + + pr_debug(" Started %s with is_rc: %d\n", __func__, is_rc); + cdns_plat_pcie = devm_kzalloc(dev, sizeof(*cdns_plat_pcie), GFP_KERNEL); + if (!cdns_plat_pcie) + return -ENOMEM; + + platform_set_drvdata(pdev, cdns_plat_pcie); + if (is_rc) { + if (!IS_ENABLED(CONFIG_PCIE_CADENCE_PLAT_HOST)) + return -ENODEV; + + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc)); + if (!bridge) + return -ENOMEM; + + rc = pci_host_bridge_priv(bridge); + rc->pcie.dev = dev; + cdns_plat_pcie->pcie = &rc->pcie; + cdns_plat_pcie->is_rc = is_rc; + + ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie); + if (ret) { + dev_err(dev, "failed to init phy\n"); + return ret; + } + pm_runtime_enable(dev); + ret = pm_runtime_get_sync(dev); + if (ret < 0) { + dev_err(dev, "pm_runtime_get_sync() failed\n"); + goto err_get_sync; + } + + ret = cdns_pcie_host_setup(rc); + if (ret) + goto err_init; + } else { + if (!IS_ENABLED(CONFIG_PCIE_CADENCE_PLAT_EP)) + return -ENODEV; + + ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL); + if (!ep) + return -ENOMEM; + + ep->pcie.dev = dev; + cdns_plat_pcie->pcie = &ep->pcie; + cdns_plat_pcie->is_rc = is_rc; + + ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie); + if (ret) { + dev_err(dev, "failed to init phy\n"); + return ret; + } + + pm_runtime_enable(dev); + ret = pm_runtime_get_sync(dev); + if (ret < 0) { + dev_err(dev, "pm_runtime_get_sync() failed\n"); + goto err_get_sync; + } + + ret = cdns_pcie_ep_setup(ep); + if (ret) + goto err_init; + } + + err_init: + pm_runtime_put_sync(dev); + + err_get_sync: + pm_runtime_disable(dev); + cdns_pcie_disable_phy(cdns_plat_pcie->pcie); + phy_count = cdns_plat_pcie->pcie->phy_count; + while (phy_count--) + device_link_del(cdns_plat_pcie->pcie->link[phy_count]); + + return 0; +} + +static void cdns_plat_pcie_shutdown(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct cdns_pcie *pcie = dev_get_drvdata(dev); + int ret; + + ret = pm_runtime_put_sync(dev); + if (ret < 0) + dev_dbg(dev, "pm_runtime_put_sync failed\n"); + + pm_runtime_disable(dev); + + cdns_pcie_disable_phy(pcie); +} + +static const struct cdns_plat_pcie_of_data cdns_plat_pcie_host_of_data = { + .is_rc = true, +}; + +static const struct cdns_plat_pcie_of_data cdns_plat_pcie_ep_of_data = { + .is_rc = false, +}; + +static const struct of_device_id cdns_plat_pcie_of_match[] = { + { + .compatible = "cdns,cdns-pcie-host", + .data = &cdns_plat_pcie_host_of_data, + }, + { + .compatible = "cdns,cdns-pcie-ep", + .data = &cdns_plat_pcie_ep_of_data, + }, + {}, +}; + +static struct platform_driver cdns_plat_pcie_driver = { + .driver = { + .name = "cdns-pcie", + .of_match_table = cdns_plat_pcie_of_match, + .pm = &cdns_pcie_pm_ops, + }, + .probe = cdns_plat_pcie_probe, + .shutdown = cdns_plat_pcie_shutdown, +}; +builtin_platform_driver(cdns_plat_pcie_driver); diff --git 
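The platform glue above is the first consumer of the Cadence code as a library: the controller mode comes from OF match data, each mode is gated with IS_ENABLED(), and the actual bring-up is delegated to cdns_pcie_host_setup()/cdns_pcie_ep_setup(). A minimal, hypothetical sketch of what an SoC-specific wrapper built on the same library could look like (clock, reset and PHY plumbing omitted):

	static int example_soc_pcie_probe(struct platform_device *pdev)
	{
		struct device *dev = &pdev->dev;
		struct pci_host_bridge *bridge;
		struct cdns_pcie_rc *rc;

		bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
		if (!bridge)
			return -ENOMEM;

		rc = pci_host_bridge_priv(bridge);
		rc->pcie.dev = dev;

		/* SoC-specific clock, reset and PHY setup would go here. */

		return cdns_pcie_host_setup(rc);
	}
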
a/drivers/pci/controller/pcie-cadence.c b/drivers/pci/controller/cadence/pcie-cadence.c index cd795f6fc1e2..cd795f6fc1e2 100644 --- a/drivers/pci/controller/pcie-cadence.c +++ b/drivers/pci/controller/cadence/pcie-cadence.c diff --git a/drivers/pci/controller/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h index ae6bf2a2b3d3..a2b28b912ca4 100644 --- a/drivers/pci/controller/pcie-cadence.h +++ b/drivers/pci/controller/cadence/pcie-cadence.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0 +/* SPDX-License-Identifier: GPL-2.0 */ // Copyright (c) 2017 Cadence // Cadence PCIe controller driver. // Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com> @@ -190,6 +190,8 @@ enum cdns_pcie_rp_bar { (((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK) #define CDNS_PCIE_MSG_NO_DATA BIT(16) +struct cdns_pcie; + enum cdns_pcie_msg_code { MSG_CODE_ASSERT_INTA = 0x20, MSG_CODE_ASSERT_INTB = 0x21, @@ -231,13 +233,71 @@ enum cdns_pcie_msg_routing { struct cdns_pcie { void __iomem *reg_base; struct resource *mem_res; + struct device *dev; bool is_rc; u8 bus; int phy_count; struct phy **phy; struct device_link **link; + const struct cdns_pcie_common_ops *ops; +}; + +/** + * struct cdns_pcie_rc - private data for this PCIe Root Complex driver + * @pcie: Cadence PCIe controller + * @dev: pointer to PCIe device + * @cfg_res: start/end offsets in the physical system memory to map PCI + * configuration space accesses + * @bus_range: first/last buses behind the PCIe host controller + * @cfg_base: IO mapped window to access the PCI configuration space of a + * single function at a time + * @max_regions: maximum number of regions supported by the hardware + * @no_bar_nbits: Number of bits to keep for inbound (PCIe -> CPU) address + * translation (nbits sets into the "no BAR match" register) + * @vendor_id: PCI vendor ID + * @device_id: PCI device ID + */ +struct cdns_pcie_rc { + struct cdns_pcie pcie; + struct resource *cfg_res; + struct resource *bus_range; + void __iomem *cfg_base; + u32 max_regions; + u32 no_bar_nbits; + u16 vendor_id; + u16 device_id; }; +/** + * struct cdns_pcie_ep - private data for this PCIe endpoint controller driver + * @pcie: Cadence PCIe controller + * @max_regions: maximum number of regions supported by hardware + * @ob_region_map: bitmask of mapped outbound regions + * @ob_addr: base addresses in the AXI bus where the outbound regions start + * @irq_phys_addr: base address on the AXI bus where the MSI/legacy IRQ + * dedicated outbound regions is mapped. + * @irq_cpu_addr: base address in the CPU space where a write access triggers + * the sending of a memory write (MSI) / normal message (legacy + * IRQ) TLP through the PCIe bus. + * @irq_pci_addr: used to save the current mapping of the MSI/legacy IRQ + * dedicated outbound region. + * @irq_pci_fn: the latest PCI function that has updated the mapping of + * the MSI/legacy IRQ dedicated outbound region. + * @irq_pending: bitmask of asserted legacy IRQs. 
+ */ +struct cdns_pcie_ep { + struct cdns_pcie pcie; + u32 max_regions; + unsigned long ob_region_map; + phys_addr_t *ob_addr; + phys_addr_t irq_phys_addr; + void __iomem *irq_cpu_addr; + u64 irq_pci_addr; + u8 irq_pci_fn; + u8 irq_pending; +}; + + /* Register access */ static inline void cdns_pcie_writeb(struct cdns_pcie *pcie, u32 reg, u8 value) { @@ -306,6 +366,23 @@ static inline u32 cdns_pcie_ep_fn_readl(struct cdns_pcie *pcie, u8 fn, u32 reg) return readl(pcie->reg_base + CDNS_PCIE_EP_FUNC_BASE(fn) + reg); } +#ifdef CONFIG_PCIE_CADENCE_HOST +int cdns_pcie_host_setup(struct cdns_pcie_rc *rc); +#else +static inline int cdns_pcie_host_setup(struct cdns_pcie_rc *rc) +{ + return 0; +} +#endif + +#ifdef CONFIG_PCIE_CADENCE_EP +int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep); +#else +static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) +{ + return 0; +} +#endif void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 fn, u32 r, bool is_io, u64 cpu_addr, u64 pci_addr, size_t size); diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig index 0ba988b5b5bc..625a031b2193 100644 --- a/drivers/pci/controller/dwc/Kconfig +++ b/drivers/pci/controller/dwc/Kconfig @@ -7,9 +7,9 @@ config PCIE_DW bool config PCIE_DW_HOST - bool + bool depends on PCI_MSI_IRQ_DOMAIN - select PCIE_DW + select PCIE_DW config PCIE_DW_EP bool @@ -224,7 +224,7 @@ config PCIE_HISI_STB depends on PCI_MSI_IRQ_DOMAIN select PCIE_DW_HOST help - Say Y here if you want PCIe controller support on HiSilicon STB SoCs + Say Y here if you want PCIe controller support on HiSilicon STB SoCs config PCI_MESON bool "MESON PCIe controller" diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c index 4234ddb4722f..b20651cea09f 100644 --- a/drivers/pci/controller/dwc/pci-dra7xx.c +++ b/drivers/pci/controller/dwc/pci-dra7xx.c @@ -353,7 +353,7 @@ static void dra7xx_pcie_ep_init(struct dw_pcie_ep *ep) struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci); enum pci_barno bar; - for (bar = BAR_0; bar <= BAR_5; bar++) + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) dw_pcie_ep_reset_bar(pci, bar); dra7xx_pcie_enable_wrapper_interrupts(dra7xx); diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c b/drivers/pci/controller/dwc/pci-layerscape-ep.c index ca9aa4501e7e..0d151cead1b7 100644 --- a/drivers/pci/controller/dwc/pci-layerscape-ep.c +++ b/drivers/pci/controller/dwc/pci-layerscape-ep.c @@ -58,7 +58,7 @@ static void ls_pcie_ep_init(struct dw_pcie_ep *ep) struct dw_pcie *pci = to_dw_pcie_from_ep(ep); enum pci_barno bar; - for (bar = BAR_0; bar <= BAR_5; bar++) + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) dw_pcie_ep_reset_bar(pci, bar); } diff --git a/drivers/pci/controller/dwc/pci-layerscape.c b/drivers/pci/controller/dwc/pci-layerscape.c index 3a5fa26d5e56..f24f79a70d9a 100644 --- a/drivers/pci/controller/dwc/pci-layerscape.c +++ b/drivers/pci/controller/dwc/pci-layerscape.c @@ -263,6 +263,7 @@ static const struct ls_pcie_drvdata ls2088_drvdata = { static const struct of_device_id ls_pcie_of_match[] = { { .compatible = "fsl,ls1012a-pcie", .data = &ls1046_drvdata }, { .compatible = "fsl,ls1021a-pcie", .data = &ls1021_drvdata }, + { .compatible = "fsl,ls1028a-pcie", .data = &ls2088_drvdata }, { .compatible = "fsl,ls1043a-pcie", .data = &ls1043_drvdata }, { .compatible = "fsl,ls1046a-pcie", .data = &ls1046_drvdata }, { .compatible = "fsl,ls2080a-pcie", .data = &ls2080_drvdata }, diff --git a/drivers/pci/controller/dwc/pci-meson.c b/drivers/pci/controller/dwc/pci-meson.c 
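The BAR-loop conversions in the DWC endpoint drivers (dra7xx and layerscape above, artpec6 and pcie-designware-plat further down) all take the same shape: the open-coded BAR_0..BAR_5 bounds become PCI_STD_NUM_BARS, the new constant for the six standard BARs. A condensed sketch, assuming the driver-local pcie-designware.h declarations:

	static void example_reset_std_bars(struct dw_pcie *pci)
	{
		enum pci_barno bar;

		for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
			dw_pcie_ep_reset_bar(pci, bar);
	}
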
index e35e9eaa50ee..3772b02a5c55 100644 --- a/drivers/pci/controller/dwc/pci-meson.c +++ b/drivers/pci/controller/dwc/pci-meson.c @@ -16,6 +16,7 @@ #include <linux/reset.h> #include <linux/resource.h> #include <linux/types.h> +#include <linux/phy/phy.h> #include "pcie-designware.h" @@ -96,12 +97,18 @@ struct meson_pcie_rc_reset { struct reset_control *apb; }; +struct meson_pcie_param { + bool has_shared_phy; +}; + struct meson_pcie { struct dw_pcie pci; struct meson_pcie_mem_res mem_res; struct meson_pcie_clk_res clk_res; struct meson_pcie_rc_reset mrst; struct gpio_desc *reset_gpio; + struct phy *phy; + const struct meson_pcie_param *param; }; static struct reset_control *meson_pcie_get_reset(struct meson_pcie *mp, @@ -123,10 +130,12 @@ static int meson_pcie_get_resets(struct meson_pcie *mp) { struct meson_pcie_rc_reset *mrst = &mp->mrst; - mrst->phy = meson_pcie_get_reset(mp, "phy", PCIE_SHARED_RESET); - if (IS_ERR(mrst->phy)) - return PTR_ERR(mrst->phy); - reset_control_deassert(mrst->phy); + if (!mp->param->has_shared_phy) { + mrst->phy = meson_pcie_get_reset(mp, "phy", PCIE_SHARED_RESET); + if (IS_ERR(mrst->phy)) + return PTR_ERR(mrst->phy); + reset_control_deassert(mrst->phy); + } mrst->port = meson_pcie_get_reset(mp, "port", PCIE_NORMAL_RESET); if (IS_ERR(mrst->port)) @@ -180,27 +189,52 @@ static int meson_pcie_get_mems(struct platform_device *pdev, if (IS_ERR(mp->mem_res.cfg_base)) return PTR_ERR(mp->mem_res.cfg_base); - /* Meson SoC has two PCI controllers use same phy register*/ - mp->mem_res.phy_base = meson_pcie_get_mem_shared(pdev, mp, "phy"); - if (IS_ERR(mp->mem_res.phy_base)) - return PTR_ERR(mp->mem_res.phy_base); + /* Meson AXG SoC has two PCI controllers use same phy register */ + if (!mp->param->has_shared_phy) { + mp->mem_res.phy_base = + meson_pcie_get_mem_shared(pdev, mp, "phy"); + if (IS_ERR(mp->mem_res.phy_base)) + return PTR_ERR(mp->mem_res.phy_base); + } return 0; } -static void meson_pcie_power_on(struct meson_pcie *mp) +static int meson_pcie_power_on(struct meson_pcie *mp) { - writel(MESON_PCIE_PHY_POWERUP, mp->mem_res.phy_base); + int ret = 0; + + if (mp->param->has_shared_phy) { + ret = phy_init(mp->phy); + if (ret) + return ret; + + ret = phy_power_on(mp->phy); + if (ret) { + phy_exit(mp->phy); + return ret; + } + } else + writel(MESON_PCIE_PHY_POWERUP, mp->mem_res.phy_base); + + return 0; } -static void meson_pcie_reset(struct meson_pcie *mp) +static int meson_pcie_reset(struct meson_pcie *mp) { struct meson_pcie_rc_reset *mrst = &mp->mrst; - - reset_control_assert(mrst->phy); - udelay(PCIE_RESET_DELAY); - reset_control_deassert(mrst->phy); - udelay(PCIE_RESET_DELAY); + int ret = 0; + + if (mp->param->has_shared_phy) { + ret = phy_reset(mp->phy); + if (ret) + return ret; + } else { + reset_control_assert(mrst->phy); + udelay(PCIE_RESET_DELAY); + reset_control_deassert(mrst->phy); + udelay(PCIE_RESET_DELAY); + } reset_control_assert(mrst->port); reset_control_assert(mrst->apb); @@ -208,6 +242,8 @@ static void meson_pcie_reset(struct meson_pcie *mp) reset_control_deassert(mrst->port); reset_control_deassert(mrst->apb); udelay(PCIE_RESET_DELAY); + + return 0; } static inline struct clk *meson_pcie_probe_clock(struct device *dev, @@ -250,15 +286,17 @@ static int meson_pcie_probe_clocks(struct meson_pcie *mp) if (IS_ERR(res->port_clk)) return PTR_ERR(res->port_clk); - res->mipi_gate = meson_pcie_probe_clock(dev, "pcie_mipi_en", 0); - if (IS_ERR(res->mipi_gate)) - return PTR_ERR(res->mipi_gate); + if (!mp->param->has_shared_phy) { + res->mipi_gate = 
meson_pcie_probe_clock(dev, "mipi", 0); + if (IS_ERR(res->mipi_gate)) + return PTR_ERR(res->mipi_gate); + } - res->general_clk = meson_pcie_probe_clock(dev, "pcie_general", 0); + res->general_clk = meson_pcie_probe_clock(dev, "general", 0); if (IS_ERR(res->general_clk)) return PTR_ERR(res->general_clk); - res->clk = meson_pcie_probe_clock(dev, "pcie", 0); + res->clk = meson_pcie_probe_clock(dev, "pclk", 0); if (IS_ERR(res->clk)) return PTR_ERR(res->clk); @@ -287,9 +325,9 @@ static inline void meson_cfg_writel(struct meson_pcie *mp, u32 val, u32 reg) static void meson_pcie_assert_reset(struct meson_pcie *mp) { - gpiod_set_value_cansleep(mp->reset_gpio, 0); - udelay(500); gpiod_set_value_cansleep(mp->reset_gpio, 1); + udelay(500); + gpiod_set_value_cansleep(mp->reset_gpio, 0); } static void meson_pcie_init_dw(struct meson_pcie *mp) @@ -524,6 +562,7 @@ static const struct dw_pcie_ops dw_pcie_ops = { static int meson_pcie_probe(struct platform_device *pdev) { + const struct meson_pcie_param *match_data; struct device *dev = &pdev->dev; struct dw_pcie *pci; struct meson_pcie *mp; @@ -537,6 +576,19 @@ static int meson_pcie_probe(struct platform_device *pdev) pci->dev = dev; pci->ops = &dw_pcie_ops; + match_data = of_device_get_match_data(dev); + if (!match_data) { + dev_err(dev, "failed to get match data\n"); + return -ENODEV; + } + mp->param = match_data; + + if (mp->param->has_shared_phy) { + mp->phy = devm_phy_get(dev, "pcie"); + if (IS_ERR(mp->phy)) + return PTR_ERR(mp->phy); + } + mp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); if (IS_ERR(mp->reset_gpio)) { dev_err(dev, "get reset gpio failed\n"); @@ -555,13 +607,22 @@ static int meson_pcie_probe(struct platform_device *pdev) return ret; } - meson_pcie_power_on(mp); - meson_pcie_reset(mp); + ret = meson_pcie_power_on(mp); + if (ret) { + dev_err(dev, "phy power on failed, %d\n", ret); + return ret; + } + + ret = meson_pcie_reset(mp); + if (ret) { + dev_err(dev, "reset failed, %d\n", ret); + goto err_phy; + } ret = meson_pcie_probe_clocks(mp); if (ret) { dev_err(dev, "init clock resources failed, %d\n", ret); - return ret; + goto err_phy; } platform_set_drvdata(pdev, mp); @@ -569,15 +630,36 @@ static int meson_pcie_probe(struct platform_device *pdev) ret = meson_add_pcie_port(mp, pdev); if (ret < 0) { dev_err(dev, "Add PCIe port failed, %d\n", ret); - return ret; + goto err_phy; } return 0; + +err_phy: + if (mp->param->has_shared_phy) { + phy_power_off(mp->phy); + phy_exit(mp->phy); + } + + return ret; } +static struct meson_pcie_param meson_pcie_axg_param = { + .has_shared_phy = false, +}; + +static struct meson_pcie_param meson_pcie_g12a_param = { + .has_shared_phy = true, +}; + static const struct of_device_id meson_pcie_of_match[] = { { .compatible = "amlogic,axg-pcie", + .data = &meson_pcie_axg_param, + }, + { + .compatible = "amlogic,g12a-pcie", + .data = &meson_pcie_g12a_param, }, {}, }; diff --git a/drivers/pci/controller/dwc/pcie-artpec6.c b/drivers/pci/controller/dwc/pcie-artpec6.c index d00252bd8fae..9e2482bd7b6d 100644 --- a/drivers/pci/controller/dwc/pcie-artpec6.c +++ b/drivers/pci/controller/dwc/pcie-artpec6.c @@ -422,7 +422,7 @@ static void artpec6_pcie_ep_init(struct dw_pcie_ep *ep) artpec6_pcie_wait_for_phy(artpec6_pcie); artpec6_pcie_set_nfts(artpec6_pcie); - for (bar = BAR_0; bar <= BAR_5; bar++) + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) dw_pcie_ep_reset_bar(pci, bar); } diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c index 
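The Meson changes just above switch the driver to per-SoC parameters: the OF match table carries a small capability struct (has_shared_phy) and probe() branches on it via of_device_get_match_data() instead of growing compatible-string checks. A hedged sketch of the pattern with hypothetical names:

	#include <linux/of_device.h>
	#include <linux/platform_device.h>

	struct example_param {
		bool has_shared_phy;
	};

	static const struct example_param example_axg_param  = { .has_shared_phy = false };
	static const struct example_param example_g12a_param = { .has_shared_phy = true };

	static const struct of_device_id example_of_match[] = {
		{ .compatible = "vendor,axg-pcie",  .data = &example_axg_param },
		{ .compatible = "vendor,g12a-pcie", .data = &example_g12a_param },
		{ }
	};

	static int example_probe(struct platform_device *pdev)
	{
		const struct example_param *param = of_device_get_match_data(&pdev->dev);

		if (!param)
			return -ENODEV;

		/* branch on param->has_shared_phy for PHY/clock handling */
		return 0;
	}
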
0f36a926059a..395feb8ca051 100644 --- a/drivers/pci/controller/dwc/pcie-designware-host.c +++ b/drivers/pci/controller/dwc/pcie-designware-host.c @@ -10,6 +10,7 @@ #include <linux/irqchip/chained_irq.h> #include <linux/irqdomain.h> +#include <linux/msi.h> #include <linux/of_address.h> #include <linux/of_pci.h> #include <linux/pci_regs.h> @@ -78,7 +79,8 @@ static struct msi_domain_info dw_pcie_msi_domain_info = { irqreturn_t dw_handle_msi_irq(struct pcie_port *pp) { int i, pos, irq; - u32 val, num_ctrls; + unsigned long val; + u32 status, num_ctrls; irqreturn_t ret = IRQ_NONE; num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; @@ -86,14 +88,14 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp) for (i = 0; i < num_ctrls; i++) { dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS + (i * MSI_REG_CTRL_BLOCK_SIZE), - 4, &val); - if (!val) + 4, &status); + if (!status) continue; ret = IRQ_HANDLED; + val = status; pos = 0; - while ((pos = find_next_bit((unsigned long *) &val, - MAX_MSI_IRQS_PER_CTRL, + while ((pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, pos)) != MAX_MSI_IRQS_PER_CTRL) { irq = irq_find_mapping(pp->irq_domain, (i * MAX_MSI_IRQS_PER_CTRL) + @@ -319,7 +321,7 @@ int dw_pcie_host_init(struct pcie_port *pp) struct device *dev = pci->dev; struct device_node *np = dev->of_node; struct platform_device *pdev = to_platform_device(dev); - struct resource_entry *win, *tmp; + struct resource_entry *win; struct pci_bus *child; struct pci_host_bridge *bridge; struct resource *cfg_res; @@ -342,31 +344,20 @@ int dw_pcie_host_init(struct pcie_port *pp) if (!bridge) return -ENOMEM; - ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, - &bridge->windows, &pp->io_base); - if (ret) - return ret; - - ret = devm_request_pci_bus_resources(dev, &bridge->windows); + ret = pci_parse_request_of_pci_ranges(dev, &bridge->windows, + &bridge->dma_ranges, NULL); if (ret) return ret; /* Get the I/O and memory ranges from DT */ - resource_list_for_each_entry_safe(win, tmp, &bridge->windows) { + resource_list_for_each_entry(win, &bridge->windows) { switch (resource_type(win->res)) { case IORESOURCE_IO: - ret = devm_pci_remap_iospace(dev, win->res, - pp->io_base); - if (ret) { - dev_warn(dev, "Error %d: failed to map resource %pR\n", - ret, win->res); - resource_list_destroy_entry(win); - } else { - pp->io = win->res; - pp->io->name = "I/O"; - pp->io_size = resource_size(pp->io); - pp->io_bus_addr = pp->io->start - win->offset; - } + pp->io = win->res; + pp->io->name = "I/O"; + pp->io_size = resource_size(pp->io); + pp->io_bus_addr = pp->io->start - win->offset; + pp->io_base = pci_pio_to_address(pp->io->start); break; case IORESOURCE_MEM: pp->mem = win->res; diff --git a/drivers/pci/controller/dwc/pcie-designware-plat.c b/drivers/pci/controller/dwc/pcie-designware-plat.c index b58fdcbc664b..73646b677aff 100644 --- a/drivers/pci/controller/dwc/pcie-designware-plat.c +++ b/drivers/pci/controller/dwc/pcie-designware-plat.c @@ -70,7 +70,7 @@ static void dw_plat_pcie_ep_init(struct dw_pcie_ep *ep) struct dw_pcie *pci = to_dw_pcie_from_ep(ep); enum pci_barno bar; - for (bar = BAR_0; bar <= BAR_5; bar++) + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) dw_pcie_ep_reset_bar(pci, bar); } diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h index 5a18e94e52c8..5accdd6bc388 100644 --- a/drivers/pci/controller/dwc/pcie-designware.h +++ b/drivers/pci/controller/dwc/pcie-designware.h @@ -214,7 +214,7 @@ struct dw_pcie_ep { phys_addr_t phys_base; size_t addr_size; size_t 
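The dw_handle_msi_irq() change above is not only cosmetic: find_next_bit() operates on unsigned long bitmaps, so handing it the address of a u32 stack variable is an out-of-bounds read on 64-bit targets. Copying the 32-bit status register into a properly sized local first keeps the bit walk well defined. A condensed, hypothetical reduction of the idiom (the register read is replaced by a plain parameter):

	#include <linux/bitops.h>

	static void example_dispatch_status(u32 status)
	{
		unsigned long val = status;	/* widen before handing to the bitmap API */
		int pos = 0;

		while ((pos = find_next_bit(&val, 32, pos)) != 32) {
			/* handle MSI vector 'pos' here */
			pos++;
		}
	}
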
page_size; - u8 bar_to_atu[6]; + u8 bar_to_atu[PCI_STD_NUM_BARS]; phys_addr_t *outbound_addr; unsigned long *ib_window_map; unsigned long *ob_window_map; diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c index f89f5acee72d..cbe95f0ea0ca 100644 --- a/drivers/pci/controller/dwc/pcie-tegra194.c +++ b/drivers/pci/controller/dwc/pcie-tegra194.c @@ -40,8 +40,6 @@ #define APPL_PINMUX_CLKREQ_OVERRIDE BIT(3) #define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN BIT(4) #define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE BIT(5) -#define APPL_PINMUX_CLKREQ_OUT_OVRD_EN BIT(9) -#define APPL_PINMUX_CLKREQ_OUT_OVRD BIT(10) #define APPL_CTRL 0x4 #define APPL_CTRL_SYS_PRE_DET_STATE BIT(6) @@ -1193,8 +1191,8 @@ static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie, if (!pcie->supports_clkreq) { val = appl_readl(pcie, APPL_PINMUX); - val |= APPL_PINMUX_CLKREQ_OUT_OVRD_EN; - val |= APPL_PINMUX_CLKREQ_OUT_OVRD; + val |= APPL_PINMUX_CLKREQ_OVERRIDE_EN; + val &= ~APPL_PINMUX_CLKREQ_OVERRIDE; appl_writel(pcie, val, APPL_PINMUX); } diff --git a/drivers/pci/controller/dwc/pcie-uniphier.c b/drivers/pci/controller/dwc/pcie-uniphier.c index 3f30ee4a00b3..8fd7badd59c2 100644 --- a/drivers/pci/controller/dwc/pcie-uniphier.c +++ b/drivers/pci/controller/dwc/pcie-uniphier.c @@ -33,6 +33,10 @@ #define PCL_PIPEMON 0x0044 #define PCL_PCLK_ALIVE BIT(15) +#define PCL_MODE 0x8000 +#define PCL_MODE_REGEN BIT(8) +#define PCL_MODE_REGVAL BIT(0) + #define PCL_APP_READY_CTRL 0x8008 #define PCL_APP_LTSSM_ENABLE BIT(0) @@ -85,6 +89,12 @@ static void uniphier_pcie_init_rc(struct uniphier_pcie_priv *priv) { u32 val; + /* set RC MODE */ + val = readl(priv->base + PCL_MODE); + val |= PCL_MODE_REGEN; + val &= ~PCL_MODE_REGVAL; + writel(val, priv->base + PCL_MODE); + /* use auxiliary power detection */ val = readl(priv->base + PCL_APP_PM0); val |= PCL_SYS_AUX_PWR_DET; diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c index fc0fe4d4de49..2a20b649f40c 100644 --- a/drivers/pci/controller/pci-aardvark.c +++ b/drivers/pci/controller/pci-aardvark.c @@ -16,6 +16,7 @@ #include <linux/pci.h> #include <linux/init.h> #include <linux/platform_device.h> +#include <linux/msi.h> #include <linux/of_address.h> #include <linux/of_pci.h> @@ -175,18 +176,20 @@ (PCIE_CONF_BUS(bus) | PCIE_CONF_DEV(PCI_SLOT(devfn)) | \ PCIE_CONF_FUNC(PCI_FUNC(devfn)) | PCIE_CONF_REG(where)) -#define PIO_TIMEOUT_MS 1 +#define PIO_RETRY_CNT 500 +#define PIO_RETRY_DELAY 2 /* 2 us*/ #define LINK_WAIT_MAX_RETRIES 10 #define LINK_WAIT_USLEEP_MIN 90000 #define LINK_WAIT_USLEEP_MAX 100000 +#define RETRAIN_WAIT_MAX_RETRIES 10 +#define RETRAIN_WAIT_USLEEP_US 2000 #define MSI_IRQ_NUM 32 struct advk_pcie { struct platform_device *pdev; void __iomem *base; - struct list_head resources; struct irq_domain *irq_domain; struct irq_chip irq_chip; struct irq_domain *msi_domain; @@ -239,6 +242,17 @@ static int advk_pcie_wait_for_link(struct advk_pcie *pcie) return -ETIMEDOUT; } +static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie) +{ + size_t retries; + + for (retries = 0; retries < RETRAIN_WAIT_MAX_RETRIES; ++retries) { + if (!advk_pcie_link_up(pcie)) + break; + udelay(RETRAIN_WAIT_USLEEP_US); + } +} + static void advk_pcie_setup_hw(struct advk_pcie *pcie) { u32 reg; @@ -324,6 +338,14 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie) reg |= PIO_CTRL_ADDR_WIN_DISABLE; advk_writel(pcie, reg, PIO_CTRL); + /* + * PERST# signal could have been asserted by pinctrl subsystem before + * probe() callback 
has been called, making the endpoint going into + * fundamental reset. As required by PCI Express spec a delay for at + * least 100ms after such a reset before link training is needed. + */ + msleep(PCI_PM_D3COLD_WAIT); + /* Start link training */ reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG); reg |= PCIE_CORE_LINK_TRAINING; @@ -383,17 +405,16 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie) static int advk_pcie_wait_pio(struct advk_pcie *pcie) { struct device *dev = &pcie->pdev->dev; - unsigned long timeout; - - timeout = jiffies + msecs_to_jiffies(PIO_TIMEOUT_MS); + int i; - while (time_before(jiffies, timeout)) { + for (i = 0; i < PIO_RETRY_CNT; i++) { u32 start, isr; start = advk_readl(pcie, PIO_START); isr = advk_readl(pcie, PIO_ISR); if (!start && isr) return 0; + udelay(PIO_RETRY_DELAY); } dev_err(dev, "config read/write timed out\n"); @@ -415,7 +436,7 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, case PCI_EXP_RTCTL: { u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG); - *value = (val & PCIE_MSG_PM_PME_MASK) ? PCI_EXP_RTCTL_PMEIE : 0; + *value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE; return PCI_BRIDGE_EMUL_HANDLED; } @@ -426,11 +447,20 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, return PCI_BRIDGE_EMUL_HANDLED; } + case PCI_EXP_LNKCTL: { + /* u32 contains both PCI_EXP_LNKCTL and PCI_EXP_LNKSTA */ + u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg) & + ~(PCI_EXP_LNKSTA_LT << 16); + if (!advk_pcie_link_up(pcie)) + val |= (PCI_EXP_LNKSTA_LT << 16); + *value = val; + return PCI_BRIDGE_EMUL_HANDLED; + } + case PCI_CAP_LIST_ID: case PCI_EXP_DEVCAP: case PCI_EXP_DEVCTL: case PCI_EXP_LNKCAP: - case PCI_EXP_LNKCTL: *value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg); return PCI_BRIDGE_EMUL_HANDLED; default: @@ -447,14 +477,24 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge, switch (reg) { case PCI_EXP_DEVCTL: + advk_writel(pcie, new, PCIE_CORE_PCIEXP_CAP + reg); + break; + case PCI_EXP_LNKCTL: advk_writel(pcie, new, PCIE_CORE_PCIEXP_CAP + reg); + if (new & PCI_EXP_LNKCTL_RL) + advk_pcie_wait_for_retrain(pcie); break; - case PCI_EXP_RTCTL: - new = (new & PCI_EXP_RTCTL_PMEIE) << 3; - advk_writel(pcie, new, PCIE_ISR0_MASK_REG); + case PCI_EXP_RTCTL: { + /* Only mask/unmask PME interrupt */ + u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG) & + ~PCIE_MSG_PM_PME_MASK; + if ((new & PCI_EXP_RTCTL_PMEIE) == 0) + val |= PCIE_MSG_PM_PME_MASK; + advk_writel(pcie, val, PCIE_ISR0_MASK_REG); break; + } case PCI_EXP_RTSTA: new = (new & PCI_EXP_RTSTA_PME) >> 9; @@ -479,18 +519,20 @@ static void advk_sw_pci_bridge_init(struct advk_pcie *pcie) { struct pci_bridge_emul *bridge = &pcie->bridge; - bridge->conf.vendor = advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff; - bridge->conf.device = advk_readl(pcie, PCIE_CORE_DEV_ID_REG) >> 16; + bridge->conf.vendor = + cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff); + bridge->conf.device = + cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) >> 16); bridge->conf.class_revision = - advk_readl(pcie, PCIE_CORE_DEV_REV_REG) & 0xff; + cpu_to_le32(advk_readl(pcie, PCIE_CORE_DEV_REV_REG) & 0xff); /* Support 32 bits I/O addressing */ bridge->conf.iobase = PCI_IO_RANGE_TYPE_32; bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32; /* Support 64 bits memory pref */ - bridge->conf.pref_mem_base = PCI_PREF_RANGE_TYPE_64; - bridge->conf.pref_mem_limit = PCI_PREF_RANGE_TYPE_64; + bridge->conf.pref_mem_base = cpu_to_le16(PCI_PREF_RANGE_TYPE_64); + 
bridge->conf.pref_mem_limit = cpu_to_le16(PCI_PREF_RANGE_TYPE_64); /* Support interrupt A for MSI feature */ bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE; @@ -910,63 +952,11 @@ static irqreturn_t advk_pcie_irq_handler(int irq, void *arg) return IRQ_HANDLED; } -static int advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie) -{ - int err, res_valid = 0; - struct device *dev = &pcie->pdev->dev; - struct resource_entry *win, *tmp; - resource_size_t iobase; - - INIT_LIST_HEAD(&pcie->resources); - - err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, - &pcie->resources, &iobase); - if (err) - return err; - - err = devm_request_pci_bus_resources(dev, &pcie->resources); - if (err) - goto out_release_res; - - resource_list_for_each_entry_safe(win, tmp, &pcie->resources) { - struct resource *res = win->res; - - switch (resource_type(res)) { - case IORESOURCE_IO: - err = devm_pci_remap_iospace(dev, res, iobase); - if (err) { - dev_warn(dev, "error %d: failed to map resource %pR\n", - err, res); - resource_list_destroy_entry(win); - } - break; - case IORESOURCE_MEM: - res_valid |= !(res->flags & IORESOURCE_PREFETCH); - break; - case IORESOURCE_BUS: - pcie->root_bus_nr = res->start; - break; - } - } - - if (!res_valid) { - dev_err(dev, "non-prefetchable memory resource required\n"); - err = -EINVAL; - goto out_release_res; - } - - return 0; - -out_release_res: - pci_free_resource_list(&pcie->resources); - return err; -} - static int advk_pcie_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct advk_pcie *pcie; - struct resource *res; + struct resource *res, *bus; struct pci_host_bridge *bridge; int ret, irq; @@ -991,11 +981,13 @@ static int advk_pcie_probe(struct platform_device *pdev) return ret; } - ret = advk_pcie_parse_request_of_pci_ranges(pcie); + ret = pci_parse_request_of_pci_ranges(dev, &bridge->windows, + &bridge->dma_ranges, &bus); if (ret) { dev_err(dev, "Failed to parse resources\n"); return ret; } + pcie->root_bus_nr = bus->start; advk_pcie_setup_hw(pcie); @@ -1014,7 +1006,6 @@ static int advk_pcie_probe(struct platform_device *pdev) return ret; } - list_splice_init(&pcie->resources, &bridge->windows); bridge->dev.parent = dev; bridge->sysdata = pcie; bridge->busnr = 0; diff --git a/drivers/pci/controller/pci-ftpci100.c b/drivers/pci/controller/pci-ftpci100.c index bf5ece5d9291..1b67564de7af 100644 --- a/drivers/pci/controller/pci-ftpci100.c +++ b/drivers/pci/controller/pci-ftpci100.c @@ -375,12 +375,11 @@ static int faraday_pci_setup_cascaded_irq(struct faraday_pci *p) return 0; } -static int faraday_pci_parse_map_dma_ranges(struct faraday_pci *p, - struct device_node *np) +static int faraday_pci_parse_map_dma_ranges(struct faraday_pci *p) { - struct of_pci_range range; - struct of_pci_range_parser parser; struct device *dev = p->dev; + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(p); + struct resource_entry *entry; u32 confreg[3] = { FARADAY_PCI_MEM1_BASE_SIZE, FARADAY_PCI_MEM2_BASE_SIZE, @@ -389,19 +388,13 @@ static int faraday_pci_parse_map_dma_ranges(struct faraday_pci *p, int i = 0; u32 val; - if (of_pci_dma_range_parser_init(&parser, np)) { - dev_err(dev, "missing dma-ranges property\n"); - return -EINVAL; - } - - /* - * Get the dma-ranges from the device tree - */ - for_each_of_pci_range(&parser, &range) { - u64 end = range.pci_addr + range.size - 1; + resource_list_for_each_entry(entry, &bridge->dma_ranges) { + u64 pci_addr = entry->res->start - entry->offset; + u64 end = entry->res->end - entry->offset; int ret; - ret 
= faraday_res_to_memcfg(range.pci_addr, range.size, &val); + ret = faraday_res_to_memcfg(pci_addr, + resource_size(entry->res), &val); if (ret) { dev_err(dev, "DMA range %d: illegal MEM resource size\n", i); @@ -409,7 +402,7 @@ static int faraday_pci_parse_map_dma_ranges(struct faraday_pci *p, } dev_info(dev, "DMA MEM%d BASE: 0x%016llx -> 0x%016llx config %08x\n", - i + 1, range.pci_addr, end, val); + i + 1, pci_addr, end, val); if (i <= 2) { faraday_raw_pci_write_config(p, 0, 0, confreg[i], 4, val); @@ -430,10 +423,8 @@ static int faraday_pci_probe(struct platform_device *pdev) const struct faraday_pci_variant *variant = of_device_get_match_data(dev); struct resource *regs; - resource_size_t io_base; struct resource_entry *win; struct faraday_pci *p; - struct resource *mem; struct resource *io; struct pci_host_bridge *host; struct clk *clk; @@ -441,7 +432,6 @@ static int faraday_pci_probe(struct platform_device *pdev) unsigned char cur_bus_speed = PCI_SPEED_33MHz; int ret; u32 val; - LIST_HEAD(res); host = devm_pci_alloc_host_bridge(dev, sizeof(*p)); if (!host) @@ -480,44 +470,21 @@ static int faraday_pci_probe(struct platform_device *pdev) if (IS_ERR(p->base)) return PTR_ERR(p->base); - ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, - &res, &io_base); - if (ret) - return ret; - - ret = devm_request_pci_bus_resources(dev, &res); + ret = pci_parse_request_of_pci_ranges(dev, &host->windows, + &host->dma_ranges, NULL); if (ret) return ret; - /* Get the I/O and memory ranges from DT */ - resource_list_for_each_entry(win, &res) { - switch (resource_type(win->res)) { - case IORESOURCE_IO: - io = win->res; - io->name = "Gemini PCI I/O"; - if (!faraday_res_to_memcfg(io->start - win->offset, - resource_size(io), &val)) { - /* setup I/O space size */ - writel(val, p->base + PCI_IOSIZE); - } else { - dev_err(dev, "illegal IO mem size\n"); - return -EINVAL; - } - ret = devm_pci_remap_iospace(dev, io, io_base); - if (ret) { - dev_warn(dev, "error %d: failed to map resource %pR\n", - ret, io); - continue; - } - break; - case IORESOURCE_MEM: - mem = win->res; - mem->name = "Gemini PCI MEM"; - break; - case IORESOURCE_BUS: - break; - default: - break; + win = resource_list_first_type(&host->windows, IORESOURCE_IO); + if (win) { + io = win->res; + if (!faraday_res_to_memcfg(io->start - win->offset, + resource_size(io), &val)) { + /* setup I/O space size */ + writel(val, p->base + PCI_IOSIZE); + } else { + dev_err(dev, "illegal IO mem size\n"); + return -EINVAL; } } @@ -565,11 +532,10 @@ static int faraday_pci_probe(struct platform_device *pdev) cur_bus_speed = PCI_SPEED_66MHz; } - ret = faraday_pci_parse_map_dma_ranges(p, dev->of_node); + ret = faraday_pci_parse_map_dma_ranges(p); if (ret) return ret; - list_splice_init(&res, &host->windows); ret = pci_scan_root_bus_bridge(host); if (ret) { dev_err(dev, "failed to scan host: %d\n", ret); @@ -581,7 +547,6 @@ static int faraday_pci_probe(struct platform_device *pdev) pci_bus_assign_resources(p->bus); pci_bus_add_devices(p->bus); - pci_free_resource_list(&res); return 0; } diff --git a/drivers/pci/controller/pci-host-common.c b/drivers/pci/controller/pci-host-common.c index c8cb9c5188a4..250a3fc80ec6 100644 --- a/drivers/pci/controller/pci-host-common.c +++ b/drivers/pci/controller/pci-host-common.c @@ -27,7 +27,7 @@ static struct pci_config_window *gen_pci_init(struct device *dev, struct pci_config_window *cfg; /* Parse our PCI ranges and request their resources */ - err = pci_parse_request_of_pci_ranges(dev, resources, &bus_range); + err = 
pci_parse_request_of_pci_ranges(dev, resources, NULL, &bus_range); if (err) return ERR_PTR(err); diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index f1f300218fab..9977abff92fc 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -76,11 +76,6 @@ static enum pci_protocol_version_t pci_protocol_versions[] = { PCI_PROTOCOL_VERSION_1_1, }; -/* - * Protocol version negotiated by hv_pci_protocol_negotiation(). - */ -static enum pci_protocol_version_t pci_protocol_version; - #define PCI_CONFIG_MMIO_LENGTH 0x2000 #define CFG_PAGE_OFFSET 0x1000 #define CFG_PAGE_SIZE (PCI_CONFIG_MMIO_LENGTH - CFG_PAGE_OFFSET) @@ -307,7 +302,7 @@ struct pci_bus_relations { struct pci_q_res_req_response { struct vmpacket_descriptor hdr; s32 status; /* negative values are failures */ - u32 probed_bar[6]; + u32 probed_bar[PCI_STD_NUM_BARS]; } __packed; struct pci_set_power { @@ -455,12 +450,15 @@ enum hv_pcibus_state { hv_pcibus_init = 0, hv_pcibus_probed, hv_pcibus_installed, + hv_pcibus_removing, hv_pcibus_removed, hv_pcibus_maximum }; struct hv_pcibus_device { struct pci_sysdata sysdata; + /* Protocol version negotiated with the host */ + enum pci_protocol_version_t protocol_version; enum hv_pcibus_state state; refcount_t remove_lock; struct hv_device *hdev; @@ -539,7 +537,7 @@ struct hv_pci_dev { * What would be observed if one wrote 0xFFFFFFFF to a BAR and then * read it back, for each of the BAR offsets within config space. */ - u32 probed_bar[6]; + u32 probed_bar[PCI_STD_NUM_BARS]; }; struct hv_pci_compl { @@ -1224,7 +1222,7 @@ static void hv_irq_unmask(struct irq_data *data) * negative effect (yet?). */ - if (pci_protocol_version >= PCI_PROTOCOL_VERSION_1_2) { + if (hbus->protocol_version >= PCI_PROTOCOL_VERSION_1_2) { /* * PCI_PROTOCOL_VERSION_1_2 supports the VP_SET version of the * HVCALL_RETARGET_INTERRUPT hypercall, which also coincides @@ -1394,7 +1392,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) ctxt.pci_pkt.completion_func = hv_pci_compose_compl; ctxt.pci_pkt.compl_ctxt = ∁ - switch (pci_protocol_version) { + switch (hbus->protocol_version) { case PCI_PROTOCOL_VERSION_1_1: size = hv_compose_msi_req_v1(&ctxt.int_pkts.v1, dest, @@ -1610,7 +1608,7 @@ static void survey_child_resources(struct hv_pcibus_device *hbus) * so it's sufficient to just add them up without tracking alignment. */ list_for_each_entry(hpdev, &hbus->children, list_entry) { - for (i = 0; i < 6; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { if (hpdev->probed_bar[i] & PCI_BASE_ADDRESS_SPACE_IO) dev_err(&hbus->hdev->device, "There's an I/O BAR in this list!\n"); @@ -1681,10 +1679,27 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus) spin_lock_irqsave(&hbus->device_list_lock, flags); + /* + * Clear the memory enable bit, in case it's already set. This occurs + * in the suspend path of hibernation, where the device is suspended, + * resumed and suspended again: see hibernation_snapshot() and + * hibernation_platform_enter(). + * + * If the memory enable bit is already set, Hyper-V sliently ignores + * the below BAR updates, and the related PCI device driver can not + * work, because reading from the device register(s) always returns + * 0xFFFFFFFF. + */ + list_for_each_entry(hpdev, &hbus->children, list_entry) { + _hv_pcifront_read_config(hpdev, PCI_COMMAND, 2, &command); + command &= ~PCI_COMMAND_MEMORY; + _hv_pcifront_write_config(hpdev, PCI_COMMAND, 2, command); + } + /* Pick addresses for the BARs. 
*/ do { list_for_each_entry(hpdev, &hbus->children, list_entry) { - for (i = 0; i < 6; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { bar_val = hpdev->probed_bar[i]; if (bar_val == 0) continue; @@ -1841,7 +1856,7 @@ static void q_resource_requirements(void *context, struct pci_response *resp, "query resource requirements failed: %x\n", resp->status); } else { - for (i = 0; i < 6; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { completion->hpdev->probed_bar[i] = q_res_req->probed_bar[i]; } @@ -2107,6 +2122,12 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus, unsigned long flags; bool pending_dr; + if (hbus->state == hv_pcibus_removing) { + dev_info(&hbus->hdev->device, + "PCI VMBus BUS_RELATIONS: ignored\n"); + return; + } + dr_wrk = kzalloc(sizeof(*dr_wrk), GFP_NOWAIT); if (!dr_wrk) return; @@ -2223,11 +2244,19 @@ static void hv_eject_device_work(struct work_struct *work) */ static void hv_pci_eject_device(struct hv_pci_dev *hpdev) { + struct hv_pcibus_device *hbus = hpdev->hbus; + struct hv_device *hdev = hbus->hdev; + + if (hbus->state == hv_pcibus_removing) { + dev_info(&hdev->device, "PCI VMBus EJECT: ignored\n"); + return; + } + hpdev->state = hv_pcichild_ejecting; get_pcichild(hpdev); INIT_WORK(&hpdev->wrk, hv_eject_device_work); - get_hvpcibus(hpdev->hbus); - queue_work(hpdev->hbus->wq, &hpdev->wrk); + get_hvpcibus(hbus); + queue_work(hbus->wq, &hpdev->wrk); } /** @@ -2379,8 +2408,11 @@ static void hv_pci_onchannelcallback(void *context) * failing if the host doesn't support the necessary protocol * level. */ -static int hv_pci_protocol_negotiation(struct hv_device *hdev) +static int hv_pci_protocol_negotiation(struct hv_device *hdev, + enum pci_protocol_version_t version[], + int num_version) { + struct hv_pcibus_device *hbus = hv_get_drvdata(hdev); struct pci_version_request *version_req; struct hv_pci_compl comp_pkt; struct pci_packet *pkt; @@ -2403,8 +2435,8 @@ static int hv_pci_protocol_negotiation(struct hv_device *hdev) version_req = (struct pci_version_request *)&pkt->message; version_req->message_type.type = PCI_QUERY_PROTOCOL_VERSION; - for (i = 0; i < ARRAY_SIZE(pci_protocol_versions); i++) { - version_req->protocol_version = pci_protocol_versions[i]; + for (i = 0; i < num_version; i++) { + version_req->protocol_version = version[i]; ret = vmbus_sendpacket(hdev->channel, version_req, sizeof(struct pci_version_request), (unsigned long)pkt, VM_PKT_DATA_INBAND, @@ -2420,10 +2452,10 @@ static int hv_pci_protocol_negotiation(struct hv_device *hdev) } if (comp_pkt.completion_status >= 0) { - pci_protocol_version = pci_protocol_versions[i]; + hbus->protocol_version = version[i]; dev_info(&hdev->device, "PCI VMBus probing: Using version %#x\n", - pci_protocol_version); + hbus->protocol_version); goto exit; } @@ -2707,7 +2739,7 @@ static int hv_send_resources_allocated(struct hv_device *hdev) u32 wslot; int ret; - size_res = (pci_protocol_version < PCI_PROTOCOL_VERSION_1_2) + size_res = (hbus->protocol_version < PCI_PROTOCOL_VERSION_1_2) ? 
sizeof(*res_assigned) : sizeof(*res_assigned2); pkt = kmalloc(sizeof(*pkt) + size_res, GFP_KERNEL); @@ -2726,7 +2758,7 @@ static int hv_send_resources_allocated(struct hv_device *hdev) pkt->completion_func = hv_pci_generic_compl; pkt->compl_ctxt = &comp_pkt; - if (pci_protocol_version < PCI_PROTOCOL_VERSION_1_2) { + if (hbus->protocol_version < PCI_PROTOCOL_VERSION_1_2) { res_assigned = (struct pci_resources_assigned *)&pkt->message; res_assigned->message_type.type = @@ -2870,9 +2902,27 @@ static int hv_pci_probe(struct hv_device *hdev, * hv_pcibus_device contains the hypercall arguments for retargeting in * hv_irq_unmask(). Those must not cross a page boundary. */ - BUILD_BUG_ON(sizeof(*hbus) > PAGE_SIZE); + BUILD_BUG_ON(sizeof(*hbus) > HV_HYP_PAGE_SIZE); - hbus = (struct hv_pcibus_device *)get_zeroed_page(GFP_KERNEL); + /* + * With the recent 59bb47985c1d ("mm, sl[aou]b: guarantee natural + * alignment for kmalloc(power-of-two)"), kzalloc() is able to allocate + * a 4KB buffer that is guaranteed to be 4KB-aligned. Here the size and + * alignment of hbus is important because hbus's field + * retarget_msi_interrupt_params must not cross a 4KB page boundary. + * + * Here we prefer kzalloc to get_zeroed_page(), because a buffer + * allocated by the latter is not tracked and scanned by kmemleak, and + * hence kmemleak reports the pointer contained in the hbus buffer + * (i.e. the hpdev struct, which is created in new_pcichild_device() and + * is tracked by hbus->children) as memory leak (false positive). + * + * If the kernel doesn't have 59bb47985c1d, get_zeroed_page() *must* be + * used to allocate the hbus buffer and we can avoid the kmemleak false + * positive by using kmemleak_alloc() and kmemleak_free() to ask + * kmemleak to track and scan the hbus buffer. + */ + hbus = (struct hv_pcibus_device *)kzalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL); if (!hbus) return -ENOMEM; hbus->state = hv_pcibus_init; @@ -2930,7 +2980,8 @@ static int hv_pci_probe(struct hv_device *hdev, hv_set_drvdata(hdev, hbus); - ret = hv_pci_protocol_negotiation(hdev); + ret = hv_pci_protocol_negotiation(hdev, pci_protocol_versions, + ARRAY_SIZE(pci_protocol_versions)); if (ret) goto close; @@ -3011,7 +3062,7 @@ free_bus: return ret; } -static void hv_pci_bus_exit(struct hv_device *hdev) +static int hv_pci_bus_exit(struct hv_device *hdev, bool hibernating) { struct hv_pcibus_device *hbus = hv_get_drvdata(hdev); struct { @@ -3027,16 +3078,20 @@ static void hv_pci_bus_exit(struct hv_device *hdev) * access the per-channel ringbuffer any longer. */ if (hdev->channel->rescind) - return; + return 0; - /* Delete any children which might still exist. */ - memset(&relations, 0, sizeof(relations)); - hv_pci_devices_present(hbus, &relations); + if (!hibernating) { + /* Delete any children which might still exist. 
*/ + memset(&relations, 0, sizeof(relations)); + hv_pci_devices_present(hbus, &relations); + } ret = hv_send_resources_released(hdev); - if (ret) + if (ret) { dev_err(&hdev->device, "Couldn't send resources released packet(s)\n"); + return ret; + } memset(&pkt.teardown_packet, 0, sizeof(pkt.teardown_packet)); init_completion(&comp_pkt.host_event); @@ -3049,8 +3104,13 @@ static void hv_pci_bus_exit(struct hv_device *hdev) (unsigned long)&pkt.teardown_packet, VM_PKT_DATA_INBAND, VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); - if (!ret) - wait_for_completion_timeout(&comp_pkt.host_event, 10 * HZ); + if (ret) + return ret; + + if (wait_for_completion_timeout(&comp_pkt.host_event, 10 * HZ) == 0) + return -ETIMEDOUT; + + return 0; } /** @@ -3062,6 +3122,7 @@ static void hv_pci_bus_exit(struct hv_device *hdev) static int hv_pci_remove(struct hv_device *hdev) { struct hv_pcibus_device *hbus; + int ret; hbus = hv_get_drvdata(hdev); if (hbus->state == hv_pcibus_installed) { @@ -3074,7 +3135,7 @@ static int hv_pci_remove(struct hv_device *hdev) hbus->state = hv_pcibus_removed; } - hv_pci_bus_exit(hdev); + ret = hv_pci_bus_exit(hdev, false); vmbus_close(hdev->channel); @@ -3090,10 +3151,97 @@ static int hv_pci_remove(struct hv_device *hdev) hv_put_dom_num(hbus->sysdata.domain); - free_page((unsigned long)hbus); + kfree(hbus); + return ret; +} + +static int hv_pci_suspend(struct hv_device *hdev) +{ + struct hv_pcibus_device *hbus = hv_get_drvdata(hdev); + enum hv_pcibus_state old_state; + int ret; + + /* + * hv_pci_suspend() must make sure there are no pending work items + * before calling vmbus_close(), since it runs in a process context + * as a callback in dpm_suspend(). When it starts to run, the channel + * callback hv_pci_onchannelcallback(), which runs in a tasklet + * context, can be still running concurrently and scheduling new work + * items onto hbus->wq in hv_pci_devices_present() and + * hv_pci_eject_device(), and the work item handlers can access the + * vmbus channel, which can be being closed by hv_pci_suspend(), e.g. + * the work item handler pci_devices_present_work() -> + * new_pcichild_device() writes to the vmbus channel. + * + * To eliminate the race, hv_pci_suspend() disables the channel + * callback tasklet, sets hbus->state to hv_pcibus_removing, and + * re-enables the tasklet. This way, when hv_pci_suspend() proceeds, + * it knows that no new work item can be scheduled, and then it flushes + * hbus->wq and safely closes the vmbus channel. + */ + tasklet_disable(&hdev->channel->callback_event); + + /* Change the hbus state to prevent new work items. */ + old_state = hbus->state; + if (hbus->state == hv_pcibus_installed) + hbus->state = hv_pcibus_removing; + + tasklet_enable(&hdev->channel->callback_event); + + if (old_state != hv_pcibus_installed) + return -EINVAL; + + flush_workqueue(hbus->wq); + + ret = hv_pci_bus_exit(hdev, true); + if (ret) + return ret; + + vmbus_close(hdev->channel); + return 0; } +static int hv_pci_resume(struct hv_device *hdev) +{ + struct hv_pcibus_device *hbus = hv_get_drvdata(hdev); + enum pci_protocol_version_t version[1]; + int ret; + + hbus->state = hv_pcibus_init; + + ret = vmbus_open(hdev->channel, pci_ring_size, pci_ring_size, NULL, 0, + hv_pci_onchannelcallback, hbus); + if (ret) + return ret; + + /* Only use the version that was in use before hibernation. 
*/ + version[0] = hbus->protocol_version; + ret = hv_pci_protocol_negotiation(hdev, version, 1); + if (ret) + goto out; + + ret = hv_pci_query_relations(hdev); + if (ret) + goto out; + + ret = hv_pci_enter_d0(hdev); + if (ret) + goto out; + + ret = hv_send_resources_allocated(hdev); + if (ret) + goto out; + + prepopulate_bars(hbus); + + hbus->state = hv_pcibus_installed; + return 0; +out: + vmbus_close(hdev->channel); + return ret; +} + static const struct hv_vmbus_device_id hv_pci_id_table[] = { /* PCI Pass-through Class ID */ /* 44C4F61D-4444-4400-9D52-802E27EDE19F */ @@ -3108,6 +3256,8 @@ static struct hv_driver hv_pci_drv = { .id_table = hv_pci_id_table, .probe = hv_pci_probe, .remove = hv_pci_remove, + .suspend = hv_pci_suspend, + .resume = hv_pci_resume, }; static void __exit exit_hv_pci_drv(void) diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c index d3a0419e42f2..153a64676bc9 100644 --- a/drivers/pci/controller/pci-mvebu.c +++ b/drivers/pci/controller/pci-mvebu.c @@ -554,7 +554,7 @@ mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge, } } -struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = { +static struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = { .write_base = mvebu_pci_bridge_emul_base_conf_write, .read_pcie = mvebu_pci_bridge_emul_pcie_conf_read, .write_pcie = mvebu_pci_bridge_emul_pcie_conf_write, @@ -713,7 +713,7 @@ static void __iomem *mvebu_pcie_map_registers(struct platform_device *pdev, ret = of_address_to_resource(np, 0, ®s); if (ret) - return ERR_PTR(ret); + return (void __iomem *)ERR_PTR(ret); return devm_ioremap_resource(&pdev->dev, ®s); } diff --git a/drivers/pci/controller/pci-thunder-pem.c b/drivers/pci/controller/pci-thunder-pem.c index f127ce8bd4ef..9491e266b1ea 100644 --- a/drivers/pci/controller/pci-thunder-pem.c +++ b/drivers/pci/controller/pci-thunder-pem.c @@ -6,6 +6,7 @@ #include <linux/bitfield.h> #include <linux/kernel.h> #include <linux/init.h> +#include <linux/pci.h> #include <linux/of_address.h> #include <linux/of_pci.h> #include <linux/pci-acpi.h> diff --git a/drivers/pci/controller/pci-v3-semi.c b/drivers/pci/controller/pci-v3-semi.c index d219404bad92..bd05221f5a22 100644 --- a/drivers/pci/controller/pci-v3-semi.c +++ b/drivers/pci/controller/pci-v3-semi.c @@ -241,10 +241,8 @@ struct v3_pci { void __iomem *config_base; struct pci_bus *bus; u32 config_mem; - u32 io_mem; u32 non_pre_mem; u32 pre_mem; - phys_addr_t io_bus_addr; phys_addr_t non_pre_bus_addr; phys_addr_t pre_bus_addr; struct regmap *map; @@ -520,35 +518,22 @@ static int v3_integrator_init(struct v3_pci *v3) } static int v3_pci_setup_resource(struct v3_pci *v3, - resource_size_t io_base, struct pci_host_bridge *host, struct resource_entry *win) { struct device *dev = v3->dev; struct resource *mem; struct resource *io; - int ret; switch (resource_type(win->res)) { case IORESOURCE_IO: io = win->res; - io->name = "V3 PCI I/O"; - v3->io_mem = io_base; - v3->io_bus_addr = io->start - win->offset; - dev_dbg(dev, "I/O window %pR, bus addr %pap\n", - io, &v3->io_bus_addr); - ret = devm_pci_remap_iospace(dev, io, io_base); - if (ret) { - dev_warn(dev, - "error %d: failed to map resource %pR\n", - ret, io); - return ret; - } + /* Setup window 2 - PCI I/O */ - writel(v3_addr_to_lb_base2(v3->io_mem) | + writel(v3_addr_to_lb_base2(pci_pio_to_address(io->start)) | V3_LB_BASE2_ENABLE, v3->base + V3_LB_BASE2); - writew(v3_addr_to_lb_map2(v3->io_bus_addr), + writew(v3_addr_to_lb_map2(io->start - win->offset), v3->base + V3_LB_MAP2); break; 
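In the V3 conversion above the per-driver io_base bookkeeping disappears: with the ranges handled by pci_parse_request_of_pci_ranges(), the I/O window is already mapped and the CPU-side address is recovered on demand with pci_pio_to_address(). A short sketch of that lookup (the controller programming itself is left as a comment):

	#include <linux/pci.h>
	#include <linux/printk.h>
	#include <linux/resource_ext.h>

	static void example_program_io_window(struct pci_host_bridge *host)
	{
		struct resource_entry *entry;

		entry = resource_list_first_type(&host->windows, IORESOURCE_IO);
		if (entry) {
			phys_addr_t cpu_addr = pci_pio_to_address(entry->res->start);

			/* write cpu_addr and the window size to the bridge's
			 * I/O window registers here */
			pr_debug("I/O window at %pap\n", &cpu_addr);
		}
	}
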
case IORESOURCE_MEM: @@ -613,28 +598,30 @@ static int v3_pci_setup_resource(struct v3_pci *v3, } static int v3_get_dma_range_config(struct v3_pci *v3, - struct of_pci_range *range, + struct resource_entry *entry, u32 *pci_base, u32 *pci_map) { struct device *dev = v3->dev; - u64 cpu_end = range->cpu_addr + range->size - 1; - u64 pci_end = range->pci_addr + range->size - 1; + u64 cpu_addr = entry->res->start; + u64 cpu_end = entry->res->end; + u64 pci_end = cpu_end - entry->offset; + u64 pci_addr = entry->res->start - entry->offset; u32 val; - if (range->pci_addr & ~V3_PCI_BASE_M_ADR_BASE) { + if (pci_addr & ~V3_PCI_BASE_M_ADR_BASE) { dev_err(dev, "illegal range, only PCI bits 31..20 allowed\n"); return -EINVAL; } - val = ((u32)range->pci_addr) & V3_PCI_BASE_M_ADR_BASE; + val = ((u32)pci_addr) & V3_PCI_BASE_M_ADR_BASE; *pci_base = val; - if (range->cpu_addr & ~V3_PCI_MAP_M_MAP_ADR) { + if (cpu_addr & ~V3_PCI_MAP_M_MAP_ADR) { dev_err(dev, "illegal range, only CPU bits 31..20 allowed\n"); return -EINVAL; } - val = ((u32)range->cpu_addr) & V3_PCI_MAP_M_MAP_ADR; + val = ((u32)cpu_addr) & V3_PCI_MAP_M_MAP_ADR; - switch (range->size) { + switch (resource_size(entry->res)) { case SZ_1M: val |= V3_LB_BASE_ADR_SIZE_1MB; break; @@ -682,8 +669,8 @@ static int v3_get_dma_range_config(struct v3_pci *v3, dev_dbg(dev, "DMA MEM CPU: 0x%016llx -> 0x%016llx => " "PCI: 0x%016llx -> 0x%016llx base %08x map %08x\n", - range->cpu_addr, cpu_end, - range->pci_addr, pci_end, + cpu_addr, cpu_end, + pci_addr, pci_end, *pci_base, *pci_map); return 0; @@ -692,24 +679,16 @@ static int v3_get_dma_range_config(struct v3_pci *v3, static int v3_pci_parse_map_dma_ranges(struct v3_pci *v3, struct device_node *np) { - struct of_pci_range range; - struct of_pci_range_parser parser; + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(v3); struct device *dev = v3->dev; + struct resource_entry *entry; int i = 0; - if (of_pci_dma_range_parser_init(&parser, np)) { - dev_err(dev, "missing dma-ranges property\n"); - return -EINVAL; - } - - /* - * Get the dma-ranges from the device tree - */ - for_each_of_pci_range(&parser, &range) { + resource_list_for_each_entry(entry, &bridge->dma_ranges) { int ret; u32 pci_base, pci_map; - ret = v3_get_dma_range_config(v3, &range, &pci_base, &pci_map); + ret = v3_get_dma_range_config(v3, entry, &pci_base, &pci_map); if (ret) return ret; @@ -732,7 +711,6 @@ static int v3_pci_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct device_node *np = dev->of_node; - resource_size_t io_base; struct resource *regs; struct resource_entry *win; struct v3_pci *v3; @@ -741,7 +719,6 @@ static int v3_pci_probe(struct platform_device *pdev) u16 val; int irq; int ret; - LIST_HEAD(res); host = pci_alloc_host_bridge(sizeof(*v3)); if (!host) @@ -793,12 +770,8 @@ static int v3_pci_probe(struct platform_device *pdev) if (IS_ERR(v3->config_base)) return PTR_ERR(v3->config_base); - ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res, - &io_base); - if (ret) - return ret; - - ret = devm_request_pci_bus_resources(dev, &res); + ret = pci_parse_request_of_pci_ranges(dev, &host->windows, + &host->dma_ranges, NULL); if (ret) return ret; @@ -852,8 +825,8 @@ static int v3_pci_probe(struct platform_device *pdev) writew(val, v3->base + V3_PCI_CMD); /* Get the I/O and memory ranges from DT */ - resource_list_for_each_entry(win, &res) { - ret = v3_pci_setup_resource(v3, io_base, host, win); + resource_list_for_each_entry(win, &host->windows) { + ret = v3_pci_setup_resource(v3, host, win); 
if (ret) { dev_err(dev, "error setting up resources\n"); return ret; @@ -931,7 +904,6 @@ static int v3_pci_probe(struct platform_device *pdev) val |= V3_SYSTEM_M_LOCK; writew(val, v3->base + V3_SYSTEM); - list_splice_init(&res, &host->windows); ret = pci_scan_root_bus_bridge(host); if (ret) { dev_err(dev, "failed to register host: %d\n", ret); diff --git a/drivers/pci/controller/pci-versatile.c b/drivers/pci/controller/pci-versatile.c index f59ad2728c0b..b911359b6d81 100644 --- a/drivers/pci/controller/pci-versatile.c +++ b/drivers/pci/controller/pci-versatile.c @@ -62,65 +62,16 @@ static struct pci_ops pci_versatile_ops = { .write = pci_generic_config_write, }; -static int versatile_pci_parse_request_of_pci_ranges(struct device *dev, - struct list_head *res) -{ - int err, mem = 1, res_valid = 0; - resource_size_t iobase; - struct resource_entry *win, *tmp; - - err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, res, &iobase); - if (err) - return err; - - err = devm_request_pci_bus_resources(dev, res); - if (err) - goto out_release_res; - - resource_list_for_each_entry_safe(win, tmp, res) { - struct resource *res = win->res; - - switch (resource_type(res)) { - case IORESOURCE_IO: - err = devm_pci_remap_iospace(dev, res, iobase); - if (err) { - dev_warn(dev, "error %d: failed to map resource %pR\n", - err, res); - resource_list_destroy_entry(win); - } - break; - case IORESOURCE_MEM: - res_valid |= !(res->flags & IORESOURCE_PREFETCH); - - writel(res->start >> 28, PCI_IMAP(mem)); - writel(PHYS_OFFSET >> 28, PCI_SMAP(mem)); - mem++; - - break; - } - } - - if (res_valid) - return 0; - - dev_err(dev, "non-prefetchable memory resource required\n"); - err = -EINVAL; - -out_release_res: - pci_free_resource_list(res); - return err; -} - static int versatile_pci_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct resource *res; - int ret, i, myslot = -1; + struct resource_entry *entry; + int ret, i, myslot = -1, mem = 1; u32 val; void __iomem *local_pci_cfg_base; struct pci_bus *bus, *child; struct pci_host_bridge *bridge; - LIST_HEAD(pci_res); bridge = devm_pci_alloc_host_bridge(dev, 0); if (!bridge) @@ -141,10 +92,19 @@ static int versatile_pci_probe(struct platform_device *pdev) if (IS_ERR(versatile_cfg_base[1])) return PTR_ERR(versatile_cfg_base[1]); - ret = versatile_pci_parse_request_of_pci_ranges(dev, &pci_res); + ret = pci_parse_request_of_pci_ranges(dev, &bridge->windows, + NULL, NULL); if (ret) return ret; + resource_list_for_each_entry(entry, &bridge->windows) { + if (resource_type(entry->res) == IORESOURCE_MEM) { + writel(entry->res->start >> 28, PCI_IMAP(mem)); + writel(__pa(PAGE_OFFSET) >> 28, PCI_SMAP(mem)); + mem++; + } + } + /* * We need to discover the PCI core first to configure itself * before the main PCI probing is performed @@ -177,9 +137,9 @@ static int versatile_pci_probe(struct platform_device *pdev) /* * Configure the PCI inbound memory windows to be 1:1 mapped to SDRAM */ - writel(PHYS_OFFSET, local_pci_cfg_base + PCI_BASE_ADDRESS_0); - writel(PHYS_OFFSET, local_pci_cfg_base + PCI_BASE_ADDRESS_1); - writel(PHYS_OFFSET, local_pci_cfg_base + PCI_BASE_ADDRESS_2); + writel(__pa(PAGE_OFFSET), local_pci_cfg_base + PCI_BASE_ADDRESS_0); + writel(__pa(PAGE_OFFSET), local_pci_cfg_base + PCI_BASE_ADDRESS_1); + writel(__pa(PAGE_OFFSET), local_pci_cfg_base + PCI_BASE_ADDRESS_2); /* * For many years the kernel and QEMU were symbiotically buggy @@ -197,7 +157,6 @@ static int versatile_pci_probe(struct platform_device *pdev) 
pci_add_flags(PCI_ENABLE_PROC_DOMAINS); pci_add_flags(PCI_REASSIGN_ALL_BUS); - list_splice_init(&pci_res, &bridge->windows); bridge->dev.parent = dev; bridge->sysdata = NULL; bridge->busnr = 0; diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c index ffda3e8b4742..de195fd430dc 100644 --- a/drivers/pci/controller/pci-xgene.c +++ b/drivers/pci/controller/pci-xgene.c @@ -405,15 +405,13 @@ static void xgene_pcie_setup_cfg_reg(struct xgene_pcie_port *port) xgene_pcie_writel(port, CFGCTL, EN_REG); } -static int xgene_pcie_map_ranges(struct xgene_pcie_port *port, - struct list_head *res, - resource_size_t io_base) +static int xgene_pcie_map_ranges(struct xgene_pcie_port *port) { + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(port); struct resource_entry *window; struct device *dev = port->dev; - int ret; - resource_list_for_each_entry(window, res) { + resource_list_for_each_entry(window, &bridge->windows) { struct resource *res = window->res; u64 restype = resource_type(res); @@ -421,11 +419,9 @@ static int xgene_pcie_map_ranges(struct xgene_pcie_port *port, switch (restype) { case IORESOURCE_IO: - xgene_pcie_setup_ob_reg(port, res, OMR3BARL, io_base, + xgene_pcie_setup_ob_reg(port, res, OMR3BARL, + pci_pio_to_address(res->start), res->start - window->offset); - ret = devm_pci_remap_iospace(dev, res, io_base); - if (ret < 0) - return ret; break; case IORESOURCE_MEM: if (res->flags & IORESOURCE_PREFETCH) @@ -485,27 +481,28 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size) } static void xgene_pcie_setup_ib_reg(struct xgene_pcie_port *port, - struct of_pci_range *range, u8 *ib_reg_mask) + struct resource_entry *entry, + u8 *ib_reg_mask) { void __iomem *cfg_base = port->cfg_base; struct device *dev = port->dev; void *bar_addr; u32 pim_reg; - u64 cpu_addr = range->cpu_addr; - u64 pci_addr = range->pci_addr; - u64 size = range->size; + u64 cpu_addr = entry->res->start; + u64 pci_addr = cpu_addr - entry->offset; + u64 size = resource_size(entry->res); u64 mask = ~(size - 1) | EN_REG; u32 flags = PCI_BASE_ADDRESS_MEM_TYPE_64; u32 bar_low; int region; - region = xgene_pcie_select_ib_reg(ib_reg_mask, range->size); + region = xgene_pcie_select_ib_reg(ib_reg_mask, size); if (region < 0) { dev_warn(dev, "invalid pcie dma-range config\n"); return; } - if (range->flags & IORESOURCE_PREFETCH) + if (entry->res->flags & IORESOURCE_PREFETCH) flags |= PCI_BASE_ADDRESS_MEM_PREFETCH; bar_low = pcie_bar_low_val((u32)cpu_addr, flags); @@ -536,25 +533,13 @@ static void xgene_pcie_setup_ib_reg(struct xgene_pcie_port *port, static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie_port *port) { - struct device_node *np = port->node; - struct of_pci_range range; - struct of_pci_range_parser parser; - struct device *dev = port->dev; + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(port); + struct resource_entry *entry; u8 ib_reg_mask = 0; - if (of_pci_dma_range_parser_init(&parser, np)) { - dev_err(dev, "missing dma-ranges property\n"); - return -EINVAL; - } - - /* Get the dma-ranges from DT */ - for_each_of_pci_range(&parser, &range) { - u64 end = range.cpu_addr + range.size - 1; + resource_list_for_each_entry(entry, &bridge->dma_ranges) + xgene_pcie_setup_ib_reg(port, entry, &ib_reg_mask); - dev_dbg(dev, "0x%08x 0x%016llx..0x%016llx -> 0x%016llx\n", - range.flags, range.cpu_addr, end, range.pci_addr); - xgene_pcie_setup_ib_reg(port, &range, &ib_reg_mask); - } return 0; } @@ -567,8 +552,7 @@ static void xgene_pcie_clear_config(struct 
xgene_pcie_port *port) xgene_pcie_writel(port, i, 0); } -static int xgene_pcie_setup(struct xgene_pcie_port *port, struct list_head *res, - resource_size_t io_base) +static int xgene_pcie_setup(struct xgene_pcie_port *port) { struct device *dev = port->dev; u32 val, lanes = 0, speed = 0; @@ -580,7 +564,7 @@ static int xgene_pcie_setup(struct xgene_pcie_port *port, struct list_head *res, val = (XGENE_PCIE_DEVICEID << 16) | XGENE_PCIE_VENDORID; xgene_pcie_writel(port, BRIDGE_CFG_0, val); - ret = xgene_pcie_map_ranges(port, res, io_base); + ret = xgene_pcie_map_ranges(port); if (ret) return ret; @@ -607,11 +591,9 @@ static int xgene_pcie_probe(struct platform_device *pdev) struct device *dev = &pdev->dev; struct device_node *dn = dev->of_node; struct xgene_pcie_port *port; - resource_size_t iobase = 0; struct pci_bus *bus, *child; struct pci_host_bridge *bridge; int ret; - LIST_HEAD(res); bridge = devm_pci_alloc_host_bridge(dev, sizeof(*port)); if (!bridge) @@ -634,20 +616,15 @@ static int xgene_pcie_probe(struct platform_device *pdev) if (ret) return ret; - ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res, - &iobase); + ret = pci_parse_request_of_pci_ranges(dev, &bridge->windows, + &bridge->dma_ranges, NULL); if (ret) return ret; - ret = devm_request_pci_bus_resources(dev, &res); - if (ret) - goto error; - - ret = xgene_pcie_setup(port, &res, iobase); + ret = xgene_pcie_setup(port); if (ret) - goto error; + return ret; - list_splice_init(&res, &bridge->windows); bridge->dev.parent = dev; bridge->sysdata = port; bridge->busnr = 0; @@ -657,7 +634,7 @@ static int xgene_pcie_probe(struct platform_device *pdev) ret = pci_scan_root_bus_bridge(bridge); if (ret < 0) - goto error; + return ret; bus = bridge->bus; @@ -666,10 +643,6 @@ static int xgene_pcie_probe(struct platform_device *pdev) pcie_bus_configure_settings(child); pci_bus_add_devices(bus); return 0; - -error: - pci_free_resource_list(&res); - return ret; } static const struct of_device_id xgene_pcie_match_table[] = { diff --git a/drivers/pci/controller/pcie-altera.c b/drivers/pci/controller/pcie-altera.c index d2497ca43828..b447c3e4abad 100644 --- a/drivers/pci/controller/pcie-altera.c +++ b/drivers/pci/controller/pcie-altera.c @@ -92,7 +92,6 @@ struct altera_pcie { u8 root_bus_nr; struct irq_domain *irq_domain; struct resource bus_range; - struct list_head resources; const struct altera_pcie_data *pcie_data; }; @@ -670,39 +669,6 @@ static void altera_pcie_isr(struct irq_desc *desc) chained_irq_exit(chip, desc); } -static int altera_pcie_parse_request_of_pci_ranges(struct altera_pcie *pcie) -{ - int err, res_valid = 0; - struct device *dev = &pcie->pdev->dev; - struct resource_entry *win; - - err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, - &pcie->resources, NULL); - if (err) - return err; - - err = devm_request_pci_bus_resources(dev, &pcie->resources); - if (err) - goto out_release_res; - - resource_list_for_each_entry(win, &pcie->resources) { - struct resource *res = win->res; - - if (resource_type(res) == IORESOURCE_MEM) - res_valid |= !(res->flags & IORESOURCE_PREFETCH); - } - - if (res_valid) - return 0; - - dev_err(dev, "non-prefetchable memory resource required\n"); - err = -EINVAL; - -out_release_res: - pci_free_resource_list(&pcie->resources); - return err; -} - static int altera_pcie_init_irq_domain(struct altera_pcie *pcie) { struct device *dev = &pcie->pdev->dev; @@ -833,9 +799,8 @@ static int altera_pcie_probe(struct platform_device *pdev) return ret; } - INIT_LIST_HEAD(&pcie->resources); - - ret = 
altera_pcie_parse_request_of_pci_ranges(pcie); + ret = pci_parse_request_of_pci_ranges(dev, &bridge->windows, + &bridge->dma_ranges, NULL); if (ret) { dev_err(dev, "Failed add resources\n"); return ret; @@ -853,7 +818,6 @@ static int altera_pcie_probe(struct platform_device *pdev) cra_writel(pcie, P2A_INT_ENA_ALL, P2A_INT_ENABLE); altera_pcie_host_init(pcie); - list_splice_init(&pcie->resources, &bridge->windows); bridge->dev.parent = dev; bridge->sysdata = pcie; bridge->busnr = pcie->root_bus_nr; @@ -884,7 +848,6 @@ static int altera_pcie_remove(struct platform_device *pdev) pci_stop_root_bus(bridge->bus); pci_remove_root_bus(bridge->bus); - pci_free_resource_list(&pcie->resources); altera_pcie_irq_teardown(pcie); return 0; diff --git a/drivers/pci/controller/pcie-iproc-msi.c b/drivers/pci/controller/pcie-iproc-msi.c index 0a3f61be5625..3176ad3ab0e5 100644 --- a/drivers/pci/controller/pcie-iproc-msi.c +++ b/drivers/pci/controller/pcie-iproc-msi.c @@ -293,11 +293,12 @@ static const struct irq_domain_ops msi_domain_ops = { static inline u32 decode_msi_hwirq(struct iproc_msi *msi, u32 eq, u32 head) { - u32 *msg, hwirq; + u32 __iomem *msg; + u32 hwirq; unsigned int offs; offs = iproc_msi_eq_offset(msi, eq) + head * sizeof(u32); - msg = (u32 *)(msi->eq_cpu + offs); + msg = (u32 __iomem *)(msi->eq_cpu + offs); hwirq = readl(msg); hwirq = (hwirq >> 5) + (hwirq & 0x1f); diff --git a/drivers/pci/controller/pcie-iproc-platform.c b/drivers/pci/controller/pcie-iproc-platform.c index 9ee6200a66f4..ff0a81a632a1 100644 --- a/drivers/pci/controller/pcie-iproc-platform.c +++ b/drivers/pci/controller/pcie-iproc-platform.c @@ -43,8 +43,6 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev) struct iproc_pcie *pcie; struct device_node *np = dev->of_node; struct resource reg; - resource_size_t iobase = 0; - LIST_HEAD(resources); struct pci_host_bridge *bridge; int ret; @@ -97,8 +95,8 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev) if (IS_ERR(pcie->phy)) return PTR_ERR(pcie->phy); - ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &resources, - &iobase); + ret = pci_parse_request_of_pci_ranges(dev, &bridge->windows, + &bridge->dma_ranges, NULL); if (ret) { dev_err(dev, "unable to get PCI host bridge resources\n"); return ret; @@ -113,10 +111,9 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev) pcie->map_irq = of_irq_parse_and_map_pci; } - ret = iproc_pcie_setup(pcie, &resources); + ret = iproc_pcie_setup(pcie, &bridge->windows); if (ret) { dev_err(dev, "PCIe controller setup failed\n"); - pci_free_resource_list(&resources); return ret; } diff --git a/drivers/pci/controller/pcie-iproc.c b/drivers/pci/controller/pcie-iproc.c index 2d457bfdaf66..0a468c73bae3 100644 --- a/drivers/pci/controller/pcie-iproc.c +++ b/drivers/pci/controller/pcie-iproc.c @@ -1122,15 +1122,16 @@ static int iproc_pcie_ib_write(struct iproc_pcie *pcie, int region_idx, } static int iproc_pcie_setup_ib(struct iproc_pcie *pcie, - struct of_pci_range *range, + struct resource_entry *entry, enum iproc_pcie_ib_map_type type) { struct device *dev = pcie->dev; struct iproc_pcie_ib *ib = &pcie->ib; int ret; unsigned int region_idx, size_idx; - u64 axi_addr = range->cpu_addr, pci_addr = range->pci_addr; - resource_size_t size = range->size; + u64 axi_addr = entry->res->start; + u64 pci_addr = entry->res->start - entry->offset; + resource_size_t size = resource_size(entry->res); /* iterate through all IARR mapping regions */ for (region_idx = 0; region_idx < ib->nr_regions; region_idx++) { @@ 
-1182,67 +1183,46 @@ err_ib: return ret; } -static int iproc_pcie_add_dma_range(struct device *dev, - struct list_head *resources, - struct of_pci_range *range) +static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie) { - struct resource *res; - struct resource_entry *entry, *tmp; - struct list_head *head = resources; - - res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL); - if (!res) - return -ENOMEM; + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); + struct resource_entry *entry; + int ret = 0; - resource_list_for_each_entry(tmp, resources) { - if (tmp->res->start < range->cpu_addr) - head = &tmp->node; + resource_list_for_each_entry(entry, &host->dma_ranges) { + /* Each range entry corresponds to an inbound mapping region */ + ret = iproc_pcie_setup_ib(pcie, entry, IPROC_PCIE_IB_MAP_MEM); + if (ret) + break; } - res->start = range->cpu_addr; - res->end = res->start + range->size - 1; - - entry = resource_list_create_entry(res, 0); - if (!entry) - return -ENOMEM; - - entry->offset = res->start - range->cpu_addr; - resource_list_add(entry, head); - - return 0; + return ret; } -static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie) +static void iproc_pcie_invalidate_mapping(struct iproc_pcie *pcie) { - struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); - struct of_pci_range range; - struct of_pci_range_parser parser; - int ret; - LIST_HEAD(resources); + struct iproc_pcie_ib *ib = &pcie->ib; + struct iproc_pcie_ob *ob = &pcie->ob; + int idx; - /* Get the dma-ranges from DT */ - ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node); - if (ret) - return ret; + if (pcie->ep_is_internal) + return; - for_each_of_pci_range(&parser, &range) { - ret = iproc_pcie_add_dma_range(pcie->dev, - &resources, - &range); - if (ret) - goto out; - /* Each range entry corresponds to an inbound mapping region */ - ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM); - if (ret) - goto out; + if (pcie->need_ob_cfg) { + /* iterate through all OARR mapping regions */ + for (idx = ob->nr_windows - 1; idx >= 0; idx--) { + iproc_pcie_write_reg(pcie, + MAP_REG(IPROC_PCIE_OARR0, idx), 0); + } } - list_splice_init(&resources, &host->dma_ranges); - - return 0; -out: - pci_free_resource_list(&resources); - return ret; + if (pcie->need_ib_cfg) { + /* iterate through all IARR mapping regions */ + for (idx = 0; idx < ib->nr_regions; idx++) { + iproc_pcie_write_reg(pcie, + MAP_REG(IPROC_PCIE_IARR0, idx), 0); + } + } } static int iproce_pcie_get_msi(struct iproc_pcie *pcie, @@ -1276,13 +1256,16 @@ static int iproce_pcie_get_msi(struct iproc_pcie *pcie, static int iproc_pcie_paxb_v2_msi_steer(struct iproc_pcie *pcie, u64 msi_addr) { int ret; - struct of_pci_range range; + struct resource_entry entry; - memset(&range, 0, sizeof(range)); - range.size = SZ_32K; - range.pci_addr = range.cpu_addr = msi_addr & ~(range.size - 1); + memset(&entry, 0, sizeof(entry)); + entry.res = &entry.__res; - ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_IO); + msi_addr &= ~(SZ_32K - 1); + entry.res->start = msi_addr; + entry.res->end = msi_addr + SZ_32K - 1; + + ret = iproc_pcie_setup_ib(pcie, &entry, IPROC_PCIE_IB_MAP_IO); return ret; } @@ -1498,10 +1481,6 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res) return ret; } - ret = devm_request_pci_bus_resources(dev, res); - if (ret) - return ret; - ret = phy_init(pcie->phy); if (ret) { dev_err(dev, "unable to initialize PCIe PHY\n"); @@ -1517,6 +1496,8 @@ int iproc_pcie_setup(struct iproc_pcie 
*pcie, struct list_head *res) iproc_pcie_perst_ctrl(pcie, true); iproc_pcie_perst_ctrl(pcie, false); + iproc_pcie_invalidate_mapping(pcie); + if (pcie->need_ob_cfg) { ret = iproc_pcie_map_ranges(pcie, res); if (ret) { @@ -1543,7 +1524,6 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res) if (iproc_pcie_msi_enable(pcie)) dev_info(dev, "not using iProc MSI\n"); - list_splice_init(res, &host->windows); host->busnr = 0; host->dev.parent = dev; host->ops = &iproc_pcie_ops; diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c index 626a7c352dfd..cb982891b22b 100644 --- a/drivers/pci/controller/pcie-mediatek.c +++ b/drivers/pci/controller/pcie-mediatek.c @@ -216,7 +216,6 @@ struct mtk_pcie { void __iomem *base; struct clk *free_ck; - struct resource mem; struct list_head ports; const struct mtk_pcie_soc *soc; unsigned int busnr; @@ -661,11 +660,19 @@ static int mtk_pcie_setup_irq(struct mtk_pcie_port *port, static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port) { struct mtk_pcie *pcie = port->pcie; - struct resource *mem = &pcie->mem; + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); + struct resource *mem = NULL; + struct resource_entry *entry; const struct mtk_pcie_soc *soc = port->pcie->soc; u32 val; int err; + entry = resource_list_first_type(&host->windows, IORESOURCE_MEM); + if (entry) + mem = entry->res; + if (!mem) + return -EINVAL; + /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */ if (pcie->base) { val = readl(pcie->base + PCIE_SYS_CFG_V2); @@ -1023,39 +1030,15 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie) struct mtk_pcie_port *port, *tmp; struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); struct list_head *windows = &host->windows; - struct resource_entry *win, *tmp_win; - resource_size_t io_base; + struct resource *bus; int err; - err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, - windows, &io_base); + err = pci_parse_request_of_pci_ranges(dev, windows, + &host->dma_ranges, &bus); if (err) return err; - err = devm_request_pci_bus_resources(dev, windows); - if (err < 0) - return err; - - /* Get the I/O and memory ranges from DT */ - resource_list_for_each_entry_safe(win, tmp_win, windows) { - switch (resource_type(win->res)) { - case IORESOURCE_IO: - err = devm_pci_remap_iospace(dev, win->res, io_base); - if (err) { - dev_warn(dev, "error %d: failed to map resource %pR\n", - err, win->res); - resource_list_destroy_entry(win); - } - break; - case IORESOURCE_MEM: - memcpy(&pcie->mem, win->res, sizeof(*win->res)); - pcie->mem.name = "non-prefetchable"; - break; - case IORESOURCE_BUS: - pcie->busnr = win->res->start; - break; - } - } + pcie->busnr = bus->start; for_each_available_child_of_node(node, child) { int slot; diff --git a/drivers/pci/controller/pcie-mobiveil.c b/drivers/pci/controller/pcie-mobiveil.c index a45a6447b01d..3a696ca45bfa 100644 --- a/drivers/pci/controller/pcie-mobiveil.c +++ b/drivers/pci/controller/pcie-mobiveil.c @@ -140,7 +140,6 @@ struct mobiveil_msi { /* MSI information */ struct mobiveil_pcie { struct platform_device *pdev; - struct list_head resources; void __iomem *config_axi_slave_base; /* endpoint config base */ void __iomem *csr_axi_slave_base; /* root port config base */ void __iomem *apb_csr_base; /* MSI register base */ @@ -235,7 +234,7 @@ static int mobiveil_pcie_write(void __iomem *addr, int size, u32 val) return PCIBIOS_SUCCESSFUL; } -static u32 csr_read(struct mobiveil_pcie *pcie, u32 off, size_t size) +static u32 
mobiveil_csr_read(struct mobiveil_pcie *pcie, u32 off, size_t size) { void *addr; u32 val; @@ -250,7 +249,8 @@ static u32 csr_read(struct mobiveil_pcie *pcie, u32 off, size_t size) return val; } -static void csr_write(struct mobiveil_pcie *pcie, u32 val, u32 off, size_t size) +static void mobiveil_csr_write(struct mobiveil_pcie *pcie, u32 val, u32 off, + size_t size) { void *addr; int ret; @@ -262,19 +262,19 @@ static void csr_write(struct mobiveil_pcie *pcie, u32 val, u32 off, size_t size) dev_err(&pcie->pdev->dev, "write CSR address failed\n"); } -static u32 csr_readl(struct mobiveil_pcie *pcie, u32 off) +static u32 mobiveil_csr_readl(struct mobiveil_pcie *pcie, u32 off) { - return csr_read(pcie, off, 0x4); + return mobiveil_csr_read(pcie, off, 0x4); } -static void csr_writel(struct mobiveil_pcie *pcie, u32 val, u32 off) +static void mobiveil_csr_writel(struct mobiveil_pcie *pcie, u32 val, u32 off) { - csr_write(pcie, val, off, 0x4); + mobiveil_csr_write(pcie, val, off, 0x4); } static bool mobiveil_pcie_link_up(struct mobiveil_pcie *pcie) { - return (csr_readl(pcie, LTSSM_STATUS) & + return (mobiveil_csr_readl(pcie, LTSSM_STATUS) & LTSSM_STATUS_L0_MASK) == LTSSM_STATUS_L0; } @@ -323,7 +323,7 @@ static void __iomem *mobiveil_pcie_map_bus(struct pci_bus *bus, PCI_SLOT(devfn) << PAB_DEVICE_SHIFT | PCI_FUNC(devfn) << PAB_FUNCTION_SHIFT; - csr_writel(pcie, value, PAB_AXI_AMAP_PEX_WIN_L(WIN_NUM_0)); + mobiveil_csr_writel(pcie, value, PAB_AXI_AMAP_PEX_WIN_L(WIN_NUM_0)); return pcie->config_axi_slave_base + where; } @@ -353,13 +353,14 @@ static void mobiveil_pcie_isr(struct irq_desc *desc) chained_irq_enter(chip, desc); /* read INTx status */ - val = csr_readl(pcie, PAB_INTP_AMBA_MISC_STAT); - mask = csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB); + val = mobiveil_csr_readl(pcie, PAB_INTP_AMBA_MISC_STAT); + mask = mobiveil_csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB); intr_status = val & mask; /* Handle INTx */ if (intr_status & PAB_INTP_INTX_MASK) { - shifted_status = csr_readl(pcie, PAB_INTP_AMBA_MISC_STAT); + shifted_status = mobiveil_csr_readl(pcie, + PAB_INTP_AMBA_MISC_STAT); shifted_status &= PAB_INTP_INTX_MASK; shifted_status >>= PAB_INTX_START; do { @@ -373,12 +374,13 @@ static void mobiveil_pcie_isr(struct irq_desc *desc) bit); /* clear interrupt handled */ - csr_writel(pcie, 1 << (PAB_INTX_START + bit), - PAB_INTP_AMBA_MISC_STAT); + mobiveil_csr_writel(pcie, + 1 << (PAB_INTX_START + bit), + PAB_INTP_AMBA_MISC_STAT); } - shifted_status = csr_readl(pcie, - PAB_INTP_AMBA_MISC_STAT); + shifted_status = mobiveil_csr_readl(pcie, + PAB_INTP_AMBA_MISC_STAT); shifted_status &= PAB_INTP_INTX_MASK; shifted_status >>= PAB_INTX_START; } while (shifted_status != 0); @@ -413,7 +415,7 @@ static void mobiveil_pcie_isr(struct irq_desc *desc) } /* Clear the interrupt status */ - csr_writel(pcie, intr_status, PAB_INTP_AMBA_MISC_STAT); + mobiveil_csr_writel(pcie, intr_status, PAB_INTP_AMBA_MISC_STAT); chained_irq_exit(chip, desc); } @@ -474,24 +476,24 @@ static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num, return; } - value = csr_readl(pcie, PAB_PEX_AMAP_CTRL(win_num)); + value = mobiveil_csr_readl(pcie, PAB_PEX_AMAP_CTRL(win_num)); value &= ~(AMAP_CTRL_TYPE_MASK << AMAP_CTRL_TYPE_SHIFT | WIN_SIZE_MASK); value |= type << AMAP_CTRL_TYPE_SHIFT | 1 << AMAP_CTRL_EN_SHIFT | (lower_32_bits(size64) & WIN_SIZE_MASK); - csr_writel(pcie, value, PAB_PEX_AMAP_CTRL(win_num)); + mobiveil_csr_writel(pcie, value, PAB_PEX_AMAP_CTRL(win_num)); - csr_writel(pcie, upper_32_bits(size64), - 
PAB_EXT_PEX_AMAP_SIZEN(win_num)); + mobiveil_csr_writel(pcie, upper_32_bits(size64), + PAB_EXT_PEX_AMAP_SIZEN(win_num)); - csr_writel(pcie, lower_32_bits(cpu_addr), - PAB_PEX_AMAP_AXI_WIN(win_num)); - csr_writel(pcie, upper_32_bits(cpu_addr), - PAB_EXT_PEX_AMAP_AXI_WIN(win_num)); + mobiveil_csr_writel(pcie, lower_32_bits(cpu_addr), + PAB_PEX_AMAP_AXI_WIN(win_num)); + mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr), + PAB_EXT_PEX_AMAP_AXI_WIN(win_num)); - csr_writel(pcie, lower_32_bits(pci_addr), - PAB_PEX_AMAP_PEX_WIN_L(win_num)); - csr_writel(pcie, upper_32_bits(pci_addr), - PAB_PEX_AMAP_PEX_WIN_H(win_num)); + mobiveil_csr_writel(pcie, lower_32_bits(pci_addr), + PAB_PEX_AMAP_PEX_WIN_L(win_num)); + mobiveil_csr_writel(pcie, upper_32_bits(pci_addr), + PAB_PEX_AMAP_PEX_WIN_H(win_num)); pcie->ib_wins_configured++; } @@ -515,27 +517,29 @@ static void program_ob_windows(struct mobiveil_pcie *pcie, int win_num, * program Enable Bit to 1, Type Bit to (00) base 2, AXI Window Size Bit * to 4 KB in PAB_AXI_AMAP_CTRL register */ - value = csr_readl(pcie, PAB_AXI_AMAP_CTRL(win_num)); + value = mobiveil_csr_readl(pcie, PAB_AXI_AMAP_CTRL(win_num)); value &= ~(WIN_TYPE_MASK << WIN_TYPE_SHIFT | WIN_SIZE_MASK); value |= 1 << WIN_ENABLE_SHIFT | type << WIN_TYPE_SHIFT | (lower_32_bits(size64) & WIN_SIZE_MASK); - csr_writel(pcie, value, PAB_AXI_AMAP_CTRL(win_num)); + mobiveil_csr_writel(pcie, value, PAB_AXI_AMAP_CTRL(win_num)); - csr_writel(pcie, upper_32_bits(size64), PAB_EXT_AXI_AMAP_SIZE(win_num)); + mobiveil_csr_writel(pcie, upper_32_bits(size64), + PAB_EXT_AXI_AMAP_SIZE(win_num)); /* * program AXI window base with appropriate value in * PAB_AXI_AMAP_AXI_WIN0 register */ - csr_writel(pcie, lower_32_bits(cpu_addr) & (~AXI_WINDOW_ALIGN_MASK), - PAB_AXI_AMAP_AXI_WIN(win_num)); - csr_writel(pcie, upper_32_bits(cpu_addr), - PAB_EXT_AXI_AMAP_AXI_WIN(win_num)); + mobiveil_csr_writel(pcie, + lower_32_bits(cpu_addr) & (~AXI_WINDOW_ALIGN_MASK), + PAB_AXI_AMAP_AXI_WIN(win_num)); + mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr), + PAB_EXT_AXI_AMAP_AXI_WIN(win_num)); - csr_writel(pcie, lower_32_bits(pci_addr), - PAB_AXI_AMAP_PEX_WIN_L(win_num)); - csr_writel(pcie, upper_32_bits(pci_addr), - PAB_AXI_AMAP_PEX_WIN_H(win_num)); + mobiveil_csr_writel(pcie, lower_32_bits(pci_addr), + PAB_AXI_AMAP_PEX_WIN_L(win_num)); + mobiveil_csr_writel(pcie, upper_32_bits(pci_addr), + PAB_AXI_AMAP_PEX_WIN_H(win_num)); pcie->ob_wins_configured++; } @@ -575,46 +579,47 @@ static void mobiveil_pcie_enable_msi(struct mobiveil_pcie *pcie) static int mobiveil_host_init(struct mobiveil_pcie *pcie) { + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); u32 value, pab_ctrl, type; struct resource_entry *win; /* setup bus numbers */ - value = csr_readl(pcie, PCI_PRIMARY_BUS); + value = mobiveil_csr_readl(pcie, PCI_PRIMARY_BUS); value &= 0xff000000; value |= 0x00ff0100; - csr_writel(pcie, value, PCI_PRIMARY_BUS); + mobiveil_csr_writel(pcie, value, PCI_PRIMARY_BUS); /* * program Bus Master Enable Bit in Command Register in PAB Config * Space */ - value = csr_readl(pcie, PCI_COMMAND); + value = mobiveil_csr_readl(pcie, PCI_COMMAND); value |= PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER; - csr_writel(pcie, value, PCI_COMMAND); + mobiveil_csr_writel(pcie, value, PCI_COMMAND); /* * program PIO Enable Bit to 1 (and PEX PIO Enable to 1) in PAB_CTRL * register */ - pab_ctrl = csr_readl(pcie, PAB_CTRL); + pab_ctrl = mobiveil_csr_readl(pcie, PAB_CTRL); pab_ctrl |= (1 << AMBA_PIO_ENABLE_SHIFT) | (1 << PEX_PIO_ENABLE_SHIFT); - 
csr_writel(pcie, pab_ctrl, PAB_CTRL); + mobiveil_csr_writel(pcie, pab_ctrl, PAB_CTRL); - csr_writel(pcie, (PAB_INTP_INTX_MASK | PAB_INTP_MSI_MASK), - PAB_INTP_AMBA_MISC_ENB); + mobiveil_csr_writel(pcie, (PAB_INTP_INTX_MASK | PAB_INTP_MSI_MASK), + PAB_INTP_AMBA_MISC_ENB); /* * program PIO Enable Bit to 1 and Config Window Enable Bit to 1 in * PAB_AXI_PIO_CTRL Register */ - value = csr_readl(pcie, PAB_AXI_PIO_CTRL); + value = mobiveil_csr_readl(pcie, PAB_AXI_PIO_CTRL); value |= APIO_EN_MASK; - csr_writel(pcie, value, PAB_AXI_PIO_CTRL); + mobiveil_csr_writel(pcie, value, PAB_AXI_PIO_CTRL); /* Enable PCIe PIO master */ - value = csr_readl(pcie, PAB_PEX_PIO_CTRL); + value = mobiveil_csr_readl(pcie, PAB_PEX_PIO_CTRL); value |= 1 << PIO_ENABLE_SHIFT; - csr_writel(pcie, value, PAB_PEX_PIO_CTRL); + mobiveil_csr_writel(pcie, value, PAB_PEX_PIO_CTRL); /* * we'll program one outbound window for config reads and @@ -631,7 +636,7 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie) program_ib_windows(pcie, WIN_NUM_0, 0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE); /* Get the I/O and memory ranges from DT */ - resource_list_for_each_entry(win, &pcie->resources) { + resource_list_for_each_entry(win, &bridge->windows) { if (resource_type(win->res) == IORESOURCE_MEM) type = MEM_WINDOW_TYPE; else if (resource_type(win->res) == IORESOURCE_IO) @@ -647,10 +652,10 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie) } /* fixup for PCIe class register */ - value = csr_readl(pcie, PAB_INTP_AXI_PIO_CLASS); + value = mobiveil_csr_readl(pcie, PAB_INTP_AXI_PIO_CLASS); value &= 0xff; value |= (PCI_CLASS_BRIDGE_PCI << 16); - csr_writel(pcie, value, PAB_INTP_AXI_PIO_CLASS); + mobiveil_csr_writel(pcie, value, PAB_INTP_AXI_PIO_CLASS); /* setup MSI hardware registers */ mobiveil_pcie_enable_msi(pcie); @@ -668,9 +673,9 @@ static void mobiveil_mask_intx_irq(struct irq_data *data) pcie = irq_desc_get_chip_data(desc); mask = 1 << ((data->hwirq + PAB_INTX_START) - 1); raw_spin_lock_irqsave(&pcie->intx_mask_lock, flags); - shifted_val = csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB); + shifted_val = mobiveil_csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB); shifted_val &= ~mask; - csr_writel(pcie, shifted_val, PAB_INTP_AMBA_MISC_ENB); + mobiveil_csr_writel(pcie, shifted_val, PAB_INTP_AMBA_MISC_ENB); raw_spin_unlock_irqrestore(&pcie->intx_mask_lock, flags); } @@ -684,9 +689,9 @@ static void mobiveil_unmask_intx_irq(struct irq_data *data) pcie = irq_desc_get_chip_data(desc); mask = 1 << ((data->hwirq + PAB_INTX_START) - 1); raw_spin_lock_irqsave(&pcie->intx_mask_lock, flags); - shifted_val = csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB); + shifted_val = mobiveil_csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB); shifted_val |= mask; - csr_writel(pcie, shifted_val, PAB_INTP_AMBA_MISC_ENB); + mobiveil_csr_writel(pcie, shifted_val, PAB_INTP_AMBA_MISC_ENB); raw_spin_unlock_irqrestore(&pcie->intx_mask_lock, flags); } @@ -857,7 +862,6 @@ static int mobiveil_pcie_probe(struct platform_device *pdev) struct pci_bus *child; struct pci_host_bridge *bridge; struct device *dev = &pdev->dev; - resource_size_t iobase; int ret; /* allocate the PCIe port */ @@ -875,11 +879,9 @@ static int mobiveil_pcie_probe(struct platform_device *pdev) return ret; } - INIT_LIST_HEAD(&pcie->resources); - /* parse the host bridge base addresses from the device tree file */ - ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, - &pcie->resources, &iobase); + ret = pci_parse_request_of_pci_ranges(dev, &bridge->windows, + &bridge->dma_ranges, NULL); if (ret) { dev_err(dev, "Getting bridge 
resources failed\n"); return ret; @@ -892,24 +894,19 @@ static int mobiveil_pcie_probe(struct platform_device *pdev) ret = mobiveil_host_init(pcie); if (ret) { dev_err(dev, "Failed to initialize host\n"); - goto error; + return ret; } /* initialize the IRQ domains */ ret = mobiveil_pcie_init_irq_domain(pcie); if (ret) { dev_err(dev, "Failed creating IRQ Domain\n"); - goto error; + return ret; } irq_set_chained_handler_and_data(pcie->irq, mobiveil_pcie_isr, pcie); - ret = devm_request_pci_bus_resources(dev, &pcie->resources); - if (ret) - goto error; - /* Initialize bridge */ - list_splice_init(&pcie->resources, &bridge->windows); bridge->dev.parent = dev; bridge->sysdata = pcie; bridge->busnr = pcie->root_bus_nr; @@ -920,13 +917,13 @@ static int mobiveil_pcie_probe(struct platform_device *pdev) ret = mobiveil_bringup_link(pcie); if (ret) { dev_info(dev, "link bring-up failed\n"); - goto error; + return ret; } /* setup the kernel resources for the newly added PCIe root bus */ ret = pci_scan_root_bus_bridge(bridge); if (ret) - goto error; + return ret; bus = bridge->bus; @@ -936,9 +933,6 @@ static int mobiveil_pcie_probe(struct platform_device *pdev) pci_bus_add_devices(bus); return 0; -error: - pci_free_resource_list(&pcie->resources); - return ret; } static const struct of_device_id mobiveil_pcie_of_match[] = { diff --git a/drivers/pci/controller/pcie-rcar.c b/drivers/pci/controller/pcie-rcar.c index f6a669a9af41..759c6542c5c8 100644 --- a/drivers/pci/controller/pcie-rcar.c +++ b/drivers/pci/controller/pcie-rcar.c @@ -30,8 +30,6 @@ #include <linux/pm_runtime.h> #include <linux/slab.h> -#include "../pci.h" - #define PCIECAR 0x000010 #define PCIECCTLR 0x000018 #define CONFIG_SEND_ENABLE BIT(31) @@ -93,8 +91,11 @@ #define LINK_SPEED_2_5GTS (1 << 16) #define LINK_SPEED_5_0GTS (2 << 16) #define MACCTLR 0x011058 +#define MACCTLR_NFTS_MASK GENMASK(23, 16) /* The name is from SH7786 */ #define SPEED_CHANGE BIT(24) #define SCRAMBLE_DISABLE BIT(27) +#define LTSMDIS BIT(31) +#define MACCTLR_INIT_VAL (LTSMDIS | MACCTLR_NFTS_MASK) #define PMSR 0x01105c #define MACS2R 0x011078 #define MACCGSPSETR 0x011084 @@ -615,6 +616,8 @@ static int rcar_pcie_hw_init(struct rcar_pcie *pcie) if (IS_ENABLED(CONFIG_PCI_MSI)) rcar_pci_write_reg(pcie, 0x801f0000, PCIEMSITXR); + rcar_pci_write_reg(pcie, MACCTLR_INIT_VAL, MACCTLR); + /* Finish initialization - establish a PCI Express link */ rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR); @@ -1014,40 +1017,43 @@ err_irq1: } static int rcar_pcie_inbound_ranges(struct rcar_pcie *pcie, - struct of_pci_range *range, + struct resource_entry *entry, int *index) { - u64 restype = range->flags; - u64 cpu_addr = range->cpu_addr; - u64 cpu_end = range->cpu_addr + range->size; - u64 pci_addr = range->pci_addr; + u64 restype = entry->res->flags; + u64 cpu_addr = entry->res->start; + u64 cpu_end = entry->res->end; + u64 pci_addr = entry->res->start - entry->offset; u32 flags = LAM_64BIT | LAR_ENABLE; u64 mask; - u64 size; + u64 size = resource_size(entry->res); int idx = *index; if (restype & IORESOURCE_PREFETCH) flags |= LAM_PREFETCH; - /* - * If the size of the range is larger than the alignment of the start - * address, we have to use multiple entries to perform the mapping. 
- */ - if (cpu_addr > 0) { - unsigned long nr_zeros = __ffs64(cpu_addr); - u64 alignment = 1ULL << nr_zeros; + while (cpu_addr < cpu_end) { + if (idx >= MAX_NR_INBOUND_MAPS - 1) { + dev_err(pcie->dev, "Failed to map inbound regions!\n"); + return -EINVAL; + } + /* + * If the size of the range is larger than the alignment of + * the start address, we have to use multiple entries to + * perform the mapping. + */ + if (cpu_addr > 0) { + unsigned long nr_zeros = __ffs64(cpu_addr); + u64 alignment = 1ULL << nr_zeros; - size = min(range->size, alignment); - } else { - size = range->size; - } - /* Hardware supports max 4GiB inbound region */ - size = min(size, 1ULL << 32); + size = min(size, alignment); + } + /* Hardware supports max 4GiB inbound region */ + size = min(size, 1ULL << 32); - mask = roundup_pow_of_two(size) - 1; - mask &= ~0xf; + mask = roundup_pow_of_two(size) - 1; + mask &= ~0xf; - while (cpu_addr < cpu_end) { /* * Set up 64-bit inbound regions as the range parser doesn't * distinguish between 32 and 64-bit types. @@ -1067,41 +1073,25 @@ static int rcar_pcie_inbound_ranges(struct rcar_pcie *pcie, pci_addr += size; cpu_addr += size; idx += 2; - - if (idx > MAX_NR_INBOUND_MAPS) { - dev_err(pcie->dev, "Failed to map inbound regions!\n"); - return -EINVAL; - } } *index = idx; return 0; } -static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie *pcie, - struct device_node *np) +static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie *pcie) { - struct of_pci_range range; - struct of_pci_range_parser parser; - int index = 0; - int err; - - if (of_pci_dma_range_parser_init(&parser, np)) - return -EINVAL; - - /* Get the dma-ranges from DT */ - for_each_of_pci_range(&parser, &range) { - u64 end = range.cpu_addr + range.size - 1; - - dev_dbg(pcie->dev, "0x%08x 0x%016llx..0x%016llx -> 0x%016llx\n", - range.flags, range.cpu_addr, end, range.pci_addr); + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); + struct resource_entry *entry; + int index = 0, err = 0; - err = rcar_pcie_inbound_ranges(pcie, &range, &index); + resource_list_for_each_entry(entry, &bridge->dma_ranges) { + err = rcar_pcie_inbound_ranges(pcie, entry, &index); if (err) - return err; + break; } - return 0; + return err; } static const struct of_device_id rcar_pcie_of_match[] = { @@ -1138,7 +1128,8 @@ static int rcar_pcie_probe(struct platform_device *pdev) pcie->dev = dev; platform_set_drvdata(pdev, pcie); - err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, NULL); + err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, + &bridge->dma_ranges, NULL); if (err) goto err_free_bridge; @@ -1161,7 +1152,7 @@ static int rcar_pcie_probe(struct platform_device *pdev) goto err_unmap_msi_irqs; } - err = rcar_pcie_parse_map_dma_ranges(pcie, dev->of_node); + err = rcar_pcie_parse_map_dma_ranges(pcie); if (err) goto err_clk_disable; @@ -1237,6 +1228,7 @@ static int rcar_pcie_resume_noirq(struct device *dev) return 0; /* Re-establish the PCIe link */ + rcar_pci_write_reg(pcie, MACCTLR_INIT_VAL, MACCTLR); rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR); return rcar_pcie_wait_for_dl(pcie); } diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c index ef8e677ce9d1..d9b63bfa5dd7 100644 --- a/drivers/pci/controller/pcie-rockchip-host.c +++ b/drivers/pci/controller/pcie-rockchip-host.c @@ -620,19 +620,13 @@ static int rockchip_pcie_parse_host_dt(struct rockchip_pcie *rockchip) dev_info(dev, "no vpcie3v3 regulator found\n"); } - rockchip->vpcie1v8 = 
devm_regulator_get_optional(dev, "vpcie1v8"); - if (IS_ERR(rockchip->vpcie1v8)) { - if (PTR_ERR(rockchip->vpcie1v8) != -ENODEV) - return PTR_ERR(rockchip->vpcie1v8); - dev_info(dev, "no vpcie1v8 regulator found\n"); - } + rockchip->vpcie1v8 = devm_regulator_get(dev, "vpcie1v8"); + if (IS_ERR(rockchip->vpcie1v8)) + return PTR_ERR(rockchip->vpcie1v8); - rockchip->vpcie0v9 = devm_regulator_get_optional(dev, "vpcie0v9"); - if (IS_ERR(rockchip->vpcie0v9)) { - if (PTR_ERR(rockchip->vpcie0v9) != -ENODEV) - return PTR_ERR(rockchip->vpcie0v9); - dev_info(dev, "no vpcie0v9 regulator found\n"); - } + rockchip->vpcie0v9 = devm_regulator_get(dev, "vpcie0v9"); + if (IS_ERR(rockchip->vpcie0v9)) + return PTR_ERR(rockchip->vpcie0v9); return 0; } @@ -658,27 +652,22 @@ static int rockchip_pcie_set_vpcie(struct rockchip_pcie *rockchip) } } - if (!IS_ERR(rockchip->vpcie1v8)) { - err = regulator_enable(rockchip->vpcie1v8); - if (err) { - dev_err(dev, "fail to enable vpcie1v8 regulator\n"); - goto err_disable_3v3; - } + err = regulator_enable(rockchip->vpcie1v8); + if (err) { + dev_err(dev, "fail to enable vpcie1v8 regulator\n"); + goto err_disable_3v3; } - if (!IS_ERR(rockchip->vpcie0v9)) { - err = regulator_enable(rockchip->vpcie0v9); - if (err) { - dev_err(dev, "fail to enable vpcie0v9 regulator\n"); - goto err_disable_1v8; - } + err = regulator_enable(rockchip->vpcie0v9); + if (err) { + dev_err(dev, "fail to enable vpcie0v9 regulator\n"); + goto err_disable_1v8; } return 0; err_disable_1v8: - if (!IS_ERR(rockchip->vpcie1v8)) - regulator_disable(rockchip->vpcie1v8); + regulator_disable(rockchip->vpcie1v8); err_disable_3v3: if (!IS_ERR(rockchip->vpcie3v3)) regulator_disable(rockchip->vpcie3v3); @@ -806,19 +795,28 @@ static int rockchip_pcie_prog_ib_atu(struct rockchip_pcie *rockchip, static int rockchip_pcie_cfg_atu(struct rockchip_pcie *rockchip) { struct device *dev = rockchip->dev; + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rockchip); + struct resource_entry *entry; + u64 pci_addr, size; int offset; int err; int reg_no; rockchip_pcie_cfg_configuration_accesses(rockchip, AXI_WRAPPER_TYPE0_CFG); + entry = resource_list_first_type(&bridge->windows, IORESOURCE_MEM); + if (!entry) + return -ENODEV; + + size = resource_size(entry->res); + pci_addr = entry->res->start - entry->offset; + rockchip->msg_bus_addr = pci_addr; - for (reg_no = 0; reg_no < (rockchip->mem_size >> 20); reg_no++) { + for (reg_no = 0; reg_no < (size >> 20); reg_no++) { err = rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1, AXI_WRAPPER_MEM_WRITE, 20 - 1, - rockchip->mem_bus_addr + - (reg_no << 20), + pci_addr + (reg_no << 20), 0); if (err) { dev_err(dev, "program RC mem outbound ATU failed\n"); @@ -832,14 +830,20 @@ static int rockchip_pcie_cfg_atu(struct rockchip_pcie *rockchip) return err; } - offset = rockchip->mem_size >> 20; - for (reg_no = 0; reg_no < (rockchip->io_size >> 20); reg_no++) { + entry = resource_list_first_type(&bridge->windows, IORESOURCE_IO); + if (!entry) + return -ENODEV; + + size = resource_size(entry->res); + pci_addr = entry->res->start - entry->offset; + + offset = size >> 20; + for (reg_no = 0; reg_no < (size >> 20); reg_no++) { err = rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1 + offset, AXI_WRAPPER_IO_WRITE, 20 - 1, - rockchip->io_bus_addr + - (reg_no << 20), + pci_addr + (reg_no << 20), 0); if (err) { dev_err(dev, "program RC io outbound ATU failed\n"); @@ -852,8 +856,7 @@ static int rockchip_pcie_cfg_atu(struct rockchip_pcie *rockchip) AXI_WRAPPER_NOR_MSG, 20 - 1, 0, 0); - 
rockchip->msg_bus_addr = rockchip->mem_bus_addr + - ((reg_no + offset) << 20); + rockchip->msg_bus_addr += ((reg_no + offset) << 20); return err; } @@ -897,8 +900,7 @@ static int __maybe_unused rockchip_pcie_suspend_noirq(struct device *dev) rockchip_pcie_disable_clocks(rockchip); - if (!IS_ERR(rockchip->vpcie0v9)) - regulator_disable(rockchip->vpcie0v9); + regulator_disable(rockchip->vpcie0v9); return ret; } @@ -908,12 +910,10 @@ static int __maybe_unused rockchip_pcie_resume_noirq(struct device *dev) struct rockchip_pcie *rockchip = dev_get_drvdata(dev); int err; - if (!IS_ERR(rockchip->vpcie0v9)) { - err = regulator_enable(rockchip->vpcie0v9); - if (err) { - dev_err(dev, "fail to enable vpcie0v9 regulator\n"); - return err; - } + err = regulator_enable(rockchip->vpcie0v9); + if (err) { + dev_err(dev, "fail to enable vpcie0v9 regulator\n"); + return err; } err = rockchip_pcie_enable_clocks(rockchip); @@ -939,8 +939,7 @@ err_err_deinit_port: err_pcie_resume: rockchip_pcie_disable_clocks(rockchip); err_disable_0v9: - if (!IS_ERR(rockchip->vpcie0v9)) - regulator_disable(rockchip->vpcie0v9); + regulator_disable(rockchip->vpcie0v9); return err; } @@ -950,14 +949,9 @@ static int rockchip_pcie_probe(struct platform_device *pdev) struct device *dev = &pdev->dev; struct pci_bus *bus, *child; struct pci_host_bridge *bridge; - struct resource_entry *win; - resource_size_t io_base; - struct resource *mem; - struct resource *io; + struct resource *bus_res; int err; - LIST_HEAD(res); - if (!dev->of_node) return -ENODEV; @@ -995,56 +989,23 @@ static int rockchip_pcie_probe(struct platform_device *pdev) if (err < 0) goto err_deinit_port; - err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, - &res, &io_base); + err = pci_parse_request_of_pci_ranges(dev, &bridge->windows, + &bridge->dma_ranges, &bus_res); if (err) goto err_remove_irq_domain; - err = devm_request_pci_bus_resources(dev, &res); - if (err) - goto err_free_res; - - /* Get the I/O and memory ranges from DT */ - resource_list_for_each_entry(win, &res) { - switch (resource_type(win->res)) { - case IORESOURCE_IO: - io = win->res; - io->name = "I/O"; - rockchip->io_size = resource_size(io); - rockchip->io_bus_addr = io->start - win->offset; - err = pci_remap_iospace(io, io_base); - if (err) { - dev_warn(dev, "error %d: failed to map resource %pR\n", - err, io); - continue; - } - rockchip->io = io; - break; - case IORESOURCE_MEM: - mem = win->res; - mem->name = "MEM"; - rockchip->mem_size = resource_size(mem); - rockchip->mem_bus_addr = mem->start - win->offset; - break; - case IORESOURCE_BUS: - rockchip->root_bus_nr = win->res->start; - break; - default: - continue; - } - } + rockchip->root_bus_nr = bus_res->start; err = rockchip_pcie_cfg_atu(rockchip); if (err) - goto err_unmap_iospace; + goto err_remove_irq_domain; rockchip->msg_region = devm_ioremap(dev, rockchip->msg_bus_addr, SZ_1M); if (!rockchip->msg_region) { err = -ENOMEM; - goto err_unmap_iospace; + goto err_remove_irq_domain; } - list_splice_init(&res, &bridge->windows); bridge->dev.parent = dev; bridge->sysdata = rockchip; bridge->busnr = 0; @@ -1054,7 +1015,7 @@ static int rockchip_pcie_probe(struct platform_device *pdev) err = pci_scan_root_bus_bridge(bridge); if (err < 0) - goto err_unmap_iospace; + goto err_remove_irq_domain; bus = bridge->bus; @@ -1068,10 +1029,6 @@ static int rockchip_pcie_probe(struct platform_device *pdev) pci_bus_add_devices(bus); return 0; -err_unmap_iospace: - pci_unmap_iospace(rockchip->io); -err_free_res: - pci_free_resource_list(&res); 
err_remove_irq_domain: irq_domain_remove(rockchip->irq_domain); err_deinit_port: @@ -1081,10 +1038,8 @@ err_vpcie: regulator_disable(rockchip->vpcie12v); if (!IS_ERR(rockchip->vpcie3v3)) regulator_disable(rockchip->vpcie3v3); - if (!IS_ERR(rockchip->vpcie1v8)) - regulator_disable(rockchip->vpcie1v8); - if (!IS_ERR(rockchip->vpcie0v9)) - regulator_disable(rockchip->vpcie0v9); + regulator_disable(rockchip->vpcie1v8); + regulator_disable(rockchip->vpcie0v9); err_set_vpcie: rockchip_pcie_disable_clocks(rockchip); return err; @@ -1097,7 +1052,6 @@ static int rockchip_pcie_remove(struct platform_device *pdev) pci_stop_root_bus(rockchip->root_bus); pci_remove_root_bus(rockchip->root_bus); - pci_unmap_iospace(rockchip->io); irq_domain_remove(rockchip->irq_domain); rockchip_pcie_deinit_phys(rockchip); @@ -1108,10 +1062,8 @@ static int rockchip_pcie_remove(struct platform_device *pdev) regulator_disable(rockchip->vpcie12v); if (!IS_ERR(rockchip->vpcie3v3)) regulator_disable(rockchip->vpcie3v3); - if (!IS_ERR(rockchip->vpcie1v8)) - regulator_disable(rockchip->vpcie1v8); - if (!IS_ERR(rockchip->vpcie0v9)) - regulator_disable(rockchip->vpcie0v9); + regulator_disable(rockchip->vpcie1v8); + regulator_disable(rockchip->vpcie0v9); return 0; } diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h index 8e87a059ce73..d90dfb354573 100644 --- a/drivers/pci/controller/pcie-rockchip.h +++ b/drivers/pci/controller/pcie-rockchip.h @@ -1,4 +1,4 @@ -// SPDX-License-Identifier: GPL-2.0+ +/* SPDX-License-Identifier: GPL-2.0+ */ /* * Rockchip AXI PCIe controller driver * @@ -304,13 +304,8 @@ struct rockchip_pcie { struct irq_domain *irq_domain; int offset; struct pci_bus *root_bus; - struct resource *io; - phys_addr_t io_bus_addr; - u32 io_size; void __iomem *msg_region; - u32 mem_size; phys_addr_t msg_bus_addr; - phys_addr_t mem_bus_addr; bool is_rc; struct resource *mem_res; }; diff --git a/drivers/pci/controller/pcie-xilinx-nwl.c b/drivers/pci/controller/pcie-xilinx-nwl.c index 45c0f344ccd1..9bd1427f2fd6 100644 --- a/drivers/pci/controller/pcie-xilinx-nwl.c +++ b/drivers/pci/controller/pcie-xilinx-nwl.c @@ -821,8 +821,6 @@ static int nwl_pcie_probe(struct platform_device *pdev) struct pci_bus *child; struct pci_host_bridge *bridge; int err; - resource_size_t iobase = 0; - LIST_HEAD(res); bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie)); if (!bridge) @@ -845,24 +843,19 @@ static int nwl_pcie_probe(struct platform_device *pdev) return err; } - err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res, - &iobase); + err = pci_parse_request_of_pci_ranges(dev, &bridge->windows, + &bridge->dma_ranges, NULL); if (err) { dev_err(dev, "Getting bridge resources failed\n"); return err; } - err = devm_request_pci_bus_resources(dev, &res); - if (err) - goto error; - err = nwl_pcie_init_irq_domain(pcie); if (err) { dev_err(dev, "Failed creating IRQ Domain\n"); - goto error; + return err; } - list_splice_init(&res, &bridge->windows); bridge->dev.parent = dev; bridge->sysdata = pcie; bridge->busnr = pcie->root_busno; @@ -874,13 +867,13 @@ static int nwl_pcie_probe(struct platform_device *pdev) err = nwl_pcie_enable_msi(pcie); if (err < 0) { dev_err(dev, "failed to enable MSI support: %d\n", err); - goto error; + return err; } } err = pci_scan_root_bus_bridge(bridge); if (err) - goto error; + return err; bus = bridge->bus; @@ -889,10 +882,6 @@ static int nwl_pcie_probe(struct platform_device *pdev) pcie_bus_configure_settings(child); pci_bus_add_devices(bus); return 0; - -error: - 
pci_free_resource_list(&res); - return err; } static struct platform_driver nwl_pcie_driver = { diff --git a/drivers/pci/controller/pcie-xilinx.c b/drivers/pci/controller/pcie-xilinx.c index 5bf3af3b28e6..98e55297815b 100644 --- a/drivers/pci/controller/pcie-xilinx.c +++ b/drivers/pci/controller/pcie-xilinx.c @@ -619,8 +619,6 @@ static int xilinx_pcie_probe(struct platform_device *pdev) struct pci_bus *bus, *child; struct pci_host_bridge *bridge; int err; - resource_size_t iobase = 0; - LIST_HEAD(res); if (!dev->of_node) return -ENODEV; @@ -647,19 +645,13 @@ static int xilinx_pcie_probe(struct platform_device *pdev) return err; } - err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res, - &iobase); + err = pci_parse_request_of_pci_ranges(dev, &bridge->windows, + &bridge->dma_ranges, NULL); if (err) { dev_err(dev, "Getting bridge resources failed\n"); return err; } - err = devm_request_pci_bus_resources(dev, &res); - if (err) - goto error; - - - list_splice_init(&res, &bridge->windows); bridge->dev.parent = dev; bridge->sysdata = port; bridge->busnr = 0; @@ -673,7 +665,7 @@ static int xilinx_pcie_probe(struct platform_device *pdev) #endif err = pci_scan_root_bus_bridge(bridge); if (err < 0) - goto error; + return err; bus = bridge->bus; @@ -682,10 +674,6 @@ static int xilinx_pcie_probe(struct platform_device *pdev) pcie_bus_configure_settings(child); pci_bus_add_devices(bus); return 0; - -error: - pci_free_resource_list(&res); - return err; } static const struct of_device_id xilinx_pcie_of_match[] = { diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c index a35d3f3996d7..212842263f55 100644 --- a/drivers/pci/controller/vmd.c +++ b/drivers/pci/controller/vmd.c @@ -602,16 +602,30 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) /* * Certain VMD devices may have a root port configuration option which - * limits the bus range to between 0-127 or 128-255 + * limits the bus range to between 0-127, 128-255, or 224-255 */ if (features & VMD_FEAT_HAS_BUS_RESTRICTIONS) { - u32 vmcap, vmconfig; - - pci_read_config_dword(vmd->dev, PCI_REG_VMCAP, &vmcap); - pci_read_config_dword(vmd->dev, PCI_REG_VMCONFIG, &vmconfig); - if (BUS_RESTRICT_CAP(vmcap) && - (BUS_RESTRICT_CFG(vmconfig) == 0x1)) - vmd->busn_start = 128; + u16 reg16; + + pci_read_config_word(vmd->dev, PCI_REG_VMCAP, &reg16); + if (BUS_RESTRICT_CAP(reg16)) { + pci_read_config_word(vmd->dev, PCI_REG_VMCONFIG, + &reg16); + + switch (BUS_RESTRICT_CFG(reg16)) { + case 1: + vmd->busn_start = 128; + break; + case 2: + vmd->busn_start = 224; + break; + case 3: + pci_err(vmd->dev, "Unknown Bus Offset Setting\n"); + return -ENODEV; + default: + break; + } + } } res = &vmd->dev->resource[VMD_CFGBAR]; @@ -823,7 +837,7 @@ static int vmd_suspend(struct device *dev) int i; for (i = 0; i < vmd->msix_count; i++) - devm_free_irq(dev, pci_irq_vector(pdev, i), &vmd->irqs[i]); + devm_free_irq(dev, pci_irq_vector(pdev, i), &vmd->irqs[i]); pci_save_state(pdev); return 0; @@ -854,6 +868,8 @@ static const struct pci_device_id vmd_ids[] = { {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0), .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW | VMD_FEAT_HAS_BUS_RESTRICTIONS,}, + {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B), + .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,}, {0,} }; MODULE_DEVICE_TABLE(pci, vmd_ids); diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c index 1cfe3687a211..5d74f81ddfe4 100644 ---
a/drivers/pci/endpoint/functions/pci-epf-test.c +++ b/drivers/pci/endpoint/functions/pci-epf-test.c @@ -44,7 +44,7 @@ static struct workqueue_struct *kpcitest_workqueue; struct pci_epf_test { - void *reg[6]; + void *reg[PCI_STD_NUM_BARS]; struct pci_epf *epf; enum pci_barno test_reg_bar; struct delayed_work cmd_handler; @@ -377,7 +377,7 @@ static void pci_epf_test_unbind(struct pci_epf *epf) cancel_delayed_work(&epf_test->cmd_handler); pci_epc_stop(epc); - for (bar = BAR_0; bar <= BAR_5; bar++) { + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { epf_bar = &epf->bar[bar]; if (epf_test->reg[bar]) { @@ -400,7 +400,7 @@ static int pci_epf_test_set_bar(struct pci_epf *epf) epc_features = epf_test->epc_features; - for (bar = BAR_0; bar <= BAR_5; bar += add) { + for (bar = 0; bar < PCI_STD_NUM_BARS; bar += add) { epf_bar = &epf->bar[bar]; /* * pci_epc_set_bar() sets PCI_BASE_ADDRESS_MEM_TYPE_64 @@ -450,7 +450,7 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf) } epf_test->reg[test_reg_bar] = base; - for (bar = BAR_0; bar <= BAR_5; bar += add) { + for (bar = 0; bar < PCI_STD_NUM_BARS; bar += add) { epf_bar = &epf->bar[bar]; add = (epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64) ? 2 : 1; @@ -478,7 +478,7 @@ static void pci_epf_configure_bar(struct pci_epf *epf, bool bar_fixed_64bit; int i; - for (i = BAR_0; i <= BAR_5; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { epf_bar = &epf->bar[i]; bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i)); if (bar_fixed_64bit) diff --git a/drivers/pci/endpoint/pci-epc-mem.c b/drivers/pci/endpoint/pci-epc-mem.c index 2bf8bd1f0563..d2b174ce15de 100644 --- a/drivers/pci/endpoint/pci-epc-mem.c +++ b/drivers/pci/endpoint/pci-epc-mem.c @@ -134,7 +134,7 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc, if (pageno < 0) return NULL; - *phys_addr = mem->phys_base + (pageno << page_shift); + *phys_addr = mem->phys_base + ((phys_addr_t)pageno << page_shift); virt_addr = ioremap(*phys_addr, size); if (!virt_addr) bitmap_release_region(mem->bitmap, pageno, order); diff --git a/drivers/pci/hotplug/Kconfig b/drivers/pci/hotplug/Kconfig index e7b493c22bf3..32455a79372d 100644 --- a/drivers/pci/hotplug/Kconfig +++ b/drivers/pci/hotplug/Kconfig @@ -83,7 +83,7 @@ config HOTPLUG_PCI_CPCI_ZT5550 depends on HOTPLUG_PCI_CPCI && X86 help Say Y here if you have an Performance Technologies (formerly Intel, - formerly just Ziatech) Ziatech ZT5550 CompactPCI system card. + formerly just Ziatech) Ziatech ZT5550 CompactPCI system card. To compile this driver as a module, choose M here: the module will be called cpcihp_zt5550. 
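[Editor's note] The pci_epc_mem_alloc_addr() change in the pci-epc-mem.c hunk above is a classic integer-width fix: pageno is an int, so pageno << page_shift is evaluated in 32-bit arithmetic and can wrap before the result is widened to phys_addr_t; casting first keeps the whole shift in the wider type. A minimal user-space sketch of the same pitfall, assuming a 4 KiB page size (illustrative only, not the kernel code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int pageno = 0x100000;	/* page index (an int in the kernel code) */
	unsigned int page_shift = 12;	/* 4 KiB pages */
	uint64_t phys_base = 0x80000000ULL;

	/* Buggy form: the shift wraps in 32-bit arithmetic before widening. */
	uint64_t bad = phys_base + (pageno << page_shift);

	/* Fixed form: widen first, as the patch does with (phys_addr_t). */
	uint64_t good = phys_base + ((uint64_t)pageno << page_shift);

	printf("bad  = 0x%llx\n", (unsigned long long)bad);
	printf("good = 0x%llx\n", (unsigned long long)good);
	return 0;
}

With 4 KiB pages, any page index of 0x100000 or above lands past 4 GiB, which is exactly the range the cast protects.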
diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c index e4c46637f32f..b3869951c0eb 100644 --- a/drivers/pci/hotplug/acpiphp_glue.c +++ b/drivers/pci/hotplug/acpiphp_glue.c @@ -449,8 +449,15 @@ static void acpiphp_native_scan_bridge(struct pci_dev *bridge) /* Scan non-hotplug bridges that need to be reconfigured */ for_each_pci_bridge(dev, bus) { - if (!hotplug_is_native(dev)) - max = pci_scan_bridge(bus, dev, max, 1); + if (hotplug_is_native(dev)) + continue; + + max = pci_scan_bridge(bus, dev, max, 1); + if (dev->subordinate) { + pcibios_resource_survey_bus(dev->subordinate); + pci_bus_size_bridges(dev->subordinate); + pci_bus_assign_resources(dev->subordinate); + } } } @@ -480,7 +487,6 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge) if (PCI_SLOT(dev->devfn) == slot->device) acpiphp_native_scan_bridge(dev); } - pci_assign_unassigned_bridge_resources(bus->self); } else { LIST_HEAD(add_list); int max, pass; diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h index 654c972b8ea0..aa61d4c219d7 100644 --- a/drivers/pci/hotplug/pciehp.h +++ b/drivers/pci/hotplug/pciehp.h @@ -72,6 +72,7 @@ extern int pciehp_poll_time; * @reset_lock: prevents access to the Data Link Layer Link Active bit in the * Link Status register and to the Presence Detect State bit in the Slot * Status register during a slot reset which may cause them to flap + * @ist_running: flag to keep user request waiting while IRQ thread is running * @request_result: result of last user request submitted to the IRQ thread * @requester: wait queue to wake up on completion of user request, * used for synchronous slot enable/disable request via sysfs @@ -101,6 +102,7 @@ struct controller { struct hotplug_slot hotplug_slot; /* hotplug core interface */ struct rw_semaphore reset_lock; + unsigned int ist_running; int request_result; wait_queue_head_t requester; }; @@ -172,10 +174,10 @@ void pciehp_set_indicators(struct controller *ctrl, int pwr, int attn); void pciehp_get_latch_status(struct controller *ctrl, u8 *status); int pciehp_query_power_fault(struct controller *ctrl); -bool pciehp_card_present(struct controller *ctrl); -bool pciehp_card_present_or_link_active(struct controller *ctrl); +int pciehp_card_present(struct controller *ctrl); +int pciehp_card_present_or_link_active(struct controller *ctrl); int pciehp_check_link_status(struct controller *ctrl); -bool pciehp_check_link_active(struct controller *ctrl); +int pciehp_check_link_active(struct controller *ctrl); void pciehp_release_ctrl(struct controller *ctrl); int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot); diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c index b3122c151b80..312cc45c44c7 100644 --- a/drivers/pci/hotplug/pciehp_core.c +++ b/drivers/pci/hotplug/pciehp_core.c @@ -139,10 +139,15 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) { struct controller *ctrl = to_ctrl(hotplug_slot); struct pci_dev *pdev = ctrl->pcie->port; + int ret; pci_config_pm_runtime_get(pdev); - *value = pciehp_card_present_or_link_active(ctrl); + ret = pciehp_card_present_or_link_active(ctrl); pci_config_pm_runtime_put(pdev); + if (ret < 0) + return ret; + + *value = ret; return 0; } @@ -158,13 +163,13 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) */ static void pciehp_check_presence(struct controller *ctrl) { - bool occupied; + int occupied; down_read(&ctrl->reset_lock); mutex_lock(&ctrl->state_lock); occupied = 
pciehp_card_present_or_link_active(ctrl); - if ((occupied && (ctrl->state == OFF_STATE || + if ((occupied > 0 && (ctrl->state == OFF_STATE || ctrl->state == BLINKINGON_STATE)) || (!occupied && (ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE))) @@ -253,7 +258,7 @@ static bool pme_is_native(struct pcie_device *dev) return pcie_ports_native || host->native_pme; } -static int pciehp_suspend(struct pcie_device *dev) +static void pciehp_disable_interrupt(struct pcie_device *dev) { /* * Disable hotplug interrupt so that it does not trigger @@ -261,7 +266,19 @@ static int pciehp_suspend(struct pcie_device *dev) */ if (pme_is_native(dev)) pcie_disable_interrupt(get_service_data(dev)); +} + +#ifdef CONFIG_PM_SLEEP +static int pciehp_suspend(struct pcie_device *dev) +{ + /* + * If the port is already runtime suspended we can keep it that + * way. + */ + if (dev_pm_smart_suspend_and_suspended(&dev->port->dev)) + return 0; + pciehp_disable_interrupt(dev); return 0; } @@ -279,6 +296,7 @@ static int pciehp_resume_noirq(struct pcie_device *dev) return 0; } +#endif static int pciehp_resume(struct pcie_device *dev) { @@ -292,6 +310,12 @@ static int pciehp_resume(struct pcie_device *dev) return 0; } +static int pciehp_runtime_suspend(struct pcie_device *dev) +{ + pciehp_disable_interrupt(dev); + return 0; +} + static int pciehp_runtime_resume(struct pcie_device *dev) { struct controller *ctrl = get_service_data(dev); @@ -318,10 +342,12 @@ static struct pcie_port_service_driver hpdriver_portdrv = { .remove = pciehp_remove, #ifdef CONFIG_PM +#ifdef CONFIG_PM_SLEEP .suspend = pciehp_suspend, .resume_noirq = pciehp_resume_noirq, .resume = pciehp_resume, - .runtime_suspend = pciehp_suspend, +#endif + .runtime_suspend = pciehp_runtime_suspend, .runtime_resume = pciehp_runtime_resume, #endif /* PM */ }; diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c index 21af7b16d7a4..6503d15effbb 100644 --- a/drivers/pci/hotplug/pciehp_ctrl.c +++ b/drivers/pci/hotplug/pciehp_ctrl.c @@ -226,7 +226,7 @@ void pciehp_handle_disable_request(struct controller *ctrl) void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events) { - bool present, link_active; + int present, link_active; /* * If the slot is on and presence or link has changed, turn it off. 
@@ -257,7 +257,7 @@ void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events) mutex_lock(&ctrl->state_lock); present = pciehp_card_present(ctrl); link_active = pciehp_check_link_active(ctrl); - if (!present && !link_active) { + if (present <= 0 && link_active <= 0) { mutex_unlock(&ctrl->state_lock); return; } @@ -375,7 +375,8 @@ int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot) ctrl->request_result = -ENODEV; pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC); wait_event(ctrl->requester, - !atomic_read(&ctrl->pending_events)); + !atomic_read(&ctrl->pending_events) && + !ctrl->ist_running); return ctrl->request_result; case POWERON_STATE: ctrl_info(ctrl, "Slot(%s): Already in powering on state\n", @@ -408,7 +409,8 @@ int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot) mutex_unlock(&ctrl->state_lock); pciehp_request(ctrl, DISABLE_SLOT); wait_event(ctrl->requester, - !atomic_read(&ctrl->pending_events)); + !atomic_read(&ctrl->pending_events) && + !ctrl->ist_running); return ctrl->request_result; case POWEROFF_STATE: ctrl_info(ctrl, "Slot(%s): Already in powering off state\n", diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c index 1a522c1c4177..8a2cb1764386 100644 --- a/drivers/pci/hotplug/pciehp_hpc.c +++ b/drivers/pci/hotplug/pciehp_hpc.c @@ -68,7 +68,7 @@ static int pcie_poll_cmd(struct controller *ctrl, int timeout) struct pci_dev *pdev = ctrl_dev(ctrl); u16 slot_status; - while (true) { + do { pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status); if (slot_status == (u16) ~0) { ctrl_info(ctrl, "%s: no response from device\n", @@ -81,11 +81,9 @@ static int pcie_poll_cmd(struct controller *ctrl, int timeout) PCI_EXP_SLTSTA_CC); return 1; } - if (timeout < 0) - break; msleep(10); timeout -= 10; - } + } while (timeout >= 0); return 0; /* timeout */ } @@ -201,17 +199,29 @@ static void pcie_write_cmd_nowait(struct controller *ctrl, u16 cmd, u16 mask) pcie_do_write_cmd(ctrl, cmd, mask, false); } -bool pciehp_check_link_active(struct controller *ctrl) +/** + * pciehp_check_link_active() - Is the link active + * @ctrl: PCIe hotplug controller + * + * Check whether the downstream link is currently active. Note it is + * possible that the card is removed immediately after this so the + * caller may need to take it into account. + * + * If the hotplug controller itself is not available anymore returns + * %-ENODEV. + */ +int pciehp_check_link_active(struct controller *ctrl) { struct pci_dev *pdev = ctrl_dev(ctrl); u16 lnk_status; - bool ret; + int ret; - pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); - ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA); + ret = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); + if (ret == PCIBIOS_DEVICE_NOT_FOUND || lnk_status == (u16)~0) + return -ENODEV; - if (ret) - ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status); + ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA); + ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status); return ret; } @@ -373,13 +383,29 @@ void pciehp_get_latch_status(struct controller *ctrl, u8 *status) *status = !!(slot_status & PCI_EXP_SLTSTA_MRLSS); } -bool pciehp_card_present(struct controller *ctrl) +/** + * pciehp_card_present() - Is the card present + * @ctrl: PCIe hotplug controller + * + * Function checks whether the card is currently present in the slot and + * in that case returns true. Note it is possible that the card is + * removed immediately after the check so the caller may need to take + * this into account. 
+ * + * If the hotplug controller itself is not available anymore returns + * %-ENODEV. + */ +int pciehp_card_present(struct controller *ctrl) { struct pci_dev *pdev = ctrl_dev(ctrl); u16 slot_status; + int ret; - pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status); - return slot_status & PCI_EXP_SLTSTA_PDS; + ret = pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status); + if (ret == PCIBIOS_DEVICE_NOT_FOUND || slot_status == (u16)~0) + return -ENODEV; + + return !!(slot_status & PCI_EXP_SLTSTA_PDS); } /** @@ -390,10 +416,19 @@ bool pciehp_card_present(struct controller *ctrl) * Presence Detect State bit, this helper also returns true if the Link Active * bit is set. This is a concession to broken hotplug ports which hardwire * Presence Detect State to zero, such as Wilocity's [1ae9:0200]. + * + * Returns: %1 if the slot is occupied and %0 if it is not. If the hotplug + * port is not present anymore returns %-ENODEV. + */ -bool pciehp_card_present_or_link_active(struct controller *ctrl) +int pciehp_card_present_or_link_active(struct controller *ctrl) { - return pciehp_card_present(ctrl) || pciehp_check_link_active(ctrl); + int ret; + + ret = pciehp_card_present(ctrl); + if (ret) + return ret; + + return pciehp_check_link_active(ctrl); } int pciehp_query_power_fault(struct controller *ctrl) @@ -583,6 +618,7 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id) irqreturn_t ret; u32 events; + ctrl->ist_running = true; pci_config_pm_runtime_get(pdev); /* rerun pciehp_isr() if the port was inaccessible on interrupt */ @@ -629,6 +665,7 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id) up_read(&ctrl->reset_lock); pci_config_pm_runtime_put(pdev); + ctrl->ist_running = false; wake_up(&ctrl->requester); return IRQ_HANDLED; } diff --git a/drivers/pci/hotplug/rpaphp_core.c b/drivers/pci/hotplug/rpaphp_core.c index 951f7f216fb3..e408e4021cee 100644 --- a/drivers/pci/hotplug/rpaphp_core.c +++ b/drivers/pci/hotplug/rpaphp_core.c @@ -185,8 +185,8 @@ static int get_children_props(struct device_node *dn, const __be32 **drc_indexes /* Verify the existence of 'drc_name' and/or 'drc_type' within the - * current node. First obtain it's my-drc-index property. Next, - * obtain the DRC info from it's parent. Use the my-drc-index for + * current node. First obtain its my-drc-index property. Next, + * obtain the DRC info from its parent. Use the my-drc-index for * correlation, and obtain/validate the requested properties. 
*/ diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c index b3f972e8cfed..1e88fd427757 100644 --- a/drivers/pci/iov.c +++ b/drivers/pci/iov.c @@ -9,7 +9,6 @@ #include <linux/pci.h> #include <linux/slab.h> -#include <linux/mutex.h> #include <linux/export.h> #include <linux/string.h> #include <linux/delay.h> @@ -254,8 +253,14 @@ static ssize_t sriov_numvfs_show(struct device *dev, char *buf) { struct pci_dev *pdev = to_pci_dev(dev); + u16 num_vfs; + + /* Serialize vs sriov_numvfs_store() so readers see valid num_VFs */ + device_lock(&pdev->dev); + num_vfs = pdev->sriov->num_VFs; + device_unlock(&pdev->dev); - return sprintf(buf, "%u\n", pdev->sriov->num_VFs); + return sprintf(buf, "%u\n", num_vfs); } /* diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c index 0884bedcfc7a..c7709e49f0e4 100644 --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -213,12 +213,13 @@ u32 __pci_msix_desc_mask_irq(struct msi_desc *desc, u32 flag) if (pci_msi_ignore_mask) return 0; + desc_addr = pci_msix_desc_addr(desc); if (!desc_addr) return 0; mask_bits &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT; - if (flag) + if (flag & PCI_MSIX_ENTRY_CTRL_MASKBIT) mask_bits |= PCI_MSIX_ENTRY_CTRL_MASKBIT; writel(mask_bits, desc_addr + PCI_MSIX_ENTRY_VECTOR_CTRL); @@ -861,7 +862,7 @@ static int pci_msi_supported(struct pci_dev *dev, int nvec) if (!pci_msi_enable) return 0; - if (!dev || dev->no_msi || dev->current_state != PCI_D0) + if (!dev || dev->no_msi) return 0; /* @@ -972,7 +973,7 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nr_entries; int i, j; - if (!pci_msi_supported(dev, nvec)) + if (!pci_msi_supported(dev, nvec) || dev->current_state != PCI_D0) return -EINVAL; nr_entries = pci_msix_vec_count(dev); @@ -1058,7 +1059,7 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec, int nvec; int rc; - if (!pci_msi_supported(dev, minvec)) + if (!pci_msi_supported(dev, minvec) || dev->current_state != PCI_D0) return -EINVAL; /* Check whether driver already requested MSI-X IRQs */ @@ -1315,22 +1316,6 @@ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr) } EXPORT_SYMBOL(pci_irq_get_affinity); -/** - * pci_irq_get_node - return the NUMA node of a particular MSI vector - * @pdev: PCI device to operate on - * @vec: device-relative interrupt vector index (0-based). - */ -int pci_irq_get_node(struct pci_dev *pdev, int vec) -{ - const struct cpumask *mask; - - mask = pci_irq_get_affinity(pdev, vec); - if (mask) - return local_memory_node(cpu_to_node(cpumask_first(mask))); - return dev_to_node(&pdev->dev); -} -EXPORT_SYMBOL(pci_irq_get_node); - struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc) { return to_pci_dev(desc->dev); diff --git a/drivers/pci/of.c b/drivers/pci/of.c index 36891e7deee3..81ceeaa6f1d5 100644 --- a/drivers/pci/of.c +++ b/drivers/pci/of.c @@ -236,7 +236,6 @@ void of_pci_check_probe_only(void) } EXPORT_SYMBOL_GPL(of_pci_check_probe_only); -#if defined(CONFIG_OF_ADDRESS) /** * devm_of_pci_get_host_bridge_resources() - Resource-managed parsing of PCI * host bridge resources from DT @@ -255,16 +254,18 @@ EXPORT_SYMBOL_GPL(of_pci_check_probe_only); * It returns zero if the range parsing has been successful or a standard error * value if it failed. 
*/ -int devm_of_pci_get_host_bridge_resources(struct device *dev, +static int devm_of_pci_get_host_bridge_resources(struct device *dev, unsigned char busno, unsigned char bus_max, - struct list_head *resources, resource_size_t *io_base) + struct list_head *resources, + struct list_head *ib_resources, + resource_size_t *io_base) { struct device_node *dev_node = dev->of_node; struct resource *res, tmp_res; struct resource *bus_range; struct of_pci_range range; struct of_pci_range_parser parser; - char range_type[4]; + const char *range_type; int err; if (io_base) @@ -298,12 +299,12 @@ int devm_of_pci_get_host_bridge_resources(struct device *dev, for_each_of_pci_range(&parser, &range) { /* Read next ranges element */ if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO) - snprintf(range_type, 4, " IO"); + range_type = "IO"; else if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_MEM) - snprintf(range_type, 4, "MEM"); + range_type = "MEM"; else - snprintf(range_type, 4, "err"); - dev_info(dev, " %s %#010llx..%#010llx -> %#010llx\n", + range_type = "err"; + dev_info(dev, " %6s %#012llx..%#012llx -> %#012llx\n", range_type, range.cpu_addr, range.cpu_addr + range.size - 1, range.pci_addr); @@ -340,14 +341,54 @@ int devm_of_pci_get_host_bridge_resources(struct device *dev, pci_add_resource_offset(resources, res, res->start - range.pci_addr); } + /* Check for dma-ranges property */ + if (!ib_resources) + return 0; + err = of_pci_dma_range_parser_init(&parser, dev_node); + if (err) + return 0; + + dev_dbg(dev, "Parsing dma-ranges property...\n"); + for_each_of_pci_range(&parser, &range) { + struct resource_entry *entry; + /* + * If we failed translation or got a zero-sized region + * then skip this range + */ + if (((range.flags & IORESOURCE_TYPE_BITS) != IORESOURCE_MEM) || + range.cpu_addr == OF_BAD_ADDR || range.size == 0) + continue; + + dev_info(dev, " %6s %#012llx..%#012llx -> %#012llx\n", + "IB MEM", range.cpu_addr, + range.cpu_addr + range.size - 1, range.pci_addr); + + + err = of_pci_range_to_resource(&range, dev_node, &tmp_res); + if (err) + continue; + + res = devm_kmemdup(dev, &tmp_res, sizeof(tmp_res), GFP_KERNEL); + if (!res) { + err = -ENOMEM; + goto failed; + } + + /* Keep the resource list sorted */ + resource_list_for_each_entry(entry, ib_resources) + if (entry->res->start > res->start) + break; + + pci_add_resource_offset(&entry->node, res, + res->start - range.pci_addr); + } + return 0; failed: pci_free_resource_list(resources); return err; } -EXPORT_SYMBOL_GPL(devm_of_pci_get_host_bridge_resources); -#endif /* CONFIG_OF_ADDRESS */ #if IS_ENABLED(CONFIG_OF_IRQ) /** @@ -482,6 +523,7 @@ EXPORT_SYMBOL_GPL(of_irq_parse_and_map_pci); int pci_parse_request_of_pci_ranges(struct device *dev, struct list_head *resources, + struct list_head *ib_resources, struct resource **bus_range) { int err, res_valid = 0; @@ -489,8 +531,10 @@ int pci_parse_request_of_pci_ranges(struct device *dev, struct resource_entry *win, *tmp; INIT_LIST_HEAD(resources); + if (ib_resources) + INIT_LIST_HEAD(ib_resources); err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, resources, - &iobase); + ib_resources, &iobase); if (err) return err; @@ -530,6 +574,7 @@ int pci_parse_request_of_pci_ranges(struct device *dev, pci_free_resource_list(resources); return err; } +EXPORT_SYMBOL_GPL(pci_parse_request_of_pci_ranges); #endif /* CONFIG_PCI */ diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c index 5fd90105510d..fffa77093c08 100644 --- a/drivers/pci/pci-bridge-emul.c +++ 
b/drivers/pci/pci-bridge-emul.c @@ -270,10 +270,10 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { int pci_bridge_emul_init(struct pci_bridge_emul *bridge, unsigned int flags) { - bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16; + bridge->conf.class_revision |= cpu_to_le32(PCI_CLASS_BRIDGE_PCI << 16); bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE; bridge->conf.cache_line_size = 0x10; - bridge->conf.status = PCI_STATUS_CAP_LIST; + bridge->conf.status = cpu_to_le16(PCI_STATUS_CAP_LIST); bridge->pci_regs_behavior = kmemdup(pci_regs_behavior, sizeof(pci_regs_behavior), GFP_KERNEL); @@ -284,8 +284,9 @@ int pci_bridge_emul_init(struct pci_bridge_emul *bridge, bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START; bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP; /* Set PCIe v2, root port, slot support */ - bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 | - PCI_EXP_FLAGS_SLOT; + bridge->pcie_conf.cap = + cpu_to_le16(PCI_EXP_TYPE_ROOT_PORT << 4 | 2 | + PCI_EXP_FLAGS_SLOT); bridge->pcie_cap_regs_behavior = kmemdup(pcie_cap_regs_behavior, sizeof(pcie_cap_regs_behavior), @@ -327,7 +328,7 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where, int reg = where & ~3; pci_bridge_emul_read_status_t (*read_op)(struct pci_bridge_emul *bridge, int reg, u32 *value); - u32 *cfgspace; + __le32 *cfgspace; const struct pci_bridge_reg_behavior *behavior; if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END) { @@ -343,11 +344,11 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where, if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) { reg -= PCI_CAP_PCIE_START; read_op = bridge->ops->read_pcie; - cfgspace = (u32 *) &bridge->pcie_conf; + cfgspace = (__le32 *) &bridge->pcie_conf; behavior = bridge->pcie_cap_regs_behavior; } else { read_op = bridge->ops->read_base; - cfgspace = (u32 *) &bridge->conf; + cfgspace = (__le32 *) &bridge->conf; behavior = bridge->pci_regs_behavior; } @@ -357,7 +358,7 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where, ret = PCI_BRIDGE_EMUL_NOT_HANDLED; if (ret == PCI_BRIDGE_EMUL_NOT_HANDLED) - *value = cfgspace[reg / 4]; + *value = le32_to_cpu(cfgspace[reg / 4]); /* * Make sure we never return any reserved bit with a value @@ -387,7 +388,7 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where, int mask, ret, old, new, shift; void (*write_op)(struct pci_bridge_emul *bridge, int reg, u32 old, u32 new, u32 mask); - u32 *cfgspace; + __le32 *cfgspace; const struct pci_bridge_reg_behavior *behavior; if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END) @@ -414,11 +415,11 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where, if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) { reg -= PCI_CAP_PCIE_START; write_op = bridge->ops->write_pcie; - cfgspace = (u32 *) &bridge->pcie_conf; + cfgspace = (__le32 *) &bridge->pcie_conf; behavior = bridge->pcie_cap_regs_behavior; } else { write_op = bridge->ops->write_base; - cfgspace = (u32 *) &bridge->conf; + cfgspace = (__le32 *) &bridge->conf; behavior = bridge->pci_regs_behavior; } @@ -431,7 +432,7 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where, /* Clear the W1C bits */ new &= ~((value << shift) & (behavior[reg / 4].w1c & mask)); - cfgspace[reg / 4] = new; + cfgspace[reg / 4] = cpu_to_le32(new); if (write_op) write_op(bridge, reg, old, new, mask); diff --git a/drivers/pci/pci-bridge-emul.h b/drivers/pci/pci-bridge-emul.h index e65b1b79899d..b31883022a8e 100644 --- 
a/drivers/pci/pci-bridge-emul.h +++ b/drivers/pci/pci-bridge-emul.h @@ -6,65 +6,65 @@ /* PCI configuration space of a PCI-to-PCI bridge. */ struct pci_bridge_emul_conf { - u16 vendor; - u16 device; - u16 command; - u16 status; - u32 class_revision; + __le16 vendor; + __le16 device; + __le16 command; + __le16 status; + __le32 class_revision; u8 cache_line_size; u8 latency_timer; u8 header_type; u8 bist; - u32 bar[2]; + __le32 bar[2]; u8 primary_bus; u8 secondary_bus; u8 subordinate_bus; u8 secondary_latency_timer; u8 iobase; u8 iolimit; - u16 secondary_status; - u16 membase; - u16 memlimit; - u16 pref_mem_base; - u16 pref_mem_limit; - u32 prefbaseupper; - u32 preflimitupper; - u16 iobaseupper; - u16 iolimitupper; + __le16 secondary_status; + __le16 membase; + __le16 memlimit; + __le16 pref_mem_base; + __le16 pref_mem_limit; + __le32 prefbaseupper; + __le32 preflimitupper; + __le16 iobaseupper; + __le16 iolimitupper; u8 capabilities_pointer; u8 reserve[3]; - u32 romaddr; + __le32 romaddr; u8 intline; u8 intpin; - u16 bridgectrl; + __le16 bridgectrl; }; /* PCI configuration space of the PCIe capabilities */ struct pci_bridge_emul_pcie_conf { u8 cap_id; u8 next; - u16 cap; - u32 devcap; - u16 devctl; - u16 devsta; - u32 lnkcap; - u16 lnkctl; - u16 lnksta; - u32 slotcap; - u16 slotctl; - u16 slotsta; - u16 rootctl; - u16 rsvd; - u32 rootsta; - u32 devcap2; - u16 devctl2; - u16 devsta2; - u32 lnkcap2; - u16 lnkctl2; - u16 lnksta2; - u32 slotcap2; - u16 slotctl2; - u16 slotsta2; + __le16 cap; + __le32 devcap; + __le16 devctl; + __le16 devsta; + __le32 lnkcap; + __le16 lnkctl; + __le16 lnksta; + __le32 slotcap; + __le16 slotctl; + __le16 slotsta; + __le16 rootctl; + __le16 rsvd; + __le32 rootsta; + __le32 devcap2; + __le16 devctl2; + __le16 devsta2; + __le32 lnkcap2; + __le16 lnkctl2; + __le16 lnksta2; + __le32 slotcap2; + __le16 slotctl2; + __le16 slotsta2; }; struct pci_bridge_emul; diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c index a8124e47bf6e..0454ca0e4e3f 100644 --- a/drivers/pci/pci-driver.c +++ b/drivers/pci/pci-driver.c @@ -315,7 +315,8 @@ static long local_pci_probe(void *_ddi) * Probe function should return < 0 for failure, 0 for success * Treat values > 0 as success, but warn. 
*/ - dev_warn(dev, "Driver probe function unexpectedly returned %d\n", rc); + pci_warn(pci_dev, "Driver probe function unexpectedly returned %d\n", + rc); return 0; } @@ -517,6 +518,12 @@ static int pci_restore_standard_config(struct pci_dev *pci_dev) return 0; } +static void pci_pm_default_resume(struct pci_dev *pci_dev) +{ + pci_fixup_device(pci_fixup_resume, pci_dev); + pci_enable_wake(pci_dev, PCI_D0, false); +} + #endif #ifdef CONFIG_PM_SLEEP @@ -524,6 +531,7 @@ static int pci_restore_standard_config(struct pci_dev *pci_dev) static void pci_pm_default_resume_early(struct pci_dev *pci_dev) { pci_power_up(pci_dev); + pci_update_current_state(pci_dev, PCI_D0); pci_restore_state(pci_dev); pci_pme_restore(pci_dev); } @@ -578,9 +586,9 @@ static int pci_legacy_suspend(struct device *dev, pm_message_t state) if (!pci_dev->state_saved && pci_dev->current_state != PCI_D0 && pci_dev->current_state != PCI_UNKNOWN) { - WARN_ONCE(pci_dev->current_state != prev, - "PCI PM: Device state not saved by %pS\n", - drv->suspend); + pci_WARN_ONCE(pci_dev, pci_dev->current_state != prev, + "PCI PM: Device state not saved by %pS\n", + drv->suspend); } } @@ -592,46 +600,17 @@ static int pci_legacy_suspend(struct device *dev, pm_message_t state) static int pci_legacy_suspend_late(struct device *dev, pm_message_t state) { struct pci_dev *pci_dev = to_pci_dev(dev); - struct pci_driver *drv = pci_dev->driver; - - if (drv && drv->suspend_late) { - pci_power_t prev = pci_dev->current_state; - int error; - - error = drv->suspend_late(pci_dev, state); - suspend_report_result(drv->suspend_late, error); - if (error) - return error; - - if (!pci_dev->state_saved && pci_dev->current_state != PCI_D0 - && pci_dev->current_state != PCI_UNKNOWN) { - WARN_ONCE(pci_dev->current_state != prev, - "PCI PM: Device state not saved by %pS\n", - drv->suspend_late); - goto Fixup; - } - } if (!pci_dev->state_saved) pci_save_state(pci_dev); pci_pm_set_unknown_state(pci_dev); -Fixup: pci_fixup_device(pci_fixup_suspend_late, pci_dev); return 0; } -static int pci_legacy_resume_early(struct device *dev) -{ - struct pci_dev *pci_dev = to_pci_dev(dev); - struct pci_driver *drv = pci_dev->driver; - - return drv && drv->resume_early ? - drv->resume_early(pci_dev) : 0; -} - static int pci_legacy_resume(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); @@ -645,12 +624,6 @@ static int pci_legacy_resume(struct device *dev) /* Auxiliary functions used by the new power management framework */ -static void pci_pm_default_resume(struct pci_dev *pci_dev) -{ - pci_fixup_device(pci_fixup_resume, pci_dev); - pci_enable_wake(pci_dev, PCI_D0, false); -} - static void pci_pm_default_suspend(struct pci_dev *pci_dev) { /* Disable non-bridge devices without PM support */ @@ -661,16 +634,15 @@ static void pci_pm_default_suspend(struct pci_dev *pci_dev) static bool pci_has_legacy_pm_support(struct pci_dev *pci_dev) { struct pci_driver *drv = pci_dev->driver; - bool ret = drv && (drv->suspend || drv->suspend_late || drv->resume - || drv->resume_early); + bool ret = drv && (drv->suspend || drv->resume); /* * Legacy PM support is used by default, so warn if the new framework is * supported as well. Drivers are supposed to support either the * former, or the latter, but not both at the same time. 
*/ - WARN(ret && drv->driver.pm, "driver %s device %04x:%04x\n", - drv->name, pci_dev->vendor, pci_dev->device); + pci_WARN(pci_dev, ret && drv->driver.pm, "device %04x:%04x\n", + pci_dev->vendor, pci_dev->device); return ret; } @@ -679,11 +651,11 @@ static bool pci_has_legacy_pm_support(struct pci_dev *pci_dev) static int pci_pm_prepare(struct device *dev) { - struct device_driver *drv = dev->driver; struct pci_dev *pci_dev = to_pci_dev(dev); + const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - if (drv && drv->pm && drv->pm->prepare) { - int error = drv->pm->prepare(dev); + if (pm && pm->prepare) { + int error = pm->prepare(dev); if (error < 0) return error; @@ -793,9 +765,9 @@ static int pci_pm_suspend(struct device *dev) if (!pci_dev->state_saved && pci_dev->current_state != PCI_D0 && pci_dev->current_state != PCI_UNKNOWN) { - WARN_ONCE(pci_dev->current_state != prev, - "PCI PM: State of device not saved by %pS\n", - pm->suspend); + pci_WARN_ONCE(pci_dev, pci_dev->current_state != prev, + "PCI PM: State of device not saved by %pS\n", + pm->suspend); } } @@ -841,9 +813,9 @@ static int pci_pm_suspend_noirq(struct device *dev) if (!pci_dev->state_saved && pci_dev->current_state != PCI_D0 && pci_dev->current_state != PCI_UNKNOWN) { - WARN_ONCE(pci_dev->current_state != prev, - "PCI PM: State of device not saved by %pS\n", - pm->suspend_noirq); + pci_WARN_ONCE(pci_dev, pci_dev->current_state != prev, + "PCI PM: State of device not saved by %pS\n", + pm->suspend_noirq); goto Fixup; } } @@ -865,7 +837,7 @@ static int pci_pm_suspend_noirq(struct device *dev) pci_prepare_to_sleep(pci_dev); } - dev_dbg(dev, "PCI PM: Suspend power state: %s\n", + pci_dbg(pci_dev, "PCI PM: Suspend power state: %s\n", pci_power_name(pci_dev->current_state)); if (pci_dev->current_state == PCI_D0) { @@ -880,7 +852,7 @@ static int pci_pm_suspend_noirq(struct device *dev) } if (pci_dev->skip_bus_pm && pm_suspend_no_platform()) { - dev_dbg(dev, "PCI PM: Skipped\n"); + pci_dbg(pci_dev, "PCI PM: Skipped\n"); goto Fixup; } @@ -917,8 +889,9 @@ Fixup: static int pci_pm_resume_noirq(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); - struct device_driver *drv = dev->driver; - int error = 0; + const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; + pci_power_t prev_state = pci_dev->current_state; + bool skip_bus_pm = pci_dev->skip_bus_pm; if (dev_pm_may_skip_resume(dev)) return 0; @@ -937,27 +910,28 @@ static int pci_pm_resume_noirq(struct device *dev) * configuration here and attempting to put them into D0 again is * pointless, so avoid doing that. */ - if (!(pci_dev->skip_bus_pm && pm_suspend_no_platform())) + if (!(skip_bus_pm && pm_suspend_no_platform())) pci_pm_default_resume_early(pci_dev); pci_fixup_device(pci_fixup_resume_early, pci_dev); + pcie_pme_root_status_cleanup(pci_dev); - if (pci_has_legacy_pm_support(pci_dev)) - return pci_legacy_resume_early(dev); + if (!skip_bus_pm && prev_state == PCI_D3cold) + pci_bridge_wait_for_secondary_bus(pci_dev); - pcie_pme_root_status_cleanup(pci_dev); + if (pci_has_legacy_pm_support(pci_dev)) + return 0; - if (drv && drv->pm && drv->pm->resume_noirq) - error = drv->pm->resume_noirq(dev); + if (pm && pm->resume_noirq) + return pm->resume_noirq(dev); - return error; + return 0; } static int pci_pm_resume(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); const struct dev_pm_ops *pm = dev->driver ? 
dev->driver->pm : NULL; - int error = 0; /* * This is necessary for the suspend error path in which resume is @@ -973,12 +947,12 @@ static int pci_pm_resume(struct device *dev) if (pm) { if (pm->resume) - error = pm->resume(dev); + return pm->resume(dev); } else { pci_pm_reenable_device(pci_dev); } - return error; + return 0; } #else /* !CONFIG_SUSPEND */ @@ -993,7 +967,6 @@ static int pci_pm_resume(struct device *dev) #ifdef CONFIG_HIBERNATE_CALLBACKS - /* * pcibios_pm_ops - provide arch-specific hooks when a PCI device is doing * a hibernate transition @@ -1039,16 +1012,16 @@ static int pci_pm_freeze(struct device *dev) static int pci_pm_freeze_noirq(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); - struct device_driver *drv = dev->driver; + const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; if (pci_has_legacy_pm_support(pci_dev)) return pci_legacy_suspend_late(dev, PMSG_FREEZE); - if (drv && drv->pm && drv->pm->freeze_noirq) { + if (pm && pm->freeze_noirq) { int error; - error = drv->pm->freeze_noirq(dev); - suspend_report_result(drv->pm->freeze_noirq, error); + error = pm->freeze_noirq(dev); + suspend_report_result(pm->freeze_noirq, error); if (error) return error; } @@ -1067,8 +1040,8 @@ static int pci_pm_freeze_noirq(struct device *dev) static int pci_pm_thaw_noirq(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); - struct device_driver *drv = dev->driver; - int error = 0; + const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; + int error; if (pcibios_pm_ops.thaw_noirq) { error = pcibios_pm_ops.thaw_noirq(dev); @@ -1076,21 +1049,25 @@ static int pci_pm_thaw_noirq(struct device *dev) return error; } - if (pci_has_legacy_pm_support(pci_dev)) - return pci_legacy_resume_early(dev); - /* - * pci_restore_state() requires the device to be in D0 (because of MSI - * restoration among other things), so force it into D0 in case the - * driver's "freeze" callbacks put it into a low-power state directly. + * The pm->thaw_noirq() callback assumes the device has been + * returned to D0 and its config state has been restored. + * + * In addition, pci_restore_state() restores MSI-X state in MMIO + * space, which requires the device to be in D0, so return it to D0 + * in case the driver's "freeze" callbacks put it into a low-power + * state. */ pci_set_power_state(pci_dev, PCI_D0); pci_restore_state(pci_dev); - if (drv && drv->pm && drv->pm->thaw_noirq) - error = drv->pm->thaw_noirq(dev); + if (pci_has_legacy_pm_support(pci_dev)) + return 0; + + if (pm && pm->thaw_noirq) + return pm->thaw_noirq(dev); - return error; + return 0; } static int pci_pm_thaw(struct device *dev) @@ -1161,24 +1138,24 @@ static int pci_pm_poweroff_late(struct device *dev) static int pci_pm_poweroff_noirq(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); - struct device_driver *drv = dev->driver; + const struct dev_pm_ops *pm = dev->driver ? 
dev->driver->pm : NULL; if (dev_pm_smart_suspend_and_suspended(dev)) return 0; - if (pci_has_legacy_pm_support(to_pci_dev(dev))) + if (pci_has_legacy_pm_support(pci_dev)) return pci_legacy_suspend_late(dev, PMSG_HIBERNATE); - if (!drv || !drv->pm) { + if (!pm) { pci_fixup_device(pci_fixup_suspend_late, pci_dev); return 0; } - if (drv->pm->poweroff_noirq) { + if (pm->poweroff_noirq) { int error; - error = drv->pm->poweroff_noirq(dev); - suspend_report_result(drv->pm->poweroff_noirq, error); + error = pm->poweroff_noirq(dev); + suspend_report_result(pm->poweroff_noirq, error); if (error) return error; } @@ -1204,8 +1181,8 @@ static int pci_pm_poweroff_noirq(struct device *dev) static int pci_pm_restore_noirq(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); - struct device_driver *drv = dev->driver; - int error = 0; + const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; + int error; if (pcibios_pm_ops.restore_noirq) { error = pcibios_pm_ops.restore_noirq(dev); @@ -1217,19 +1194,18 @@ static int pci_pm_restore_noirq(struct device *dev) pci_fixup_device(pci_fixup_resume_early, pci_dev); if (pci_has_legacy_pm_support(pci_dev)) - return pci_legacy_resume_early(dev); + return 0; - if (drv && drv->pm && drv->pm->restore_noirq) - error = drv->pm->restore_noirq(dev); + if (pm && pm->restore_noirq) + return pm->restore_noirq(dev); - return error; + return 0; } static int pci_pm_restore(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - int error = 0; /* * This is necessary for the hibernation error path in which restore is @@ -1245,12 +1221,12 @@ static int pci_pm_restore(struct device *dev) if (pm) { if (pm->restore) - error = pm->restore(dev); + return pm->restore(dev); } else { pci_pm_reenable_device(pci_dev); } - return error; + return 0; } #else /* !CONFIG_HIBERNATE_CALLBACKS */ @@ -1295,11 +1271,11 @@ static int pci_pm_runtime_suspend(struct device *dev) * log level. */ if (error == -EBUSY || error == -EAGAIN) { - dev_dbg(dev, "can't suspend now (%ps returned %d)\n", + pci_dbg(pci_dev, "can't suspend now (%ps returned %d)\n", pm->runtime_suspend, error); return error; } else if (error) { - dev_err(dev, "can't suspend (%ps returned %d)\n", + pci_err(pci_dev, "can't suspend (%ps returned %d)\n", pm->runtime_suspend, error); return error; } @@ -1310,9 +1286,9 @@ static int pci_pm_runtime_suspend(struct device *dev) if (pm && pm->runtime_suspend && !pci_dev->state_saved && pci_dev->current_state != PCI_D0 && pci_dev->current_state != PCI_UNKNOWN) { - WARN_ONCE(pci_dev->current_state != prev, - "PCI PM: State of device not saved by %pS\n", - pm->runtime_suspend); + pci_WARN_ONCE(pci_dev, pci_dev->current_state != prev, + "PCI PM: State of device not saved by %pS\n", + pm->runtime_suspend); return 0; } @@ -1326,9 +1302,10 @@ static int pci_pm_runtime_suspend(struct device *dev) static int pci_pm_runtime_resume(struct device *dev) { - int rc = 0; struct pci_dev *pci_dev = to_pci_dev(dev); const struct dev_pm_ops *pm = dev->driver ? 
dev->driver->pm : NULL; + pci_power_t prev_state = pci_dev->current_state; + int error = 0; /* * Restoring config space is necessary even if the device is not bound @@ -1341,22 +1318,23 @@ static int pci_pm_runtime_resume(struct device *dev) return 0; pci_fixup_device(pci_fixup_resume_early, pci_dev); - pci_enable_wake(pci_dev, PCI_D0, false); - pci_fixup_device(pci_fixup_resume, pci_dev); + pci_pm_default_resume(pci_dev); + + if (prev_state == PCI_D3cold) + pci_bridge_wait_for_secondary_bus(pci_dev); if (pm && pm->runtime_resume) - rc = pm->runtime_resume(dev); + error = pm->runtime_resume(dev); pci_dev->runtime_d3cold = false; - return rc; + return error; } static int pci_pm_runtime_idle(struct device *dev) { struct pci_dev *pci_dev = to_pci_dev(dev); const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - int ret = 0; /* * If pci_dev->driver is not set (unbound), the device should @@ -1369,9 +1347,9 @@ static int pci_pm_runtime_idle(struct device *dev) return -ENOSYS; if (pm->runtime_idle) - ret = pm->runtime_idle(dev); + return pm->runtime_idle(dev); - return ret; + return 0; } static const struct dev_pm_ops pci_dev_pm_ops = { diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c index 793412954529..13f766db0684 100644 --- a/drivers/pci/pci-sysfs.c +++ b/drivers/pci/pci-sysfs.c @@ -1122,7 +1122,7 @@ static void pci_remove_resource_files(struct pci_dev *pdev) { int i; - for (i = 0; i < PCI_ROM_RESOURCE; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { struct bin_attribute *res_attr; res_attr = pdev->res_attr[i]; @@ -1193,7 +1193,7 @@ static int pci_create_resource_files(struct pci_dev *pdev) int retval; /* Expose the PCI resources from this device as files */ - for (i = 0; i < PCI_ROM_RESOURCE; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { /* skip empty resources */ if (!pci_resource_len(pdev, i)) @@ -1330,7 +1330,6 @@ static int pci_create_capabilities_sysfs(struct pci_dev *dev) int retval; pcie_vpd_create_sysfs_dev_files(dev); - pcie_aspm_create_sysfs_dev_files(dev); if (dev->reset_fn) { retval = device_create_file(&dev->dev, &dev_attr_reset); @@ -1340,7 +1339,6 @@ static int pci_create_capabilities_sysfs(struct pci_dev *dev) return 0; error: - pcie_aspm_remove_sysfs_dev_files(dev); pcie_vpd_remove_sysfs_dev_files(dev); return retval; } @@ -1416,7 +1414,6 @@ err: static void pci_remove_capabilities_sysfs(struct pci_dev *dev) { pcie_vpd_remove_sysfs_dev_files(dev); - pcie_aspm_remove_sysfs_dev_files(dev); if (dev->reset_fn) { device_remove_file(&dev->dev, &dev_attr_reset); dev->reset_fn = 0; @@ -1539,24 +1536,6 @@ const struct attribute_group *pci_dev_groups[] = { NULL, }; -static const struct attribute_group pci_bridge_group = { - .attrs = pci_bridge_attrs, -}; - -const struct attribute_group *pci_bridge_groups[] = { - &pci_bridge_group, - NULL, -}; - -static const struct attribute_group pcie_dev_group = { - .attrs = pcie_dev_attrs, -}; - -const struct attribute_group *pcie_dev_groups[] = { - &pcie_dev_group, - NULL, -}; - static const struct attribute_group pci_dev_hp_attr_group = { .attrs = pci_dev_hp_attrs, .is_visible = pci_dev_hp_attrs_are_visible, @@ -1588,6 +1567,9 @@ static const struct attribute_group *pci_dev_attr_groups[] = { #ifdef CONFIG_PCIEAER &aer_stats_attr_group, #endif +#ifdef CONFIG_PCIEASPM + &aspm_ctrl_attr_group, +#endif NULL, }; diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index fcfaadc774ee..e87196cc1a7f 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -13,6 +13,7 @@ #include <linux/delay.h> #include <linux/dmi.h> 
#include <linux/init.h> +#include <linux/msi.h> #include <linux/of.h> #include <linux/of_pci.h> #include <linux/pci.h> @@ -85,10 +86,17 @@ unsigned long pci_cardbus_io_size = DEFAULT_CARDBUS_IO_SIZE; unsigned long pci_cardbus_mem_size = DEFAULT_CARDBUS_MEM_SIZE; #define DEFAULT_HOTPLUG_IO_SIZE (256) -#define DEFAULT_HOTPLUG_MEM_SIZE (2*1024*1024) -/* pci=hpmemsize=nnM,hpiosize=nn can override this */ +#define DEFAULT_HOTPLUG_MMIO_SIZE (2*1024*1024) +#define DEFAULT_HOTPLUG_MMIO_PREF_SIZE (2*1024*1024) +/* hpiosize=nn can override this */ unsigned long pci_hotplug_io_size = DEFAULT_HOTPLUG_IO_SIZE; -unsigned long pci_hotplug_mem_size = DEFAULT_HOTPLUG_MEM_SIZE; +/* + * pci=hpmmiosize=nnM overrides non-prefetchable MMIO size, + * pci=hpmmioprefsize=nnM overrides prefetchable MMIO size; + * pci=hpmemsize=nnM overrides both + */ +unsigned long pci_hotplug_mmio_size = DEFAULT_HOTPLUG_MMIO_SIZE; +unsigned long pci_hotplug_mmio_pref_size = DEFAULT_HOTPLUG_MMIO_PREF_SIZE; #define DEFAULT_HOTPLUG_BUS_SIZE 1 unsigned long pci_hotplug_bus_size = DEFAULT_HOTPLUG_BUS_SIZE; @@ -674,7 +682,7 @@ struct resource *pci_find_resource(struct pci_dev *dev, struct resource *res) { int i; - for (i = 0; i < PCI_ROM_RESOURCE; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { struct resource *r = &dev->resource[i]; if (r->start && resource_contains(r, res)) @@ -834,14 +842,16 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state) return -EINVAL; /* - * Validate current state: - * Can enter D0 from any state, but if we can only go deeper - * to sleep if we're already in a low power state + * Validate transition: We can enter D0 from any state, but if + * we're already in a low-power state, we can only go deeper. E.g., + * we can go from D1 to D3, but we can't go directly from D3 to D1; + * we'd have to go from D3 to D0, then to D1. */ if (state != PCI_D0 && dev->current_state <= PCI_D3cold && dev->current_state > state) { - pci_err(dev, "invalid power transition (from state %d to %d)\n", - dev->current_state, state); + pci_err(dev, "invalid power transition (from %s to %s)\n", + pci_power_name(dev->current_state), + pci_power_name(state)); return -EINVAL; } @@ -851,6 +861,12 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state) return -EIO; pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); + if (pmcsr == (u16) ~0) { + pci_err(dev, "can't change power state from %s to %s (config space inaccessible)\n", + pci_power_name(dev->current_state), + pci_power_name(state)); + return -EIO; + } /* * If we're (effectively) in D3, force entire word to 0. @@ -886,13 +902,14 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state) if (state == PCI_D3hot || dev->current_state == PCI_D3hot) pci_dev_d3_sleep(dev); else if (state == PCI_D2 || dev->current_state == PCI_D2) - udelay(PCI_PM_D2_DELAY); + msleep(PCI_PM_D2_DELAY); pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK); if (dev->current_state != state) - pci_info_ratelimited(dev, "Refused to change power state, currently in D%d\n", - dev->current_state); + pci_info_ratelimited(dev, "refused to change power state from %s to %s\n", + pci_power_name(dev->current_state), + pci_power_name(state)); /* * According to section 5.4.1 of the "PCI BUS POWER MANAGEMENT @@ -963,7 +980,7 @@ void pci_refresh_power_state(struct pci_dev *dev) * @dev: PCI device to handle. * @state: State to put the device into. 
*/ -static int pci_platform_power_transition(struct pci_dev *dev, pci_power_t state) +int pci_platform_power_transition(struct pci_dev *dev, pci_power_t state) { int error; @@ -979,6 +996,7 @@ static int pci_platform_power_transition(struct pci_dev *dev, pci_power_t state) return error; } +EXPORT_SYMBOL_GPL(pci_platform_power_transition); /** * pci_wakeup - Wake up a PCI device @@ -1002,34 +1020,70 @@ void pci_wakeup_bus(struct pci_bus *bus) pci_walk_bus(bus, pci_wakeup, NULL); } +static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout) +{ + int delay = 1; + u32 id; + + /* + * After reset, the device should not silently discard config + * requests, but it may still indicate that it needs more time by + * responding to them with CRS completions. The Root Port will + * generally synthesize ~0 data to complete the read (except when + * CRS SV is enabled and the read was for the Vendor ID; in that + * case it synthesizes 0x0001 data). + * + * Wait for the device to return a non-CRS completion. Read the + * Command register instead of Vendor ID so we don't have to + * contend with the CRS SV value. + */ + pci_read_config_dword(dev, PCI_COMMAND, &id); + while (id == ~0) { + if (delay > timeout) { + pci_warn(dev, "not ready %dms after %s; giving up\n", + delay - 1, reset_type); + return -ENOTTY; + } + + if (delay > 1000) + pci_info(dev, "not ready %dms after %s; waiting\n", + delay - 1, reset_type); + + msleep(delay); + delay *= 2; + pci_read_config_dword(dev, PCI_COMMAND, &id); + } + + if (delay > 1000) + pci_info(dev, "ready %dms after %s\n", delay - 1, + reset_type); + + return 0; +} + /** - * __pci_start_power_transition - Start power transition of a PCI device - * @dev: PCI device to handle. - * @state: State to put the device into. + * pci_power_up - Put the given device into D0 + * @dev: PCI device to power up */ -static void __pci_start_power_transition(struct pci_dev *dev, pci_power_t state) +int pci_power_up(struct pci_dev *dev) { - if (state == PCI_D0) { - pci_platform_power_transition(dev, PCI_D0); + pci_platform_power_transition(dev, PCI_D0); + + /* + * Mandatory power management transition delays are handled in + * pci_pm_resume_noirq() and pci_pm_runtime_resume() of the + * corresponding bridge. + */ + if (dev->runtime_d3cold) { /* - * Mandatory power management transition delays, see - * PCI Express Base Specification Revision 2.0 Section - * 6.6.1: Conventional Reset. Do not delay for - * devices powered on/off by corresponding bridge, - * because have already delayed for the bridge. + * When powering on a bridge from D3cold, the whole hierarchy + * may be powered on into D0uninitialized state, resume them to + * give them a chance to suspend again */ - if (dev->runtime_d3cold) { - if (dev->d3cold_delay && !dev->imm_ready) - msleep(dev->d3cold_delay); - /* - * When powering on a bridge from D3cold, the - * whole hierarchy may be powered on into - * D0uninitialized state, resume them to give - * them a chance to suspend again - */ - pci_wakeup_bus(dev->subordinate); - } + pci_wakeup_bus(dev->subordinate); } + + return pci_raw_set_power_state(dev, PCI_D0); } /** @@ -1057,27 +1111,6 @@ void pci_bus_set_current_state(struct pci_bus *bus, pci_power_t state) } /** - * __pci_complete_power_transition - Complete power transition of a PCI device - * @dev: PCI device to handle. - * @state: State to put the device into. - * - * This function should not be called directly by device drivers. 
- */ -int __pci_complete_power_transition(struct pci_dev *dev, pci_power_t state) -{ - int ret; - - if (state <= PCI_D0) - return -EINVAL; - ret = pci_platform_power_transition(dev, state); - /* Power off the bridge may power off the whole hierarchy */ - if (!ret && state == PCI_D3cold) - pci_bus_set_current_state(dev->subordinate, PCI_D3cold); - return ret; -} -EXPORT_SYMBOL_GPL(__pci_complete_power_transition); - -/** * pci_set_power_state - Set the power state of a PCI device * @dev: PCI device to handle. * @state: PCI power state (D0, D1, D2, D3hot) to put the device into. @@ -1117,7 +1150,8 @@ int pci_set_power_state(struct pci_dev *dev, pci_power_t state) if (dev->current_state == state) return 0; - __pci_start_power_transition(dev, state); + if (state == PCI_D0) + return pci_power_up(dev); /* * This device is quirked not to be put into D3, so don't put it in @@ -1133,23 +1167,16 @@ int pci_set_power_state(struct pci_dev *dev, pci_power_t state) error = pci_raw_set_power_state(dev, state > PCI_D3hot ? PCI_D3hot : state); - if (!__pci_complete_power_transition(dev, state)) - error = 0; + if (pci_platform_power_transition(dev, state)) + return error; - return error; -} -EXPORT_SYMBOL(pci_set_power_state); + /* Powering off a bridge may power off the whole hierarchy */ + if (state == PCI_D3cold) + pci_bus_set_current_state(dev->subordinate, PCI_D3cold); -/** - * pci_power_up - Put the given device into D0 forcibly - * @dev: PCI device to power up - */ -void pci_power_up(struct pci_dev *dev) -{ - __pci_start_power_transition(dev, PCI_D0); - pci_raw_set_power_state(dev, PCI_D0); - pci_update_current_state(dev, PCI_D0); + return 0; } +EXPORT_SYMBOL(pci_set_power_state); /** * pci_choose_state - Choose the power state of a PCI device @@ -1359,6 +1386,7 @@ int pci_save_state(struct pci_dev *dev) pci_save_ltr_state(dev); pci_save_dpc_state(dev); + pci_save_aer_state(dev); return pci_save_vc_state(dev); } EXPORT_SYMBOL(pci_save_state); @@ -1472,6 +1500,7 @@ void pci_restore_state(struct pci_dev *dev) pci_restore_dpc_state(dev); pci_cleanup_aer_error_status_regs(dev); + pci_restore_aer_state(dev); pci_restore_config_space(dev); @@ -3766,7 +3795,7 @@ void pci_release_selected_regions(struct pci_dev *pdev, int bars) { int i; - for (i = 0; i < 6; i++) + for (i = 0; i < PCI_STD_NUM_BARS; i++) if (bars & (1 << i)) pci_release_region(pdev, i); } @@ -3777,7 +3806,7 @@ static int __pci_request_selected_regions(struct pci_dev *pdev, int bars, { int i; - for (i = 0; i < 6; i++) + for (i = 0; i < PCI_STD_NUM_BARS; i++) if (bars & (1 << i)) if (__pci_request_region(pdev, i, res_name, excl)) goto err_out; @@ -3825,7 +3854,7 @@ EXPORT_SYMBOL(pci_request_selected_regions_exclusive); void pci_release_regions(struct pci_dev *pdev) { - pci_release_selected_regions(pdev, (1 << 6) - 1); + pci_release_selected_regions(pdev, (1 << PCI_STD_NUM_BARS) - 1); } EXPORT_SYMBOL(pci_release_regions); @@ -3844,7 +3873,8 @@ EXPORT_SYMBOL(pci_release_regions); */ int pci_request_regions(struct pci_dev *pdev, const char *res_name) { - return pci_request_selected_regions(pdev, ((1 << 6) - 1), res_name); + return pci_request_selected_regions(pdev, + ((1 << PCI_STD_NUM_BARS) - 1), res_name); } EXPORT_SYMBOL(pci_request_regions); @@ -3866,7 +3896,7 @@ EXPORT_SYMBOL(pci_request_regions); int pci_request_regions_exclusive(struct pci_dev *pdev, const char *res_name) { return pci_request_selected_regions_exclusive(pdev, - ((1 << 6) - 1), res_name); + ((1 << PCI_STD_NUM_BARS) - 1), res_name); } 
EXPORT_SYMBOL(pci_request_regions_exclusive); @@ -4428,47 +4458,6 @@ int pci_wait_for_pending_transaction(struct pci_dev *dev) } EXPORT_SYMBOL(pci_wait_for_pending_transaction); -static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout) -{ - int delay = 1; - u32 id; - - /* - * After reset, the device should not silently discard config - * requests, but it may still indicate that it needs more time by - * responding to them with CRS completions. The Root Port will - * generally synthesize ~0 data to complete the read (except when - * CRS SV is enabled and the read was for the Vendor ID; in that - * case it synthesizes 0x0001 data). - * - * Wait for the device to return a non-CRS completion. Read the - * Command register instead of Vendor ID so we don't have to - * contend with the CRS SV value. - */ - pci_read_config_dword(dev, PCI_COMMAND, &id); - while (id == ~0) { - if (delay > timeout) { - pci_warn(dev, "not ready %dms after %s; giving up\n", - delay - 1, reset_type); - return -ENOTTY; - } - - if (delay > 1000) - pci_info(dev, "not ready %dms after %s; waiting\n", - delay - 1, reset_type); - - msleep(delay); - delay *= 2; - pci_read_config_dword(dev, PCI_COMMAND, &id); - } - - if (delay > 1000) - pci_info(dev, "ready %dms after %s\n", delay - 1, - reset_type); - - return 0; -} - /** * pcie_has_flr - check if a device supports function level resets * @dev: device to check @@ -4603,16 +4592,19 @@ static int pci_pm_reset(struct pci_dev *dev, int probe) pci_write_config_word(dev, dev->pm_cap + PCI_PM_CTRL, csr); pci_dev_d3_sleep(dev); - return pci_dev_wait(dev, "PM D3->D0", PCIE_RESET_READY_POLL_MS); + return pci_dev_wait(dev, "PM D3hot->D0", PCIE_RESET_READY_POLL_MS); } + /** - * pcie_wait_for_link - Wait until link is active or inactive + * pcie_wait_for_link_delay - Wait until link is active or inactive * @pdev: Bridge device * @active: waiting for active or inactive? + * @delay: Delay to wait after link has become active (in ms) * * Use this to wait till link becomes active or inactive. */ -bool pcie_wait_for_link(struct pci_dev *pdev, bool active) +static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active, + int delay) { int timeout = 1000; bool ret; @@ -4649,13 +4641,144 @@ bool pcie_wait_for_link(struct pci_dev *pdev, bool active) timeout -= 10; } if (active && ret) - msleep(100); + msleep(delay); else if (ret != active) pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n", active ? "set" : "cleared"); return ret == active; } +/** + * pcie_wait_for_link - Wait until link is active or inactive + * @pdev: Bridge device + * @active: waiting for active or inactive? + * + * Use this to wait till link becomes active or inactive. + */ +bool pcie_wait_for_link(struct pci_dev *pdev, bool active) +{ + return pcie_wait_for_link_delay(pdev, active, 100); +} + +/* + * Find maximum D3cold delay required by all the devices on the bus. The + * spec says 100 ms, but firmware can lower it and we allow drivers to + * increase it as well. + * + * Called with @pci_bus_sem locked for reading. 
+ */ +static int pci_bus_max_d3cold_delay(const struct pci_bus *bus) +{ + const struct pci_dev *pdev; + int min_delay = 100; + int max_delay = 0; + + list_for_each_entry(pdev, &bus->devices, bus_list) { + if (pdev->d3cold_delay < min_delay) + min_delay = pdev->d3cold_delay; + if (pdev->d3cold_delay > max_delay) + max_delay = pdev->d3cold_delay; + } + + return max(min_delay, max_delay); +} + +/** + * pci_bridge_wait_for_secondary_bus - Wait for secondary bus to be accessible + * @dev: PCI bridge + * + * Handle necessary delays before access to the devices on the secondary + * side of the bridge are permitted after D3cold to D0 transition. + * + * For PCIe this means the delays in PCIe 5.0 section 6.6.1. For + * conventional PCI it means Tpvrh + Trhfa specified in PCI 3.0 section + * 4.3.2. + */ +void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev) +{ + struct pci_dev *child; + int delay; + + if (pci_dev_is_disconnected(dev)) + return; + + if (!pci_is_bridge(dev) || !dev->bridge_d3) + return; + + down_read(&pci_bus_sem); + + /* + * We only deal with devices that are present currently on the bus. + * For any hot-added devices the access delay is handled in pciehp + * board_added(). In case of ACPI hotplug the firmware is expected + * to configure the devices before OS is notified. + */ + if (!dev->subordinate || list_empty(&dev->subordinate->devices)) { + up_read(&pci_bus_sem); + return; + } + + /* Take d3cold_delay requirements into account */ + delay = pci_bus_max_d3cold_delay(dev->subordinate); + if (!delay) { + up_read(&pci_bus_sem); + return; + } + + child = list_first_entry(&dev->subordinate->devices, struct pci_dev, + bus_list); + up_read(&pci_bus_sem); + + /* + * Conventional PCI and PCI-X we need to wait Tpvrh + Trhfa before + * accessing the device after reset (that is 1000 ms + 100 ms). In + * practice this should not be needed because we don't do power + * management for them (see pci_bridge_d3_possible()). + */ + if (!pci_is_pcie(dev)) { + pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay); + msleep(1000 + delay); + return; + } + + /* + * For PCIe downstream and root ports that do not support speeds + * greater than 5 GT/s need to wait minimum 100 ms. For higher + * speeds (gen3) we need to wait first for the data link layer to + * become active. + * + * However, 100 ms is the minimum and the PCIe spec says the + * software must allow at least 1s before it can determine that the + * device that did not respond is a broken device. There is + * evidence that 100 ms is not always enough, for example certain + * Titan Ridge xHCI controller does not always respond to + * configuration requests if we only wait for 100 ms (see + * https://bugzilla.kernel.org/show_bug.cgi?id=203885). + * + * Therefore we wait for 100 ms and check for the device presence. + * If it is still not present give it an additional 100 ms. 
+ */ + if (!pcie_downstream_port(dev)) + return; + + if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) { + pci_dbg(dev, "waiting %d ms for downstream link\n", delay); + msleep(delay); + } else { + pci_dbg(dev, "waiting %d ms for downstream link, after activation\n", + delay); + if (!pcie_wait_for_link_delay(dev, true, delay)) { + /* Did not train, no need to wait any further */ + return; + } + } + + if (!pci_device_is_present(child)) { + pci_dbg(child, "waiting additional %d ms to become accessible\n", delay); + msleep(delay); + } +} + void pci_reset_secondary_bus(struct pci_dev *dev) { u16 ctrl; @@ -6304,8 +6427,13 @@ static int __init pci_setup(char *str) pcie_ecrc_get_policy(str + 5); } else if (!strncmp(str, "hpiosize=", 9)) { pci_hotplug_io_size = memparse(str + 9, &str); + } else if (!strncmp(str, "hpmmiosize=", 11)) { + pci_hotplug_mmio_size = memparse(str + 11, &str); + } else if (!strncmp(str, "hpmmioprefsize=", 15)) { + pci_hotplug_mmio_pref_size = memparse(str + 15, &str); } else if (!strncmp(str, "hpmemsize=", 10)) { - pci_hotplug_mem_size = memparse(str + 10, &str); + pci_hotplug_mmio_size = memparse(str + 10, &str); + pci_hotplug_mmio_pref_size = pci_hotplug_mmio_size; } else if (!strncmp(str, "hpbussize=", 10)) { pci_hotplug_bus_size = simple_strtoul(str + 10, &str, 0); diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h index 3f6947ee3324..a0a53bd05a0b 100644 --- a/drivers/pci/pci.h +++ b/drivers/pci/pci.h @@ -12,6 +12,7 @@ extern const unsigned char pcie_link_speed[]; extern bool pci_early_dump; bool pcie_cap_has_lnkctl(const struct pci_dev *dev); +bool pcie_cap_has_rtctl(const struct pci_dev *dev); /* Functions internal to the PCI core code */ @@ -85,7 +86,7 @@ struct pci_platform_pm_ops { int pci_set_platform_pm(const struct pci_platform_pm_ops *ops); void pci_update_current_state(struct pci_dev *dev, pci_power_t state); void pci_refresh_power_state(struct pci_dev *dev); -void pci_power_up(struct pci_dev *dev); +int pci_power_up(struct pci_dev *dev); void pci_disable_enabled_device(struct pci_dev *dev); int pci_finish_runtime_suspend(struct pci_dev *dev); void pcie_clear_root_pme_status(struct pci_dev *dev); @@ -104,6 +105,7 @@ void pci_allocate_cap_save_buffers(struct pci_dev *dev); void pci_free_cap_save_buffers(struct pci_dev *dev); bool pci_bridge_d3_possible(struct pci_dev *dev); void pci_bridge_d3_update(struct pci_dev *dev); +void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev); static inline void pci_wakeup_event(struct pci_dev *dev) { @@ -218,7 +220,8 @@ extern const struct device_type pci_dev_type; extern const struct attribute_group *pci_bus_groups[]; extern unsigned long pci_hotplug_io_size; -extern unsigned long pci_hotplug_mem_size; +extern unsigned long pci_hotplug_mmio_size; +extern unsigned long pci_hotplug_mmio_pref_size; extern unsigned long pci_hotplug_bus_size; /** @@ -456,6 +459,22 @@ static inline void pci_ats_init(struct pci_dev *d) { } static inline void pci_restore_ats_state(struct pci_dev *dev) { } #endif /* CONFIG_PCI_ATS */ +#ifdef CONFIG_PCI_PRI +void pci_pri_init(struct pci_dev *dev); +void pci_restore_pri_state(struct pci_dev *pdev); +#else +static inline void pci_pri_init(struct pci_dev *dev) { } +static inline void pci_restore_pri_state(struct pci_dev *pdev) { } +#endif + +#ifdef CONFIG_PCI_PASID +void pci_pasid_init(struct pci_dev *dev); +void pci_restore_pasid_state(struct pci_dev *pdev); +#else +static inline void pci_pasid_init(struct pci_dev *dev) { } +static inline void pci_restore_pasid_state(struct pci_dev *pdev) { } +#endif 
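The CONFIG_PCI_PRI/CONFIG_PCI_PASID hunks above use the usual kernel stub pattern: when the option is off, the prototypes collapse to empty static inlines, so core code can call the helpers with no #ifdef of its own. A minimal sketch of that pattern, using a made-up CONFIG_PCI_EXAMPLE option and helper names rather than anything from this series:

	#include <linux/pci.h>

	#ifdef CONFIG_PCI_EXAMPLE
	void pci_example_init(struct pci_dev *dev);
	#else
	/* No-op stub: compiles away entirely when the option is disabled. */
	static inline void pci_example_init(struct pci_dev *dev) { }
	#endif

	/* Callers stay #ifdef-free either way. */
	static void pci_example_setup(struct pci_dev *dev)
	{
		pci_example_init(dev);
	}

The probe.c hunk further down relies on exactly this: pci_init_capabilities() calls pci_pri_init() and pci_pasid_init() unconditionally, and gets the empty stubs when PRI/PASID support is not built in.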
+ #ifdef CONFIG_PCI_IOV int pci_iov_init(struct pci_dev *dev); void pci_iov_release(struct pci_dev *dev); @@ -541,14 +560,6 @@ static inline void pcie_aspm_pm_state_change(struct pci_dev *pdev) { } static inline void pcie_aspm_powersave_config_link(struct pci_dev *pdev) { } #endif -#ifdef CONFIG_PCIEASPM_DEBUG -void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev); -void pcie_aspm_remove_sysfs_dev_files(struct pci_dev *pdev); -#else -static inline void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev) { } -static inline void pcie_aspm_remove_sysfs_dev_files(struct pci_dev *pdev) { } -#endif - #ifdef CONFIG_PCIE_ECRC void pcie_set_ecrc_checking(struct pci_dev *dev); void pcie_ecrc_get_policy(char *str); @@ -630,19 +641,6 @@ static inline void pci_set_bus_of_node(struct pci_bus *bus) { } static inline void pci_release_bus_of_node(struct pci_bus *bus) { } #endif /* CONFIG_OF */ -#if defined(CONFIG_OF_ADDRESS) -int devm_of_pci_get_host_bridge_resources(struct device *dev, - unsigned char busno, unsigned char bus_max, - struct list_head *resources, resource_size_t *io_base); -#else -static inline int devm_of_pci_get_host_bridge_resources(struct device *dev, - unsigned char busno, unsigned char bus_max, - struct list_head *resources, resource_size_t *io_base) -{ - return -EINVAL; -} -#endif - #ifdef CONFIG_PCIEAER void pci_no_aer(void); void pci_aer_init(struct pci_dev *dev); @@ -667,4 +665,8 @@ static inline int pci_acpi_program_hp_params(struct pci_dev *dev) } #endif +#ifdef CONFIG_PCIEASPM +extern const struct attribute_group aspm_ctrl_attr_group; +#endif + #endif /* DRIVERS_PCI_H */ diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig index 362eb8cfa53b..6e3c04b46fb1 100644 --- a/drivers/pci/pcie/Kconfig +++ b/drivers/pci/pcie/Kconfig @@ -4,7 +4,6 @@ # config PCIEPORTBUS bool "PCI Express Port Bus support" - depends on PCI help This enables PCI Express Port Bus support. Users can then enable support for Native Hot-Plug, Advanced Error Reporting, Power @@ -63,7 +62,6 @@ config PCIE_ECRC # config PCIEASPM bool "PCI Express ASPM control" if EXPERT - depends on PCI && PCIEPORTBUS default y help This enables OS control over PCI Express ASPM (Active State @@ -79,13 +77,6 @@ config PCIEASPM When in doubt, say Y. -config PCIEASPM_DEBUG - bool "Debug PCI Express ASPM" - depends on PCIEASPM - help - This enables PCI Express ASPM debug support. It will add per-device - interface to control ASPM. - choice prompt "Default ASPM policy" default PCIEASPM_DEFAULT @@ -135,7 +126,6 @@ config PCIE_DPC config PCIE_PTM bool "PCI Express Precision Time Measurement support" - depends on PCIEPORTBUS help This enables PCI Express Precision Time Measurement (PTM) support. 
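With CONFIG_PCIEASPM_DEBUG removed, per-device ASPM control moves to the new sysfs attributes (the aspm_ctrl_attr_group declared above and defined in the aspm.c hunk below), and drivers get finer-grained flags for pci_disable_link_state(). A short sketch, assuming the PCIE_LINK_STATE_L1_1/L1_2 and *_PCIPM flags introduced by this series; the function and its caller are illustrative only:

	#include <linux/pci.h>

	/* Mask only the L1 PM substates; plain L0s/L1 remain available. */
	static void example_limit_aspm(struct pci_dev *pdev)
	{
		pci_disable_link_state(pdev,
				       PCIE_LINK_STATE_L1_1 |
				       PCIE_LINK_STATE_L1_2 |
				       PCIE_LINK_STATE_L1_1_PCIPM |
				       PCIE_LINK_STATE_L1_2_PCIPM);
	}

From user space the same states appear in the device's new "link" attribute group (clkpm, l0s_aspm, l1_aspm, l1_1_aspm, l1_2_aspm, l1_1_pcipm, l1_2_pcipm), each writable as 0/1 when the OS owns ASPM control, as shown by the aspm.c changes below.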
diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c index b45bc47d04fe..1ca86f2e0166 100644 --- a/drivers/pci/pcie/aer.c +++ b/drivers/pci/pcie/aer.c @@ -15,6 +15,7 @@ #define pr_fmt(fmt) "AER: " fmt #define dev_fmt pr_fmt +#include <linux/bitops.h> #include <linux/cper.h> #include <linux/pci.h> #include <linux/pci-acpi.h> @@ -36,7 +37,7 @@ #define AER_ERROR_SOURCES_MAX 128 #define AER_MAX_TYPEOF_COR_ERRS 16 /* as per PCI_ERR_COR_STATUS */ -#define AER_MAX_TYPEOF_UNCOR_ERRS 26 /* as per PCI_ERR_UNCOR_STATUS*/ +#define AER_MAX_TYPEOF_UNCOR_ERRS 27 /* as per PCI_ERR_UNCOR_STATUS*/ struct aer_err_source { unsigned int status; @@ -201,6 +202,7 @@ void pcie_set_ecrc_checking(struct pci_dev *dev) /** * pcie_ecrc_get_policy - parse kernel command-line ecrc option + * @str: ECRC policy from kernel command line to use */ void pcie_ecrc_get_policy(char *str) { @@ -448,12 +450,70 @@ int pci_cleanup_aer_error_status_regs(struct pci_dev *dev) return 0; } +void pci_save_aer_state(struct pci_dev *dev) +{ + struct pci_cap_saved_state *save_state; + u32 *cap; + int pos; + + pos = dev->aer_cap; + if (!pos) + return; + + save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_ERR); + if (!save_state) + return; + + cap = &save_state->cap.data[0]; + pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, cap++); + pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, cap++); + pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, cap++); + pci_read_config_dword(dev, pos + PCI_ERR_CAP, cap++); + if (pcie_cap_has_rtctl(dev)) + pci_read_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, cap++); +} + +void pci_restore_aer_state(struct pci_dev *dev) +{ + struct pci_cap_saved_state *save_state; + u32 *cap; + int pos; + + pos = dev->aer_cap; + if (!pos) + return; + + save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_ERR); + if (!save_state) + return; + + cap = &save_state->cap.data[0]; + pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, *cap++); + pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, *cap++); + pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, *cap++); + pci_write_config_dword(dev, pos + PCI_ERR_CAP, *cap++); + if (pcie_cap_has_rtctl(dev)) + pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, *cap++); +} + void pci_aer_init(struct pci_dev *dev) { + int n; + dev->aer_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); + if (!dev->aer_cap) + return; - if (dev->aer_cap) - dev->aer_stats = kzalloc(sizeof(struct aer_stats), GFP_KERNEL); + dev->aer_stats = kzalloc(sizeof(struct aer_stats), GFP_KERNEL); + + /* + * We save/restore PCI_ERR_UNCOR_MASK, PCI_ERR_UNCOR_SEVER, + * PCI_ERR_COR_MASK, and PCI_ERR_CAP. Root and Root Complex Event + * Collectors also implement PCI_ERR_ROOT_COMMAND (PCIe r5.0, sec + * 7.8.4). + */ + n = pcie_cap_has_rtctl(dev) ? 
5 : 4; + pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_ERR, sizeof(u32) * n); pci_cleanup_aer_error_status_regs(dev); } @@ -560,6 +620,7 @@ static const char *aer_uncorrectable_error_string[AER_MAX_TYPEOF_UNCOR_ERRS] = { "BlockedTLP", /* Bit Position 23 */ "AtomicOpBlocked", /* Bit Position 24 */ "TLPBlockedErr", /* Bit Position 25 */ + "PoisonTLPBlocked", /* Bit Position 26 */ }; static const char *aer_agent_string[] = { @@ -657,7 +718,8 @@ const struct attribute_group aer_stats_attr_group = { static void pci_dev_aer_stats_incr(struct pci_dev *pdev, struct aer_err_info *info) { - int status, i, max = -1; + unsigned long status = info->status & ~info->mask; + int i, max = -1; u64 *counter = NULL; struct aer_stats *aer_stats = pdev->aer_stats; @@ -682,10 +744,8 @@ static void pci_dev_aer_stats_incr(struct pci_dev *pdev, break; } - status = (info->status & ~info->mask); - for (i = 0; i < max; i++) - if (status & (1 << i)) - counter[i]++; + for_each_set_bit(i, &status, max) + counter[i]++; } static void pci_rootport_aer_stats_incr(struct pci_dev *pdev, @@ -717,14 +777,11 @@ static void __print_tlp_header(struct pci_dev *dev, static void __aer_print_error(struct pci_dev *dev, struct aer_err_info *info) { - int i, status; + unsigned long status = info->status & ~info->mask; const char *errmsg = NULL; - status = (info->status & ~info->mask); - - for (i = 0; i < 32; i++) { - if (!(status & (1 << i))) - continue; + int i; + for_each_set_bit(i, &status, 32) { if (info->severity == AER_CORRECTABLE) errmsg = i < ARRAY_SIZE(aer_correctable_error_string) ? aer_correctable_error_string[i] : NULL; @@ -1204,7 +1261,8 @@ static void aer_isr_one_error(struct aer_rpc *rpc, /** * aer_isr - consume errors detected by root port - * @work: definition of this work item + * @irq: IRQ assigned to Root Port + * @context: pointer to Root Port data structure * * Invoked, as DPC, when root port records new detected error */ diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c index 652ef23bba35..0dcd44308228 100644 --- a/drivers/pci/pcie/aspm.c +++ b/drivers/pci/pcie/aspm.c @@ -64,6 +64,7 @@ struct pcie_link_state { u32 clkpm_capable:1; /* Clock PM capable? */ u32 clkpm_enabled:1; /* Current Clock PM state */ u32 clkpm_default:1; /* Default Clock PM state by BIOS */ + u32 clkpm_disable:1; /* Clock PM disabled */ /* Exit latencies */ struct aspm_latency latency_up; /* Upstream direction exit latency */ @@ -161,8 +162,11 @@ static void pcie_set_clkpm_nocheck(struct pcie_link_state *link, int enable) static void pcie_set_clkpm(struct pcie_link_state *link, int enable) { - /* Don't enable Clock PM if the link is not Clock PM capable */ - if (!link->clkpm_capable) + /* + * Don't enable Clock PM if the link is not Clock PM capable + * or Clock PM is disabled + */ + if (!link->clkpm_capable || link->clkpm_disable) enable = 0; /* Need nothing if the specified equals to current state */ if (link->clkpm_enabled == enable) @@ -192,7 +196,8 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist) } link->clkpm_enabled = enabled; link->clkpm_default = enabled; - link->clkpm_capable = (blacklist) ? 0 : capable; + link->clkpm_capable = capable; + link->clkpm_disable = blacklist ? 
1 : 0; } static bool pcie_retrain_link(struct pcie_link_state *link) @@ -894,6 +899,14 @@ static struct pcie_link_state *alloc_pcie_link_state(struct pci_dev *pdev) return link; } +static void pcie_aspm_update_sysfs_visibility(struct pci_dev *pdev) +{ + struct pci_dev *child; + + list_for_each_entry(child, &pdev->subordinate->devices, bus_list) + sysfs_update_group(&child->dev.kobj, &aspm_ctrl_attr_group); +} + /* * pcie_aspm_init_link_state: Initiate PCI express link state. * It is called after the pcie and its children devices are scanned. @@ -955,6 +968,8 @@ void pcie_aspm_init_link_state(struct pci_dev *pdev) pcie_set_clkpm(link, policy_to_clkpm_state(link)); } + pcie_aspm_update_sysfs_visibility(pdev); + unlock: mutex_unlock(&aspm_lock); out: @@ -1061,19 +1076,26 @@ void pcie_aspm_powersave_config_link(struct pci_dev *pdev) up_read(&pci_bus_sem); } -static int __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem) +static struct pcie_link_state *pcie_aspm_get_link(struct pci_dev *pdev) { - struct pci_dev *parent = pdev->bus->self; - struct pcie_link_state *link; + struct pci_dev *bridge; if (!pci_is_pcie(pdev)) - return 0; + return NULL; - if (pcie_downstream_port(pdev)) - parent = pdev; - if (!parent || !parent->link_state) - return -EINVAL; + bridge = pci_upstream_bridge(pdev); + if (!bridge || !pci_is_pcie(bridge)) + return NULL; + return bridge->link_state; +} + +static int __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem) +{ + struct pcie_link_state *link = pcie_aspm_get_link(pdev); + + if (!link) + return -EINVAL; /* * A driver requested that ASPM be disabled on this device, but * if we don't have permission to manage ASPM (e.g., on ACPI @@ -1090,17 +1112,24 @@ static int __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem) if (sem) down_read(&pci_bus_sem); mutex_lock(&aspm_lock); - link = parent->link_state; if (state & PCIE_LINK_STATE_L0S) link->aspm_disable |= ASPM_STATE_L0S; if (state & PCIE_LINK_STATE_L1) - link->aspm_disable |= ASPM_STATE_L1; + /* L1 PM substates require L1 */ + link->aspm_disable |= ASPM_STATE_L1 | ASPM_STATE_L1SS; + if (state & PCIE_LINK_STATE_L1_1) + link->aspm_disable |= ASPM_STATE_L1_1; + if (state & PCIE_LINK_STATE_L1_2) + link->aspm_disable |= ASPM_STATE_L1_2; + if (state & PCIE_LINK_STATE_L1_1_PCIPM) + link->aspm_disable |= ASPM_STATE_L1_1_PCIPM; + if (state & PCIE_LINK_STATE_L1_2_PCIPM) + link->aspm_disable |= ASPM_STATE_L1_2_PCIPM; pcie_config_aspm_link(link, policy_to_aspm_state(link)); - if (state & PCIE_LINK_STATE_CLKPM) { - link->clkpm_capable = 0; - pcie_set_clkpm(link, 0); - } + if (state & PCIE_LINK_STATE_CLKPM) + link->clkpm_disable = 1; + pcie_set_clkpm(link, policy_to_clkpm_state(link)); mutex_unlock(&aspm_lock); if (sem) up_read(&pci_bus_sem); @@ -1172,127 +1201,161 @@ module_param_call(policy, pcie_aspm_set_policy, pcie_aspm_get_policy, /** * pcie_aspm_enabled - Check if PCIe ASPM has been enabled for a device. * @pdev: Target device. + * + * Relies on the upstream bridge's link_state being valid. The link_state + * is deallocated only when the last child of the bridge (i.e., @pdev or a + * sibling) is removed, and the caller should be holding a reference to + * @pdev, so this should be safe. */ bool pcie_aspm_enabled(struct pci_dev *pdev) { - struct pci_dev *bridge = pci_upstream_bridge(pdev); - bool ret; + struct pcie_link_state *link = pcie_aspm_get_link(pdev); - if (!bridge) + if (!link) return false; - mutex_lock(&aspm_lock); - ret = bridge->link_state ? 
!!bridge->link_state->aspm_enabled : false; - mutex_unlock(&aspm_lock); - - return ret; + return link->aspm_enabled; } EXPORT_SYMBOL_GPL(pcie_aspm_enabled); -#ifdef CONFIG_PCIEASPM_DEBUG -static ssize_t link_state_show(struct device *dev, - struct device_attribute *attr, - char *buf) +static ssize_t aspm_attr_show_common(struct device *dev, + struct device_attribute *attr, + char *buf, u8 state) { - struct pci_dev *pci_device = to_pci_dev(dev); - struct pcie_link_state *link_state = pci_device->link_state; + struct pci_dev *pdev = to_pci_dev(dev); + struct pcie_link_state *link = pcie_aspm_get_link(pdev); - return sprintf(buf, "%d\n", link_state->aspm_enabled); + return sprintf(buf, "%d\n", (link->aspm_enabled & state) ? 1 : 0); } -static ssize_t link_state_store(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t n) +static ssize_t aspm_attr_store_common(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t len, u8 state) { struct pci_dev *pdev = to_pci_dev(dev); - struct pcie_link_state *link, *root = pdev->link_state->root; - u32 state; - - if (aspm_disabled) - return -EPERM; + struct pcie_link_state *link = pcie_aspm_get_link(pdev); + bool state_enable; - if (kstrtouint(buf, 10, &state)) - return -EINVAL; - if ((state & ~ASPM_STATE_ALL) != 0) + if (strtobool(buf, &state_enable) < 0) return -EINVAL; down_read(&pci_bus_sem); mutex_lock(&aspm_lock); - list_for_each_entry(link, &link_list, sibling) { - if (link->root != root) - continue; - pcie_config_aspm_link(link, state); + + if (state_enable) { + link->aspm_disable &= ~state; + /* need to enable L1 for substates */ + if (state & ASPM_STATE_L1SS) + link->aspm_disable &= ~ASPM_STATE_L1; + } else { + link->aspm_disable |= state; } + + pcie_config_aspm_link(link, policy_to_aspm_state(link)); + mutex_unlock(&aspm_lock); up_read(&pci_bus_sem); - return n; + + return len; } -static ssize_t clk_ctl_show(struct device *dev, - struct device_attribute *attr, - char *buf) +#define ASPM_ATTR(_f, _s) \ +static ssize_t _f##_show(struct device *dev, \ + struct device_attribute *attr, char *buf) \ +{ return aspm_attr_show_common(dev, attr, buf, ASPM_STATE_##_s); } \ + \ +static ssize_t _f##_store(struct device *dev, \ + struct device_attribute *attr, \ + const char *buf, size_t len) \ +{ return aspm_attr_store_common(dev, attr, buf, len, ASPM_STATE_##_s); } + +ASPM_ATTR(l0s_aspm, L0S) +ASPM_ATTR(l1_aspm, L1) +ASPM_ATTR(l1_1_aspm, L1_1) +ASPM_ATTR(l1_2_aspm, L1_2) +ASPM_ATTR(l1_1_pcipm, L1_1_PCIPM) +ASPM_ATTR(l1_2_pcipm, L1_2_PCIPM) + +static ssize_t clkpm_show(struct device *dev, + struct device_attribute *attr, char *buf) { - struct pci_dev *pci_device = to_pci_dev(dev); - struct pcie_link_state *link_state = pci_device->link_state; + struct pci_dev *pdev = to_pci_dev(dev); + struct pcie_link_state *link = pcie_aspm_get_link(pdev); - return sprintf(buf, "%d\n", link_state->clkpm_enabled); + return sprintf(buf, "%d\n", link->clkpm_enabled); } -static ssize_t clk_ctl_store(struct device *dev, - struct device_attribute *attr, - const char *buf, - size_t n) +static ssize_t clkpm_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t len) { struct pci_dev *pdev = to_pci_dev(dev); - bool state; + struct pcie_link_state *link = pcie_aspm_get_link(pdev); + bool state_enable; - if (strtobool(buf, &state)) + if (strtobool(buf, &state_enable) < 0) return -EINVAL; down_read(&pci_bus_sem); mutex_lock(&aspm_lock); - pcie_set_clkpm_nocheck(pdev->link_state, state); + + 
link->clkpm_disable = !state_enable; + pcie_set_clkpm(link, policy_to_clkpm_state(link)); + mutex_unlock(&aspm_lock); up_read(&pci_bus_sem); - return n; + return len; } -static DEVICE_ATTR_RW(link_state); -static DEVICE_ATTR_RW(clk_ctl); +static DEVICE_ATTR_RW(clkpm); +static DEVICE_ATTR_RW(l0s_aspm); +static DEVICE_ATTR_RW(l1_aspm); +static DEVICE_ATTR_RW(l1_1_aspm); +static DEVICE_ATTR_RW(l1_2_aspm); +static DEVICE_ATTR_RW(l1_1_pcipm); +static DEVICE_ATTR_RW(l1_2_pcipm); + +static struct attribute *aspm_ctrl_attrs[] = { + &dev_attr_clkpm.attr, + &dev_attr_l0s_aspm.attr, + &dev_attr_l1_aspm.attr, + &dev_attr_l1_1_aspm.attr, + &dev_attr_l1_2_aspm.attr, + &dev_attr_l1_1_pcipm.attr, + &dev_attr_l1_2_pcipm.attr, + NULL +}; -static char power_group[] = "power"; -void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev) +static umode_t aspm_ctrl_attrs_are_visible(struct kobject *kobj, + struct attribute *a, int n) { - struct pcie_link_state *link_state = pdev->link_state; - - if (!link_state) - return; - - if (link_state->aspm_support) - sysfs_add_file_to_group(&pdev->dev.kobj, - &dev_attr_link_state.attr, power_group); - if (link_state->clkpm_capable) - sysfs_add_file_to_group(&pdev->dev.kobj, - &dev_attr_clk_ctl.attr, power_group); -} + struct device *dev = kobj_to_dev(kobj); + struct pci_dev *pdev = to_pci_dev(dev); + struct pcie_link_state *link = pcie_aspm_get_link(pdev); + static const u8 aspm_state_map[] = { + ASPM_STATE_L0S, + ASPM_STATE_L1, + ASPM_STATE_L1_1, + ASPM_STATE_L1_2, + ASPM_STATE_L1_1_PCIPM, + ASPM_STATE_L1_2_PCIPM, + }; -void pcie_aspm_remove_sysfs_dev_files(struct pci_dev *pdev) -{ - struct pcie_link_state *link_state = pdev->link_state; + if (aspm_disabled || !link) + return 0; - if (!link_state) - return; + if (n == 0) + return link->clkpm_capable ? a->mode : 0; - if (link_state->aspm_support) - sysfs_remove_file_from_group(&pdev->dev.kobj, - &dev_attr_link_state.attr, power_group); - if (link_state->clkpm_capable) - sysfs_remove_file_from_group(&pdev->dev.kobj, - &dev_attr_clk_ctl.attr, power_group); + return link->aspm_capable & aspm_state_map[n - 1] ? a->mode : 0; } -#endif + +const struct attribute_group aspm_ctrl_attr_group = { + .name = "link", + .attrs = aspm_ctrl_attrs, + .is_visible = aspm_ctrl_attrs_are_visible, +}; static int __init pcie_aspm_disable(char *str) { diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c index a32ec3487a8d..e06f42f58d3d 100644 --- a/drivers/pci/pcie/dpc.c +++ b/drivers/pci/pcie/dpc.c @@ -291,7 +291,7 @@ static int dpc_probe(struct pcie_device *dev) int status; u16 ctl, cap; - if (pcie_aer_get_firmware_first(pdev)) + if (pcie_aer_get_firmware_first(pdev) && !pcie_ports_dpc_native) return -ENOTSUPP; dpc = devm_kzalloc(device, sizeof(*dpc), GFP_KERNEL); diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h index 944827a8c7d3..1e673619b101 100644 --- a/drivers/pci/pcie/portdrv.h +++ b/drivers/pci/pcie/portdrv.h @@ -25,6 +25,8 @@ #define PCIE_PORT_DEVICE_MAXSERVICES 5 +extern bool pcie_ports_dpc_native; + #ifdef CONFIG_PCIEAER int pcie_aer_init(void); #else diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c index 1b330129089f..5075cb9e850c 100644 --- a/drivers/pci/pcie/portdrv_core.c +++ b/drivers/pci/pcie/portdrv_core.c @@ -250,8 +250,13 @@ static int get_port_device_capability(struct pci_dev *dev) pcie_pme_interrupt_enable(dev, false); } + /* + * With dpc-native, allow Linux to use DPC even if it doesn't have + * permission to use AER. 
+ */ if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC) && - pci_aer_available() && services & PCIE_PORT_SERVICE_AER) + pci_aer_available() && + (pcie_ports_dpc_native || (services & PCIE_PORT_SERVICE_AER))) services |= PCIE_PORT_SERVICE_DPC; if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM || diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c index 0a87091a0800..160d67c59310 100644 --- a/drivers/pci/pcie/portdrv_pci.c +++ b/drivers/pci/pcie/portdrv_pci.c @@ -29,12 +29,20 @@ bool pcie_ports_disabled; */ bool pcie_ports_native; +/* + * If the user specified "pcie_ports=dpc-native", use the Linux DPC PCIe + * service even if the platform hasn't given us permission. + */ +bool pcie_ports_dpc_native; + static int __init pcie_port_setup(char *str) { if (!strncmp(str, "compat", 6)) pcie_ports_disabled = true; else if (!strncmp(str, "native", 6)) pcie_ports_native = true; + else if (!strncmp(str, "dpc-native", 10)) + pcie_ports_dpc_native = true; return 1; } diff --git a/drivers/pci/pcie/ptm.c b/drivers/pci/pcie/ptm.c index 98cfa30f3fae..9361f3aa26ab 100644 --- a/drivers/pci/pcie/ptm.c +++ b/drivers/pci/pcie/ptm.c @@ -21,7 +21,7 @@ static void pci_ptm_info(struct pci_dev *dev) snprintf(clock_desc, sizeof(clock_desc), ">254ns"); break; default: - snprintf(clock_desc, sizeof(clock_desc), "%udns", + snprintf(clock_desc, sizeof(clock_desc), "%uns", dev->ptm_granularity); break; } diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c index 3d5271a7a849..512cb4312ddd 100644 --- a/drivers/pci/probe.c +++ b/drivers/pci/probe.c @@ -7,6 +7,7 @@ #include <linux/delay.h> #include <linux/init.h> #include <linux/pci.h> +#include <linux/msi.h> #include <linux/of_device.h> #include <linux/of_pci.h> #include <linux/pci_hotplug.h> @@ -572,6 +573,7 @@ static void devm_pci_release_host_bridge_dev(struct device *dev) bridge->release_fn(bridge); pci_free_resource_list(&bridge->windows); + pci_free_resource_list(&bridge->dma_ranges); } static void pci_release_host_bridge_dev(struct device *dev) @@ -897,6 +899,9 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge) else pr_info("PCI host bridge to bus %s\n", name); + if (nr_node_ids > 1 && pcibus_to_node(bus) == NUMA_NO_NODE) + dev_warn(&bus->dev, "Unknown NUMA node; performance will be reduced\n"); + /* Add initial resources to the bus */ resource_list_for_each_entry_safe(window, n, &resources) { list_move_tail(&window->node, &bridge->windows); @@ -1089,14 +1094,15 @@ static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus, * @sec: updated with secondary bus number from EA * @sub: updated with subordinate bus number from EA * - * If @dev is a bridge with EA capability, update @sec and @sub with - * fixed bus numbers from the capability and return true. Otherwise, - * return false. + * If @dev is a bridge with EA capability that specifies valid secondary + * and subordinate bus numbers, return true with the bus numbers in @sec + * and @sub. Otherwise return false. 
*/ static bool pci_ea_fixed_busnrs(struct pci_dev *dev, u8 *sec, u8 *sub) { int ea, offset; u32 dw; + u8 ea_sec, ea_sub; if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) return false; @@ -1108,8 +1114,13 @@ static bool pci_ea_fixed_busnrs(struct pci_dev *dev, u8 *sec, u8 *sub) offset = ea + PCI_EA_FIRST_ENT; pci_read_config_dword(dev, offset, &dw); - *sec = dw & PCI_EA_SEC_BUS_MASK; - *sub = (dw & PCI_EA_SUB_BUS_MASK) >> PCI_EA_SUB_BUS_SHIFT; + ea_sec = dw & PCI_EA_SEC_BUS_MASK; + ea_sub = (dw & PCI_EA_SUB_BUS_MASK) >> PCI_EA_SUB_BUS_SHIFT; + if (ea_sec == 0 || ea_sub < ea_sec) + return false; + + *sec = ea_sec; + *sub = ea_sub; return true; } @@ -2300,8 +2311,7 @@ void pcie_report_downtraining(struct pci_dev *dev) static void pci_init_capabilities(struct pci_dev *dev) { - /* Enhanced Allocation */ - pci_ea_init(dev); + pci_ea_init(dev); /* Enhanced Allocation */ /* Setup MSI caps & disable MSI/MSI-X interrupts */ pci_msi_setup_pci_dev(dev); @@ -2309,29 +2319,16 @@ static void pci_init_capabilities(struct pci_dev *dev) /* Buffers for saving PCIe and PCI-X capabilities */ pci_allocate_cap_save_buffers(dev); - /* Power Management */ - pci_pm_init(dev); - - /* Vital Product Data */ - pci_vpd_init(dev); - - /* Alternative Routing-ID Forwarding */ - pci_configure_ari(dev); - - /* Single Root I/O Virtualization */ - pci_iov_init(dev); - - /* Address Translation Services */ - pci_ats_init(dev); - - /* Enable ACS P2P upstream forwarding */ - pci_enable_acs(dev); - - /* Precision Time Measurement */ - pci_ptm_init(dev); - - /* Advanced Error Reporting */ - pci_aer_init(dev); + pci_pm_init(dev); /* Power Management */ + pci_vpd_init(dev); /* Vital Product Data */ + pci_configure_ari(dev); /* Alternative Routing-ID Forwarding */ + pci_iov_init(dev); /* Single Root I/O Virtualization */ + pci_ats_init(dev); /* Address Translation Services */ + pci_pri_init(dev); /* Page Request Interface */ + pci_pasid_init(dev); /* Process Address Space ID */ + pci_enable_acs(dev); /* Enable ACS P2P upstream forwarding */ + pci_ptm_init(dev); /* Precision Time Measurement */ + pci_aer_init(dev); /* Advanced Error Reporting */ pcie_report_downtraining(dev); @@ -2403,13 +2400,10 @@ void pci_device_add(struct pci_dev *dev, struct pci_bus *bus) /* Fix up broken headers */ pci_fixup_device(pci_fixup_header, dev); - /* Moved out from quirk header fixup code */ pci_reassigndev_resource_alignment(dev); - /* Clear the state_saved flag */ dev->state_saved = false; - /* Initialize various capabilities */ pci_init_capabilities(dev); /* diff --git a/drivers/pci/proc.c b/drivers/pci/proc.c index 5495537c60c2..6ef74bf5013f 100644 --- a/drivers/pci/proc.c +++ b/drivers/pci/proc.c @@ -258,13 +258,13 @@ static int proc_bus_pci_mmap(struct file *file, struct vm_area_struct *vma) } /* Make sure the caller is mapping a real resource for this device */ - for (i = 0; i < PCI_ROM_RESOURCE; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { if (dev->resource[i].flags & res_bit && pci_mmap_fits(dev, i, vma, PCI_MMAP_PROCFS)) break; } - if (i >= PCI_ROM_RESOURCE) + if (i >= PCI_STD_NUM_BARS) return -ENODEV; if (fpriv->mmap_state == pci_mmap_mem && diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index 320255e5e8f8..4937a088d7d8 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -474,7 +474,7 @@ static void quirk_extend_bar_to_page(struct pci_dev *dev) { int i; - for (i = 0; i <= PCI_STD_RESOURCE_END; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { struct resource *r = &dev->resource[i]; if (r->flags & IORESOURCE_MEM && 
resource_size(r) < PAGE_SIZE) { @@ -1809,7 +1809,7 @@ static void quirk_alder_ioapic(struct pci_dev *pdev) * The next five BARs all seem to be rubbish, so just clean * them out. */ - for (i = 1; i < 6; i++) + for (i = 1; i < PCI_STD_NUM_BARS; i++) memset(&pdev->resource[i], 0, sizeof(pdev->resource[i])); } DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EESSC, quirk_alder_ioapic); @@ -4033,7 +4033,6 @@ static void quirk_fixed_dma_alias(struct pci_dev *dev) if (id) pci_add_dma_alias(dev, id->driver_data); } - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ADAPTEC2, 0x0285, quirk_fixed_dma_alias); /* @@ -4081,6 +4080,40 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2260, quirk_mic_x200_dma_alias); DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2264, quirk_mic_x200_dma_alias); /* + * Intel Visual Compute Accelerator (VCA) is a family of PCIe add-in devices + * exposing computational units via Non Transparent Bridges (NTB, PEX 87xx). + * + * Similarly to MIC x200, we need to add DMA aliases to allow buffer access + * when IOMMU is enabled. These aliases allow computational unit access to + * host memory. These aliases mark the whole VCA device as one IOMMU + * group. + * + * All possible slot numbers (0x20) are used, since we are unable to tell + * what slot is used on other side. This quirk is intended for both host + * and computational unit sides. The VCA devices have up to five functions + * (four for DMA channels and one additional). + */ +static void quirk_pex_vca_alias(struct pci_dev *pdev) +{ + const unsigned int num_pci_slots = 0x20; + unsigned int slot; + + for (slot = 0; slot < num_pci_slots; slot++) { + pci_add_dma_alias(pdev, PCI_DEVFN(slot, 0x0)); + pci_add_dma_alias(pdev, PCI_DEVFN(slot, 0x1)); + pci_add_dma_alias(pdev, PCI_DEVFN(slot, 0x2)); + pci_add_dma_alias(pdev, PCI_DEVFN(slot, 0x3)); + pci_add_dma_alias(pdev, PCI_DEVFN(slot, 0x4)); + } +} +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2954, quirk_pex_vca_alias); +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2955, quirk_pex_vca_alias); +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2956, quirk_pex_vca_alias); +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2958, quirk_pex_vca_alias); +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2959, quirk_pex_vca_alias); +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x295A, quirk_pex_vca_alias); + +/* * The IOMMU and interrupt controller on Broadcom Vulcan/Cavium ThunderX2 are * associated not at the root bus, but at a bridge below. This quirk avoids * generating invalid DMA aliases. @@ -4263,6 +4296,24 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID, quirk_chelsio_T5_disable_root_port_attributes); /* + * pci_acs_ctrl_enabled - compare desired ACS controls with those provided + * by a device + * @acs_ctrl_req: Bitmask of desired ACS controls + * @acs_ctrl_ena: Bitmask of ACS controls enabled or provided implicitly by + * the hardware design + * + * Return 1 if all ACS controls in the @acs_ctrl_req bitmask are included + * in @acs_ctrl_ena, i.e., the device provides all the access controls the + * caller desires. Return 0 otherwise. + */ +static int pci_acs_ctrl_enabled(u16 acs_ctrl_req, u16 acs_ctrl_ena) +{ + if ((acs_ctrl_req & acs_ctrl_ena) == acs_ctrl_req) + return 1; + return 0; +} + +/* * AMD has indicated that the devices below do not support peer-to-peer * in any system where they are found in the southbridge with an AMD * IOMMU in the system. 
Multifunction devices that do not support @@ -4305,7 +4356,7 @@ static int pci_quirk_amd_sb_acs(struct pci_dev *dev, u16 acs_flags) /* Filter out flags not applicable to multifunction */ acs_flags &= (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC | PCI_ACS_DT); - return acs_flags & ~(PCI_ACS_RR | PCI_ACS_CR) ? 0 : 1; + return pci_acs_ctrl_enabled(acs_flags, PCI_ACS_RR | PCI_ACS_CR); #else return -ENODEV; #endif @@ -4313,33 +4364,38 @@ static int pci_quirk_amd_sb_acs(struct pci_dev *dev, u16 acs_flags) static bool pci_quirk_cavium_acs_match(struct pci_dev *dev) { + if (!pci_is_pcie(dev) || pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) + return false; + + switch (dev->device) { /* - * Effectively selects all downstream ports for whole ThunderX 1 - * family by 0xf800 mask (which represents 8 SoCs), while the lower - * bits of device ID are used to indicate which subdevice is used - * within the SoC. + * Effectively selects all downstream ports for whole ThunderX1 + * (which represents 8 SoCs). */ - return (pci_is_pcie(dev) && - (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) && - ((dev->device & 0xf800) == 0xa000)); + case 0xa000 ... 0xa7ff: /* ThunderX1 */ + case 0xaf84: /* ThunderX2 */ + case 0xb884: /* ThunderX3 */ + return true; + default: + return false; + } } static int pci_quirk_cavium_acs(struct pci_dev *dev, u16 acs_flags) { + if (!pci_quirk_cavium_acs_match(dev)) + return -ENOTTY; + /* - * Cavium root ports don't advertise an ACS capability. However, + * Cavium Root Ports don't advertise an ACS capability. However, * the RTL internally implements similar protection as if ACS had - * Request Redirection, Completion Redirection, Source Validation, + * Source Validation, Request Redirection, Completion Redirection, * and Upstream Forwarding features enabled. Assert that the * hardware implements and enables equivalent ACS functionality for * these flags. */ - acs_flags &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_SV | PCI_ACS_UF); - - if (!pci_quirk_cavium_acs_match(dev)) - return -ENOTTY; - - return acs_flags ? 0 : 1; + return pci_acs_ctrl_enabled(acs_flags, + PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); } static int pci_quirk_xgene_acs(struct pci_dev *dev, u16 acs_flags) @@ -4349,13 +4405,12 @@ static int pci_quirk_xgene_acs(struct pci_dev *dev, u16 acs_flags) * transactions with others, allowing masking out these bits as if they * were unimplemented in the ACS capability. */ - acs_flags &= ~(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); - - return acs_flags ? 0 : 1; + return pci_acs_ctrl_enabled(acs_flags, + PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); } /* - * Many Intel PCH root ports do provide ACS-like features to disable peer + * Many Intel PCH Root Ports do provide ACS-like features to disable peer * transactions and validate bus numbers in requests, but do not provide an * actual PCIe ACS capability. This is the list of device IDs known to fall * into that category as provided by Intel in Red Hat bugzilla 1037684. @@ -4403,37 +4458,32 @@ static bool pci_quirk_intel_pch_acs_match(struct pci_dev *dev) return false; } -#define INTEL_PCH_ACS_FLAGS (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_SV) - static int pci_quirk_intel_pch_acs(struct pci_dev *dev, u16 acs_flags) { - u16 flags = dev->dev_flags & PCI_DEV_FLAGS_ACS_ENABLED_QUIRK ? - INTEL_PCH_ACS_FLAGS : 0; - if (!pci_quirk_intel_pch_acs_match(dev)) return -ENOTTY; - return acs_flags & ~flags ? 
0 : 1; + if (dev->dev_flags & PCI_DEV_FLAGS_ACS_ENABLED_QUIRK) + return pci_acs_ctrl_enabled(acs_flags, + PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); + + return pci_acs_ctrl_enabled(acs_flags, 0); } /* - * These QCOM root ports do provide ACS-like features to disable peer + * These QCOM Root Ports do provide ACS-like features to disable peer * transactions and validate bus numbers in requests, but do not provide an * actual PCIe ACS capability. Hardware supports source validation but it * will report the issue as Completer Abort instead of ACS Violation. - * Hardware doesn't support peer-to-peer and each root port is a root - * complex with unique segment numbers. It is not possible for one root - * port to pass traffic to another root port. All PCIe transactions are - * terminated inside the root port. + * Hardware doesn't support peer-to-peer and each Root Port is a Root + * Complex with unique segment numbers. It is not possible for one Root + * Port to pass traffic to another Root Port. All PCIe transactions are + * terminated inside the Root Port. */ static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags) { - u16 flags = (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_SV); - int ret = acs_flags & ~flags ? 0 : 1; - - pci_info(dev, "Using QCOM ACS Quirk (%d)\n", ret); - - return ret; + return pci_acs_ctrl_enabled(acs_flags, + PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); } static int pci_quirk_al_acs(struct pci_dev *dev, u16 acs_flags) @@ -4534,7 +4584,7 @@ static int pci_quirk_intel_spt_pch_acs(struct pci_dev *dev, u16 acs_flags) pci_read_config_dword(dev, pos + INTEL_SPT_ACS_CTRL, &ctrl); - return acs_flags & ~ctrl ? 0 : 1; + return pci_acs_ctrl_enabled(acs_flags, ctrl); } static int pci_quirk_mf_endpoint_acs(struct pci_dev *dev, u16 acs_flags) @@ -4548,10 +4598,9 @@ static int pci_quirk_mf_endpoint_acs(struct pci_dev *dev, u16 acs_flags) * perform peer-to-peer with other functions, allowing us to mask out * these bits as if they were unimplemented in the ACS capability. */ - acs_flags &= ~(PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR | - PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT); - - return acs_flags ? 0 : 1; + return pci_acs_ctrl_enabled(acs_flags, + PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR | + PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT); } static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags) @@ -4562,9 +4611,8 @@ static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags) * Allow each Root Port to be in a separate IOMMU group by masking * SV/RR/CR/UF bits. */ - acs_flags &= ~(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); - - return acs_flags ? 
0 : 1; + return pci_acs_ctrl_enabled(acs_flags, + PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); } static const struct pci_dev_acs_enabled { @@ -4666,6 +4714,17 @@ static const struct pci_dev_acs_enabled { { 0 } }; +/* + * pci_dev_specific_acs_enabled - check whether device provides ACS controls + * @dev: PCI device + * @acs_flags: Bitmask of desired ACS controls + * + * Returns: + * -ENOTTY: No quirk applies to this device; we can't tell whether the + * device provides the desired controls + * 0: Device does not provide all the desired controls + * >0: Device provides all the controls in @acs_flags + */ int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags) { const struct pci_dev_acs_enabled *i; @@ -4706,7 +4765,7 @@ int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags) #define INTEL_BSPR_REG_BPPD (1 << 9) /* Upstream Peer Decode Configuration Register */ -#define INTEL_UPDCR_REG 0x1114 +#define INTEL_UPDCR_REG 0x1014 /* 5:0 Peer Decode Enable bits */ #define INTEL_UPDCR_REG_MASK 0x3f diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c index e7dbe21705ba..f279826204eb 100644 --- a/drivers/pci/setup-bus.c +++ b/drivers/pci/setup-bus.c @@ -752,24 +752,32 @@ static void pci_bridge_check_ranges(struct pci_bus *bus) } /* - * Helper function for sizing routines: find first available bus resource - * of a given type. Note: we intentionally skip the bus resources which - * have already been assigned (that is, have non-NULL parent resource). + * Helper function for sizing routines. Assigned resources have non-NULL + * parent resource. + * + * Return first unassigned resource of the correct type. If there is none, + * return first assigned resource of the correct type. If none of the + * above, return NULL. + * + * Returning an assigned resource of the correct type allows the caller to + * distinguish between already assigned and no resource of the correct type. 
*/ -static struct resource *find_free_bus_resource(struct pci_bus *bus, - unsigned long type_mask, - unsigned long type) +static struct resource *find_bus_resource_of_type(struct pci_bus *bus, + unsigned long type_mask, + unsigned long type) { + struct resource *r, *r_assigned = NULL; int i; - struct resource *r; pci_bus_for_each_resource(bus, r, i) { if (r == &ioport_resource || r == &iomem_resource) continue; if (r && (r->flags & type_mask) == type && !r->parent) return r; + if (r && (r->flags & type_mask) == type && !r_assigned) + r_assigned = r; } - return NULL; + return r_assigned; } static resource_size_t calculate_iosize(resource_size_t size, @@ -866,8 +874,8 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size, struct list_head *realloc_head) { struct pci_dev *dev; - struct resource *b_res = find_free_bus_resource(bus, IORESOURCE_IO, - IORESOURCE_IO); + struct resource *b_res = find_bus_resource_of_type(bus, IORESOURCE_IO, + IORESOURCE_IO); resource_size_t size = 0, size0 = 0, size1 = 0; resource_size_t children_add_size = 0; resource_size_t min_align, align; @@ -875,6 +883,10 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size, if (!b_res) return; + /* If resource is already assigned, nothing more to do */ + if (b_res->parent) + return; + min_align = window_alignment(bus, IORESOURCE_IO); list_for_each_entry(dev, &bus->devices, bus_list) { int i; @@ -978,7 +990,7 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask, resource_size_t min_align, align, size, size0, size1; resource_size_t aligns[18]; /* Alignments from 1MB to 128GB */ int order, max_order; - struct resource *b_res = find_free_bus_resource(bus, + struct resource *b_res = find_bus_resource_of_type(bus, mask | IORESOURCE_PREFETCH, type); resource_size_t children_add_size = 0; resource_size_t children_add_align = 0; @@ -987,6 +999,10 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask, if (!b_res) return -ENOSPC; + /* If resource is already assigned, nothing more to do */ + if (b_res->parent) + return 0; + memset(aligns, 0, sizeof(aligns)); max_order = 0; size = 0; @@ -1178,7 +1194,8 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) { struct pci_dev *dev; unsigned long mask, prefmask, type2 = 0, type3 = 0; - resource_size_t additional_mem_size = 0, additional_io_size = 0; + resource_size_t additional_io_size = 0, additional_mmio_size = 0, + additional_mmio_pref_size = 0; struct resource *b_res; int ret; @@ -1212,7 +1229,8 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) pci_bridge_check_ranges(bus); if (bus->self->is_hotplug_bridge) { additional_io_size = pci_hotplug_io_size; - additional_mem_size = pci_hotplug_mem_size; + additional_mmio_size = pci_hotplug_mmio_size; + additional_mmio_pref_size = pci_hotplug_mmio_pref_size; } /* Fall through */ default: @@ -1230,9 +1248,9 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) if (b_res[2].flags & IORESOURCE_MEM_64) { prefmask |= IORESOURCE_MEM_64; ret = pbus_size_mem(bus, prefmask, prefmask, - prefmask, prefmask, - realloc_head ? 0 : additional_mem_size, - additional_mem_size, realloc_head); + prefmask, prefmask, + realloc_head ? 
0 : additional_mmio_pref_size, + additional_mmio_pref_size, realloc_head); /* * If successful, all non-prefetchable resources @@ -1254,9 +1272,9 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) if (!type2) { prefmask &= ~IORESOURCE_MEM_64; ret = pbus_size_mem(bus, prefmask, prefmask, - prefmask, prefmask, - realloc_head ? 0 : additional_mem_size, - additional_mem_size, realloc_head); + prefmask, prefmask, + realloc_head ? 0 : additional_mmio_pref_size, + additional_mmio_pref_size, realloc_head); /* * If successful, only non-prefetchable resources @@ -1265,7 +1283,7 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) if (ret == 0) mask = prefmask; else - additional_mem_size += additional_mem_size; + additional_mmio_size += additional_mmio_pref_size; type2 = type3 = IORESOURCE_MEM; } @@ -1285,8 +1303,8 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) * prefetchable resource in a 64-bit prefetchable window. */ pbus_size_mem(bus, mask, IORESOURCE_MEM, type2, type3, - realloc_head ? 0 : additional_mem_size, - additional_mem_size, realloc_head); + realloc_head ? 0 : additional_mmio_size, + additional_mmio_size, realloc_head); break; } } @@ -2066,6 +2084,8 @@ int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type) unsigned int i; int ret; + down_read(&pci_bus_sem); + /* Walk to the root hub, releasing bridge BARs when possible */ next = bridge; do { @@ -2100,8 +2120,10 @@ int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type) next = bridge->bus ? bridge->bus->self : NULL; } while (next); - if (list_empty(&saved)) + if (list_empty(&saved)) { + up_read(&pci_bus_sem); return -ENOENT; + } __pci_bus_size_bridges(bridge->subordinate, &added); __pci_bridge_assign_resources(bridge, &added, &failed); @@ -2122,6 +2144,7 @@ int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type) } free_list(&saved); + up_read(&pci_bus_sem); return 0; cleanup: @@ -2150,6 +2173,7 @@ cleanup: pci_setup_bridge(bridge->subordinate); } free_list(&saved); + up_read(&pci_bus_sem); return ret; } diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c index 66610f04d76d..88091bbfe77f 100644 --- a/drivers/pci/switch/switchtec.c +++ b/drivers/pci/switch/switchtec.c @@ -675,7 +675,7 @@ static int ioctl_event_summary(struct switchtec_dev *stdev, return -ENOMEM; s->global = ioread32(&stdev->mmio_sw_event->global_summary); - s->part_bitmap = ioread32(&stdev->mmio_sw_event->part_event_bitmap); + s->part_bitmap = ioread64(&stdev->mmio_sw_event->part_event_bitmap); s->local_part = ioread32(&stdev->mmio_part_cfg->part_event_summary); for (i = 0; i < stdev->partition_count; i++) { diff --git a/drivers/phy/amlogic/phy-meson-g12a-usb3-pcie.c b/drivers/phy/amlogic/phy-meson-g12a-usb3-pcie.c index ac322d643c7a..08e322789e59 100644 --- a/drivers/phy/amlogic/phy-meson-g12a-usb3-pcie.c +++ b/drivers/phy/amlogic/phy-meson-g12a-usb3-pcie.c @@ -50,6 +50,8 @@ #define PHY_R5_PHY_CR_ACK BIT(16) #define PHY_R5_PHY_BS_OUT BIT(17) +#define PCIE_RESET_DELAY 500 + struct phy_g12a_usb3_pcie_priv { struct regmap *regmap; struct regmap *regmap_cr; @@ -196,6 +198,10 @@ static int phy_g12a_usb3_init(struct phy *phy) struct phy_g12a_usb3_pcie_priv *priv = phy_get_drvdata(phy); int data, ret; + ret = reset_control_reset(priv->reset); + if (ret) + return ret; + /* Switch PHY to USB3 */ /* TODO figure out how to handle when PCIe was set in the bootloader */ 
regmap_update_bits(priv->regmap, PHY_R0, @@ -272,24 +278,64 @@ static int phy_g12a_usb3_init(struct phy *phy) return 0; } -static int phy_g12a_usb3_pcie_init(struct phy *phy) +static int phy_g12a_usb3_pcie_power_on(struct phy *phy) +{ + struct phy_g12a_usb3_pcie_priv *priv = phy_get_drvdata(phy); + + if (priv->mode == PHY_TYPE_USB3) + return 0; + + regmap_update_bits(priv->regmap, PHY_R0, + PHY_R0_PCIE_POWER_STATE, + FIELD_PREP(PHY_R0_PCIE_POWER_STATE, 0x1c)); + + return 0; +} + +static int phy_g12a_usb3_pcie_power_off(struct phy *phy) +{ + struct phy_g12a_usb3_pcie_priv *priv = phy_get_drvdata(phy); + + if (priv->mode == PHY_TYPE_USB3) + return 0; + + regmap_update_bits(priv->regmap, PHY_R0, + PHY_R0_PCIE_POWER_STATE, + FIELD_PREP(PHY_R0_PCIE_POWER_STATE, 0x1d)); + + return 0; +} + +static int phy_g12a_usb3_pcie_reset(struct phy *phy) { struct phy_g12a_usb3_pcie_priv *priv = phy_get_drvdata(phy); int ret; - ret = reset_control_reset(priv->reset); + if (priv->mode == PHY_TYPE_USB3) + return 0; + + ret = reset_control_assert(priv->reset); if (ret) return ret; + udelay(PCIE_RESET_DELAY); + + ret = reset_control_deassert(priv->reset); + if (ret) + return ret; + + udelay(PCIE_RESET_DELAY); + + return 0; +} + +static int phy_g12a_usb3_pcie_init(struct phy *phy) +{ + struct phy_g12a_usb3_pcie_priv *priv = phy_get_drvdata(phy); + if (priv->mode == PHY_TYPE_USB3) return phy_g12a_usb3_init(phy); - /* Power UP PCIE */ - /* TODO figure out when the bootloader has set USB3 mode before */ - regmap_update_bits(priv->regmap, PHY_R0, - PHY_R0_PCIE_POWER_STATE, - FIELD_PREP(PHY_R0_PCIE_POWER_STATE, 0x1c)); - return 0; } @@ -297,7 +343,10 @@ static int phy_g12a_usb3_pcie_exit(struct phy *phy) { struct phy_g12a_usb3_pcie_priv *priv = phy_get_drvdata(phy); - return reset_control_reset(priv->reset); + if (priv->mode == PHY_TYPE_USB3) + return reset_control_reset(priv->reset); + + return 0; } static struct phy *phy_g12a_usb3_pcie_xlate(struct device *dev, @@ -326,6 +375,9 @@ static struct phy *phy_g12a_usb3_pcie_xlate(struct device *dev, static const struct phy_ops phy_g12a_usb3_pcie_ops = { .init = phy_g12a_usb3_pcie_init, .exit = phy_g12a_usb3_pcie_exit, + .power_on = phy_g12a_usb3_pcie_power_on, + .power_off = phy_g12a_usb3_pcie_power_off, + .reset = phy_g12a_usb3_pcie_reset, .owner = THIS_MODULE, }; diff --git a/drivers/rapidio/devices/tsi721.c b/drivers/rapidio/devices/tsi721.c index 125a173bed45..4dd31dd9feea 100644 --- a/drivers/rapidio/devices/tsi721.c +++ b/drivers/rapidio/devices/tsi721.c @@ -2755,7 +2755,7 @@ static int tsi721_probe(struct pci_dev *pdev, { int i; - for (i = 0; i <= PCI_STD_RESOURCE_END; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { tsi_debug(INIT, &pdev->dev, "res%d %pR", i, &pdev->resource[i]); } diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c index 1cbcade747fe..2328ff1349ac 100644 --- a/drivers/scsi/pm8001/pm8001_hwi.c +++ b/drivers/scsi/pm8001/pm8001_hwi.c @@ -1186,7 +1186,7 @@ static void pm8001_hw_chip_rst(struct pm8001_hba_info *pm8001_ha) void pm8001_chip_iounmap(struct pm8001_hba_info *pm8001_ha) { s8 bar, logical = 0; - for (bar = 0; bar < 6; bar++) { + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { /* ** logical BARs for SPC: ** bar 0 and 1 - logical BAR0 diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c index c6d9da7b546e..ff618ad80ebd 100644 --- a/drivers/scsi/pm8001/pm8001_init.c +++ b/drivers/scsi/pm8001/pm8001_init.c @@ -414,7 +414,7 @@ static int pm8001_ioremap(struct pm8001_hba_info *pm8001_ha) 
pdev = pm8001_ha->pdev; /* map pci mem (PMC pci base 0-3)*/ - for (bar = 0; bar < 6; bar++) { + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { /* ** logical BARs for SPC: ** bar 0 and 1 - logical BAR0 diff --git a/drivers/staging/gasket/gasket_constants.h b/drivers/staging/gasket/gasket_constants.h index 50d87c7b178c..9ea9c8833f27 100644 --- a/drivers/staging/gasket/gasket_constants.h +++ b/drivers/staging/gasket/gasket_constants.h @@ -13,9 +13,6 @@ /* The maximum devices per each type. */ #define GASKET_DEV_MAX 256 -/* The number of supported (and possible) PCI BARs. */ -#define GASKET_NUM_BARS 6 - /* The number of supported Gasket page tables per device. */ #define GASKET_MAX_NUM_PAGE_TABLES 1 diff --git a/drivers/staging/gasket/gasket_core.c b/drivers/staging/gasket/gasket_core.c index 13179f063a61..cd8be80d2076 100644 --- a/drivers/staging/gasket/gasket_core.c +++ b/drivers/staging/gasket/gasket_core.c @@ -371,7 +371,7 @@ static int gasket_setup_pci(struct pci_dev *pci_dev, { int i, mapped_bars, ret; - for (i = 0; i < GASKET_NUM_BARS; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { ret = gasket_map_pci_bar(gasket_dev, i); if (ret) { mapped_bars = i; @@ -393,7 +393,7 @@ static void gasket_cleanup_pci(struct gasket_dev *gasket_dev) { int i; - for (i = 0; i < GASKET_NUM_BARS; i++) + for (i = 0; i < PCI_STD_NUM_BARS; i++) gasket_unmap_pci_bar(gasket_dev, i); } @@ -493,7 +493,7 @@ static ssize_t gasket_sysfs_data_show(struct device *device, (enum gasket_sysfs_attribute_type)gasket_attr->data.attr_type; switch (sysfs_type) { case ATTR_BAR_OFFSETS: - for (i = 0; i < GASKET_NUM_BARS; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { bar_desc = &driver_desc->bar_descriptions[i]; if (bar_desc->size == 0) continue; @@ -505,7 +505,7 @@ static ssize_t gasket_sysfs_data_show(struct device *device, } break; case ATTR_BAR_SIZES: - for (i = 0; i < GASKET_NUM_BARS; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { bar_desc = &driver_desc->bar_descriptions[i]; if (bar_desc->size == 0) continue; @@ -556,7 +556,7 @@ static ssize_t gasket_sysfs_data_show(struct device *device, ret = snprintf(buf, PAGE_SIZE, "%d\n", gasket_dev->reset_count); break; case ATTR_USER_MEM_RANGES: - for (i = 0; i < GASKET_NUM_BARS; ++i) { + for (i = 0; i < PCI_STD_NUM_BARS; ++i) { current_written = gasket_write_mappable_regions(buf, driver_desc, i); @@ -736,7 +736,7 @@ static int gasket_get_bar_index(const struct gasket_dev *gasket_dev, const struct gasket_driver_desc *driver_desc; driver_desc = gasket_dev->internal_desc->driver_desc; - for (i = 0; i < GASKET_NUM_BARS; ++i) { + for (i = 0; i < PCI_STD_NUM_BARS; ++i) { struct gasket_bar_desc bar_desc = driver_desc->bar_descriptions[i]; diff --git a/drivers/staging/gasket/gasket_core.h b/drivers/staging/gasket/gasket_core.h index be44ac1e3118..c417acadb0d5 100644 --- a/drivers/staging/gasket/gasket_core.h +++ b/drivers/staging/gasket/gasket_core.h @@ -268,7 +268,7 @@ struct gasket_dev { char kobj_name[GASKET_NAME_MAX]; /* Virtual address of mapped BAR memory range. */ - struct gasket_bar_data bar_data[GASKET_NUM_BARS]; + struct gasket_bar_data bar_data[PCI_STD_NUM_BARS]; /* Coherent buffer. */ struct gasket_coherent_buffer coherent_buffer; @@ -369,7 +369,7 @@ struct gasket_driver_desc { /* Set of 6 bar descriptions that describe all PCIe bars. * Note that BUS/AXI devices (i.e. non PCI devices) use those. */ - struct gasket_bar_desc bar_descriptions[GASKET_NUM_BARS]; + struct gasket_bar_desc bar_descriptions[PCI_STD_NUM_BARS]; /* * Coherent buffer description. 
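The pm8001 and gasket hunks above belong to the same tree-wide cleanup: hard-coded 6, GASKET_NUM_BARS, PCI_ROM_RESOURCE, and PCI_STD_RESOURCE_END loop bounds all become PCI_STD_NUM_BARS. A minimal sketch of the resulting idiom (the function name is made up):

	#include <linux/pci.h>

	static void example_log_mem_bars(struct pci_dev *pdev)
	{
		int bar;

		/* PCI_STD_NUM_BARS (6) covers BARs 0-5, excluding the ROM BAR. */
		for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
			if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
				continue;
			pci_info(pdev, "BAR %d: %pR\n", bar, &pdev->resource[bar]);
		}
	}

Code that indexes the resource table at an offset uses the same bound with PCI_STD_RESOURCES added back, as the vfio hunks below show with bar = i + PCI_STD_RESOURCES.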
diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c index 6adbadd6a56a..0b8784cd6d71 100644 --- a/drivers/tty/serial/8250/8250_pci.c +++ b/drivers/tty/serial/8250/8250_pci.c @@ -48,8 +48,6 @@ struct f815xxa_data { int idx; }; -#define PCI_NUM_BAR_RESOURCES 6 - struct serial_private { struct pci_dev *dev; unsigned int nr; @@ -89,7 +87,7 @@ setup_port(struct serial_private *priv, struct uart_8250_port *port, { struct pci_dev *dev = priv->dev; - if (bar >= PCI_NUM_BAR_RESOURCES) + if (bar >= PCI_STD_NUM_BARS) return -EINVAL; if (pci_resource_flags(dev, bar) & IORESOURCE_MEM) { @@ -4060,7 +4058,7 @@ serial_pci_guess_board(struct pci_dev *dev, struct pciserial_board *board) return -ENODEV; num_iomem = num_port = 0; - for (i = 0; i < PCI_NUM_BAR_RESOURCES; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { if (pci_resource_flags(dev, i) & IORESOURCE_IO) { num_port++; if (first_port == -1) @@ -4088,7 +4086,7 @@ serial_pci_guess_board(struct pci_dev *dev, struct pciserial_board *board) */ first_port = -1; num_port = 0; - for (i = 0; i < PCI_NUM_BAR_RESOURCES; i++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { if (pci_resource_flags(dev, i) & IORESOURCE_IO && pci_resource_len(dev, i) == 8 && (first_port == -1 || (first_port + num_port) == i)) { diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c index 9e26b0143a59..9ae2a7a93df2 100644 --- a/drivers/usb/core/hcd-pci.c +++ b/drivers/usb/core/hcd-pci.c @@ -234,7 +234,7 @@ int usb_hcd_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) /* UHCI */ int region; - for (region = 0; region < PCI_ROM_RESOURCE; region++) { + for (region = 0; region < PCI_STD_NUM_BARS; region++) { if (!(pci_resource_flags(dev, region) & IORESOURCE_IO)) continue; diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c index f6d04491df60..6c7f0a876b96 100644 --- a/drivers/usb/host/pci-quirks.c +++ b/drivers/usb/host/pci-quirks.c @@ -728,7 +728,7 @@ static void quirk_usb_handoff_uhci(struct pci_dev *pdev) if (!pio_enabled(pdev)) return; - for (i = 0; i < PCI_ROM_RESOURCE; i++) + for (i = 0; i < PCI_STD_NUM_BARS; i++) if ((pci_resource_flags(pdev, i) & IORESOURCE_IO)) { base = pci_resource_start(pdev, i); break; diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c index 02206162eaa9..379a02c36e37 100644 --- a/drivers/vfio/pci/vfio_pci.c +++ b/drivers/vfio/pci/vfio_pci.c @@ -110,13 +110,15 @@ static inline bool vfio_pci_is_vga(struct pci_dev *pdev) static void vfio_pci_probe_mmaps(struct vfio_pci_device *vdev) { struct resource *res; - int bar; + int i; struct vfio_pci_dummy_resource *dummy_res; INIT_LIST_HEAD(&vdev->dummy_resources_list); - for (bar = PCI_STD_RESOURCES; bar <= PCI_STD_RESOURCE_END; bar++) { - res = vdev->pdev->resource + bar; + for (i = 0; i < PCI_STD_NUM_BARS; i++) { + int bar = i + PCI_STD_RESOURCES; + + res = &vdev->pdev->resource[bar]; if (!IS_ENABLED(CONFIG_VFIO_PCI_MMAP)) goto no_mmap; @@ -399,7 +401,8 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev) vfio_config_free(vdev); - for (bar = PCI_STD_RESOURCES; bar <= PCI_STD_RESOURCE_END; bar++) { + for (i = 0; i < PCI_STD_NUM_BARS; i++) { + bar = i + PCI_STD_RESOURCES; if (!vdev->barmap[bar]) continue; pci_iounmap(pdev, vdev->barmap[bar]); diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c index f0891bd8444c..90c0b80f8acf 100644 --- a/drivers/vfio/pci/vfio_pci_config.c +++ b/drivers/vfio/pci/vfio_pci_config.c @@ -450,30 +450,32 @@ static void vfio_bar_fixup(struct 
 {
 	struct pci_dev *pdev = vdev->pdev;
 	int i;
-	__le32 *bar;
+	__le32 *vbar;
 	u64 mask;
 
-	bar = (__le32 *)&vdev->vconfig[PCI_BASE_ADDRESS_0];
+	vbar = (__le32 *)&vdev->vconfig[PCI_BASE_ADDRESS_0];
 
-	for (i = PCI_STD_RESOURCES; i <= PCI_STD_RESOURCE_END; i++, bar++) {
-		if (!pci_resource_start(pdev, i)) {
-			*bar = 0; /* Unmapped by host = unimplemented to user */
+	for (i = 0; i < PCI_STD_NUM_BARS; i++, vbar++) {
+		int bar = i + PCI_STD_RESOURCES;
+
+		if (!pci_resource_start(pdev, bar)) {
+			*vbar = 0; /* Unmapped by host = unimplemented to user */
 			continue;
 		}
 
-		mask = ~(pci_resource_len(pdev, i) - 1);
+		mask = ~(pci_resource_len(pdev, bar) - 1);
 
-		*bar &= cpu_to_le32((u32)mask);
-		*bar |= vfio_generate_bar_flags(pdev, i);
+		*vbar &= cpu_to_le32((u32)mask);
+		*vbar |= vfio_generate_bar_flags(pdev, bar);
 
-		if (*bar & cpu_to_le32(PCI_BASE_ADDRESS_MEM_TYPE_64)) {
-			bar++;
-			*bar &= cpu_to_le32((u32)(mask >> 32));
+		if (*vbar & cpu_to_le32(PCI_BASE_ADDRESS_MEM_TYPE_64)) {
+			vbar++;
+			*vbar &= cpu_to_le32((u32)(mask >> 32));
 			i++;
 		}
 	}
 
-	bar = (__le32 *)&vdev->vconfig[PCI_ROM_ADDRESS];
+	vbar = (__le32 *)&vdev->vconfig[PCI_ROM_ADDRESS];
 
 	/*
 	 * NB. REGION_INFO will have reported zero size if we weren't able
@@ -483,14 +485,14 @@ static void vfio_bar_fixup(struct vfio_pci_device *vdev)
 	if (pci_resource_start(pdev, PCI_ROM_RESOURCE)) {
 		mask = ~(pci_resource_len(pdev, PCI_ROM_RESOURCE) - 1);
 		mask |= PCI_ROM_ADDRESS_ENABLE;
-		*bar &= cpu_to_le32((u32)mask);
+		*vbar &= cpu_to_le32((u32)mask);
 	} else if (pdev->resource[PCI_ROM_RESOURCE].flags &
 					IORESOURCE_ROM_SHADOW) {
 		mask = ~(0x20000 - 1);
 		mask |= PCI_ROM_ADDRESS_ENABLE;
-		*bar &= cpu_to_le32((u32)mask);
+		*vbar &= cpu_to_le32((u32)mask);
 	} else
-		*bar = 0;
+		*vbar = 0;
 
 	vdev->bardirty = false;
 }
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index ee6ee91718a4..8a2c7607d513 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -86,8 +86,8 @@ struct vfio_pci_reflck {
 
 struct vfio_pci_device {
 	struct pci_dev		*pdev;
-	void __iomem		*barmap[PCI_STD_RESOURCE_END + 1];
-	bool			bar_mmap_supported[PCI_STD_RESOURCE_END + 1];
+	void __iomem		*barmap[PCI_STD_NUM_BARS];
+	bool			bar_mmap_supported[PCI_STD_NUM_BARS];
 	u8			*pci_config_map;
 	u8			*vconfig;
 	struct perm_bits	*msi_perm;
diff --git a/drivers/video/fbdev/aty/radeon_pm.c b/drivers/video/fbdev/aty/radeon_pm.c
index 2dc5703eac51..7c4483c7f313 100644
--- a/drivers/video/fbdev/aty/radeon_pm.c
+++ b/drivers/video/fbdev/aty/radeon_pm.c
@@ -2593,7 +2593,7 @@ static void radeon_set_suspend(struct radeonfb_info *rinfo, int suspend)
 		 * calling pci_set_power_state()
 		 */
 		radeonfb_whack_power_state(rinfo, PCI_D2);
-		__pci_complete_power_transition(rinfo->pdev, PCI_D2);
+		pci_platform_power_transition(rinfo->pdev, PCI_D2);
 	} else {
 		printk(KERN_DEBUG "radeonfb (%s): switching to D0 state...\n",
 		       pci_name(rinfo->pdev));
diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
index 95c32952fa8a..6f6fc785b545 100644
--- a/drivers/video/fbdev/core/fbmem.c
+++ b/drivers/video/fbdev/core/fbmem.c
@@ -1772,7 +1772,7 @@ int remove_conflicting_pci_framebuffers(struct pci_dev *pdev, const char *name)
 	bool primary = false;
 	int err, idx, bar;
 
-	for (idx = 0, bar = 0; bar < PCI_ROM_RESOURCE; bar++) {
+	for (idx = 0, bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
 		if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
 			continue;
 		idx++;
@@ -1782,7 +1782,7 @@
 	if (!ap)
 		return -ENOMEM;
 
-	for (idx = 0, bar = 0; bar < PCI_ROM_RESOURCE; bar++) {
+	for (idx = 0, bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
 		if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
 			continue;
 		ap->ranges[idx].base = pci_resource_start(pdev, bar);
diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
index 51d97ec4f58f..1caa3726cb45 100644
--- a/drivers/video/fbdev/efifb.c
+++ b/drivers/video/fbdev/efifb.c
@@ -653,7 +653,7 @@ static void efifb_fixup_resources(struct pci_dev *dev)
 	if (!base)
 		return;
 
-	for (i = 0; i <= PCI_STD_RESOURCE_END; i++) {
+	for (i = 0; i < PCI_STD_NUM_BARS; i++) {
 		struct resource *res = &dev->resource[i];
 
 		if (!(res->flags & IORESOURCE_MEM))
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index 5e30602fdbad..59e85e408c23 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -74,7 +74,7 @@ static int xen_allocate_irq(struct pci_dev *pdev)
 			"xen-platform-pci", pdev);
 }
 
-static int platform_pci_resume(struct pci_dev *pdev)
+static int platform_pci_resume(struct device *dev)
 {
 	int err;
 
@@ -83,7 +83,7 @@ static int platform_pci_resume(struct pci_dev *pdev)
 
 	err = xen_set_callback_via(callback_via);
 	if (err) {
-		dev_err(&pdev->dev, "platform_pci_resume failure!\n");
+		dev_err(dev, "platform_pci_resume failure!\n");
 		return err;
 	}
 	return 0;
@@ -168,13 +168,17 @@ static const struct pci_device_id platform_pci_tbl[] = {
 	{0,}
 };
 
+static struct dev_pm_ops platform_pm_ops = {
+	.resume_noirq = platform_pci_resume,
+};
+
 static struct pci_driver platform_driver = {
 	.name = DRV_NAME,
 	.probe = platform_pci_probe,
 	.id_table = platform_pci_tbl,
-#ifdef CONFIG_PM
-	.resume_early = platform_pci_resume,
-#endif
+	.driver = {
+		.pm = &platform_pm_ops,
+	},
 };
 
 builtin_pci_driver(platform_driver);
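The platform-pci hunk above drops the PCI-bus-specific .resume_early hook (and its CONFIG_PM guard) in favor of a dev_pm_ops table reached through the driver's .driver.pm pointer. As a rough sketch of the same pattern for a hypothetical driver (all names here are invented; only the structure mirrors the change):

#include <linux/pci.h>
#include <linux/pm.h>

static int example_resume_noirq(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);

	/* Redo whatever device setup was lost across suspend. */
	dev_dbg(dev, "resuming %s\n", pci_name(pdev));
	return 0;
}

static const struct dev_pm_ops example_pm_ops = {
	.resume_noirq = example_resume_noirq,
};

static struct pci_driver example_driver = {
	.name = "example",
	/* .id_table, .probe and .remove omitted for brevity */
	.driver = {
		.pm = &example_pm_ops,
	},
};

The generic callbacks take a struct device rather than a struct pci_dev, which is why platform_pci_resume's signature changes and the dev_err() call loses the &pdev->dev indirection.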