pci-v6.15-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmfmt1AUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vxqNw//cC1BlVe4UUVR5r9nfpoFeGAZeDJz
 32naGvZKjwL0tR6dStO/BEZx4QrBp+smVfJfuxtQxfzLHLgMigM2jVhfa7XUmaun
 7yZlJZu4Jmydc57sPf56CFOYOFP6zyPzSaE8u1Eb4IIqpvuoYpvDayDt6PSsLmFS
 PDzqmicT3nuNbbcfE4rYLyL6JsXooKCR1h+NNcDjy7Run9DvQbE6N2PPvXCu6O97
 aC3+kYUydEpgn9DfjBDghe+GBQCkBPldwnWqXBxKDFmYj5bKFujNccS9/IDSEuuX
 oWntDRAXgLWg048sBC1AuJQajF3UaqffRGJkzUBaZWbU/jB9t5N/Z3GpYlXzizRx
 CAqnt/ciGUKVbaESKwoeeIqgK+wG1bnrmoEaJHXFGqjr6sjm2A2T5EzyBMJ1hFwE
 wUq6SDnkp5igG7rWtsBPo/lGa5h/pNlaXng11570ikD2ZfHVfRgwy2MpXYxChrkt
 X2q/lRYU7yyfNJQ8O5LQJ6bYztatjxT0TxXNFv+cxfVrdI7vnQMuaeBE352jn+Lo
 aZ8fTJDFbRXtrZolcoetZjBdHwPIS42wYQtvxo/ylUl64xKEzZEzN09XODjwn74v
 nuOzDtbM0TAjZWxi6bwRFPnemTEAQxPuv1i4VdPQ7yblvC0at2agNBsz6Fdq60GS
 s80YMJVUU0UcVoc=
 =gM1i
 -----END PGP SIGNATURE-----

Merge tag 'pci-v6.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Enable Configuration RRS SV (which makes device readiness visible)
     early instead of during child bus scanning (Bjorn Helgaas)

   - Log debug messages about reset methods being used (Bjorn Helgaas)

   - Avoid reset when it has been disabled via sysfs (Nishanth
     Aravamudan)

   - Add common pci-ep-bus.yaml schema for exporting several peripherals
     of a single PCI function via devicetree (Andrea della Porta)

   - Create DT nodes for PCI host bridges to enable loading device tree
     overlays to create platform devices for PCI devices that have
     several features that require multiple drivers (Herve Codina)

  Resource management:

   - Enlarge devres table[] to accommodate bridge windows, ROM, IOV
     BARs, etc., and validate BAR index in devres interfaces (Philipp
     Stanner)

   - Fix typo that repeatedly distributed resources to a bridge instead
     of iterating over subordinate bridges, which resulted in too little
     space to assign some BARs (Kai-Heng Feng)

   - Relax bridge window tail sizing for optional resources, e.g., IOV
     BARs, to avoid failures when removing and re-adding devices (Ilpo
     Järvinen)

   - Allow drivers to enable devices even if we haven't assigned
     optional IOV resources to them (Ilpo Järvinen)

   - Rework handling of optional resources (IOV BARs, ROMs) to reduce
     failures if we can't allocate them (Ilpo Järvinen)

   - Fix a NULL dereference in the SR-IOV VF creation error path (Shay
     Drory)

   - Fix s390 mmio_read/write syscalls, which didn't cause page faults
     in some cases, which broke vfio-pci lazy mapping on first access
     (Niklas Schnelle)

   - Add pdev->non_mappable_bars to replace CONFIG_VFIO_PCI_MMAP, which
     was disabled only for s390 (Niklas Schnelle); see the sketch at the
     end of this section

   - Support mmap of PCI resources on s390 except for ISM devices
     (Niklas Schnelle)
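
   A rough sketch of the intent behind non_mappable_bars (only the
   pdev->non_mappable_bars field comes from this series; the surrounding
   function is illustrative, not a real driver's mmap handler):

       #include <linux/pci.h>
       #include <linux/mm.h>

       /* Illustrative guard in a driver's BAR mmap path */
       static int example_mmap_bar(struct pci_dev *pdev,
                                   struct vm_area_struct *vma)
       {
               if (pdev->non_mappable_bars)    /* e.g. s390 ISM BARs */
                       return -EINVAL;         /* refuse to mmap this BAR */
               /* ... normal BAR mmap path ... */
               return 0;
       }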

  ASPM:

   - Delay pcie_link_state deallocation to avoid dangling pointers that
     cause invalid references during hot-unplug (Daniel Stodden)

  Power management:

   - Allow PCI bridges to go to D3Hot when suspending on all non-x86
     systems (Manivannan Sadhasivam)

  Power control:

   - Create pwrctrl devices in pci_scan_device() to make it more
     symmetric with pci_pwrctrl_unregister() and make pwrctrl devices
     for PCI bridges possible (Manivannan Sadhasivam)

   - Unregister pwrctrl devices in pci_destroy_dev() so DOE, ASPM, etc.
     can still access devices after pci_stop_dev() (Manivannan
     Sadhasivam)

   - If there's a pwrctrl device for a PCI device, skip scanning it
     because the pwrctrl core will rescan the bus after the device is
     powered on (Manivannan Sadhasivam)

   - Add a pwrctrl driver for PCI slots based on voltage regulators
     described via devicetree (Manivannan Sadhasivam)

  Bandwidth control:

   - Add set_pcie_speed.sh to TEST_PROGS to fix an issue when executing
     the set_pcie_cooling_state.sh test case (Yi Lai)

   - Avoid a NULL pointer dereference when we run out of bus numbers to
     assign for a bridge secondary bus (Lukas Wunner)

  Hotplug:

   - Drop superfluous pci_hotplug_slot_list, try_module_get() calls, and
     NULL pointer checks (Lukas Wunner)

   - Drop shpchp module init/exit logging, replace shpchp dbg() with
     ctrl_dbg(), and remove unused dbg(), err(), info(), warn() wrappers
     (Ilpo Järvinen)

   - Drop 'shpchp_debug' module parameter in favor of standard dynamic
     debugging (Ilpo Järvinen)

   - Drop unused cpcihp .get_power(), .set_power() function pointers
     (Guilherme Giacomo Simoes)

   - Disable hotplug interrupts in portdrv only when pciehp is not
     enabled to avoid issuing two hotplug commands too close together
     (Feng Tang)

   - Skip pciehp 'device replaced' check if the device has been removed
     to address a deadlock when resuming after a device was removed
     during system sleep (Lukas Wunner)

   - Don't enable pciehp hotplug interrupt when resuming in poll mode
     (Ilpo Järvinen)

  Virtualization:

   - Fix bugs in 'pci=config_acs=' kernel command line parameter (Tushar
     Dave)

  DOE:

   - Expose supported DOE features via sysfs (Alistair Francis)

   - Allow DOE support to be enabled even if CXL isn't enabled (Alistair
     Francis)

  Endpoint framework:

   - Convert PCI device data so pci-epf-test works correctly on
     big-endian endpoint systems (Niklas Cassel)

   - Add BAR_RESIZABLE type to endpoint framework and add DWC core
     support for EPF drivers to set BAR_RESIZABLE type and size (Niklas
     Cassel); see the sketch at the end of this section

   - Fix pci-epf-test double free that causes an oops if the host
     reboots and PERST# deassertion restarts endpoint BAR allocation
     (Christian Bruel)

   - Fix endpoint BAR testing so tests can skip disabled BARs instead of
     reporting them as failures (Niklas Cassel)

   - Widen endpoint test BAR size variable to accommodate BARs larger
     than INT_MAX (Niklas Cassel)

   - Remove unused tools 'pci' build target left over after moving tests
     to tools/testing/selftests/pci_endpoint (Jianfeng Liu)
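
   A minimal sketch of a controller driver advertising the new BAR type,
   in the epc_features style existing endpoint drivers use (struct
   layout and headers are assumed from current drivers, not quoted from
   this series):

       #include <linux/pci-epc.h>      /* assumed header for epc_features */
       #include <linux/sizes.h>

       /* sketch: BAR0 is resizable (the EPF driver may pick its size) */
       static const struct pci_epc_features example_epc_features = {
               .align = SZ_64K,
               .bar[BAR_0] = { .type = BAR_RESIZABLE, },
       };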

  Altera PCIe controller driver:

   - Add DT binding and driver support for Agilex family (P-Tile,
     F-Tile, R-Tile) (Matthew Gerlach and D M, Sharath Kumar)

  AMD MDB PCIe controller driver:

   - Add DT binding and driver for AMD MDB (Multimedia DMA Bridge)
     (Thippeswamy Havalige)

  Broadcom STB PCIe controller driver:

   - Add BCM2712 MSI-X DT binding and interrupt controller drivers and
     add softdep on irq_bcm2712_mip driver to ensure that it is loaded
     first (Stanimir Varbanov)

   - Expand inbound window map to 64GB so it can accommodate BCM2712
     (Stanimir Varbanov)

   - Add BCM2712 support and DT updates (Stanimir Varbanov)

   - Apply link speed restriction before bringing link up, not after
     (Jim Quinlan)

   - Update Max Link Speed in Link Capabilities via the internal
     writable register, not the read-only config register (Jim Quinlan)

   - Handle regulator_bulk_get() error to avoid panic when we call
     regulator_bulk_free() later (Jim Quinlan)

   - Disable regulators only when removing the bus immediately below a
     Root Port because we don't support regulators deeper in the
     hierarchy (Jim Quinlan)

   - Make const read-only arrays static (Colin Ian King)

  Cadence PCIe endpoint driver:

   - Correct MSG TLP generation so endpoints can generate INTx messages
     (Hans Zhang)

  Freescale i.MX6 PCIe controller driver:

   - Identify the second controller on i.MX8MQ based on devicetree
     'linux,pci-domain' instead of DBI 'reg' address (Richard Zhu)

   - Remove imx_pcie_cpu_addr_fixup() since dwc core can now derive the
     ATU input address (using parent_bus_offset) from devicetree (Frank
     Li)

  Freescale Layerscape PCIe controller driver:

   - Drop deprecated 'num-ib-windows' and 'num-ob-windows' and
     unnecessary 'status' from example (Krzysztof Kozlowski)

   - Correct the syscon_regmap_lookup_by_phandle_args("fsl,pcie-scfg")
     arg_count to fix probe failure on LS1043A (Ioana Ciornei)

  HiSilicon STB PCIe controller driver:

   - Call phy_exit() to clean up if histb_pcie_probe() fails (Christophe
     JAILLET)

  Intel Gateway PCIe controller driver:

   - Remove intel_pcie_cpu_addr() since dwc core can now derive the ATU
     input address (using parent_bus_offset) from devicetree (Frank Li)

  Intel VMD host bridge driver:

   - Convert vmd_dev.cfg_lock from spinlock_t to raw_spinlock_t so
     pci_ops.read() will never sleep, even on PREEMPT_RT where
     spinlock_t becomes a sleepable lock, to avoid calling a sleeping
     function from invalid context (Ryo Takakura); see the sketch below
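
   A sketch of the locking difference (vmd_dev internals abbreviated;
   only the cfg_lock field name comes from the driver):

       #include <linux/spinlock.h>

       struct vmd_dev_sketch {
               raw_spinlock_t cfg_lock;        /* was: spinlock_t */
       };

       static void sketch_cfg_access(struct vmd_dev_sketch *vmd)
       {
               unsigned long flags;

               /* a raw spinlock never sleeps, even on PREEMPT_RT */
               raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
               /* ... config space read/write ... */
               raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
       }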

  MediaTek PCIe Gen3 controller driver:

   - Remove leftover mac_reset assert for Airoha EN7581 SoC (Lorenzo
     Bianconi)

   - Add EN7581 PBUS controller 'mediatek,pbus-csr' DT property and
     program host bridge memory aperture to this syscon node (Lorenzo
     Bianconi)

  Qualcomm PCIe controller driver:

   - Add qcom,pcie-ipq5332 binding (Varadarajan Narayanan)

   - Add qcom i.MX8QM and i.MX8QXP/DXP optional DMA interrupt (Alexander
     Stein)

   - Add optional dma-coherent DT property for Qualcomm SA8775P (Dmitry
     Baryshkov)

   - Make DT iommu property required for SA8775P and prohibited for
     SDX55 (Dmitry Baryshkov)

   - Add DT IOMMU and DMA-related properties for Qualcomm SM8450 (Dmitry
     Baryshkov)

   - Add endpoint DT properties for SAR2130P and enable endpoint mode in
     driver (Dmitry Baryshkov)

   - Describe endpoint BAR0 and BAR2 as 64-bit only and BAR1 and BAR3 as
     RESERVED (Manivannan Sadhasivam)

  Rockchip DesignWare PCIe controller driver:

   - Describe rk3568 and rk3588 BARs as Resizable, not Fixed (Niklas
     Cassel)

  Synopsys DesignWare PCIe controller driver:

   - Add debugfs-based Silicon Debug, Error Injection, Statistical
     Counter support for DWC (Shradha Todi)

   - Add debugfs property to expose LTSSM status of DWC PCIe link (Hans
     Zhang)

   - Add Rockchip support for DWC debugfs features (Niklas Cassel)

   - Add dw_pcie_parent_bus_offset() to look up the parent bus address
     of a specified 'reg' property and return the offset from the CPU
     physical address (Frank Li); see the sketch at the end of this
     section

   - Use dw_pcie_parent_bus_offset() to derive CPU -> ATU addr offset
     via 'reg[config]' for host controllers and 'reg[addr_space]' for
     endpoint controllers (Frank Li)

   - Apply struct dw_pcie.parent_bus_offset in ATU users to remove use
     of .cpu_addr_fixup() when programming ATU (Frank Li)
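
   The idea behind parent_bus_offset, as a hedged sketch (helper names
   other than dw_pcie_parent_bus_offset() are illustrative, not the DWC
   API):

       #include <linux/types.h>

       /* derived once from the devicetree 'reg' translation */
       static u64 sketch_parent_bus_offset(u64 cpu_phys_addr,
                                           u64 parent_bus_addr)
       {
               return cpu_phys_addr - parent_bus_addr;
       }

       /* ATU input address; no .cpu_addr_fixup() callback needed */
       static u64 sketch_atu_input_addr(u64 cpu_addr, u64 parent_bus_offset)
       {
               return cpu_addr - parent_bus_offset;
       }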

  TI J721E PCIe driver:

   - Correct the 'link down' interrupt bit for J784S4 (Siddharth
     Vadapalli)

  TI Keystone PCIe controller driver:

   - Describe AM65x BARs 2 and 5 as Resizable (not Fixed) and reduce
     alignment requirement from 1MB to 64KB (Niklas Cassel)

  Xilinx Versal CPM PCIe controller driver:

   - Free IRQ domain in probe error path to avoid leaking it
     (Thippeswamy Havalige)

   - Add DT .compatible "xlnx,versal-cpm5nc-host" and driver support for
     Versal Net CPM5NC Root Port controller (Thippeswamy Havalige)

   - Add driver support for CPM5_HOST1 (Thippeswamy Havalige)

  Miscellaneous:

   - Convert fsl,mpc83xx-pcie binding to YAML (J. Neuschäfer)

   - Use for_each_available_child_of_node_scoped() to simplify apple,
     kirin, mediatek, mt7621, tegra drivers (Zhang Zekun)"

* tag 'pci-v6.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (197 commits)
  PCI: layerscape: Fix arg_count to syscon_regmap_lookup_by_phandle_args()
  PCI: j721e: Fix the value of .linkdown_irq_regfield for J784S4
  misc: pci_endpoint_test: Add support for PCITEST_IRQ_TYPE_AUTO
  PCI: endpoint: pci-epf-test: Expose supported IRQ types in CAPS register
  PCI: dw-rockchip: Endpoint mode cannot raise INTx interrupts
  PCI: endpoint: Add intx_capable to epc_features struct
  dt-bindings: PCI: Add common schema for devices accessible through PCI BARs
  PCI: intel-gw: Remove intel_pcie_cpu_addr()
  PCI: imx6: Remove imx_pcie_cpu_addr_fixup()
  PCI: dwc: Use parent_bus_offset to remove need for .cpu_addr_fixup()
  PCI: dwc: ep: Ensure proper iteration over outbound map windows
  PCI: dwc: ep: Use devicetree 'reg[addr_space]' to derive CPU -> ATU addr offset
  PCI: dwc: ep: Consolidate devicetree handling in dw_pcie_ep_get_resources()
  PCI: dwc: ep: Call epc_create() early in dw_pcie_ep_init()
  PCI: dwc: Use devicetree 'reg[config]' to derive CPU -> ATU addr offset
  PCI: dwc: Add dw_pcie_parent_bus_offset() checking and debug
  PCI: dwc: Add dw_pcie_parent_bus_offset()
  PCI/bwctrl: Fix NULL pointer dereference on bus number exhaustion
  PCI: xilinx-cpm: Add cpm_csr register mapping for CPM5_HOST1 variant
  PCI: brcmstb: Make const read-only arrays static
  ...
Merged by Linus Torvalds on 2025-03-28 19:36:53 -07:00 as commit 7d06015d93; 126 changed files with 5062 additions and 1464 deletions.


@ -0,0 +1,157 @@
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_debug/lane_detect
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: (RW) Write the lane number to be checked for detection. Read
will return whether the PHY indicates receiver detection on the
selected lane. The default selected lane is Lane0.
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_debug/rx_valid
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: (RW) Write the lane number to be checked as valid or invalid.
Read will return the status of the PIPE RXVALID signal of the
selected lane. The default selected lane is Lane0.
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_err_inj/<error>
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: The "rasdes_err_inj" is a directory which can be used to inject
errors into the system. The possible errors that can be injected
are:
1) tx_lcrc - TLP LCRC error injection TX Path
2) b16_crc_dllp - 16b CRC error injection of ACK/NAK DLLP
3) b16_crc_upd_fc - 16b CRC error injection of Update-FC DLLP
4) tx_ecrc - TLP ECRC error injection TX Path
5) fcrc_tlp - TLP's FCRC error injection TX Path
6) parity_tsos - Parity error of TSOS
7) parity_skpos - Parity error on SKPOS
8) rx_lcrc - LCRC error injection RX Path
9) rx_ecrc - ECRC error injection RX Path
10) tlp_err_seq - TLPs SEQ# error
11) ack_nak_dllp_seq - DLLPS ACK/NAK SEQ# error
12) ack_nak_dllp - ACK/NAK DLLPs transmission block
13) upd_fc_dllp - UpdateFC DLLPs transmission block
14) nak_dllp - Always transmit NAK DLLP
15) inv_sync_hdr_sym - Invert SYNC header
16) com_pad_ts1 - COM/PAD TS1 order set
17) com_pad_ts2 - COM/PAD TS2 order set
18) com_fts - COM/FTS FTS order set
19) com_idl - COM/IDL E-idle order set
20) end_edb - END/EDB symbol
21) stp_sdp - STP/SDP symbol
22) com_skp - COM/SKP SKP order set
23) posted_tlp_hdr - Posted TLP Header credit value control
24) non_post_tlp_hdr - Non-Posted TLP Header credit value control
25) cmpl_tlp_hdr - Completion TLP Header credit value control
26) posted_tlp_data - Posted TLP Data credit value control
27) non_post_tlp_data - Non-Posted TLP Data credit value control
28) cmpl_tlp_data - Completion TLP Data credit value control
29) duplicate_tlp - Generates duplicate TLPs
30) nullified_tlp - Generates Nullified TLPs
(WO) Writing to the attribute prepares the controller to inject
the respective error in the next transmission of data.
The parameters required for the write depend on the error type:
- Errors 10 and 11 are sequence errors. The write command:
echo <count> <diff> > /sys/kernel/debug/dwc_pcie_<dev>/rasdes_err_inj/<error>
<count>
Number of errors to be injected
<diff>
The difference to add to or subtract from the natural
sequence number to generate a sequence error.
Allowed range: -4095 to 4095
- Errors 23 to 28 are credit value error insertions. The write
command:
echo <count> <diff> <vc> > /sys/kernel/debug/dwc_pcie_<dev>/rasdes_err_inj/<error>
<count>
Number of errors to be injected
<diff>
The difference to add to or subtract from the UpdateFC
credit value. Allowed range: -4095 to 4095
<vc>
Target VC number
- All other errors. The write command:
echo <count> > /sys/kernel/debug/dwc_pcie_<dev>/rasdes_err_inj/<error>
<count>
Number of errors to be injected
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_event_counters/<event>/counter_enable
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: The "rasdes_event_counters" is the directory which can be used
to collect statistical data about the number of times a certain
event has occurred in the controller. The list of possible
events are:
1) EBUF Overflow
2) EBUF Underrun
3) Decode Error
4) Running Disparity Error
5) SKP OS Parity Error
6) SYNC Header Error
7) Rx Valid De-assertion
8) CTL SKP OS Parity Error
9) 1st Retimer Parity Error
10) 2nd Retimer Parity Error
11) Margin CRC and Parity Error
12) Detect EI Infer
13) Receiver Error
14) RX Recovery Req
15) N_FTS Timeout
16) Framing Error
17) Deskew Error
18) Framing Error In L0
19) Deskew Uncompleted Error
20) Bad TLP
21) LCRC Error
22) Bad DLLP
23) Replay Number Rollover
24) Replay Timeout
25) Rx Nak DLLP
26) Tx Nak DLLP
27) Retry TLP
28) FC Timeout
29) Poisoned TLP
30) ECRC Error
31) Unsupported Request
32) Completer Abort
33) Completion Timeout
34) EBUF SKP Add
35) EBUF SKP Del
(RW) Write 1 to enable the event counter and write 0 to disable
the event counter. Read will return whether the counter is
currently enabled or disabled. The counter is disabled by default.
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_event_counters/<event>/counter_value
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: (RO) Read will return the current value of the event counter.
To reset the counter, disable it first and then re-enable it
using the "counter_enable" attribute.
What: /sys/kernel/debug/dwc_pcie_<dev>/rasdes_event_counters/<event>/lane_select
Date: February 2025
Contact: Shradha Todi <shradha.t@samsung.com>
Description: (RW) Some events in the event list are lane-specific.
These include events 1 to 11, as well as 34 and 35. Write the
lane number for which you wish the counter to be enabled,
disabled, or its value dumped. Read will return the currently
selected lane number. Lane0 is selected by default.
What: /sys/kernel/debug/dwc_pcie_<dev>/ltssm_status
Date: February 2025
Contact: Hans Zhang <18255117159@163.com>
Description: (RO) Read will return the current PCIe LTSSM state as both
a string and a raw value.


@ -583,3 +583,32 @@ Description:
enclosure-specific indications "specific0" to "specific7",
hence the corresponding led class devices are unavailable if
the DSM interface is used.
What: /sys/bus/pci/devices/.../doe_features
Date: March 2025
Contact: Linux PCI developers <linux-pci@vger.kernel.org>
Description:
This directory contains a list of the supported Data Object
Exchange (DOE) features. Each supported feature is a file name.
The contents of each file are the raw Vendor ID and data
object feature (type) values.
The value comes from the device and specifies the vendor and
data object type supported. The part left of the colon is the
Vendor ID in hex; the part right of the colon is the data
object type in hex.
As all DOE devices must support the DOE discovery feature,
if DOE is supported you will at least see the doe_discovery
file, with these contents:
# cat doe_features/doe_discovery
0001:00
If the device supports other features, you will see other
files as well. For example, if CMA/SPDM and secure CMA/SPDM
are supported, the doe_features directory will look like
this:
# ls doe_features
0001:01 0001:02 doe_discovery
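
For illustration, a minimal userspace parser for these file names
(the name string and the program itself are examples based only on
the format documented above; they are not part of the kernel series):

	#include <stdio.h>

	int main(void)
	{
		unsigned int vendor, type;
		const char *name = "0001:01";	/* e.g. from ls doe_features */

		/* LHS of the colon: Vendor ID; RHS: data object type */
		if (sscanf(name, "%4x:%2x", &vendor, &type) == 2)
			printf("vendor 0x%04x, type 0x%02x\n", vendor, type);
		return 0;
	}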


@ -57,11 +57,10 @@ by the PCI controller driver.
The PCI controller driver can then create a new EPC device by invoking
devm_pci_epc_create()/pci_epc_create().
* devm_pci_epc_destroy()/pci_epc_destroy()
* pci_epc_destroy()
The PCI controller driver can destroy the EPC device created by either
devm_pci_epc_create() or pci_epc_create() using devm_pci_epc_destroy() or
pci_epc_destroy().
The PCI controller driver can destroy the EPC device created by
pci_epc_create() using pci_epc_destroy().
* pci_epc_linkup()


@ -0,0 +1,60 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interrupt-controller/brcm,bcm2712-msix.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Broadcom bcm2712 MSI-X Interrupt Peripheral support
maintainers:
- Stanimir Varbanov <svarbanov@suse.de>
description:
This interrupt controller is used to provide interrupt vectors to the
generic interrupt controller (GIC) on bcm2712. It will be used as an
external MSI-X controller for the PCIe root complex.
allOf:
- $ref: /schemas/interrupt-controller/msi-controller.yaml#
properties:
compatible:
const: brcm,bcm2712-mip
reg:
items:
- description: Base register address
- description: PCIe message address
"#msi-cells":
const: 0
brcm,msi-offset:
$ref: /schemas/types.yaml#/definitions/uint32
description: Shift the allocated MSIs.
unevaluatedProperties: false
required:
- compatible
- reg
- msi-controller
- msi-ranges
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
axi {
#address-cells = <2>;
#size-cells = <2>;
msi-controller@1000130000 {
compatible = "brcm,bcm2712-mip";
reg = <0x10 0x00130000 0x00 0xc0>,
<0xff 0xfffff000 0x00 0x1000>;
msi-controller;
#msi-cells = <0>;
msi-ranges = <&gicv2 GIC_SPI 128 IRQ_TYPE_EDGE_RISING 64>;
};
};


@ -12,9 +12,19 @@ maintainers:
properties:
compatible:
description: Each family of socfpga has its own implementation of the
PCI controller. The altr,pcie-root-port-1.0 is used for the Cyclone5
family of chips. The Stratix10 family of chips is supported by the
altr,pcie-root-port-2.0. The Agilex family of chips has three
non-register-compatible variants of PCIe Hard IP, referred to as the
F-Tile, P-Tile, and R-Tile, depending on the specific chip instance.
enum:
- altr,pcie-root-port-1.0
- altr,pcie-root-port-2.0
- altr,pcie-root-port-3.0-f-tile
- altr,pcie-root-port-3.0-p-tile
- altr,pcie-root-port-3.0-r-tile
reg:
items:


@ -0,0 +1,121 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/amd,versal2-mdb-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: AMD Versal2 MDB (Multimedia DMA Bridge) Host Controller
maintainers:
- Thippeswamy Havalige <thippeswamy.havalige@amd.com>
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: /schemas/pci/snps,dw-pcie.yaml#
properties:
compatible:
const: amd,versal2-mdb-host
reg:
items:
- description: MDB System Level Control and Status Register (SLCR) Base
- description: configuration region
- description: data bus interface
- description: address translation unit register
reg-names:
items:
- const: slcr
- const: config
- const: dbi
- const: atu
ranges:
maxItems: 2
msi-map:
maxItems: 1
interrupts:
maxItems: 1
interrupt-map-mask:
items:
- const: 0
- const: 0
- const: 0
- const: 7
interrupt-map:
maxItems: 4
"#interrupt-cells":
const: 1
interrupt-controller:
description: identifies the node as an interrupt controller
type: object
additionalProperties: false
properties:
interrupt-controller: true
"#address-cells":
const: 0
"#interrupt-cells":
const: 1
required:
- interrupt-controller
- "#address-cells"
- "#interrupt-cells"
required:
- reg
- reg-names
- interrupts
- interrupt-map
- interrupt-map-mask
- msi-map
- "#interrupt-cells"
- interrupt-controller
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie@ed931000 {
compatible = "amd,versal2-mdb-host";
reg = <0x0 0xed931000 0x0 0x2000>,
<0x1000 0x100000 0x0 0xff00000>,
<0x1000 0x0 0x0 0x1000>,
<0x0 0xed860000 0x0 0x2000>;
reg-names = "slcr", "config", "dbi", "atu";
ranges = <0x2000000 0x00 0xa0000000 0x00 0xa0000000 0x00 0x10000000>,
<0x43000000 0x1100 0x00 0x1100 0x00 0x00 0x1000000>;
interrupts = <GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>;
interrupt-parent = <&gic>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc_0 0>,
<0 0 0 2 &pcie_intc_0 1>,
<0 0 0 3 &pcie_intc_0 2>,
<0 0 0 4 &pcie_intc_0 3>;
msi-map = <0x0 &gic_its 0x00 0x10000>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
device_type = "pci";
pcie_intc_0: interrupt-controller {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
};
};
};


@ -14,6 +14,7 @@ properties:
items:
- enum:
- brcm,bcm2711-pcie # The Raspberry Pi 4
- brcm,bcm2712-pcie # Raspberry Pi 5
- brcm,bcm4908-pcie
- brcm,bcm7211-pcie # Broadcom STB version of RPi4
- brcm,bcm7216-pcie # Broadcom 7216 Arm
@ -101,7 +102,10 @@ properties:
reset-names:
minItems: 1
maxItems: 3
items:
- enum: [perst, rescal]
- const: bridge
- const: swinit
required:
- compatible


@ -47,12 +47,16 @@ properties:
maxItems: 5
interrupts:
minItems: 1
items:
- description: builtin MSI controller.
- description: builtin DMA controller.
interrupt-names:
minItems: 1
items:
- const: msi
- const: dma
reset-gpio:
description: Should specify the GPIO for controlling the PCI bus device


@ -94,9 +94,6 @@ examples:
reg-names = "regs", "addr_space";
interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; /* PME interrupt */
interrupt-names = "pme";
num-ib-windows = <6>;
num-ob-windows = <8>;
status = "disabled";
};
};
...


@ -0,0 +1,113 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/fsl,mpc8xxx-pci.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Freescale MPC83xx PCI/PCI-X/PCIe controllers
description:
Binding for the PCI/PCI-X/PCIe host bridges on MPC8xxx SoCs
maintainers:
- J. Neuschäfer <j.neuschaefer@gmx.net>
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
compatible:
oneOf:
- enum:
- fsl,mpc8314-pcie
- fsl,mpc8349-pci
- fsl,mpc8540-pci
- fsl,mpc8548-pcie
- fsl,mpc8641-pcie
- items:
- enum:
- fsl,mpc8308-pcie
- fsl,mpc8315-pcie
- fsl,mpc8377-pcie
- fsl,mpc8378-pcie
- const: fsl,mpc8314-pcie
- items:
- const: fsl,mpc8360-pci
- const: fsl,mpc8349-pci
- items:
- const: fsl,mpc8540-pcix
- const: fsl,mpc8540-pci
reg:
minItems: 1
items:
- description: internal registers
- description: config space access registers
clock-frequency: true
interrupts:
items:
- description: Consolidated PCI interrupt
fsl,pci-agent-force-enum:
type: boolean
description:
Typically any Freescale PCI-X bridge hardware strapped into Agent mode is
prevented from enumerating the bus. The PrPMC form-factor requires all
mezzanines to be PCI-X Agents, but one per system may still enumerate the
bus.
This property allows a PCI-X bridge to be used for bus enumeration
despite being strapped into Agent mode.
required:
- reg
- compatible
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/irq.h>
pcie@e0009000 {
compatible = "fsl,mpc8315-pcie", "fsl,mpc8314-pcie";
reg = <0xe0009000 0x00001000>;
ranges = <0x02000000 0 0xa0000000 0xa0000000 0 0x10000000
0x01000000 0 0x00000000 0xb1000000 0 0x00800000>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
device_type = "pci";
bus-range = <0 255>;
interrupt-map-mask = <0xf800 0 0 7>;
interrupt-map = <0 0 0 1 &ipic 1 IRQ_TYPE_LEVEL_LOW
0 0 0 2 &ipic 1 IRQ_TYPE_LEVEL_LOW
0 0 0 3 &ipic 1 IRQ_TYPE_LEVEL_LOW
0 0 0 4 &ipic 1 IRQ_TYPE_LEVEL_LOW>;
clock-frequency = <0>;
};
- |
pci@ef008000 {
compatible = "fsl,mpc8540-pcix", "fsl,mpc8540-pci";
reg = <0xef008000 0x1000>;
ranges = <0x02000000 0 0x80000000 0x80000000 0 0x20000000
0x01000000 0 0x00000000 0xd0000000 0 0x01000000>;
#interrupt-cells = <1>;
#size-cells = <2>;
#address-cells = <3>;
device_type = "pci";
clock-frequency = <33333333>;
interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
interrupt-map = </* IDSEL */
0xe000 0 0 1 &mpic 2 1
0xe000 0 0 2 &mpic 3 1>;
interrupts-extended = <&mpic 24 2>;
bus-range = <0 0>;
fsl,pci-agent-force-enum;
};
...


@ -1,27 +0,0 @@
* Bus Enumeration by Freescale PCI-X Agent
Typically any Freescale PCI-X bridge hardware strapped into Agent mode
is prevented from enumerating the bus. The PrPMC form-factor requires
all mezzanines to be PCI-X Agents, but one per system may still
enumerate the bus.
The property defined below will allow a PCI-X bridge to be used for bus
enumeration despite being strapped into Agent mode.
Required properties:
- fsl,pci-agent-force-enum : There is no value associated with this
property. The property itself is treated as a boolean.
Example:
/* PCI-X bridge known to be PrPMC Monarch */
pci0: pci@ef008000 {
fsl,pci-agent-force-enum;
#interrupt-cells = <1>;
#size-cells = <2>;
#address-cells = <3>;
compatible = "fsl,mpc8540-pcix", "fsl,mpc8540-pci";
device_type = "pci";
...
...
};


@ -109,6 +109,17 @@ properties:
power-domains:
maxItems: 1
mediatek,pbus-csr:
$ref: /schemas/types.yaml#/definitions/phandle-array
items:
- items:
- description: phandle to pbus-csr syscon
- description: offset of pbus-csr base address register
- description: offset of pbus-csr base address mask register
description:
Phandle with two arguments to the syscon node used to detect if
a given address is accessible on the PCIe controller.
'#interrupt-cells':
const: 1
@ -168,6 +179,8 @@ allOf:
minItems: 1
maxItems: 2
mediatek,pbus-csr: false
- if:
properties:
compatible:
@ -197,6 +210,8 @@ allOf:
minItems: 1
maxItems: 2
mediatek,pbus-csr: false
- if:
properties:
compatible:
@ -224,6 +239,8 @@ allOf:
minItems: 1
maxItems: 2
mediatek,pbus-csr: false
- if:
properties:
compatible:


@ -0,0 +1,58 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/pci-ep-bus.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Common Properties for PCI MFD EP with Peripherals Addressable from BARs
maintainers:
- A. della Porta <andrea.porta@suse.com>
description:
Define a generic node representing a PCI endpoint which contains several sub-
peripherals. The peripherals can be accessed through one or more BARs.
This common schema is intended to be referenced from device tree bindings and
does not represent a device tree binding by itself.
properties:
'#address-cells':
const: 3
'#size-cells':
const: 2
ranges:
minItems: 1
maxItems: 6
items:
maxItems: 8
additionalItems: true
items:
- maximum: 5 # The BAR number
- const: 0
- const: 0
patternProperties:
'^pci-ep-bus@[0-5]$':
type: object
description:
One node for each BAR used by peripherals contained in the PCI endpoint.
Each node represents a bus on which peripherals are connected.
This allows for some segmentation, e.g., one peripheral is accessible
through BAR0 and another through BAR1, and you don't want either
peripheral to be able to act on the other's BAR. Alternatively, when
different peripherals need to share BARs, you can define only one node
and use a 'ranges' property to map all the used BARs.
additionalProperties: true
properties:
compatible:
const: simple-bus
required:
- compatible
additionalProperties: true
...


@ -14,6 +14,7 @@ properties:
oneOf:
- enum:
- qcom,sa8775p-pcie-ep
- qcom,sar2130p-pcie-ep
- qcom,sdx55-pcie-ep
- qcom,sm8450-pcie-ep
- items:
@ -44,11 +45,11 @@ properties:
clocks:
minItems: 5
maxItems: 8
maxItems: 9
clock-names:
minItems: 5
maxItems: 8
maxItems: 9
qcom,perst-regs:
description: Reference to a syscon representing TCSR followed by the two
@ -75,6 +76,9 @@ properties:
- const: doorbell
- const: dma
iommus:
maxItems: 1
reset-gpios:
description: GPIO used as PERST# input signal
maxItems: 1
@ -91,6 +95,8 @@ properties:
- const: pcie-mem
- const: cpu-pcie
dma-coherent: true
resets:
maxItems: 1
@ -126,6 +132,38 @@ required:
allOf:
- $ref: pci-ep.yaml#
- if:
properties:
compatible:
contains:
enum:
- qcom,sar2130p-pcie-ep
then:
properties:
clocks:
items:
- description: PCIe Auxiliary clock
- description: PCIe CFG AHB clock
- description: PCIe Master AXI clock
- description: PCIe Slave AXI clock
- description: PCIe Slave Q2A AXI clock
- description: PCIe DDRSS SF TBU clock
- description: PCIe AGGRE NOC AXI clock
- description: PCIe CFG NOC AXI clock
- description: PCIe QMIP AHB clock
clock-names:
items:
- const: aux
- const: cfg
- const: bus_master
- const: bus_slave
- const: slave_q2a
- const: ddrss_sf_tbu
- const: aggre_noc_axi
- const: cnoc_sf_axi
- const: qmip_pcie_ahb
- if:
properties:
compatible:
@ -135,9 +173,43 @@ allOf:
then:
properties:
reg:
minItems: 6
maxItems: 6
reg-names:
minItems: 6
maxItems: 6
interrupts:
minItems: 2
maxItems: 2
interrupt-names:
minItems: 2
maxItems: 2
iommus: false
else:
properties:
reg:
minItems: 7
maxItems: 7
reg-names:
minItems: 7
maxItems: 7
interrupts:
minItems: 3
maxItems: 3
interrupt-names:
minItems: 3
maxItems: 3
required:
- iommus
- if:
properties:
compatible:
contains:
enum:
- qcom,sdx55-pcie-ep
then:
properties:
clocks:
items:
- description: PCIe Auxiliary clock
@ -156,10 +228,6 @@ allOf:
- const: slave_q2a
- const: sleep
- const: ref
interrupts:
maxItems: 2
interrupt-names:
maxItems: 2
- if:
properties:
@ -169,10 +237,6 @@ allOf:
- qcom,sm8450-pcie-ep
then:
properties:
reg:
maxItems: 6
reg-names:
maxItems: 6
clocks:
items:
- description: PCIe Auxiliary clock
@ -193,10 +257,6 @@ allOf:
- const: ref
- const: ddrss_sf_tbu
- const: aggre_noc_axi
interrupts:
maxItems: 2
interrupt-names:
maxItems: 2
- if:
properties:
@ -206,12 +266,6 @@ allOf:
- qcom,sa8775p-pcie-ep
then:
properties:
reg:
minItems: 7
maxItems: 7
reg-names:
minItems: 7
maxItems: 7
clocks:
items:
- description: PCIe Auxiliary clock
@ -226,12 +280,6 @@ allOf:
- const: bus_master
- const: bus_slave
- const: slave_q2a
interrupts:
minItems: 3
maxItems: 3
interrupt-names:
minItems: 3
maxItems: 3
unevaluatedProperties: false


@ -33,6 +33,7 @@ properties:
- qcom,pcie-sdx55
- items:
- enum:
- qcom,pcie-ipq5332
- qcom,pcie-ipq5424
- const: qcom,pcie-ipq9574
- items:
@ -49,11 +50,11 @@ properties:
interrupts:
minItems: 1
maxItems: 8
maxItems: 9
interrupt-names:
minItems: 1
maxItems: 8
maxItems: 9
iommu-map:
minItems: 1
@ -443,6 +444,7 @@ allOf:
interrupts:
minItems: 8
interrupt-names:
minItems: 8
items:
- const: msi0
- const: msi1
@ -452,6 +454,7 @@ allOf:
- const: msi5
- const: msi6
- const: msi7
- const: global
- if:
properties:
@ -599,6 +602,7 @@ allOf:
- properties:
interrupts:
minItems: 8
maxItems: 8
interrupt-names:
items:
- const: msi0


@ -113,6 +113,8 @@ properties:
enum: [ smu, mpu ]
- description: Tegra234 aperture
enum: [ ecam ]
- description: AMD MDB PCIe SLCR region
const: slcr
allOf:
- contains:
const: dbi


@ -18,6 +18,7 @@ properties:
- xlnx,versal-cpm-host-1.00
- xlnx,versal-cpm5-host
- xlnx,versal-cpm5-host1
- xlnx,versal-cpm5nc-host
reg:
items:


@ -18,7 +18,7 @@ patternProperties:
# DO NOT ADD NEW PROPERTIES TO THIS LIST
"^(at25|bm|devbus|dmacap|dsa|exynos|fsi[ab]|gpio-fan|gpio-key|gpio|gpmc|hdmi|i2c-gpio),.*": true
"^(keypad|m25p|max8952|max8997|max8998|mpmc),.*": true
"^(pinctrl-single|#pinctrl-single|PowerPC),.*": true
"^(pciclass|pinctrl-single|#pinctrl-single|PowerPC),.*": true
"^(pl022|pxa-mmc|rcar_sound|rotary-encoder|s5m8767|sdhci),.*": true
"^(simple-audio-card|st-plgpio|st-spics|ts),.*": true


@ -18340,6 +18340,7 @@ S: Maintained
F: Documentation/devicetree/bindings/pci/snps,dw-pcie-ep.yaml
F: Documentation/devicetree/bindings/pci/snps,dw-pcie.yaml
F: drivers/pci/controller/dwc/*designware*
F: include/linux/pcie-dwc.h
PCI DRIVER FOR TI DRA7XX/J721E
M: Vignesh Raghavendra <vigneshr@ti.com>


@ -41,9 +41,6 @@ config AUDIT_ARCH
config NO_IOPORT_MAP
def_bool y
config PCI_QUIRKS
def_bool n
config ARCH_SUPPORTS_UPROBES
def_bool y
@ -259,6 +256,7 @@ config S390
select PCI_DOMAINS if PCI
select PCI_MSI if PCI
select PCI_MSI_ARCH_FALLBACKS if PCI_MSI
select PCI_QUIRKS if PCI
select SPARSE_IRQ
select SWIOTLB
select SYSCTL_EXCEPTION_TRACE


@ -11,6 +11,9 @@
#include <asm/pci_insn.h>
#include <asm/sclp.h>
#define ARCH_GENERIC_PCI_MMAP_RESOURCE 1
#define arch_can_pci_mmap_wc() 1
#define PCIBIOS_MIN_IO 0x1000
#define PCIBIOS_MIN_MEM 0x10000000


@ -5,6 +5,6 @@
obj-$(CONFIG_PCI) += pci.o pci_irq.o pci_clp.o \
pci_event.o pci_debug.o pci_insn.o pci_mmio.o \
pci_bus.o pci_kvm_hook.o pci_report.o
pci_bus.o pci_kvm_hook.o pci_report.o pci_fixup.o
obj-$(CONFIG_PCI_IOV) += pci_iov.o
obj-$(CONFIG_SYSFS) += pci_sysfs.o

arch/s390/pci/pci_fixup.c (new file, 23 lines)

@ -0,0 +1,23 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Exceptions for specific devices.
*
* Copyright IBM Corp. 2025
*
* Author(s):
* Niklas Schnelle <schnelle@linux.ibm.com>
*/
#include <linux/pci.h>
static void zpci_ism_bar_no_mmap(struct pci_dev *pdev)
{
/*
* ISM's BAR is special. Drivers written for ISM know
* how to handle this but others need to be aware of their
* special nature e.g. to prevent attempts to mmap() it.
*/
pdev->non_mappable_bars = 1;
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_IBM,
PCI_DEVICE_ID_IBM_ISM,
zpci_ism_bar_no_mmap);


@ -175,8 +175,12 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
args.address = mmio_addr;
args.vma = vma;
ret = follow_pfnmap_start(&args);
if (ret)
goto out_unlock_mmap;
if (ret) {
fixup_user_fault(current->mm, mmio_addr, FAULT_FLAG_WRITE, NULL);
ret = follow_pfnmap_start(&args);
if (ret)
goto out_unlock_mmap;
}
io_addr = (void __iomem *)((args.pfn << PAGE_SHIFT) |
(mmio_addr & ~PAGE_MASK));
@ -315,14 +319,18 @@ SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
goto out_unlock_mmap;
ret = -EACCES;
if (!(vma->vm_flags & VM_WRITE))
if (!(vma->vm_flags & VM_READ))
goto out_unlock_mmap;
args.vma = vma;
args.address = mmio_addr;
ret = follow_pfnmap_start(&args);
if (ret)
goto out_unlock_mmap;
if (ret) {
fixup_user_fault(current->mm, mmio_addr, 0, NULL);
ret = follow_pfnmap_start(&args);
if (ret)
goto out_unlock_mmap;
}
io_addr = (void __iomem *)((args.pfn << PAGE_SHIFT) |
(mmio_addr & ~PAGE_MASK));


@ -5171,6 +5171,67 @@ void set_secondary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
}
EXPORT_SYMBOL_GPL(set_secondary_fwnode);
/**
* device_remove_of_node - Remove an of_node from a device
* @dev: device whose device tree node is being removed
*/
void device_remove_of_node(struct device *dev)
{
dev = get_device(dev);
if (!dev)
return;
if (!dev->of_node)
goto end;
if (dev->fwnode == of_fwnode_handle(dev->of_node))
dev->fwnode = NULL;
of_node_put(dev->of_node);
dev->of_node = NULL;
end:
put_device(dev);
}
EXPORT_SYMBOL_GPL(device_remove_of_node);
/**
* device_add_of_node - Add an of_node to an existing device
* @dev: device whose device tree node is being added
* @of_node: of_node to add
*
* Return: 0 on success or error code on failure.
*/
int device_add_of_node(struct device *dev, struct device_node *of_node)
{
int ret;
if (!of_node)
return -EINVAL;
dev = get_device(dev);
if (!dev)
return -EINVAL;
if (dev->of_node) {
dev_err(dev, "Cannot replace node %pOF with %pOF\n",
dev->of_node, of_node);
ret = -EBUSY;
goto end;
}
dev->of_node = of_node_get(of_node);
if (!dev->fwnode)
dev->fwnode = of_fwnode_handle(of_node);
ret = 0;
end:
put_device(dev);
return ret;
}
EXPORT_SYMBOL_GPL(device_add_of_node);
/**
* device_set_of_node_from_dev - reuse device-tree node of another device
* @dev: device whose device-tree node is being set


@ -112,6 +112,22 @@ config I8259
bool
select IRQ_DOMAIN
config BCM2712_MIP
tristate "Broadcom BCM2712 MSI-X Interrupt Peripheral support"
depends on ARCH_BRCMSTB || COMPILE_TEST
default m if ARCH_BRCMSTB
depends on ARM_GIC
select GENERIC_IRQ_CHIP
select IRQ_DOMAIN_HIERARCHY
select GENERIC_MSI_IRQ
select IRQ_MSI_LIB
help
Enable support for the Broadcom BCM2712 MSI-X target peripheral
(MIP) needed by brcmstb PCIe to handle MSI-X interrupts on
Raspberry Pi 5.
If unsure say n.
config BCM6345_L1_IRQ
bool
select GENERIC_IRQ_CHIP


@ -63,6 +63,7 @@ obj-$(CONFIG_XTENSA_MX) += irq-xtensa-mx.o
obj-$(CONFIG_XILINX_INTC) += irq-xilinx-intc.o
obj-$(CONFIG_IRQ_CROSSBAR) += irq-crossbar.o
obj-$(CONFIG_SOC_VF610) += irq-vf610-mscm-ir.o
obj-$(CONFIG_BCM2712_MIP) += irq-bcm2712-mip.o
obj-$(CONFIG_BCM6345_L1_IRQ) += irq-bcm6345-l1.o
obj-$(CONFIG_BCM7038_L1_IRQ) += irq-bcm7038-l1.o
obj-$(CONFIG_BCM7120_L2_IRQ) += irq-bcm7120-l2.o


@ -0,0 +1,292 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2024 Raspberry Pi Ltd., All Rights Reserved.
* Copyright (c) 2024 SUSE
*/
#include <linux/bitmap.h>
#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include "irq-msi-lib.h"
#define MIP_INT_RAISE 0x00
#define MIP_INT_CLEAR 0x10
#define MIP_INT_CFGL_HOST 0x20
#define MIP_INT_CFGH_HOST 0x30
#define MIP_INT_MASKL_HOST 0x40
#define MIP_INT_MASKH_HOST 0x50
#define MIP_INT_MASKL_VPU 0x60
#define MIP_INT_MASKH_VPU 0x70
#define MIP_INT_STATUSL_HOST 0x80
#define MIP_INT_STATUSH_HOST 0x90
#define MIP_INT_STATUSL_VPU 0xa0
#define MIP_INT_STATUSH_VPU 0xb0
/**
* struct mip_priv - MSI-X interrupt controller data
* @lock: Used to protect bitmap alloc/free
* @base: Base address of MMIO area
* @msg_addr: PCIe MSI-X address
* @msi_base: MSI base
* @num_msis: Count of MSIs
* @msi_offset: MSI offset
* @bitmap: A bitmap for hwirqs
* @parent: Parent domain (GIC)
* @dev: A device pointer
*/
struct mip_priv {
spinlock_t lock;
void __iomem *base;
u64 msg_addr;
u32 msi_base;
u32 num_msis;
u32 msi_offset;
unsigned long *bitmap;
struct irq_domain *parent;
struct device *dev;
};
static void mip_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
{
struct mip_priv *mip = irq_data_get_irq_chip_data(d);
msg->address_hi = upper_32_bits(mip->msg_addr);
msg->address_lo = lower_32_bits(mip->msg_addr);
msg->data = d->hwirq;
}
static struct irq_chip mip_middle_irq_chip = {
.name = "MIP",
.irq_mask = irq_chip_mask_parent,
.irq_unmask = irq_chip_unmask_parent,
.irq_eoi = irq_chip_eoi_parent,
.irq_set_affinity = irq_chip_set_affinity_parent,
.irq_set_type = irq_chip_set_type_parent,
.irq_compose_msi_msg = mip_compose_msi_msg,
};
static int mip_alloc_hwirq(struct mip_priv *mip, unsigned int nr_irqs)
{
guard(spinlock)(&mip->lock);
return bitmap_find_free_region(mip->bitmap, mip->num_msis, ilog2(nr_irqs));
}
static void mip_free_hwirq(struct mip_priv *mip, unsigned int hwirq,
unsigned int nr_irqs)
{
guard(spinlock)(&mip->lock);
bitmap_release_region(mip->bitmap, hwirq, ilog2(nr_irqs));
}
static int mip_middle_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *arg)
{
struct mip_priv *mip = domain->host_data;
struct irq_fwspec fwspec = {0};
unsigned int hwirq, i;
struct irq_data *irqd;
int irq, ret;
irq = mip_alloc_hwirq(mip, nr_irqs);
if (irq < 0)
return irq;
hwirq = irq + mip->msi_offset;
fwspec.fwnode = domain->parent->fwnode;
fwspec.param_count = 3;
fwspec.param[0] = 0;
fwspec.param[1] = hwirq + mip->msi_base;
fwspec.param[2] = IRQ_TYPE_EDGE_RISING;
ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &fwspec);
if (ret)
goto err_free_hwirq;
for (i = 0; i < nr_irqs; i++) {
irqd = irq_domain_get_irq_data(domain->parent, virq + i);
irqd->chip->irq_set_type(irqd, IRQ_TYPE_EDGE_RISING);
ret = irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i,
&mip_middle_irq_chip, mip);
if (ret)
goto err_free;
irqd = irq_get_irq_data(virq + i);
irqd_set_single_target(irqd);
irqd_set_affinity_on_activate(irqd);
}
return 0;
err_free:
irq_domain_free_irqs_parent(domain, virq, nr_irqs);
err_free_hwirq:
mip_free_hwirq(mip, irq, nr_irqs);
return ret;
}
static void mip_middle_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs)
{
struct irq_data *irqd = irq_domain_get_irq_data(domain, virq);
struct mip_priv *mip;
unsigned int hwirq;
if (!irqd)
return;
mip = irq_data_get_irq_chip_data(irqd);
hwirq = irqd_to_hwirq(irqd);
irq_domain_free_irqs_parent(domain, virq, nr_irqs);
mip_free_hwirq(mip, hwirq - mip->msi_offset, nr_irqs);
}
static const struct irq_domain_ops mip_middle_domain_ops = {
.select = msi_lib_irq_domain_select,
.alloc = mip_middle_domain_alloc,
.free = mip_middle_domain_free,
};
#define MIP_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \
MSI_FLAG_USE_DEF_CHIP_OPS | \
MSI_FLAG_PCI_MSI_MASK_PARENT)
#define MIP_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \
MSI_FLAG_MULTI_PCI_MSI | \
MSI_FLAG_PCI_MSIX)
static const struct msi_parent_ops mip_msi_parent_ops = {
.supported_flags = MIP_MSI_FLAGS_SUPPORTED,
.required_flags = MIP_MSI_FLAGS_REQUIRED,
.bus_select_token = DOMAIN_BUS_GENERIC_MSI,
.bus_select_mask = MATCH_PCI_MSI,
.prefix = "MIP-MSI-",
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
static int mip_init_domains(struct mip_priv *mip, struct device_node *np)
{
struct irq_domain *middle;
middle = irq_domain_add_hierarchy(mip->parent, 0, mip->num_msis, np,
&mip_middle_domain_ops, mip);
if (!middle)
return -ENOMEM;
irq_domain_update_bus_token(middle, DOMAIN_BUS_GENERIC_MSI);
middle->dev = mip->dev;
middle->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
middle->msi_parent_ops = &mip_msi_parent_ops;
/*
* All MSI-X unmasked for the host, masked for the VPU, and edge-triggered.
*/
writel(0, mip->base + MIP_INT_MASKL_HOST);
writel(0, mip->base + MIP_INT_MASKH_HOST);
writel(~0, mip->base + MIP_INT_MASKL_VPU);
writel(~0, mip->base + MIP_INT_MASKH_VPU);
writel(~0, mip->base + MIP_INT_CFGL_HOST);
writel(~0, mip->base + MIP_INT_CFGH_HOST);
return 0;
}
static int mip_parse_dt(struct mip_priv *mip, struct device_node *np)
{
struct of_phandle_args args;
u64 size;
int ret;
ret = of_property_read_u32(np, "brcm,msi-offset", &mip->msi_offset);
if (ret)
mip->msi_offset = 0;
ret = of_parse_phandle_with_args(np, "msi-ranges", "#interrupt-cells",
0, &args);
if (ret)
return ret;
ret = of_property_read_u32_index(np, "msi-ranges", args.args_count + 1,
&mip->num_msis);
if (ret)
goto err_put;
ret = of_property_read_reg(np, 1, &mip->msg_addr, &size);
if (ret)
goto err_put;
mip->msi_base = args.args[1];
mip->parent = irq_find_host(args.np);
if (!mip->parent)
ret = -EINVAL;
err_put:
of_node_put(args.np);
return ret;
}
static int __init mip_of_msi_init(struct device_node *node, struct device_node *parent)
{
struct platform_device *pdev;
struct mip_priv *mip;
int ret;
pdev = of_find_device_by_node(node);
of_node_put(node);
if (!pdev)
return -EPROBE_DEFER;
mip = kzalloc(sizeof(*mip), GFP_KERNEL);
if (!mip)
return -ENOMEM;
spin_lock_init(&mip->lock);
mip->dev = &pdev->dev;
ret = mip_parse_dt(mip, node);
if (ret)
goto err_priv;
mip->base = of_iomap(node, 0);
if (!mip->base) {
ret = -ENXIO;
goto err_priv;
}
mip->bitmap = bitmap_zalloc(mip->num_msis, GFP_KERNEL);
if (!mip->bitmap) {
ret = -ENOMEM;
goto err_base;
}
ret = mip_init_domains(mip, node);
if (ret)
goto err_map;
dev_dbg(&pdev->dev, "MIP: MSI-X count: %u, base: %u, offset: %u, msg_addr: %llx\n",
mip->num_msis, mip->msi_base, mip->msi_offset, mip->msg_addr);
return 0;
err_map:
bitmap_free(mip->bitmap);
err_base:
iounmap(mip->base);
err_priv:
kfree(mip);
return ret;
}
IRQCHIP_PLATFORM_DRIVER_BEGIN(mip_msi)
IRQCHIP_MATCH("brcm,bcm2712-mip", mip_of_msi_init)
IRQCHIP_PLATFORM_DRIVER_END(mip_msi)
MODULE_DESCRIPTION("Broadcom BCM2712 MSI-X interrupt controller");
MODULE_AUTHOR("Phil Elwell <phil@raspberrypi.com>");
MODULE_AUTHOR("Stanimir Varbanov <svarbanov@suse.de>");
MODULE_LICENSE("GPL");


@ -28,11 +28,6 @@
#define DRV_MODULE_NAME "pci-endpoint-test"
#define IRQ_TYPE_UNDEFINED -1
#define IRQ_TYPE_INTX 0
#define IRQ_TYPE_MSI 1
#define IRQ_TYPE_MSIX 2
#define PCI_ENDPOINT_TEST_MAGIC 0x0
#define PCI_ENDPOINT_TEST_COMMAND 0x4
@ -71,6 +66,9 @@
#define PCI_ENDPOINT_TEST_CAPS 0x30
#define CAP_UNALIGNED_ACCESS BIT(0)
#define CAP_MSI BIT(1)
#define CAP_MSIX BIT(2)
#define CAP_INTX BIT(3)
#define PCI_DEVICE_ID_TI_AM654 0xb00c
#define PCI_DEVICE_ID_TI_J7200 0xb00f
@ -88,7 +86,6 @@
#define PCI_DEVICE_ID_RENESAS_R8A774E1 0x0025
#define PCI_DEVICE_ID_RENESAS_R8A779F0 0x0031
#define PCI_VENDOR_ID_ROCKCHIP 0x1d87
#define PCI_DEVICE_ID_ROCKCHIP_RK3588 0x3588
static DEFINE_IDA(pci_endpoint_test_ida);
@ -96,14 +93,6 @@ static DEFINE_IDA(pci_endpoint_test_ida);
#define to_endpoint_test(priv) container_of((priv), struct pci_endpoint_test, \
miscdev)
static bool no_msi;
module_param(no_msi, bool, 0444);
MODULE_PARM_DESC(no_msi, "Disable MSI interrupt in pci_endpoint_test");
static int irq_type = IRQ_TYPE_MSI;
module_param(irq_type, int, 0444);
MODULE_PARM_DESC(irq_type, "IRQ mode selection in pci_endpoint_test (0 - Legacy, 1 - MSI, 2 - MSI-X)");
enum pci_barno {
BAR_0,
BAR_1,
@ -126,6 +115,7 @@ struct pci_endpoint_test {
struct miscdevice miscdev;
enum pci_barno test_reg_bar;
size_t alignment;
u32 ep_caps;
const char *name;
};
@ -166,7 +156,7 @@ static void pci_endpoint_test_free_irq_vectors(struct pci_endpoint_test *test)
struct pci_dev *pdev = test->pdev;
pci_free_irq_vectors(pdev);
test->irq_type = IRQ_TYPE_UNDEFINED;
test->irq_type = PCITEST_IRQ_TYPE_UNDEFINED;
}
static int pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
@ -177,7 +167,7 @@ static int pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
struct device *dev = &pdev->dev;
switch (type) {
case IRQ_TYPE_INTX:
case PCITEST_IRQ_TYPE_INTX:
irq = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX);
if (irq < 0) {
dev_err(dev, "Failed to get Legacy interrupt\n");
@ -185,7 +175,7 @@ static int pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
}
break;
case IRQ_TYPE_MSI:
case PCITEST_IRQ_TYPE_MSI:
irq = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI);
if (irq < 0) {
dev_err(dev, "Failed to get MSI interrupts\n");
@ -193,7 +183,7 @@ static int pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
}
break;
case IRQ_TYPE_MSIX:
case PCITEST_IRQ_TYPE_MSIX:
irq = pci_alloc_irq_vectors(pdev, 1, 2048, PCI_IRQ_MSIX);
if (irq < 0) {
dev_err(dev, "Failed to get MSI-X interrupts\n");
@ -216,10 +206,9 @@ static void pci_endpoint_test_release_irq(struct pci_endpoint_test *test)
{
int i;
struct pci_dev *pdev = test->pdev;
struct device *dev = &pdev->dev;
for (i = 0; i < test->num_irqs; i++)
devm_free_irq(dev, pci_irq_vector(pdev, i), test);
free_irq(pci_irq_vector(pdev, i), test);
test->num_irqs = 0;
}
@ -232,9 +221,9 @@ static int pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
struct device *dev = &pdev->dev;
for (i = 0; i < test->num_irqs; i++) {
ret = devm_request_irq(dev, pci_irq_vector(pdev, i),
pci_endpoint_test_irqhandler,
IRQF_SHARED, test->name, test);
ret = request_irq(pci_irq_vector(pdev, i),
pci_endpoint_test_irqhandler, IRQF_SHARED,
test->name, test);
if (ret)
goto fail;
}
@ -242,23 +231,26 @@ static int pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
return 0;
fail:
switch (irq_type) {
case IRQ_TYPE_INTX:
switch (test->irq_type) {
case PCITEST_IRQ_TYPE_INTX:
dev_err(dev, "Failed to request IRQ %d for Legacy\n",
pci_irq_vector(pdev, i));
break;
case IRQ_TYPE_MSI:
case PCITEST_IRQ_TYPE_MSI:
dev_err(dev, "Failed to request IRQ %d for MSI %d\n",
pci_irq_vector(pdev, i),
i + 1);
break;
case IRQ_TYPE_MSIX:
case PCITEST_IRQ_TYPE_MSIX:
dev_err(dev, "Failed to request IRQ %d for MSI-X %d\n",
pci_irq_vector(pdev, i),
i + 1);
break;
}
test->num_irqs = i;
pci_endpoint_test_release_irq(test);
return ret;
}
@ -272,9 +264,9 @@ static const u32 bar_test_pattern[] = {
};
static int pci_endpoint_test_bar_memcmp(struct pci_endpoint_test *test,
enum pci_barno barno, int offset,
void *write_buf, void *read_buf,
int size)
enum pci_barno barno,
resource_size_t offset, void *write_buf,
void *read_buf, int size)
{
memset(write_buf, bar_test_pattern[barno], size);
memcpy_toio(test->bar[barno] + offset, write_buf, size);
@ -287,16 +279,19 @@ static int pci_endpoint_test_bar_memcmp(struct pci_endpoint_test *test,
static int pci_endpoint_test_bar(struct pci_endpoint_test *test,
enum pci_barno barno)
{
int j, bar_size, buf_size, iters;
resource_size_t bar_size, offset = 0;
void *write_buf __free(kfree) = NULL;
void *read_buf __free(kfree) = NULL;
struct pci_dev *pdev = test->pdev;
int buf_size;
bar_size = pci_resource_len(pdev, barno);
if (!bar_size)
return -ENODATA;
if (!test->bar[barno])
return -ENOMEM;
bar_size = pci_resource_len(pdev, barno);
if (barno == test->test_reg_bar)
bar_size = 0x4;
@ -314,11 +309,12 @@ static int pci_endpoint_test_bar(struct pci_endpoint_test *test,
if (!read_buf)
return -ENOMEM;
iters = bar_size / buf_size;
for (j = 0; j < iters; j++)
if (pci_endpoint_test_bar_memcmp(test, barno, buf_size * j,
write_buf, read_buf, buf_size))
while (offset < bar_size) {
if (pci_endpoint_test_bar_memcmp(test, barno, offset, write_buf,
read_buf, buf_size))
return -EIO;
offset += buf_size;
}
return 0;
}
@ -382,7 +378,7 @@ static int pci_endpoint_test_bars_read_bar(struct pci_endpoint_test *test,
static int pci_endpoint_test_bars(struct pci_endpoint_test *test)
{
enum pci_barno bar;
bool ret;
int ret;
/* Write all BARs in order (without reading). */
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
@ -398,7 +394,7 @@ static int pci_endpoint_test_bars(struct pci_endpoint_test *test)
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
if (test->bar[bar]) {
ret = pci_endpoint_test_bars_read_bar(test, bar);
if (!ret)
if (ret)
return ret;
}
}
@ -411,7 +407,7 @@ static int pci_endpoint_test_intx_irq(struct pci_endpoint_test *test)
u32 val;
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE,
IRQ_TYPE_INTX);
PCITEST_IRQ_TYPE_INTX);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 0);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND,
COMMAND_RAISE_INTX_IRQ);
@ -431,7 +427,8 @@ static int pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
int ret;
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE,
msix ? IRQ_TYPE_MSIX : IRQ_TYPE_MSI);
msix ? PCITEST_IRQ_TYPE_MSIX :
PCITEST_IRQ_TYPE_MSI);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, msi_num);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND,
msix ? COMMAND_RAISE_MSIX_IRQ :
@ -507,7 +504,8 @@ static int pci_endpoint_test_copy(struct pci_endpoint_test *test,
if (use_dma)
flags |= FLAG_USE_DMA;
if (irq_type < IRQ_TYPE_INTX || irq_type > IRQ_TYPE_MSIX) {
if (irq_type < PCITEST_IRQ_TYPE_INTX ||
irq_type > PCITEST_IRQ_TYPE_MSIX) {
dev_err(dev, "Invalid IRQ type option\n");
return -EINVAL;
}
@ -639,7 +637,8 @@ static int pci_endpoint_test_write(struct pci_endpoint_test *test,
if (use_dma)
flags |= FLAG_USE_DMA;
if (irq_type < IRQ_TYPE_INTX || irq_type > IRQ_TYPE_MSIX) {
if (irq_type < PCITEST_IRQ_TYPE_INTX ||
irq_type > PCITEST_IRQ_TYPE_MSIX) {
dev_err(dev, "Invalid IRQ type option\n");
return -EINVAL;
}
@ -735,7 +734,8 @@ static int pci_endpoint_test_read(struct pci_endpoint_test *test,
if (use_dma)
flags |= FLAG_USE_DMA;
if (irq_type < IRQ_TYPE_INTX || irq_type > IRQ_TYPE_MSIX) {
if (irq_type < PCITEST_IRQ_TYPE_INTX ||
irq_type > PCITEST_IRQ_TYPE_MSIX) {
dev_err(dev, "Invalid IRQ type option\n");
return -EINVAL;
}
@ -805,11 +805,24 @@ static int pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
struct device *dev = &pdev->dev;
int ret;
if (req_irq_type < IRQ_TYPE_INTX || req_irq_type > IRQ_TYPE_MSIX) {
if (req_irq_type < PCITEST_IRQ_TYPE_INTX ||
req_irq_type > PCITEST_IRQ_TYPE_AUTO) {
dev_err(dev, "Invalid IRQ type option\n");
return -EINVAL;
}
if (req_irq_type == PCITEST_IRQ_TYPE_AUTO) {
if (test->ep_caps & CAP_MSI)
req_irq_type = PCITEST_IRQ_TYPE_MSI;
else if (test->ep_caps & CAP_MSIX)
req_irq_type = PCITEST_IRQ_TYPE_MSIX;
else if (test->ep_caps & CAP_INTX)
req_irq_type = PCITEST_IRQ_TYPE_INTX;
else
/* fallback to MSI if no caps defined */
req_irq_type = PCITEST_IRQ_TYPE_MSI;
}
if (test->irq_type == req_irq_type)
return 0;
@ -874,7 +887,7 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
ret = pci_endpoint_test_set_irq(test, arg);
break;
case PCITEST_GET_IRQTYPE:
ret = irq_type;
ret = test->irq_type;
break;
case PCITEST_CLEAR_IRQ:
ret = pci_endpoint_test_clear_irq(test);
@ -895,13 +908,12 @@ static void pci_endpoint_test_get_capabilities(struct pci_endpoint_test *test)
{
struct pci_dev *pdev = test->pdev;
struct device *dev = &pdev->dev;
u32 caps;
caps = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CAPS);
dev_dbg(dev, "PCI_ENDPOINT_TEST_CAPS: %#x\n", caps);
test->ep_caps = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CAPS);
dev_dbg(dev, "PCI_ENDPOINT_TEST_CAPS: %#x\n", test->ep_caps);
/* CAP_UNALIGNED_ACCESS is set if the EP can do unaligned access */
if (caps & CAP_UNALIGNED_ACCESS)
if (test->ep_caps & CAP_UNALIGNED_ACCESS)
test->alignment = 0;
}
@ -910,7 +922,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
{
int ret;
int id;
char name[24];
char name[29];
enum pci_barno bar;
void __iomem *base;
struct device *dev = &pdev->dev;
@ -929,17 +941,14 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
test->test_reg_bar = 0;
test->alignment = 0;
test->pdev = pdev;
- test->irq_type = IRQ_TYPE_UNDEFINED;
- if (no_msi)
- irq_type = IRQ_TYPE_INTX;
+ test->irq_type = PCITEST_IRQ_TYPE_UNDEFINED;
data = (struct pci_endpoint_test_data *)ent->driver_data;
if (data) {
test_reg_bar = data->test_reg_bar;
test->test_reg_bar = test_reg_bar;
test->alignment = data->alignment;
- irq_type = data->irq_type;
+ test->irq_type = data->irq_type;
}
init_completion(&test->irq_raised);
@@ -961,7 +970,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
pci_set_master(pdev);
- ret = pci_endpoint_test_alloc_irq_vectors(test, irq_type);
+ ret = pci_endpoint_test_alloc_irq_vectors(test, test->irq_type);
if (ret)
goto err_disable_irq;
@@ -1083,23 +1092,23 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
static const struct pci_endpoint_test_data default_data = {
.test_reg_bar = BAR_0,
.alignment = SZ_4K,
- .irq_type = IRQ_TYPE_MSI,
+ .irq_type = PCITEST_IRQ_TYPE_MSI,
};
static const struct pci_endpoint_test_data am654_data = {
.test_reg_bar = BAR_2,
.alignment = SZ_64K,
- .irq_type = IRQ_TYPE_MSI,
+ .irq_type = PCITEST_IRQ_TYPE_MSI,
};
static const struct pci_endpoint_test_data j721e_data = {
.alignment = 256,
- .irq_type = IRQ_TYPE_MSI,
+ .irq_type = PCITEST_IRQ_TYPE_MSI,
};
static const struct pci_endpoint_test_data rk3588_data = {
.alignment = SZ_64K,
- .irq_type = IRQ_TYPE_MSI,
+ .irq_type = PCITEST_IRQ_TYPE_MSI,
};
/*


@@ -122,7 +122,10 @@ config PCI_ATS
bool
config PCI_DOE
- bool
+ bool "Enable PCI Data Object Exchange (DOE) support"
+ help
+ Say Y here if you want to be able to communicate with PCIe DOE
+ mailboxes.
config PCI_ECAM
bool
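For context on what PCI_DOE enables, a minimal driver-side sketch, assuming the pci_find_doe_mailbox()/pci_doe() helpers from include/linux/pci-doe.h; the vendor/type pair and the one-dword payload are placeholders:

#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/pci-doe.h>

static int example_doe_exchange(struct pci_dev *pdev)
{
	struct pci_doe_mb *mb;
	u32 req = 0, rsp = 0;
	int rc;

	/* Find a mailbox on pdev that supports the given protocol. */
	mb = pci_find_doe_mailbox(pdev, PCI_VENDOR_ID_PCI_SIG, 1);
	if (!mb)
		return -ENODEV;

	/* Synchronous exchange; returns response size or negative errno. */
	rc = pci_doe(mb, PCI_VENDOR_ID_PCI_SIG, 1, &req, sizeof(req),
		     &rsp, sizeof(rsp));
	return rc < 0 ? rc : 0;
}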


@@ -331,47 +331,6 @@ void __weak pcibios_resource_survey_bus(struct pci_bus *bus) { }
void __weak pcibios_bus_add_device(struct pci_dev *pdev) { }
- /*
- * Create pwrctrl devices (if required) for the PCI devices to handle the power
- * state.
- */
- static void pci_pwrctrl_create_devices(struct pci_dev *dev)
- {
- struct device_node *np = dev_of_node(&dev->dev);
- struct device *parent = &dev->dev;
- struct platform_device *pdev;
- /*
- * First ensure that we are starting from a PCI bridge and it has a
- * corresponding devicetree node.
- */
- if (np && pci_is_bridge(dev)) {
- /*
- * Now look for the child PCI device nodes and create pwrctrl
- * devices for them. The pwrctrl device drivers will manage the
- * power state of the devices.
- */
- for_each_available_child_of_node_scoped(np, child) {
- /*
- * First check whether the pwrctrl device really
- * needs to be created or not. This is decided
- * based on at least one of the power supplies
- * being defined in the devicetree node of the
- * device.
- */
- if (!of_pci_supply_present(child)) {
- pci_dbg(dev, "skipping OF node: %s\n", child->name);
- return;
- }
- /* Now create the pwrctrl device */
- pdev = of_platform_device_create(child, NULL, parent);
- if (!pdev)
- pci_err(dev, "failed to create OF node: %s\n", child->name);
- }
- }
- }
/**
* pci_bus_add_device - start driver for a single device
* @dev: device to add
@@ -396,8 +355,6 @@ void pci_bus_add_device(struct pci_dev *dev)
pci_proc_attach_device(dev);
pci_bridge_d3_update(dev);
- pci_pwrctrl_create_devices(dev);
/*
* If the PCI device is associated with a pwrctrl device with a
* power supply, create a device link between the PCI device and


@@ -355,6 +355,7 @@ static const struct j721e_pcie_data j7200_pcie_rc_data = {
static const struct j721e_pcie_data j7200_pcie_ep_data = {
.mode = PCI_MODE_EP,
.quirk_detect_quiet_flag = true,
.linkdown_irq_regfield = J7200_LINK_DOWN,
.quirk_disable_flr = true,
.max_lanes = 2,
};
@@ -376,13 +377,13 @@ static const struct j721e_pcie_data j784s4_pcie_rc_data = {
.mode = PCI_MODE_RC,
.quirk_retrain_flag = true,
.byte_access_allowed = false,
- .linkdown_irq_regfield = LINK_DOWN,
+ .linkdown_irq_regfield = J7200_LINK_DOWN,
.max_lanes = 4,
};
static const struct j721e_pcie_data j784s4_pcie_ep_data = {
.mode = PCI_MODE_EP,
- .linkdown_irq_regfield = LINK_DOWN,
+ .linkdown_irq_regfield = J7200_LINK_DOWN,
.max_lanes = 4,
};


@@ -301,12 +301,12 @@ static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u8 vfn,
val |= interrupts;
cdns_pcie_ep_fn_writew(pcie, fn, reg, val);
- /* Set MSIX BAR and offset */
+ /* Set MSI-X BAR and offset */
reg = cap + PCI_MSIX_TABLE;
val = offset | bir;
cdns_pcie_ep_fn_writel(pcie, fn, reg, val);
- /* Set PBA BAR and offset. BAR must match MSIX BAR */
+ /* Set PBA BAR and offset. BAR must match MSI-X BAR */
reg = cap + PCI_MSIX_PBA;
val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
cdns_pcie_ep_fn_writel(pcie, fn, reg, val);
@@ -352,8 +352,7 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx,
spin_unlock_irqrestore(&ep->lock, flags);
offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) |
- CDNS_PCIE_NORMAL_MSG_CODE(msg_code) |
- CDNS_PCIE_MSG_NO_DATA;
+ CDNS_PCIE_NORMAL_MSG_CODE(msg_code);
writel(0, ep->irq_cpu_addr + offset);
}
@@ -573,8 +572,8 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
/*
* Next function field in ARI_CAP_AND_CTR register for last function
- * should be 0.
- * Clearing Next Function Number field for the last function used.
+ * should be 0. Clear Next Function Number field for the last
+ * function used.
*/
last_fn = find_last_bit(&epc->function_num_map, BITS_PER_LONG);
reg = CDNS_PCIE_CORE_PF_I_ARI_CAP_AND_CTRL(last_fn);


@@ -246,7 +246,7 @@ struct cdns_pcie_rp_ib_bar {
#define CDNS_PCIE_NORMAL_MSG_CODE_MASK GENMASK(15, 8)
#define CDNS_PCIE_NORMAL_MSG_CODE(code) \
(((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
- #define CDNS_PCIE_MSG_NO_DATA BIT(16)
+ #define CDNS_PCIE_MSG_DATA BIT(16)
struct cdns_pcie;


@@ -6,6 +6,16 @@ menu "DesignWare-based PCIe controllers"
config PCIE_DW
bool
config PCIE_DW_DEBUGFS
bool "DesignWare PCIe debugfs entries"
depends on DEBUG_FS
depends on PCIE_DW_HOST || PCIE_DW_EP
help
Say Y here to enable debugfs entries for the PCIe controller. These
entries provide various debug features related to the controller and
expose the RAS DES capabilities such as Silicon Debug, Error Injection
and Statistical Counters.
config PCIE_DW_HOST
bool
select PCIE_DW
@@ -27,6 +37,17 @@ config PCIE_AL
required only for DT-based platforms. ACPI platforms with the
Annapurna Labs PCIe controller don't need to enable this.
config PCIE_AMD_MDB
bool "AMD MDB Versal2 PCIe controller"
depends on OF && (ARM64 || COMPILE_TEST)
depends on PCI_MSI
select PCIE_DW_HOST
help
Say Y here if you want to enable PCIe controller support on AMD
Versal2 SoCs. The AMD MDB Versal2 PCIe controller is based on
DesignWare IP, so the driver re-uses the DesignWare core
functions.
config PCI_MESON
tristate "Amlogic Meson PCIe controller"
default m if ARCH_MESON


@@ -1,8 +1,10 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PCIE_DW) += pcie-designware.o
obj-$(CONFIG_PCIE_DW_DEBUGFS) += pcie-designware-debugfs.o
obj-$(CONFIG_PCIE_DW_HOST) += pcie-designware-host.o
obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o
obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o
obj-$(CONFIG_PCIE_AMD_MDB) += pcie-amd-mdb.o
obj-$(CONFIG_PCIE_BT1) += pcie-bt1.o
obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o


@@ -41,7 +41,6 @@
#define IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE BIT(11)
#define IMX8MQ_GPR_PCIE_VREG_BYPASS BIT(12)
#define IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE GENMASK(11, 8)
- #define IMX8MQ_PCIE2_BASE_ADDR 0x33c00000
#define IMX95_PCIE_PHY_GEN_CTRL 0x0
#define IMX95_PCIE_REF_USE_PAD BIT(17)
@@ -109,7 +108,6 @@ enum imx_pcie_variants {
#define imx_check_flag(pci, val) (pci->drvdata->flags & val)
- #define IMX_PCIE_MAX_CLKS 6
#define IMX_PCIE_MAX_INSTANCES 2
struct imx_pcie;
@@ -120,9 +118,6 @@ struct imx_pcie_drvdata {
u32 flags;
int dbi_length;
const char *gpr;
- const char * const *clk_names;
- const u32 clks_cnt;
- const u32 clks_optional_cnt;
const u32 ltssm_off;
const u32 ltssm_mask;
const u32 mode_off[IMX_PCIE_MAX_INSTANCES];
@@ -137,7 +132,8 @@ struct imx_pcie_drvdata {
struct imx_pcie {
struct dw_pcie *pci;
struct gpio_desc *reset_gpiod;
- struct clk_bulk_data clks[IMX_PCIE_MAX_CLKS];
+ struct clk_bulk_data *clks;
+ int num_clks;
struct regmap *iomuxc_gpr;
u16 msi_ctrl;
u32 controller_id;
@@ -470,13 +466,14 @@ static int imx_setup_phy_mpll(struct imx_pcie *imx_pcie)
int mult, div;
u16 val;
int i;
struct clk_bulk_data *clks = imx_pcie->clks;
if (!(imx_pcie->drvdata->flags & IMX_PCIE_FLAG_IMX_PHY))
return 0;
- for (i = 0; i < imx_pcie->drvdata->clks_cnt; i++)
- if (strncmp(imx_pcie->clks[i].id, "pcie_phy", 8) == 0)
- phy_rate = clk_get_rate(imx_pcie->clks[i].clk);
+ for (i = 0; i < imx_pcie->num_clks; i++)
+ if (strncmp(clks[i].id, "pcie_phy", 8) == 0)
+ phy_rate = clk_get_rate(clks[i].clk);
switch (phy_rate) {
case 125000000:
@@ -668,7 +665,7 @@ static int imx_pcie_clk_enable(struct imx_pcie *imx_pcie)
struct device *dev = pci->dev;
int ret;
- ret = clk_bulk_prepare_enable(imx_pcie->drvdata->clks_cnt, imx_pcie->clks);
+ ret = clk_bulk_prepare_enable(imx_pcie->num_clks, imx_pcie->clks);
if (ret)
return ret;
@@ -685,7 +682,7 @@ static int imx_pcie_clk_enable(struct imx_pcie *imx_pcie)
return 0;
err_ref_clk:
- clk_bulk_disable_unprepare(imx_pcie->drvdata->clks_cnt, imx_pcie->clks);
+ clk_bulk_disable_unprepare(imx_pcie->num_clks, imx_pcie->clks);
return ret;
}
@@ -694,7 +691,7 @@ static void imx_pcie_clk_disable(struct imx_pcie *imx_pcie)
{
if (imx_pcie->drvdata->enable_ref_clk)
imx_pcie->drvdata->enable_ref_clk(imx_pcie, false);
- clk_bulk_disable_unprepare(imx_pcie->drvdata->clks_cnt, imx_pcie->clks);
+ clk_bulk_disable_unprepare(imx_pcie->num_clks, imx_pcie->clks);
}
static int imx6sx_pcie_core_reset(struct imx_pcie *imx_pcie, bool assert)
@@ -1217,22 +1214,6 @@ static void imx_pcie_host_exit(struct dw_pcie_rp *pp)
regulator_disable(imx_pcie->vpcie);
}
- static u64 imx_pcie_cpu_addr_fixup(struct dw_pcie *pcie, u64 cpu_addr)
- {
- struct imx_pcie *imx_pcie = to_imx_pcie(pcie);
- struct dw_pcie_rp *pp = &pcie->pp;
- struct resource_entry *entry;
- if (!(imx_pcie->drvdata->flags & IMX_PCIE_FLAG_CPU_ADDR_FIXUP))
- return cpu_addr;
- entry = resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM);
- if (!entry)
- return cpu_addr;
- return cpu_addr - entry->offset;
- }
/*
* In old DWC implementations, PCIE_ATU_INHIBIT_PAYLOAD in iATU Ctrl2
* register is reserved, so the generic DWC implementation of sending the
@@ -1263,7 +1244,6 @@ static const struct dw_pcie_host_ops imx_pcie_host_dw_pme_ops = {
static const struct dw_pcie_ops dw_pcie_ops = {
.start_link = imx_pcie_start_link,
.stop_link = imx_pcie_stop_link,
- .cpu_addr_fixup = imx_pcie_cpu_addr_fixup,
};
static void imx_pcie_ep_init(struct dw_pcie_ep *ep)
@@ -1474,9 +1454,8 @@ static int imx_pcie_probe(struct platform_device *pdev)
struct dw_pcie *pci;
struct imx_pcie *imx_pcie;
struct device_node *np;
- struct resource *dbi_base;
struct device_node *node = dev->of_node;
- int i, ret, req_cnt;
+ int ret, domain;
u16 val;
imx_pcie = devm_kzalloc(dev, sizeof(*imx_pcie), GFP_KERNEL);
@@ -1515,10 +1494,6 @@ static int imx_pcie_probe(struct platform_device *pdev)
return PTR_ERR(imx_pcie->phy_base);
}
- pci->dbi_base = devm_platform_get_and_ioremap_resource(pdev, 0, &dbi_base);
- if (IS_ERR(pci->dbi_base))
- return PTR_ERR(pci->dbi_base);
/* Fetch GPIOs */
imx_pcie->reset_gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
if (IS_ERR(imx_pcie->reset_gpiod))
@@ -1526,20 +1501,11 @@ static int imx_pcie_probe(struct platform_device *pdev)
"unable to get reset gpio\n");
gpiod_set_consumer_name(imx_pcie->reset_gpiod, "PCIe reset");
- if (imx_pcie->drvdata->clks_cnt >= IMX_PCIE_MAX_CLKS)
- return dev_err_probe(dev, -ENOMEM, "clks_cnt is too big\n");
- for (i = 0; i < imx_pcie->drvdata->clks_cnt; i++)
- imx_pcie->clks[i].id = imx_pcie->drvdata->clk_names[i];
- /* Fetch clocks */
- req_cnt = imx_pcie->drvdata->clks_cnt - imx_pcie->drvdata->clks_optional_cnt;
- ret = devm_clk_bulk_get(dev, req_cnt, imx_pcie->clks);
- if (ret)
- return ret;
- imx_pcie->clks[req_cnt].clk = devm_clk_get_optional(dev, "ref");
- if (IS_ERR(imx_pcie->clks[req_cnt].clk))
- return PTR_ERR(imx_pcie->clks[req_cnt].clk);
+ imx_pcie->num_clks = devm_clk_bulk_get_all(dev, &imx_pcie->clks);
+ if (imx_pcie->num_clks < 0)
+ return dev_err_probe(dev, imx_pcie->num_clks,
+ "failed to get clocks\n");
if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_PHYDRV)) {
imx_pcie->phy = devm_phy_get(dev, "pcie-phy");
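The clock list is now discovered entirely from the devicetree node, which is what makes the per-SoC clk_names tables removed below unnecessary; a condensed sketch of the pattern (not the full driver):

#include <linux/clk.h>
#include <linux/device.h>

static int example_bulk_clks(struct device *dev)
{
	struct clk_bulk_data *clks;
	int num, ret;

	/* Allocates and fills clks[] from the node's "clocks" property. */
	num = devm_clk_bulk_get_all(dev, &clks);
	if (num < 0)
		return num;

	ret = clk_bulk_prepare_enable(num, clks);
	if (ret)
		return ret;

	clk_bulk_disable_unprepare(num, clks);
	return 0;
}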
@@ -1565,8 +1531,11 @@ static int imx_pcie_probe(struct platform_device *pdev)
switch (imx_pcie->drvdata->variant) {
case IMX8MQ:
case IMX8MQ_EP:
- if (dbi_base->start == IMX8MQ_PCIE2_BASE_ADDR)
- imx_pcie->controller_id = 1;
+ domain = of_get_pci_domain_nr(node);
+ if (domain < 0 || domain > 1)
+ return dev_err_probe(dev, -ENODEV, "no \"linux,pci-domain\" property in devicetree\n");
+ imx_pcie->controller_id = domain;
break;
default:
break;
@@ -1645,6 +1614,7 @@ static int imx_pcie_probe(struct platform_device *pdev)
if (ret)
return ret;
pci->use_parent_dt_ranges = true;
if (imx_pcie->drvdata->mode == DW_PCIE_EP_TYPE) {
ret = imx_add_pcie_ep(imx_pcie, pdev);
if (ret < 0)
@@ -1675,13 +1645,6 @@ static void imx_pcie_shutdown(struct platform_device *pdev)
imx_pcie_assert_core_reset(imx_pcie);
}
- static const char * const imx6q_clks[] = {"pcie_bus", "pcie", "pcie_phy"};
- static const char * const imx8mm_clks[] = {"pcie_bus", "pcie", "pcie_aux"};
- static const char * const imx8mq_clks[] = {"pcie_bus", "pcie", "pcie_phy", "pcie_aux"};
- static const char * const imx6sx_clks[] = {"pcie_bus", "pcie", "pcie_phy", "pcie_inbound_axi"};
- static const char * const imx8q_clks[] = {"mstr", "slv", "dbi"};
- static const char * const imx95_clks[] = {"pcie_bus", "pcie", "pcie_phy", "pcie_aux", "ref"};
static const struct imx_pcie_drvdata drvdata[] = {
[IMX6Q] = {
.variant = IMX6Q,
@@ -1691,8 +1654,6 @@ static const struct imx_pcie_drvdata drvdata[] = {
IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
.dbi_length = 0x200,
.gpr = "fsl,imx6q-iomuxc-gpr",
- .clk_names = imx6q_clks,
- .clks_cnt = ARRAY_SIZE(imx6q_clks),
.ltssm_off = IOMUXC_GPR12,
.ltssm_mask = IMX6Q_GPR12_PCIE_CTL_2,
.mode_off[0] = IOMUXC_GPR12,
@@ -1707,8 +1668,6 @@ static const struct imx_pcie_drvdata drvdata[] = {
IMX_PCIE_FLAG_IMX_SPEED_CHANGE |
IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
.gpr = "fsl,imx6q-iomuxc-gpr",
- .clk_names = imx6sx_clks,
- .clks_cnt = ARRAY_SIZE(imx6sx_clks),
.ltssm_off = IOMUXC_GPR12,
.ltssm_mask = IMX6Q_GPR12_PCIE_CTL_2,
.mode_off[0] = IOMUXC_GPR12,
@@ -1725,8 +1684,6 @@ static const struct imx_pcie_drvdata drvdata[] = {
IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
.dbi_length = 0x200,
.gpr = "fsl,imx6q-iomuxc-gpr",
- .clk_names = imx6q_clks,
- .clks_cnt = ARRAY_SIZE(imx6q_clks),
.ltssm_off = IOMUXC_GPR12,
.ltssm_mask = IMX6Q_GPR12_PCIE_CTL_2,
.mode_off[0] = IOMUXC_GPR12,
@@ -1742,8 +1699,6 @@ static const struct imx_pcie_drvdata drvdata[] = {
IMX_PCIE_FLAG_HAS_APP_RESET |
IMX_PCIE_FLAG_HAS_PHY_RESET,
.gpr = "fsl,imx7d-iomuxc-gpr",
- .clk_names = imx6q_clks,
- .clks_cnt = ARRAY_SIZE(imx6q_clks),
.mode_off[0] = IOMUXC_GPR12,
.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
.enable_ref_clk = imx7d_pcie_enable_ref_clk,
@@ -1755,8 +1710,6 @@ static const struct imx_pcie_drvdata drvdata[] = {
IMX_PCIE_FLAG_HAS_PHY_RESET |
IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
.gpr = "fsl,imx8mq-iomuxc-gpr",
- .clk_names = imx8mq_clks,
- .clks_cnt = ARRAY_SIZE(imx8mq_clks),
.mode_off[0] = IOMUXC_GPR12,
.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
.mode_off[1] = IOMUXC_GPR12,
@@ -1770,8 +1723,6 @@ static const struct imx_pcie_drvdata drvdata[] = {
IMX_PCIE_FLAG_HAS_PHYDRV |
IMX_PCIE_FLAG_HAS_APP_RESET,
.gpr = "fsl,imx8mm-iomuxc-gpr",
- .clk_names = imx8mm_clks,
- .clks_cnt = ARRAY_SIZE(imx8mm_clks),
.mode_off[0] = IOMUXC_GPR12,
.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
.enable_ref_clk = imx8mm_pcie_enable_ref_clk,
@@ -1782,8 +1733,6 @@ static const struct imx_pcie_drvdata drvdata[] = {
IMX_PCIE_FLAG_HAS_PHYDRV |
IMX_PCIE_FLAG_HAS_APP_RESET,
.gpr = "fsl,imx8mp-iomuxc-gpr",
- .clk_names = imx8mm_clks,
- .clks_cnt = ARRAY_SIZE(imx8mm_clks),
.mode_off[0] = IOMUXC_GPR12,
.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
.enable_ref_clk = imx8mm_pcie_enable_ref_clk,
@@ -1793,17 +1742,12 @@ static const struct imx_pcie_drvdata drvdata[] = {
.flags = IMX_PCIE_FLAG_HAS_PHYDRV |
IMX_PCIE_FLAG_CPU_ADDR_FIXUP |
IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
- .clk_names = imx8q_clks,
- .clks_cnt = ARRAY_SIZE(imx8q_clks),
},
[IMX95] = {
.variant = IMX95,
.flags = IMX_PCIE_FLAG_HAS_SERDES |
IMX_PCIE_FLAG_HAS_LUT |
IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
- .clk_names = imx95_clks,
- .clks_cnt = ARRAY_SIZE(imx95_clks),
- .clks_optional_cnt = 1,
.ltssm_off = IMX95_PE0_GEN_CTRL_3,
.ltssm_mask = IMX95_PCIE_LTSSM_EN,
.mode_off[0] = IMX95_PE0_GEN_CTRL_1,
@@ -1816,8 +1760,6 @@ static const struct imx_pcie_drvdata drvdata[] = {
IMX_PCIE_FLAG_HAS_PHY_RESET,
.mode = DW_PCIE_EP_TYPE,
.gpr = "fsl,imx8mq-iomuxc-gpr",
- .clk_names = imx8mq_clks,
- .clks_cnt = ARRAY_SIZE(imx8mq_clks),
.mode_off[0] = IOMUXC_GPR12,
.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
.mode_off[1] = IOMUXC_GPR12,
@@ -1832,8 +1774,6 @@ static const struct imx_pcie_drvdata drvdata[] = {
IMX_PCIE_FLAG_HAS_PHYDRV,
.mode = DW_PCIE_EP_TYPE,
.gpr = "fsl,imx8mm-iomuxc-gpr",
- .clk_names = imx8mm_clks,
- .clks_cnt = ARRAY_SIZE(imx8mm_clks),
.mode_off[0] = IOMUXC_GPR12,
.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
.epc_features = &imx8m_pcie_epc_features,
@@ -1845,8 +1785,6 @@ static const struct imx_pcie_drvdata drvdata[] = {
IMX_PCIE_FLAG_HAS_PHYDRV,
.mode = DW_PCIE_EP_TYPE,
.gpr = "fsl,imx8mp-iomuxc-gpr",
- .clk_names = imx8mm_clks,
- .clks_cnt = ARRAY_SIZE(imx8mm_clks),
.mode_off[0] = IOMUXC_GPR12,
.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
.epc_features = &imx8m_pcie_epc_features,
@@ -1857,15 +1795,11 @@ static const struct imx_pcie_drvdata drvdata[] = {
.flags = IMX_PCIE_FLAG_HAS_PHYDRV,
.mode = DW_PCIE_EP_TYPE,
.epc_features = &imx8q_pcie_epc_features,
- .clk_names = imx8q_clks,
- .clks_cnt = ARRAY_SIZE(imx8q_clks),
},
[IMX95_EP] = {
.variant = IMX95_EP,
.flags = IMX_PCIE_FLAG_HAS_SERDES |
IMX_PCIE_FLAG_SUPPORT_64BIT,
- .clk_names = imx8mq_clks,
- .clks_cnt = ARRAY_SIZE(imx8mq_clks),
.ltssm_off = IMX95_PE0_GEN_CTRL_3,
.ltssm_mask = IMX95_PCIE_LTSSM_EN,
.mode_off[0] = IMX95_PE0_GEN_CTRL_1,


@@ -966,11 +966,11 @@ static const struct pci_epc_features ks_pcie_am654_epc_features = {
.msix_capable = true,
.bar[BAR_0] = { .type = BAR_RESERVED, },
.bar[BAR_1] = { .type = BAR_RESERVED, },
- .bar[BAR_2] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
+ .bar[BAR_2] = { .type = BAR_RESIZABLE, },
.bar[BAR_3] = { .type = BAR_FIXED, .fixed_size = SZ_64K, },
.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256, },
- .bar[BAR_5] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
- .align = SZ_1M,
+ .bar[BAR_5] = { .type = BAR_RESIZABLE, },
+ .align = SZ_64K,
};
static const struct pci_epc_features*


@@ -356,7 +356,7 @@ static int ls_pcie_probe(struct platform_device *pdev)
if (pcie->drvdata->scfg_support) {
pcie->scfg =
syscon_regmap_lookup_by_phandle_args(dev->of_node,
"fsl,pcie-scfg", 2,
"fsl,pcie-scfg", 1,
index);
if (IS_ERR(pcie->scfg)) {
dev_err(dev, "No syscfg phandle specified\n");


@@ -0,0 +1,476 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PCIe host controller driver for AMD MDB PCIe Bridge
*
* Copyright (C) 2024-2025, Advanced Micro Devices, Inc.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/of_device.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/resource.h>
#include <linux/types.h>
#include "pcie-designware.h"
#define AMD_MDB_TLP_IR_STATUS_MISC 0x4C0
#define AMD_MDB_TLP_IR_MASK_MISC 0x4C4
#define AMD_MDB_TLP_IR_ENABLE_MISC 0x4C8
#define AMD_MDB_TLP_IR_DISABLE_MISC 0x4CC
#define AMD_MDB_TLP_PCIE_INTX_MASK GENMASK(23, 16)
#define AMD_MDB_PCIE_INTR_INTX_ASSERT(x) BIT((x) * 2)
/* Interrupt registers definitions. */
#define AMD_MDB_PCIE_INTR_CMPL_TIMEOUT 15
#define AMD_MDB_PCIE_INTR_INTX 16
#define AMD_MDB_PCIE_INTR_PM_PME_RCVD 24
#define AMD_MDB_PCIE_INTR_PME_TO_ACK_RCVD 25
#define AMD_MDB_PCIE_INTR_MISC_CORRECTABLE 26
#define AMD_MDB_PCIE_INTR_NONFATAL 27
#define AMD_MDB_PCIE_INTR_FATAL 28
#define IMR(x) BIT(AMD_MDB_PCIE_INTR_ ##x)
#define AMD_MDB_PCIE_IMR_ALL_MASK \
( \
IMR(CMPL_TIMEOUT) | \
IMR(PM_PME_RCVD) | \
IMR(PME_TO_ACK_RCVD) | \
IMR(MISC_CORRECTABLE) | \
IMR(NONFATAL) | \
IMR(FATAL) | \
AMD_MDB_TLP_PCIE_INTX_MASK \
)
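/*
 * For illustration: IMR(FATAL) expands to BIT(AMD_MDB_PCIE_INTR_FATAL),
 * i.e. BIT(28), so AMD_MDB_PCIE_IMR_ALL_MASK combines the six listed
 * event bits with the INTx field in AMD_MDB_TLP_PCIE_INTX_MASK (23:16).
 */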
/**
* struct amd_mdb_pcie - PCIe port information
* @pci: DesignWare PCIe controller structure
* @slcr: MDB System Level Control and Status Register (SLCR) base
* @intx_domain: INTx IRQ domain pointer
* @mdb_domain: MDB IRQ domain pointer
* @intx_irq: INTx IRQ interrupt number
*/
struct amd_mdb_pcie {
struct dw_pcie pci;
void __iomem *slcr;
struct irq_domain *intx_domain;
struct irq_domain *mdb_domain;
int intx_irq;
};
static const struct dw_pcie_host_ops amd_mdb_pcie_host_ops = {
};
static void amd_mdb_intx_irq_mask(struct irq_data *data)
{
struct amd_mdb_pcie *pcie = irq_data_get_irq_chip_data(data);
struct dw_pcie *pci = &pcie->pci;
struct dw_pcie_rp *port = &pci->pp;
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&port->lock, flags);
val = FIELD_PREP(AMD_MDB_TLP_PCIE_INTX_MASK,
AMD_MDB_PCIE_INTR_INTX_ASSERT(data->hwirq));
/*
* Writing '1' to a bit in AMD_MDB_TLP_IR_DISABLE_MISC disables that
* interrupt, writing '0' has no effect.
*/
writel_relaxed(val, pcie->slcr + AMD_MDB_TLP_IR_DISABLE_MISC);
raw_spin_unlock_irqrestore(&port->lock, flags);
}
static void amd_mdb_intx_irq_unmask(struct irq_data *data)
{
struct amd_mdb_pcie *pcie = irq_data_get_irq_chip_data(data);
struct dw_pcie *pci = &pcie->pci;
struct dw_pcie_rp *port = &pci->pp;
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&port->lock, flags);
val = FIELD_PREP(AMD_MDB_TLP_PCIE_INTX_MASK,
AMD_MDB_PCIE_INTR_INTX_ASSERT(data->hwirq));
/*
* Writing '1' to a bit in AMD_MDB_TLP_IR_ENABLE_MISC enables that
* interrupt, writing '0' has no effect.
*/
writel_relaxed(val, pcie->slcr + AMD_MDB_TLP_IR_ENABLE_MISC);
raw_spin_unlock_irqrestore(&port->lock, flags);
}
static struct irq_chip amd_mdb_intx_irq_chip = {
.name = "AMD MDB INTx",
.irq_mask = amd_mdb_intx_irq_mask,
.irq_unmask = amd_mdb_intx_irq_unmask,
};
/**
* amd_mdb_pcie_intx_map - Set the handler for the INTx and mark IRQ as valid
* @domain: IRQ domain
* @irq: Virtual IRQ number
* @hwirq: Hardware interrupt number
*
* Return: Always returns '0'.
*/
static int amd_mdb_pcie_intx_map(struct irq_domain *domain,
unsigned int irq, irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &amd_mdb_intx_irq_chip,
handle_level_irq);
irq_set_chip_data(irq, domain->host_data);
irq_set_status_flags(irq, IRQ_LEVEL);
return 0;
}
/* INTx IRQ domain operations. */
static const struct irq_domain_ops amd_intx_domain_ops = {
.map = amd_mdb_pcie_intx_map,
};
static irqreturn_t dw_pcie_rp_intx(int irq, void *args)
{
struct amd_mdb_pcie *pcie = args;
unsigned long val;
int i, int_status;
val = readl_relaxed(pcie->slcr + AMD_MDB_TLP_IR_STATUS_MISC);
int_status = FIELD_GET(AMD_MDB_TLP_PCIE_INTX_MASK, val);
for (i = 0; i < PCI_NUM_INTX; i++) {
if (int_status & AMD_MDB_PCIE_INTR_INTX_ASSERT(i))
generic_handle_domain_irq(pcie->intx_domain, i);
}
return IRQ_HANDLED;
}
#define _IC(x, s)[AMD_MDB_PCIE_INTR_ ## x] = { __stringify(x), s }
static const struct {
const char *sym;
const char *str;
} intr_cause[32] = {
_IC(CMPL_TIMEOUT, "Completion timeout"),
_IC(PM_PME_RCVD, "PM_PME message received"),
_IC(PME_TO_ACK_RCVD, "PME_TO_ACK message received"),
_IC(MISC_CORRECTABLE, "Correctable error message"),
_IC(NONFATAL, "Non fatal error message"),
_IC(FATAL, "Fatal error message"),
};
static void amd_mdb_event_irq_mask(struct irq_data *d)
{
struct amd_mdb_pcie *pcie = irq_data_get_irq_chip_data(d);
struct dw_pcie *pci = &pcie->pci;
struct dw_pcie_rp *port = &pci->pp;
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&port->lock, flags);
val = BIT(d->hwirq);
writel_relaxed(val, pcie->slcr + AMD_MDB_TLP_IR_DISABLE_MISC);
raw_spin_unlock_irqrestore(&port->lock, flags);
}
static void amd_mdb_event_irq_unmask(struct irq_data *d)
{
struct amd_mdb_pcie *pcie = irq_data_get_irq_chip_data(d);
struct dw_pcie *pci = &pcie->pci;
struct dw_pcie_rp *port = &pci->pp;
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&port->lock, flags);
val = BIT(d->hwirq);
writel_relaxed(val, pcie->slcr + AMD_MDB_TLP_IR_ENABLE_MISC);
raw_spin_unlock_irqrestore(&port->lock, flags);
}
static struct irq_chip amd_mdb_event_irq_chip = {
.name = "AMD MDB RC-Event",
.irq_mask = amd_mdb_event_irq_mask,
.irq_unmask = amd_mdb_event_irq_unmask,
};
static int amd_mdb_pcie_event_map(struct irq_domain *domain,
unsigned int irq, irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &amd_mdb_event_irq_chip,
handle_level_irq);
irq_set_chip_data(irq, domain->host_data);
irq_set_status_flags(irq, IRQ_LEVEL);
return 0;
}
static const struct irq_domain_ops event_domain_ops = {
.map = amd_mdb_pcie_event_map,
};
static irqreturn_t amd_mdb_pcie_event(int irq, void *args)
{
struct amd_mdb_pcie *pcie = args;
unsigned long val;
int i;
val = readl_relaxed(pcie->slcr + AMD_MDB_TLP_IR_STATUS_MISC);
val &= ~readl_relaxed(pcie->slcr + AMD_MDB_TLP_IR_MASK_MISC);
for_each_set_bit(i, &val, 32)
generic_handle_domain_irq(pcie->mdb_domain, i);
writel_relaxed(val, pcie->slcr + AMD_MDB_TLP_IR_STATUS_MISC);
return IRQ_HANDLED;
}
static void amd_mdb_pcie_free_irq_domains(struct amd_mdb_pcie *pcie)
{
if (pcie->intx_domain) {
irq_domain_remove(pcie->intx_domain);
pcie->intx_domain = NULL;
}
if (pcie->mdb_domain) {
irq_domain_remove(pcie->mdb_domain);
pcie->mdb_domain = NULL;
}
}
static int amd_mdb_pcie_init_port(struct amd_mdb_pcie *pcie)
{
unsigned long val;
/* Disable all TLP interrupts. */
writel_relaxed(AMD_MDB_PCIE_IMR_ALL_MASK,
pcie->slcr + AMD_MDB_TLP_IR_DISABLE_MISC);
/* Clear pending TLP interrupts. */
val = readl_relaxed(pcie->slcr + AMD_MDB_TLP_IR_STATUS_MISC);
val &= AMD_MDB_PCIE_IMR_ALL_MASK;
writel_relaxed(val, pcie->slcr + AMD_MDB_TLP_IR_STATUS_MISC);
/* Enable all TLP interrupts. */
writel_relaxed(AMD_MDB_PCIE_IMR_ALL_MASK,
pcie->slcr + AMD_MDB_TLP_IR_ENABLE_MISC);
return 0;
}
/**
* amd_mdb_pcie_init_irq_domains - Initialize IRQ domain
* @pcie: PCIe port information
* @pdev: Platform device
*
* Return: Returns '0' on success and error value on failure.
*/
static int amd_mdb_pcie_init_irq_domains(struct amd_mdb_pcie *pcie,
struct platform_device *pdev)
{
struct dw_pcie *pci = &pcie->pci;
struct dw_pcie_rp *pp = &pci->pp;
struct device *dev = &pdev->dev;
struct device_node *node = dev->of_node;
struct device_node *pcie_intc_node;
int err;
pcie_intc_node = of_get_next_child(node, NULL);
if (!pcie_intc_node) {
dev_err(dev, "No PCIe Intc node found\n");
return -ENODEV;
}
pcie->mdb_domain = irq_domain_add_linear(pcie_intc_node, 32,
&event_domain_ops, pcie);
if (!pcie->mdb_domain) {
err = -ENOMEM;
dev_err(dev, "Failed to add MDB domain\n");
goto out;
}
irq_domain_update_bus_token(pcie->mdb_domain, DOMAIN_BUS_NEXUS);
pcie->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
&amd_intx_domain_ops, pcie);
if (!pcie->intx_domain) {
err = -ENOMEM;
dev_err(dev, "Failed to add INTx domain\n");
goto mdb_out;
}
of_node_put(pcie_intc_node);
irq_domain_update_bus_token(pcie->intx_domain, DOMAIN_BUS_WIRED);
raw_spin_lock_init(&pp->lock);
return 0;
mdb_out:
amd_mdb_pcie_free_irq_domains(pcie);
out:
of_node_put(pcie_intc_node);
return err;
}
static irqreturn_t amd_mdb_pcie_intr_handler(int irq, void *args)
{
struct amd_mdb_pcie *pcie = args;
struct device *dev;
struct irq_data *d;
dev = pcie->pci.dev;
/*
* In the future, error reporting will be hooked to the AER subsystem.
* Currently, the driver prints a warning message to the user.
*/
d = irq_domain_get_irq_data(pcie->mdb_domain, irq);
if (intr_cause[d->hwirq].str)
dev_warn(dev, "%s\n", intr_cause[d->hwirq].str);
else
dev_warn_once(dev, "Unknown IRQ %ld\n", d->hwirq);
return IRQ_HANDLED;
}
static int amd_mdb_setup_irq(struct amd_mdb_pcie *pcie,
struct platform_device *pdev)
{
struct dw_pcie *pci = &pcie->pci;
struct dw_pcie_rp *pp = &pci->pp;
struct device *dev = &pdev->dev;
int i, irq, err;
amd_mdb_pcie_init_port(pcie);
pp->irq = platform_get_irq(pdev, 0);
if (pp->irq < 0)
return pp->irq;
for (i = 0; i < ARRAY_SIZE(intr_cause); i++) {
if (!intr_cause[i].str)
continue;
irq = irq_create_mapping(pcie->mdb_domain, i);
if (!irq) {
dev_err(dev, "Failed to map MDB domain interrupt\n");
return -ENOMEM;
}
err = devm_request_irq(dev, irq, amd_mdb_pcie_intr_handler,
IRQF_NO_THREAD, intr_cause[i].sym, pcie);
if (err) {
dev_err(dev, "Failed to request IRQ %d, err=%d\n",
irq, err);
return err;
}
}
pcie->intx_irq = irq_create_mapping(pcie->mdb_domain,
AMD_MDB_PCIE_INTR_INTX);
if (!pcie->intx_irq) {
dev_err(dev, "Failed to map INTx interrupt\n");
return -ENXIO;
}
err = devm_request_irq(dev, pcie->intx_irq, dw_pcie_rp_intx,
IRQF_NO_THREAD, NULL, pcie);
if (err) {
dev_err(dev, "Failed to request INTx IRQ %d, err=%d\n",
irq, err);
return err;
}
/* Plug the main event handler. */
err = devm_request_irq(dev, pp->irq, amd_mdb_pcie_event, IRQF_NO_THREAD,
"amd_mdb pcie_irq", pcie);
if (err) {
dev_err(dev, "Failed to request event IRQ %d, err=%d\n",
pp->irq, err);
return err;
}
return 0;
}
static int amd_mdb_add_pcie_port(struct amd_mdb_pcie *pcie,
struct platform_device *pdev)
{
struct dw_pcie *pci = &pcie->pci;
struct dw_pcie_rp *pp = &pci->pp;
struct device *dev = &pdev->dev;
int err;
pcie->slcr = devm_platform_ioremap_resource_byname(pdev, "slcr");
if (IS_ERR(pcie->slcr))
return PTR_ERR(pcie->slcr);
err = amd_mdb_pcie_init_irq_domains(pcie, pdev);
if (err)
return err;
err = amd_mdb_setup_irq(pcie, pdev);
if (err) {
dev_err(dev, "Failed to set up interrupts, err=%d\n", err);
goto out;
}
pp->ops = &amd_mdb_pcie_host_ops;
err = dw_pcie_host_init(pp);
if (err) {
dev_err(dev, "Failed to initialize host, err=%d\n", err);
goto out;
}
return 0;
out:
amd_mdb_pcie_free_irq_domains(pcie);
return err;
}
static int amd_mdb_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct amd_mdb_pcie *pcie;
struct dw_pcie *pci;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
pci = &pcie->pci;
pci->dev = dev;
platform_set_drvdata(pdev, pcie);
return amd_mdb_add_pcie_port(pcie, pdev);
}
static const struct of_device_id amd_mdb_pcie_of_match[] = {
{
.compatible = "amd,versal2-mdb-host",
},
{},
};
static struct platform_driver amd_mdb_pcie_driver = {
.driver = {
.name = "amd-mdb-pcie",
.of_match_table = amd_mdb_pcie_of_match,
.suppress_bind_attrs = true,
},
.probe = amd_mdb_pcie_probe,
};
builtin_platform_driver(amd_mdb_pcie_driver);


@@ -0,0 +1,677 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Synopsys DesignWare PCIe controller debugfs driver
*
* Copyright (C) 2025 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Author: Shradha Todi <shradha.t@samsung.com>
*/
#include <linux/debugfs.h>
#include "pcie-designware.h"
#define SD_STATUS_L1LANE_REG 0xb0
#define PIPE_RXVALID BIT(18)
#define PIPE_DETECT_LANE BIT(17)
#define LANE_SELECT GENMASK(3, 0)
#define ERR_INJ0_OFF 0x34
#define EINJ_VAL_DIFF GENMASK(28, 16)
#define EINJ_VC_NUM GENMASK(14, 12)
#define EINJ_TYPE_SHIFT 8
#define EINJ0_TYPE GENMASK(11, 8)
#define EINJ1_TYPE BIT(8)
#define EINJ2_TYPE GENMASK(9, 8)
#define EINJ3_TYPE GENMASK(10, 8)
#define EINJ4_TYPE GENMASK(10, 8)
#define EINJ5_TYPE BIT(8)
#define EINJ_COUNT GENMASK(7, 0)
#define ERR_INJ_ENABLE_REG 0x30
#define RAS_DES_EVENT_COUNTER_DATA_REG 0xc
#define RAS_DES_EVENT_COUNTER_CTRL_REG 0x8
#define EVENT_COUNTER_GROUP_SELECT GENMASK(27, 24)
#define EVENT_COUNTER_EVENT_SELECT GENMASK(23, 16)
#define EVENT_COUNTER_LANE_SELECT GENMASK(11, 8)
#define EVENT_COUNTER_STATUS BIT(7)
#define EVENT_COUNTER_ENABLE GENMASK(4, 2)
#define PER_EVENT_ON 0x3
#define PER_EVENT_OFF 0x1
#define DWC_DEBUGFS_BUF_MAX 128
/**
* struct dwc_pcie_rasdes_info - Stores controller common information
* @ras_cap_offset: RAS DES vendor specific extended capability offset
* @reg_event_lock: Mutex used for RAS DES shadow event registers
*
* Any parameter constant to all files of the debugfs hierarchy for a single
* controller will be stored in this struct. It is allocated and assigned to
* controller specific struct dw_pcie during initialization.
*/
struct dwc_pcie_rasdes_info {
u32 ras_cap_offset;
struct mutex reg_event_lock;
};
/**
* struct dwc_pcie_rasdes_priv - Stores file specific private data information
* @pci: Reference to the dw_pcie structure
* @idx: Index of specific file related information in array of structs
*
* All debugfs files will have this struct as its private data.
*/
struct dwc_pcie_rasdes_priv {
struct dw_pcie *pci;
int idx;
};
/**
* struct dwc_pcie_err_inj - Store details about each error injection
* supported by DWC RAS DES
* @name: Name of the error that can be injected
* @err_inj_group: Group number to which the error belongs. The value
* can range from 0 to 5
* @err_inj_type: Each group can have multiple types of error
*/
struct dwc_pcie_err_inj {
const char *name;
u32 err_inj_group;
u32 err_inj_type;
};
static const struct dwc_pcie_err_inj err_inj_list[] = {
{"tx_lcrc", 0x0, 0x0},
{"b16_crc_dllp", 0x0, 0x1},
{"b16_crc_upd_fc", 0x0, 0x2},
{"tx_ecrc", 0x0, 0x3},
{"fcrc_tlp", 0x0, 0x4},
{"parity_tsos", 0x0, 0x5},
{"parity_skpos", 0x0, 0x6},
{"rx_lcrc", 0x0, 0x8},
{"rx_ecrc", 0x0, 0xb},
{"tlp_err_seq", 0x1, 0x0},
{"ack_nak_dllp_seq", 0x1, 0x1},
{"ack_nak_dllp", 0x2, 0x0},
{"upd_fc_dllp", 0x2, 0x1},
{"nak_dllp", 0x2, 0x2},
{"inv_sync_hdr_sym", 0x3, 0x0},
{"com_pad_ts1", 0x3, 0x1},
{"com_pad_ts2", 0x3, 0x2},
{"com_fts", 0x3, 0x3},
{"com_idl", 0x3, 0x4},
{"end_edb", 0x3, 0x5},
{"stp_sdp", 0x3, 0x6},
{"com_skp", 0x3, 0x7},
{"posted_tlp_hdr", 0x4, 0x0},
{"non_post_tlp_hdr", 0x4, 0x1},
{"cmpl_tlp_hdr", 0x4, 0x2},
{"posted_tlp_data", 0x4, 0x4},
{"non_post_tlp_data", 0x4, 0x5},
{"cmpl_tlp_data", 0x4, 0x6},
{"duplicate_tlp", 0x5, 0x0},
{"nullified_tlp", 0x5, 0x1},
};
static const u32 err_inj_type_mask[] = {
EINJ0_TYPE,
EINJ1_TYPE,
EINJ2_TYPE,
EINJ3_TYPE,
EINJ4_TYPE,
EINJ5_TYPE,
};
/**
* struct dwc_pcie_event_counter - Store details about each event counter
* supported in DWC RAS DES
* @name: Name of the error counter
* @group_no: Group number that the event belongs to. The value can range
* from 0 to 4
* @event_no: Event number of the particular event. The value ranges are:
* Group 0: 0 - 10
* Group 1: 5 - 13
* Group 2: 0 - 7
* Group 3: 0 - 5
* Group 4: 0 - 1
*/
struct dwc_pcie_event_counter {
const char *name;
u32 group_no;
u32 event_no;
};
static const struct dwc_pcie_event_counter event_list[] = {
{"ebuf_overflow", 0x0, 0x0},
{"ebuf_underrun", 0x0, 0x1},
{"decode_err", 0x0, 0x2},
{"running_disparity_err", 0x0, 0x3},
{"skp_os_parity_err", 0x0, 0x4},
{"sync_header_err", 0x0, 0x5},
{"rx_valid_deassertion", 0x0, 0x6},
{"ctl_skp_os_parity_err", 0x0, 0x7},
{"retimer_parity_err_1st", 0x0, 0x8},
{"retimer_parity_err_2nd", 0x0, 0x9},
{"margin_crc_parity_err", 0x0, 0xA},
{"detect_ei_infer", 0x1, 0x5},
{"receiver_err", 0x1, 0x6},
{"rx_recovery_req", 0x1, 0x7},
{"n_fts_timeout", 0x1, 0x8},
{"framing_err", 0x1, 0x9},
{"deskew_err", 0x1, 0xa},
{"framing_err_in_l0", 0x1, 0xc},
{"deskew_uncompleted_err", 0x1, 0xd},
{"bad_tlp", 0x2, 0x0},
{"lcrc_err", 0x2, 0x1},
{"bad_dllp", 0x2, 0x2},
{"replay_num_rollover", 0x2, 0x3},
{"replay_timeout", 0x2, 0x4},
{"rx_nak_dllp", 0x2, 0x5},
{"tx_nak_dllp", 0x2, 0x6},
{"retry_tlp", 0x2, 0x7},
{"fc_timeout", 0x3, 0x0},
{"poisoned_tlp", 0x3, 0x1},
{"ecrc_error", 0x3, 0x2},
{"unsupported_request", 0x3, 0x3},
{"completer_abort", 0x3, 0x4},
{"completion_timeout", 0x3, 0x5},
{"ebuf_skp_add", 0x4, 0x0},
{"ebuf_skp_del", 0x4, 0x1},
};
static ssize_t lane_detect_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
struct dw_pcie *pci = file->private_data;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
char debugfs_buf[DWC_DEBUGFS_BUF_MAX];
ssize_t pos;
u32 val;
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + SD_STATUS_L1LANE_REG);
val = FIELD_GET(PIPE_DETECT_LANE, val);
if (val)
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Lane Detected\n");
else
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Lane Undetected\n");
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static ssize_t lane_detect_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct dw_pcie *pci = file->private_data;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
u32 lane, val;
val = kstrtou32_from_user(buf, count, 0, &lane);
if (val)
return val;
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + SD_STATUS_L1LANE_REG);
val &= ~(LANE_SELECT);
val |= FIELD_PREP(LANE_SELECT, lane);
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + SD_STATUS_L1LANE_REG, val);
return count;
}
static ssize_t rx_valid_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
struct dw_pcie *pci = file->private_data;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
char debugfs_buf[DWC_DEBUGFS_BUF_MAX];
ssize_t pos;
u32 val;
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + SD_STATUS_L1LANE_REG);
val = FIELD_GET(PIPE_RXVALID, val);
if (val)
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "RX Valid\n");
else
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "RX Invalid\n");
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static ssize_t rx_valid_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
return lane_detect_write(file, buf, count, ppos);
}
static ssize_t err_inj_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
u32 val, counter, vc_num, err_group, type_mask;
int val_diff = 0;
char *kern_buf;
err_group = err_inj_list[pdata->idx].err_inj_group;
type_mask = err_inj_type_mask[err_group];
kern_buf = memdup_user_nul(buf, count);
if (IS_ERR(kern_buf))
return PTR_ERR(kern_buf);
if (err_group == 4) {
val = sscanf(kern_buf, "%u %d %u", &counter, &val_diff, &vc_num);
if ((val != 3) || (val_diff < -4095 || val_diff > 4095)) {
kfree(kern_buf);
return -EINVAL;
}
} else if (err_group == 1) {
val = sscanf(kern_buf, "%u %d", &counter, &val_diff);
if ((val != 2) || (val_diff < -4095 || val_diff > 4095)) {
kfree(kern_buf);
return -EINVAL;
}
} else {
val = kstrtou32(kern_buf, 0, &counter);
if (val) {
kfree(kern_buf);
return val;
}
}
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + ERR_INJ0_OFF + (0x4 * err_group));
val &= ~(type_mask | EINJ_COUNT);
val |= ((err_inj_list[pdata->idx].err_inj_type << EINJ_TYPE_SHIFT) & type_mask);
val |= FIELD_PREP(EINJ_COUNT, counter);
if (err_group == 1 || err_group == 4) {
val &= ~(EINJ_VAL_DIFF);
val |= FIELD_PREP(EINJ_VAL_DIFF, val_diff);
}
if (err_group == 4) {
val &= ~(EINJ_VC_NUM);
val |= FIELD_PREP(EINJ_VC_NUM, vc_num);
}
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + ERR_INJ0_OFF + (0x4 * err_group), val);
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + ERR_INJ_ENABLE_REG, (0x1 << err_group));
kfree(kern_buf);
return count;
}
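/*
 * For illustration, the write formats accepted above, per injection group:
 *   group 1:  "<count> <val_diff>"            e.g. "8 -1"
 *   group 4:  "<count> <val_diff> <vc_num>"   e.g. "4 128 0"
 *   others:   "<count>"                       e.g. "16"
 * with val_diff constrained to [-4095, 4095] by the sscanf() checks.
 */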
static void set_event_number(struct dwc_pcie_rasdes_priv *pdata,
struct dw_pcie *pci, struct dwc_pcie_rasdes_info *rinfo)
{
u32 val;
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG);
val &= ~EVENT_COUNTER_ENABLE;
val &= ~(EVENT_COUNTER_GROUP_SELECT | EVENT_COUNTER_EVENT_SELECT);
val |= FIELD_PREP(EVENT_COUNTER_GROUP_SELECT, event_list[pdata->idx].group_no);
val |= FIELD_PREP(EVENT_COUNTER_EVENT_SELECT, event_list[pdata->idx].event_no);
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG, val);
}
static ssize_t counter_enable_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
char debugfs_buf[DWC_DEBUGFS_BUF_MAX];
ssize_t pos;
u32 val;
mutex_lock(&rinfo->reg_event_lock);
set_event_number(pdata, pci, rinfo);
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG);
mutex_unlock(&rinfo->reg_event_lock);
val = FIELD_GET(EVENT_COUNTER_STATUS, val);
if (val)
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Counter Enabled\n");
else
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Counter Disabled\n");
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static ssize_t counter_enable_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
u32 val, enable;
val = kstrtou32_from_user(buf, count, 0, &enable);
if (val)
return val;
mutex_lock(&rinfo->reg_event_lock);
set_event_number(pdata, pci, rinfo);
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG);
if (enable)
val |= FIELD_PREP(EVENT_COUNTER_ENABLE, PER_EVENT_ON);
else
val |= FIELD_PREP(EVENT_COUNTER_ENABLE, PER_EVENT_OFF);
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG, val);
/*
* While enabling the counter, always read the status back to check if
* it is enabled or not. Return error if it is not enabled to let the
* users know that the counter is not supported on the platform.
*/
if (enable) {
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset +
RAS_DES_EVENT_COUNTER_CTRL_REG);
if (!FIELD_GET(EVENT_COUNTER_STATUS, val)) {
mutex_unlock(&rinfo->reg_event_lock);
return -EOPNOTSUPP;
}
}
mutex_unlock(&rinfo->reg_event_lock);
return count;
}
static ssize_t counter_lane_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
char debugfs_buf[DWC_DEBUGFS_BUF_MAX];
ssize_t pos;
u32 val;
mutex_lock(&rinfo->reg_event_lock);
set_event_number(pdata, pci, rinfo);
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG);
mutex_unlock(&rinfo->reg_event_lock);
val = FIELD_GET(EVENT_COUNTER_LANE_SELECT, val);
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Lane: %d\n", val);
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static ssize_t counter_lane_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
u32 val, lane;
val = kstrtou32_from_user(buf, count, 0, &lane);
if (val)
return val;
mutex_lock(&rinfo->reg_event_lock);
set_event_number(pdata, pci, rinfo);
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG);
val &= ~(EVENT_COUNTER_LANE_SELECT);
val |= FIELD_PREP(EVENT_COUNTER_LANE_SELECT, lane);
dw_pcie_writel_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_CTRL_REG, val);
mutex_unlock(&rinfo->reg_event_lock);
return count;
}
static ssize_t counter_value_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
struct dwc_pcie_rasdes_priv *pdata = file->private_data;
struct dw_pcie *pci = pdata->pci;
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
char debugfs_buf[DWC_DEBUGFS_BUF_MAX];
ssize_t pos;
u32 val;
mutex_lock(&rinfo->reg_event_lock);
set_event_number(pdata, pci, rinfo);
val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + RAS_DES_EVENT_COUNTER_DATA_REG);
mutex_unlock(&rinfo->reg_event_lock);
pos = scnprintf(debugfs_buf, DWC_DEBUGFS_BUF_MAX, "Counter value: %d\n", val);
return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
}
static const char *ltssm_status_string(enum dw_pcie_ltssm ltssm)
{
const char *str;
switch (ltssm) {
#define DW_PCIE_LTSSM_NAME(n) case n: str = #n; break
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_ACT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_COMPLIANCE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_CONFIG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_PRE_DETECT_QUIET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_WAIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_START);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_WAI);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_ACEPT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_COMPLETE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_LOCK);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_SPEED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_RCVRCFG);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0S);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L123_SEND_EIDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_WAKE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_IDLE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ACTIVE);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET_ENTRY);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ0);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ1);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ2);
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ3);
default:
str = "DW_PCIE_LTSSM_UNKNOWN";
break;
}
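/* Strip the common "DW_PCIE_LTSSM_" prefix so only the state name prints. */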
return str + strlen("DW_PCIE_LTSSM_");
}
static int ltssm_status_show(struct seq_file *s, void *v)
{
struct dw_pcie *pci = s->private;
enum dw_pcie_ltssm val;
val = dw_pcie_get_ltssm(pci);
seq_printf(s, "%s (0x%02x)\n", ltssm_status_string(val), val);
return 0;
}
static int ltssm_status_open(struct inode *inode, struct file *file)
{
return single_open(file, ltssm_status_show, inode->i_private);
}
#define dwc_debugfs_create(name) \
debugfs_create_file(#name, 0644, rasdes_debug, pci, \
&dbg_ ## name ## _fops)
#define DWC_DEBUGFS_FOPS(name) \
static const struct file_operations dbg_ ## name ## _fops = { \
.open = simple_open, \
.read = name ## _read, \
.write = name ## _write \
}
DWC_DEBUGFS_FOPS(lane_detect);
DWC_DEBUGFS_FOPS(rx_valid);
static const struct file_operations dwc_pcie_err_inj_ops = {
.open = simple_open,
.write = err_inj_write,
};
static const struct file_operations dwc_pcie_counter_enable_ops = {
.open = simple_open,
.read = counter_enable_read,
.write = counter_enable_write,
};
static const struct file_operations dwc_pcie_counter_lane_ops = {
.open = simple_open,
.read = counter_lane_read,
.write = counter_lane_write,
};
static const struct file_operations dwc_pcie_counter_value_ops = {
.open = simple_open,
.read = counter_value_read,
};
static const struct file_operations dwc_pcie_ltssm_status_ops = {
.open = ltssm_status_open,
.read = seq_read,
};
static void dwc_pcie_rasdes_debugfs_deinit(struct dw_pcie *pci)
{
struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
mutex_destroy(&rinfo->reg_event_lock);
}
static int dwc_pcie_rasdes_debugfs_init(struct dw_pcie *pci, struct dentry *dir)
{
struct dentry *rasdes_debug, *rasdes_err_inj;
struct dentry *rasdes_event_counter, *rasdes_events;
struct dwc_pcie_rasdes_info *rasdes_info;
struct dwc_pcie_rasdes_priv *priv_tmp;
struct device *dev = pci->dev;
int ras_cap, i, ret;
/*
* If a given SoC has no RAS DES capability, the following call is
* bound to return an error, breaking some existing platforms. So,
* return 0 here, as this is not necessarily an error.
*/
ras_cap = dw_pcie_find_rasdes_capability(pci);
if (!ras_cap) {
dev_dbg(dev, "no RAS DES capability available\n");
return 0;
}
rasdes_info = devm_kzalloc(dev, sizeof(*rasdes_info), GFP_KERNEL);
if (!rasdes_info)
return -ENOMEM;
/* Create subdirectories for Debug, Error Injection, Statistics. */
rasdes_debug = debugfs_create_dir("rasdes_debug", dir);
rasdes_err_inj = debugfs_create_dir("rasdes_err_inj", dir);
rasdes_event_counter = debugfs_create_dir("rasdes_event_counter", dir);
mutex_init(&rasdes_info->reg_event_lock);
rasdes_info->ras_cap_offset = ras_cap;
pci->debugfs->rasdes_info = rasdes_info;
/* Create debugfs files for Debug subdirectory. */
dwc_debugfs_create(lane_detect);
dwc_debugfs_create(rx_valid);
/* Create debugfs files for Error Injection subdirectory. */
for (i = 0; i < ARRAY_SIZE(err_inj_list); i++) {
priv_tmp = devm_kzalloc(dev, sizeof(*priv_tmp), GFP_KERNEL);
if (!priv_tmp) {
ret = -ENOMEM;
goto err_deinit;
}
priv_tmp->idx = i;
priv_tmp->pci = pci;
debugfs_create_file(err_inj_list[i].name, 0200, rasdes_err_inj, priv_tmp,
&dwc_pcie_err_inj_ops);
}
/* Create debugfs files for Statistical Counter subdirectory. */
for (i = 0; i < ARRAY_SIZE(event_list); i++) {
priv_tmp = devm_kzalloc(dev, sizeof(*priv_tmp), GFP_KERNEL);
if (!priv_tmp) {
ret = -ENOMEM;
goto err_deinit;
}
priv_tmp->idx = i;
priv_tmp->pci = pci;
rasdes_events = debugfs_create_dir(event_list[i].name, rasdes_event_counter);
if (event_list[i].group_no == 0 || event_list[i].group_no == 4) {
debugfs_create_file("lane_select", 0644, rasdes_events,
priv_tmp, &dwc_pcie_counter_lane_ops);
}
debugfs_create_file("counter_value", 0444, rasdes_events, priv_tmp,
&dwc_pcie_counter_value_ops);
debugfs_create_file("counter_enable", 0644, rasdes_events, priv_tmp,
&dwc_pcie_counter_enable_ops);
}
return 0;
err_deinit:
dwc_pcie_rasdes_debugfs_deinit(pci);
return ret;
}
static void dwc_pcie_ltssm_debugfs_init(struct dw_pcie *pci, struct dentry *dir)
{
debugfs_create_file("ltssm_status", 0444, dir, pci,
&dwc_pcie_ltssm_status_ops);
}
void dwc_pcie_debugfs_deinit(struct dw_pcie *pci)
{
if (!pci->debugfs)
return;
dwc_pcie_rasdes_debugfs_deinit(pci);
debugfs_remove_recursive(pci->debugfs->debug_dir);
}
void dwc_pcie_debugfs_init(struct dw_pcie *pci)
{
char dirname[DWC_DEBUGFS_BUF_MAX];
struct device *dev = pci->dev;
struct debugfs_info *debugfs;
struct dentry *dir;
int err;
/* Create main directory for each platform driver. */
snprintf(dirname, DWC_DEBUGFS_BUF_MAX, "dwc_pcie_%s", dev_name(dev));
dir = debugfs_create_dir(dirname, NULL);
debugfs = devm_kzalloc(dev, sizeof(*debugfs), GFP_KERNEL);
if (!debugfs)
return;
debugfs->debug_dir = dir;
pci->debugfs = debugfs;
err = dwc_pcie_rasdes_debugfs_init(pci, dir);
if (err)
dev_err(dev, "failed to initialize RAS DES debugfs, err=%d\n",
err);
dwc_pcie_ltssm_debugfs_init(pci, dir);
}
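A usage sketch (not part of the patch): enabling one RAS DES event counter and reading it back through the files created above; the "40000000.pcie" component of the path is a placeholder for the controller's dev_name():

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define EVT "/sys/kernel/debug/dwc_pcie_40000000.pcie/rasdes_event_counter/lcrc_err/"

int main(void)
{
	char buf[64];
	ssize_t n;
	int fd;

	fd = open(EVT "counter_enable", O_WRONLY);
	if (fd < 0)
		return 1;
	write(fd, "1", 1);		/* nonzero selects PER_EVENT_ON */
	close(fd);

	fd = open(EVT "counter_value", O_RDONLY);
	if (fd < 0)
		return 1;
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		fputs(buf, stdout);	/* prints "Counter value: <n>" */
	}
	close(fd);
	return 0;
}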


@@ -102,6 +102,45 @@ static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap)
return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap);
}
/**
* dw_pcie_ep_hide_ext_capability - Hide a capability from the linked list
* @pci: DWC PCI device
* @prev_cap: Capability preceding the capability that should be hidden
* @cap: Capability that should be hidden
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap)
{
u16 prev_cap_offset, cap_offset;
u32 prev_cap_header, cap_header;
prev_cap_offset = dw_pcie_find_ext_capability(pci, prev_cap);
if (!prev_cap_offset)
return -EINVAL;
prev_cap_header = dw_pcie_readl_dbi(pci, prev_cap_offset);
cap_offset = PCI_EXT_CAP_NEXT(prev_cap_header);
cap_header = dw_pcie_readl_dbi(pci, cap_offset);
/* cap must immediately follow prev_cap. */
if (PCI_EXT_CAP_ID(cap_header) != cap)
return -EINVAL;
/* Clear next ptr. */
prev_cap_header &= ~GENMASK(31, 20);
/* Set next ptr to next ptr of cap. */
prev_cap_header |= cap_header & GENMASK(31, 20);
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_writel_dbi(pci, prev_cap_offset, prev_cap_header);
dw_pcie_dbi_ro_wr_dis(pci);
return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_hide_ext_capability);
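/*
 * Usage illustration (capability IDs are placeholders): a glue driver
 * whose controller advertises, say, a PTM capability it cannot support,
 * with PTM sitting directly behind the Device Serial Number capability,
 * could unlink it with:
 *
 *	dw_pcie_ep_hide_ext_capability(pci, PCI_EXT_CAP_ID_DSN,
 *				       PCI_EXT_CAP_ID_PTM);
 */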
static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_header *hdr)
{
@@ -128,7 +167,7 @@ static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
}
static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
- dma_addr_t cpu_addr, enum pci_barno bar,
+ dma_addr_t parent_bus_addr, enum pci_barno bar,
size_t size)
{
int ret;
@@ -146,7 +185,7 @@ static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
}
ret = dw_pcie_prog_ep_inbound_atu(pci, func_no, free_win, type,
- cpu_addr, bar, size);
+ parent_bus_addr, bar, size);
if (ret < 0) {
dev_err(pci->dev, "Failed to program IB window\n");
return ret;
@@ -181,7 +220,7 @@ static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep,
return ret;
set_bit(free_win, ep->ob_window_map);
- ep->outbound_addr[free_win] = atu->cpu_addr;
+ ep->outbound_addr[free_win] = atu->parent_bus_addr;
return 0;
}
@@ -205,6 +244,125 @@ static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
ep->bar_to_atu[bar] = 0;
}
static unsigned int dw_pcie_ep_get_rebar_offset(struct dw_pcie *pci,
enum pci_barno bar)
{
u32 reg, bar_index;
unsigned int offset, nbars;
int i;
offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
if (!offset)
return offset;
reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >> PCI_REBAR_CTRL_NBAR_SHIFT;
for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) {
reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
bar_index = reg & PCI_REBAR_CTRL_BAR_IDX;
if (bar_index == bar)
return offset;
}
return 0;
}
static int dw_pcie_ep_set_bar_resizable(struct dw_pcie_ep *ep, u8 func_no,
struct pci_epf_bar *epf_bar)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar = epf_bar->barno;
size_t size = epf_bar->size;
int flags = epf_bar->flags;
u32 reg = PCI_BASE_ADDRESS_0 + (4 * bar);
unsigned int rebar_offset;
u32 rebar_cap, rebar_ctrl;
int ret;
rebar_offset = dw_pcie_ep_get_rebar_offset(pci, bar);
if (!rebar_offset)
return -EINVAL;
ret = pci_epc_bar_size_to_rebar_cap(size, &rebar_cap);
if (ret)
return ret;
dw_pcie_dbi_ro_wr_en(pci);
/*
* A BAR mask should not be written for a resizable BAR. The BAR mask
* is automatically derived by the controller every time the "selected
* size" bits are updated, see "Figure 3-26 Resizable BAR Example for
* 32-bit Memory BAR0" in DWC EP databook 5.96a. We simply need to write
* BIT(0) to set the BAR enable bit.
*/
dw_pcie_ep_writel_dbi2(ep, func_no, reg, BIT(0));
dw_pcie_ep_writel_dbi(ep, func_no, reg, flags);
if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) {
dw_pcie_ep_writel_dbi2(ep, func_no, reg + 4, 0);
dw_pcie_ep_writel_dbi(ep, func_no, reg + 4, 0);
}
/*
* Bits 31:0 in PCI_REBAR_CAP define "supported sizes" bits for sizes
* 1 MB to 128 TB. Bits 31:16 in PCI_REBAR_CTRL define "supported sizes"
* bits for sizes 256 TB to 8 EB. Disallow sizes 256 TB to 8 EB.
*/
rebar_ctrl = dw_pcie_readl_dbi(pci, rebar_offset + PCI_REBAR_CTRL);
rebar_ctrl &= ~GENMASK(31, 16);
dw_pcie_writel_dbi(pci, rebar_offset + PCI_REBAR_CTRL, rebar_ctrl);
/*
* The "selected size" (bits 13:8) in PCI_REBAR_CTRL are automatically
* updated when writing PCI_REBAR_CAP, see "Figure 3-26 Resizable BAR
* Example for 32-bit Memory BAR0" in DWC EP databook 5.96a.
*/
dw_pcie_writel_dbi(pci, rebar_offset + PCI_REBAR_CAP, rebar_cap);
dw_pcie_dbi_ro_wr_dis(pci);
return 0;
}
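/*
 * For reference, the PCI_REBAR_CAP "supported sizes" encoding used above
 * maps a BAR size of 2^n bytes to bit (n - 16): 1 MB -> BIT(4),
 * 2 MB -> BIT(5), ..., 128 TB -> BIT(31). A hypothetical single-size
 * equivalent of pci_epc_bar_size_to_rebar_cap() would be
 * rebar_cap = BIT(ilog2(size) - 16) for a power-of-two size >= 1 MB.
 */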
static int dw_pcie_ep_set_bar_programmable(struct dw_pcie_ep *ep, u8 func_no,
struct pci_epf_bar *epf_bar)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar = epf_bar->barno;
size_t size = epf_bar->size;
int flags = epf_bar->flags;
u32 reg = PCI_BASE_ADDRESS_0 + (4 * bar);
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_ep_writel_dbi2(ep, func_no, reg, lower_32_bits(size - 1));
dw_pcie_ep_writel_dbi(ep, func_no, reg, flags);
if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) {
dw_pcie_ep_writel_dbi2(ep, func_no, reg + 4, upper_32_bits(size - 1));
dw_pcie_ep_writel_dbi(ep, func_no, reg + 4, 0);
}
dw_pcie_dbi_ro_wr_dis(pci);
return 0;
}
static enum pci_epc_bar_type dw_pcie_ep_get_bar_type(struct dw_pcie_ep *ep,
enum pci_barno bar)
{
const struct pci_epc_features *epc_features;
if (!ep->ops->get_features)
return BAR_PROGRAMMABLE;
epc_features = ep->ops->get_features(ep);
return epc_features->bar[bar].type;
}
static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar)
{
@@ -212,9 +370,9 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar = epf_bar->barno;
size_t size = epf_bar->size;
enum pci_epc_bar_type bar_type;
int flags = epf_bar->flags;
int ret, type;
- u32 reg;
/*
* DWC does not allow BAR pairs to overlap, e.g. you cannot combine BARs
@@ -246,19 +404,30 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
goto config_atu;
}
- reg = PCI_BASE_ADDRESS_0 + (4 * bar);
- dw_pcie_dbi_ro_wr_en(pci);
- dw_pcie_ep_writel_dbi2(ep, func_no, reg, lower_32_bits(size - 1));
- dw_pcie_ep_writel_dbi(ep, func_no, reg, flags);
- if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) {
- dw_pcie_ep_writel_dbi2(ep, func_no, reg + 4, upper_32_bits(size - 1));
- dw_pcie_ep_writel_dbi(ep, func_no, reg + 4, 0);
+ bar_type = dw_pcie_ep_get_bar_type(ep, bar);
+ switch (bar_type) {
+ case BAR_FIXED:
+ /*
+ * There is no need to write a BAR mask for a fixed BAR (except
+ * to write 1 to the LSB of the BAR mask register, to enable the
+ * BAR). Write the BAR mask regardless. (The fixed bits in the
+ * BAR mask register will be read-only anyway.)
+ */
+ fallthrough;
+ case BAR_PROGRAMMABLE:
+ ret = dw_pcie_ep_set_bar_programmable(ep, func_no, epf_bar);
+ break;
+ case BAR_RESIZABLE:
+ ret = dw_pcie_ep_set_bar_resizable(ep, func_no, epf_bar);
+ break;
+ default:
+ ret = -EINVAL;
+ dev_err(pci->dev, "Invalid BAR type\n");
+ break;
}
dw_pcie_dbi_ro_wr_dis(pci);
if (ret)
return ret;
config_atu:
if (!(flags & PCI_BASE_ADDRESS_SPACE))
@ -282,7 +451,7 @@ static int dw_pcie_find_index(struct dw_pcie_ep *ep, phys_addr_t addr,
u32 index;
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
for (index = 0; index < pci->num_ob_windows; index++) {
for_each_set_bit(index, ep->ob_window_map, pci->num_ob_windows) {
if (ep->outbound_addr[index] != addr)
continue;
*atu_index = index;
@ -314,7 +483,8 @@ static void dw_pcie_ep_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
ret = dw_pcie_find_index(ep, addr, &atu_index);
ret = dw_pcie_find_index(ep, addr - pci->parent_bus_offset,
&atu_index);
if (ret < 0)
return;
@ -333,7 +503,7 @@ static int dw_pcie_ep_map_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
atu.func_no = func_no;
atu.type = PCIE_ATU_TYPE_MEM;
atu.cpu_addr = addr;
atu.parent_bus_addr = addr - pci->parent_bus_offset;
atu.pci_addr = pci_addr;
atu.size = size;
ret = dw_pcie_ep_outbound_atu(ep, &atu);
@ -666,6 +836,7 @@ void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
dwc_pcie_debugfs_deinit(pci);
dw_pcie_edma_remove(pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_cleanup);
@ -690,31 +861,15 @@ void dw_pcie_ep_deinit(struct dw_pcie_ep *ep)
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_deinit);
static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
{
u32 header;
int pos = PCI_CFG_SPACE_SIZE;
while (pos) {
header = dw_pcie_readl_dbi(pci, pos);
if (PCI_EXT_CAP_ID(header) == cap)
return pos;
pos = PCI_EXT_CAP_NEXT(header);
if (!pos)
break;
}
return 0;
}
static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
{
struct dw_pcie_ep *ep = &pci->ep;
unsigned int offset;
unsigned int nbars;
u32 reg, i;
enum pci_barno bar;
u32 reg, i, val;
offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
dw_pcie_dbi_ro_wr_en(pci);
@ -727,9 +882,29 @@ static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
* PCIe r6.0, sec 7.8.6.2 requires us to support at least one
* size in the range from 1 MB to 512 GB. Advertise support
* for 1 MB BAR size only.
*
* For a BAR that has been configured via dw_pcie_ep_set_bar(),
* advertise support for only that size instead.
*/
for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, BIT(4));
for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) {
/*
* While the RESBAR_CAP_REG_* fields are sticky, the
* RESBAR_CTRL_REG_BAR_SIZE field is non-sticky (it is
* sticky in certain versions of DWC PCIe, but not all).
*
* RESBAR_CTRL_REG_BAR_SIZE is updated automatically by
* the controller when RESBAR_CAP_REG is written, which
* is why RESBAR_CAP_REG is written here.
*/
val = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
bar = val & PCI_REBAR_CTRL_BAR_IDX;
if (ep->epf_bar[bar])
pci_epc_bar_size_to_rebar_cap(ep->epf_bar[bar]->size, &val);
else
val = BIT(4);
dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, val);
}
}
dw_pcie_setup(pci);
@ -773,6 +948,7 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
if (ret)
return ret;
ret = -ENOMEM;
if (!ep->ib_window_map) {
ep->ib_window_map = devm_bitmap_zalloc(dev, pci->num_ib_windows,
GFP_KERNEL);
@ -817,7 +993,7 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
if (ep->ops->init)
ep->ops->init(ep);
ptm_cap_base = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM);
ptm_cap_base = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM);
/*
* PTM responder capability can be disabled only after disabling
@ -837,6 +1013,8 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
dw_pcie_ep_init_non_sticky_registers(pci);
dwc_pcie_debugfs_init(pci);
return 0;
err_remove_edma:
@ -883,26 +1061,15 @@ void dw_pcie_ep_linkdown(struct dw_pcie_ep *ep)
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_linkdown);
/**
* dw_pcie_ep_init - Initialize the endpoint device
* @ep: DWC EP device
*
* Initialize the endpoint device. Allocate resources and create the EPC
* device with the endpoint framework.
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_init(struct dw_pcie_ep *ep)
static int dw_pcie_ep_get_resources(struct dw_pcie_ep *ep)
{
int ret;
struct resource *res;
struct pci_epc *epc;
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct device *dev = pci->dev;
struct platform_device *pdev = to_platform_device(dev);
struct device_node *np = dev->of_node;
INIT_LIST_HEAD(&ep->func_list);
struct pci_epc *epc = ep->epc;
struct resource *res;
int ret;
ret = dw_pcie_get_resources(pci);
if (ret)
@ -915,8 +1082,37 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
ep->phys_base = res->start;
ep->addr_size = resource_size(res);
if (ep->ops->pre_init)
ep->ops->pre_init(ep);
/*
* artpec6_pcie_cpu_addr_fixup() uses ep->phys_base, so call
* dw_pcie_parent_bus_offset() after setting ep->phys_base.
*/
pci->parent_bus_offset = dw_pcie_parent_bus_offset(pci, "addr_space",
ep->phys_base);
ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
if (ret < 0)
epc->max_functions = 1;
return 0;
}
/**
* dw_pcie_ep_init - Initialize the endpoint device
* @ep: DWC EP device
*
* Initialize the endpoint device. Allocate resources and create the EPC
* device with the endpoint framework.
*
* Return: 0 if success, errno otherwise.
*/
int dw_pcie_ep_init(struct dw_pcie_ep *ep)
{
int ret;
struct pci_epc *epc;
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct device *dev = pci->dev;
INIT_LIST_HEAD(&ep->func_list);
epc = devm_pci_epc_create(dev, &epc_ops);
if (IS_ERR(epc)) {
@ -927,9 +1123,12 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
ep->epc = epc;
epc_set_drvdata(epc, ep);
ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
if (ret < 0)
epc->max_functions = 1;
ret = dw_pcie_ep_get_resources(ep);
if (ret)
return ret;
if (ep->ops->pre_init)
ep->ops->pre_init(ep);
ret = pci_epc_mem_init(epc, ep->phys_base, ep->addr_size,
ep->page_size);


@ -418,19 +418,15 @@ static void dw_pcie_host_request_msg_tlp_res(struct dw_pcie_rp *pp)
}
}
int dw_pcie_host_init(struct dw_pcie_rp *pp)
static int dw_pcie_host_get_resources(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
struct platform_device *pdev = to_platform_device(dev);
struct resource_entry *win;
struct pci_host_bridge *bridge;
struct resource *res;
int ret;
raw_spin_lock_init(&pp->lock);
ret = dw_pcie_get_resources(pci);
if (ret)
return ret;
@ -448,19 +444,42 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
if (IS_ERR(pp->va_cfg0_base))
return PTR_ERR(pp->va_cfg0_base);
/* Get the I/O range from DT */
win = resource_list_first_type(&pp->bridge->windows, IORESOURCE_IO);
if (win) {
pp->io_size = resource_size(win->res);
pp->io_bus_addr = win->res->start - win->offset;
pp->io_base = pci_pio_to_address(win->res->start);
}
/*
* visconti_pcie_cpu_addr_fixup() uses pp->io_base, so we have to
* call dw_pcie_parent_bus_offset() after setting pp->io_base.
*/
pci->parent_bus_offset = dw_pcie_parent_bus_offset(pci, "config",
pp->cfg0_base);
return 0;
}
int dw_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
struct pci_host_bridge *bridge;
int ret;
raw_spin_lock_init(&pp->lock);
bridge = devm_pci_alloc_host_bridge(dev, 0);
if (!bridge)
return -ENOMEM;
pp->bridge = bridge;
/* Get the I/O range from DT */
win = resource_list_first_type(&bridge->windows, IORESOURCE_IO);
if (win) {
pp->io_size = resource_size(win->res);
pp->io_bus_addr = win->res->start - win->offset;
pp->io_base = pci_pio_to_address(win->res->start);
}
ret = dw_pcie_host_get_resources(pp);
if (ret)
return ret;
/* Set default bus ops */
bridge->ops = &dw_pcie_ops;
@ -548,6 +567,8 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
if (pp->ops->post_init)
pp->ops->post_init(pp);
dwc_pcie_debugfs_init(pci);
return 0;
err_stop_link:
@ -572,6 +593,8 @@ void dw_pcie_host_deinit(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
dwc_pcie_debugfs_deinit(pci);
pci_stop_root_bus(pp->bridge->bus);
pci_remove_root_bus(pp->bridge->bus);
@ -616,7 +639,7 @@ static void __iomem *dw_pcie_other_conf_map_bus(struct pci_bus *bus,
type = PCIE_ATU_TYPE_CFG1;
atu.type = type;
atu.cpu_addr = pp->cfg0_base;
atu.parent_bus_addr = pp->cfg0_base - pci->parent_bus_offset;
atu.pci_addr = busdev;
atu.size = pp->cfg0_size;
@ -641,7 +664,7 @@ static int dw_pcie_rd_other_conf(struct pci_bus *bus, unsigned int devfn,
if (pp->cfg0_io_shared) {
atu.type = PCIE_ATU_TYPE_IO;
atu.cpu_addr = pp->io_base;
atu.parent_bus_addr = pp->io_base - pci->parent_bus_offset;
atu.pci_addr = pp->io_bus_addr;
atu.size = pp->io_size;
@ -667,7 +690,7 @@ static int dw_pcie_wr_other_conf(struct pci_bus *bus, unsigned int devfn,
if (pp->cfg0_io_shared) {
atu.type = PCIE_ATU_TYPE_IO;
atu.cpu_addr = pp->io_base;
atu.parent_bus_addr = pp->io_base - pci->parent_bus_offset;
atu.pci_addr = pp->io_bus_addr;
atu.size = pp->io_size;
@ -736,7 +759,7 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
atu.index = i;
atu.type = PCIE_ATU_TYPE_MEM;
atu.cpu_addr = entry->res->start;
atu.parent_bus_addr = entry->res->start - pci->parent_bus_offset;
atu.pci_addr = entry->res->start - entry->offset;
/* Adjust iATU size if MSG TLP region was allocated before */
@ -758,7 +781,7 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
if (pci->num_ob_windows > ++i) {
atu.index = i;
atu.type = PCIE_ATU_TYPE_IO;
atu.cpu_addr = pp->io_base;
atu.parent_bus_addr = pp->io_base - pci->parent_bus_offset;
atu.pci_addr = pp->io_bus_addr;
atu.size = pp->io_size;
@ -902,13 +925,13 @@ static int dw_pcie_pme_turn_off(struct dw_pcie *pci)
atu.size = resource_size(pci->pp.msg_res);
atu.index = pci->pp.msg_atu_index;
atu.cpu_addr = pci->pp.msg_res->start;
atu.parent_bus_addr = pci->pp.msg_res->start - pci->parent_bus_offset;
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret)
return ret;
mem = ioremap(atu.cpu_addr, pci->region_align);
mem = ioremap(pci->pp.msg_res->start, pci->region_align);
if (!mem)
return -ENOMEM;


@ -16,6 +16,8 @@
#include <linux/gpio/consumer.h>
#include <linux/ioport.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/pcie-dwc.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>
#include <linux/types.h>
@ -283,6 +285,51 @@ u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
}
EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
static u16 __dw_pcie_find_vsec_capability(struct dw_pcie *pci, u16 vendor_id,
u16 vsec_id)
{
u16 vsec = 0;
u32 header;
if (vendor_id != dw_pcie_readw_dbi(pci, PCI_VENDOR_ID))
return 0;
while ((vsec = dw_pcie_find_next_ext_capability(pci, vsec,
PCI_EXT_CAP_ID_VNDR))) {
header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER);
if (PCI_VNDR_HEADER_ID(header) == vsec_id)
return vsec;
}
return 0;
}
static u16 dw_pcie_find_vsec_capability(struct dw_pcie *pci,
const struct dwc_pcie_vsec_id *vsec_ids)
{
const struct dwc_pcie_vsec_id *vid;
u16 vsec;
u32 header;
for (vid = vsec_ids; vid->vendor_id; vid++) {
vsec = __dw_pcie_find_vsec_capability(pci, vid->vendor_id,
vid->vsec_id);
if (vsec) {
header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER);
if (PCI_VNDR_HEADER_REV(header) == vid->vsec_rev)
return vsec;
}
}
return 0;
}
u16 dw_pcie_find_rasdes_capability(struct dw_pcie *pci)
{
return dw_pcie_find_vsec_capability(pci, dwc_pcie_rasdes_vsec_ids);
}
EXPORT_SYMBOL_GPL(dw_pcie_find_rasdes_capability);
int dw_pcie_read(void __iomem *addr, int size, u32 *val)
{
if (!IS_ALIGNED((uintptr_t)addr, size)) {
@ -470,25 +517,22 @@ static inline u32 dw_pcie_enable_ecrc(u32 val)
int dw_pcie_prog_outbound_atu(struct dw_pcie *pci,
const struct dw_pcie_ob_atu_cfg *atu)
{
u64 cpu_addr = atu->cpu_addr;
u64 parent_bus_addr = atu->parent_bus_addr;
u32 retries, val;
u64 limit_addr;
if (pci->ops && pci->ops->cpu_addr_fixup)
cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr);
limit_addr = parent_bus_addr + atu->size - 1;
limit_addr = cpu_addr + atu->size - 1;
if ((limit_addr & ~pci->region_limit) != (cpu_addr & ~pci->region_limit) ||
!IS_ALIGNED(cpu_addr, pci->region_align) ||
if ((limit_addr & ~pci->region_limit) != (parent_bus_addr & ~pci->region_limit) ||
!IS_ALIGNED(parent_bus_addr, pci->region_align) ||
!IS_ALIGNED(atu->pci_addr, pci->region_align) || !atu->size) {
return -EINVAL;
}
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_LOWER_BASE,
lower_32_bits(cpu_addr));
lower_32_bits(parent_bus_addr));
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_UPPER_BASE,
upper_32_bits(cpu_addr));
upper_32_bits(parent_bus_addr));
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_LIMIT,
lower_32_bits(limit_addr));
@ -502,7 +546,7 @@ int dw_pcie_prog_outbound_atu(struct dw_pcie *pci,
upper_32_bits(atu->pci_addr));
val = atu->type | atu->routing | PCIE_ATU_FUNC_NUM(atu->func_no);
if (upper_32_bits(limit_addr) > upper_32_bits(cpu_addr) &&
if (upper_32_bits(limit_addr) > upper_32_bits(parent_bus_addr) &&
dw_pcie_ver_is_ge(pci, 460A))
val |= PCIE_ATU_INCREASE_REGION_SIZE;
if (dw_pcie_ver_is(pci, 490A))
@ -545,13 +589,13 @@ static inline void dw_pcie_writel_atu_ib(struct dw_pcie *pci, u32 index, u32 reg
}
int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
u64 cpu_addr, u64 pci_addr, u64 size)
u64 parent_bus_addr, u64 pci_addr, u64 size)
{
u64 limit_addr = pci_addr + size - 1;
u32 retries, val;
if ((limit_addr & ~pci->region_limit) != (pci_addr & ~pci->region_limit) ||
!IS_ALIGNED(cpu_addr, pci->region_align) ||
!IS_ALIGNED(parent_bus_addr, pci->region_align) ||
!IS_ALIGNED(pci_addr, pci->region_align) || !size) {
return -EINVAL;
}
@ -568,9 +612,9 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
upper_32_bits(limit_addr));
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_LOWER_TARGET,
lower_32_bits(cpu_addr));
lower_32_bits(parent_bus_addr));
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_UPPER_TARGET,
upper_32_bits(cpu_addr));
upper_32_bits(parent_bus_addr));
val = type;
if (upper_32_bits(limit_addr) > upper_32_bits(pci_addr) &&
@ -597,18 +641,18 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
}
int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
int type, u64 cpu_addr, u8 bar, size_t size)
int type, u64 parent_bus_addr, u8 bar, size_t size)
{
u32 retries, val;
if (!IS_ALIGNED(cpu_addr, pci->region_align) ||
!IS_ALIGNED(cpu_addr, size))
if (!IS_ALIGNED(parent_bus_addr, pci->region_align) ||
!IS_ALIGNED(parent_bus_addr, size))
return -EINVAL;
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_LOWER_TARGET,
lower_32_bits(cpu_addr));
lower_32_bits(parent_bus_addr));
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_UPPER_TARGET,
upper_32_bits(cpu_addr));
upper_32_bits(parent_bus_addr));
dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_REGION_CTRL1, type |
PCIE_ATU_FUNC_NUM(func_no));
@ -1105,3 +1149,63 @@ void dw_pcie_setup(struct dw_pcie *pci)
dw_pcie_link_set_max_link_width(pci, pci->num_lanes);
}
resource_size_t dw_pcie_parent_bus_offset(struct dw_pcie *pci,
const char *reg_name,
resource_size_t cpu_phys_addr)
{
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
int index;
u64 reg_addr, fixup_addr;
u64 (*fixup)(struct dw_pcie *pcie, u64 cpu_addr);
/* Look up reg_name address on parent bus */
index = of_property_match_string(np, "reg-names", reg_name);
if (index < 0) {
dev_err(dev, "No %s in devicetree \"reg\" property\n", reg_name);
return 0;
}
of_property_read_reg(np, index, &reg_addr, NULL);
fixup = pci->ops ? pci->ops->cpu_addr_fixup : NULL;
if (fixup) {
fixup_addr = fixup(pci, cpu_phys_addr);
if (reg_addr == fixup_addr) {
dev_info(dev, "%s reg[%d] %#010llx == %#010llx == fixup(cpu %#010llx); %ps is redundant with this devicetree\n",
reg_name, index, reg_addr, fixup_addr,
(unsigned long long) cpu_phys_addr, fixup);
} else {
dev_warn(dev, "%s reg[%d] %#010llx != %#010llx == fixup(cpu %#010llx); devicetree is broken\n",
reg_name, index, reg_addr, fixup_addr,
(unsigned long long) cpu_phys_addr);
reg_addr = fixup_addr;
}
return cpu_phys_addr - reg_addr;
}
if (pci->use_parent_dt_ranges) {
/*
* This platform once had a fixup, presumably because it
* translates between CPU and PCI controller addresses.
* Log a note if devicetree didn't describe a translation.
*/
if (reg_addr == cpu_phys_addr)
dev_info(dev, "%s reg[%d] %#010llx == cpu %#010llx\n; no fixup was ever needed for this devicetree\n",
reg_name, index, reg_addr,
(unsigned long long) cpu_phys_addr);
} else {
if (reg_addr != cpu_phys_addr) {
dev_warn(dev, "%s reg[%d] %#010llx != cpu %#010llx; no fixup and devicetree \"ranges\" is broken, assuming no translation\n",
reg_name, index, reg_addr,
(unsigned long long) cpu_phys_addr);
return 0;
}
}
return cpu_phys_addr - reg_addr;
}
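/*
 * Illustration with made-up addresses (not from any real devicetree): if
 * the "config" reg property reads 0x20000000 on the parent bus while the
 * CPU maps the same space at 0x60000000, the function above returns an
 * offset of 0x40000000, and callers rebase every iATU input address with
 * it, as the atu.parent_bus_addr assignments in the hunks above do.
 */
static u64 example_parent_bus_addr(u64 cpu_addr)
{
	resource_size_t parent_bus_offset = 0x60000000 - 0x20000000;

	return cpu_addr - parent_bus_offset;
}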


@ -330,9 +330,40 @@ enum dw_pcie_ltssm {
/* Need to align with PCIE_PORT_DEBUG0 bits 0:5 */
DW_PCIE_LTSSM_DETECT_QUIET = 0x0,
DW_PCIE_LTSSM_DETECT_ACT = 0x1,
DW_PCIE_LTSSM_POLL_ACTIVE = 0x2,
DW_PCIE_LTSSM_POLL_COMPLIANCE = 0x3,
DW_PCIE_LTSSM_POLL_CONFIG = 0x4,
DW_PCIE_LTSSM_PRE_DETECT_QUIET = 0x5,
DW_PCIE_LTSSM_DETECT_WAIT = 0x6,
DW_PCIE_LTSSM_CFG_LINKWD_START = 0x7,
DW_PCIE_LTSSM_CFG_LINKWD_ACEPT = 0x8,
DW_PCIE_LTSSM_CFG_LANENUM_WAI = 0x9,
DW_PCIE_LTSSM_CFG_LANENUM_ACEPT = 0xa,
DW_PCIE_LTSSM_CFG_COMPLETE = 0xb,
DW_PCIE_LTSSM_CFG_IDLE = 0xc,
DW_PCIE_LTSSM_RCVRY_LOCK = 0xd,
DW_PCIE_LTSSM_RCVRY_SPEED = 0xe,
DW_PCIE_LTSSM_RCVRY_RCVRCFG = 0xf,
DW_PCIE_LTSSM_RCVRY_IDLE = 0x10,
DW_PCIE_LTSSM_L0 = 0x11,
DW_PCIE_LTSSM_L0S = 0x12,
DW_PCIE_LTSSM_L123_SEND_EIDLE = 0x13,
DW_PCIE_LTSSM_L1_IDLE = 0x14,
DW_PCIE_LTSSM_L2_IDLE = 0x15,
DW_PCIE_LTSSM_L2_WAKE = 0x16,
DW_PCIE_LTSSM_DISABLED_ENTRY = 0x17,
DW_PCIE_LTSSM_DISABLED_IDLE = 0x18,
DW_PCIE_LTSSM_DISABLED = 0x19,
DW_PCIE_LTSSM_LPBK_ENTRY = 0x1a,
DW_PCIE_LTSSM_LPBK_ACTIVE = 0x1b,
DW_PCIE_LTSSM_LPBK_EXIT = 0x1c,
DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT = 0x1d,
DW_PCIE_LTSSM_HOT_RESET_ENTRY = 0x1e,
DW_PCIE_LTSSM_HOT_RESET = 0x1f,
DW_PCIE_LTSSM_RCVRY_EQ0 = 0x20,
DW_PCIE_LTSSM_RCVRY_EQ1 = 0x21,
DW_PCIE_LTSSM_RCVRY_EQ2 = 0x22,
DW_PCIE_LTSSM_RCVRY_EQ3 = 0x23,
DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF,
};
@ -343,7 +374,7 @@ struct dw_pcie_ob_atu_cfg {
u8 func_no;
u8 code;
u8 routing;
u64 cpu_addr;
u64 parent_bus_addr;
u64 pci_addr;
u64 size;
};
@ -437,6 +468,11 @@ struct dw_pcie_ops {
void (*stop_link)(struct dw_pcie *pcie);
};
struct debugfs_info {
struct dentry *debug_dir;
void *rasdes_info;
};
struct dw_pcie {
struct device *dev;
void __iomem *dbi_base;
@ -445,6 +481,7 @@ struct dw_pcie {
void __iomem *atu_base;
resource_size_t atu_phys_addr;
size_t atu_size;
resource_size_t parent_bus_offset;
u32 num_ib_windows;
u32 num_ob_windows;
u32 region_align;
@ -465,6 +502,20 @@ struct dw_pcie {
struct reset_control_bulk_data core_rsts[DW_PCIE_NUM_CORE_RSTS];
struct gpio_desc *pe_rst;
bool suspended;
struct debugfs_info *debugfs;
/*
* If iATU input addresses are offset from CPU physical addresses,
* we previously required .cpu_addr_fixup() to convert them. We
* now rely on the devicetree instead. If .cpu_addr_fixup()
* exists, we compare its results with devicetree.
*
* If .cpu_addr_fixup() does not exist, we assume the offset is
* zero and warn if devicetree claims otherwise. If we know all
* devicetrees correctly describe the offset, set
* use_parent_dt_ranges to true to avoid this warning.
*/
bool use_parent_dt_ranges;
};
#define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp)
@ -478,6 +529,7 @@ void dw_pcie_version_detect(struct dw_pcie *pci);
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
u16 dw_pcie_find_rasdes_capability(struct dw_pcie *pci);
int dw_pcie_read(void __iomem *addr, int size, u32 *val);
int dw_pcie_write(void __iomem *addr, int size, u32 val);
@ -491,14 +543,18 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci);
int dw_pcie_prog_outbound_atu(struct dw_pcie *pci,
const struct dw_pcie_ob_atu_cfg *atu);
int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
u64 cpu_addr, u64 pci_addr, u64 size);
u64 parent_bus_addr, u64 pci_addr, u64 size);
int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
int type, u64 cpu_addr, u8 bar, size_t size);
int type, u64 parent_bus_addr,
u8 bar, size_t size);
void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index);
void dw_pcie_setup(struct dw_pcie *pci);
void dw_pcie_iatu_detect(struct dw_pcie *pci);
int dw_pcie_edma_detect(struct dw_pcie *pci);
void dw_pcie_edma_remove(struct dw_pcie *pci);
resource_size_t dw_pcie_parent_bus_offset(struct dw_pcie *pci,
const char *reg_name,
resource_size_t cpu_phy_addr);
static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
{
@ -743,6 +799,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no,
u16 interrupt_num);
void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar);
int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap);
struct dw_pcie_ep_func *
dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no);
#else
@ -800,10 +857,29 @@ static inline void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
{
}
static inline int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci,
u8 prev_cap, u8 cap)
{
return 0;
}
static inline struct dw_pcie_ep_func *
dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no)
{
return NULL;
}
#endif
#ifdef CONFIG_PCIE_DW_DEBUGFS
void dwc_pcie_debugfs_init(struct dw_pcie *pci);
void dwc_pcie_debugfs_deinit(struct dw_pcie *pci);
#else
static inline void dwc_pcie_debugfs_init(struct dw_pcie *pci)
{
}
static inline void dwc_pcie_debugfs_deinit(struct dw_pcie *pci)
{
}
#endif
#endif /* _PCIE_DESIGNWARE_H */


@ -240,6 +240,34 @@ static const struct dw_pcie_host_ops rockchip_pcie_host_ops = {
.init = rockchip_pcie_host_init,
};
/*
* ATS does not work on RK3588 when running in EP mode.
*
* After the host has enabled ATS on the EP side, it will send an IOTLB
* invalidation request to the EP side. However, the RK3588 will never send
* a completion back and eventually the host will print an IOTLB_INV_TIMEOUT
* error, and the EP will not be operational. If we hide the ATS capability,
* things work as expected.
*/
static void rockchip_pcie_ep_hide_broken_ats_cap_rk3588(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct device *dev = pci->dev;
/* Only hide the ATS capability for RK3588 running in EP mode. */
if (!of_device_is_compatible(dev->of_node, "rockchip,rk3588-pcie-ep"))
return;
if (dw_pcie_ep_hide_ext_capability(pci, PCI_EXT_CAP_ID_SECPCI,
PCI_EXT_CAP_ID_ATS))
dev_err(dev, "failed to hide ATS capability\n");
}
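/*
 * Sketch of the idea behind dw_pcie_ep_hide_ext_capability(), not its
 * actual implementation: an extended capability header holds the ID in
 * bits 15:0 and the offset of the next capability in bits 31:20, so
 * rewriting the previous header's "next" field to skip the victim makes
 * it invisible to the host's capability walk.
 */
static u32 skip_ext_cap(u32 prev_hdr, u32 victim_hdr)
{
	return (prev_hdr & ~GENMASK(31, 20)) |
	       (PCI_EXT_CAP_NEXT(victim_hdr) << 20);
}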
static void rockchip_pcie_ep_pre_init(struct dw_pcie_ep *ep)
{
rockchip_pcie_ep_hide_broken_ats_cap_rk3588(ep);
}
static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@ -272,13 +300,14 @@ static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = {
.linkup_notifier = true,
.msi_capable = true,
.msix_capable = true,
.intx_capable = false,
.align = SZ_64K,
.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_1] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_2] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_3] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_5] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_0] = { .type = BAR_RESIZABLE, },
.bar[BAR_1] = { .type = BAR_RESIZABLE, },
.bar[BAR_2] = { .type = BAR_RESIZABLE, },
.bar[BAR_3] = { .type = BAR_RESIZABLE, },
.bar[BAR_4] = { .type = BAR_RESIZABLE, },
.bar[BAR_5] = { .type = BAR_RESIZABLE, },
};
/*
@ -292,13 +321,14 @@ static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = {
.linkup_notifier = true,
.msi_capable = true,
.msix_capable = true,
.intx_capable = false,
.align = SZ_64K,
.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_1] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_2] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_3] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_0] = { .type = BAR_RESIZABLE, },
.bar[BAR_1] = { .type = BAR_RESIZABLE, },
.bar[BAR_2] = { .type = BAR_RESIZABLE, },
.bar[BAR_3] = { .type = BAR_RESIZABLE, },
.bar[BAR_4] = { .type = BAR_RESERVED, },
.bar[BAR_5] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_5] = { .type = BAR_RESIZABLE, },
};
static const struct pci_epc_features *
@ -312,6 +342,7 @@ rockchip_pcie_get_features(struct dw_pcie_ep *ep)
static const struct dw_pcie_ep_ops rockchip_pcie_ep_ops = {
.init = rockchip_pcie_ep_init,
.pre_init = rockchip_pcie_ep_pre_init,
.raise_irq = rockchip_pcie_raise_irq,
.get_features = rockchip_pcie_get_features,
};


@ -409,16 +409,21 @@ static int histb_pcie_probe(struct platform_device *pdev)
ret = histb_pcie_host_enable(pp);
if (ret) {
dev_err(dev, "failed to enable host\n");
return ret;
goto err_exit_phy;
}
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "failed to initialize host\n");
return ret;
goto err_exit_phy;
}
return 0;
err_exit_phy:
phy_exit(hipcie->phy);
return ret;
}
static void histb_pcie_remove(struct platform_device *pdev)
@ -427,8 +432,7 @@ static void histb_pcie_remove(struct platform_device *pdev)
histb_pcie_host_disable(hipcie);
if (hipcie->phy)
phy_exit(hipcie->phy);
phy_exit(hipcie->phy);
}
static const struct of_device_id histb_pcie_of_match[] = {


@ -57,7 +57,6 @@
PCIE_APP_IRN_INTA | PCIE_APP_IRN_INTB | \
PCIE_APP_IRN_INTC | PCIE_APP_IRN_INTD)
#define BUS_IATU_OFFSET SZ_256M
#define RESET_INTERVAL_MS 100
struct intel_pcie {
@ -381,13 +380,7 @@ static int intel_pcie_rc_init(struct dw_pcie_rp *pp)
return intel_pcie_host_setup(pcie);
}
static u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr)
{
return cpu_addr + BUS_IATU_OFFSET;
}
static const struct dw_pcie_ops intel_pcie_ops = {
.cpu_addr_fixup = intel_pcie_cpu_addr,
};
static const struct dw_pcie_host_ops intel_pcie_dw_ops = {
@ -409,6 +402,7 @@ static int intel_pcie_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, pcie);
pci = &pcie->pci;
pci->dev = dev;
pci->use_parent_dt_ranges = true;
pp = &pci->pp;
ret = intel_pcie_get_resources(pdev);


@ -216,10 +216,9 @@ static int hi3660_pcie_phy_start(struct hi3660_pcie_phy *phy)
usleep_range(PIPE_CLK_WAIT_MIN, PIPE_CLK_WAIT_MAX);
reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_STATUS0);
if (reg_val & PIPE_CLK_STABLE) {
dev_err(dev, "PIPE clk is not stable\n");
return -EINVAL;
}
if (reg_val & PIPE_CLK_STABLE)
return dev_err_probe(dev, -ETIMEDOUT,
"PIPE clk is not stable\n");
return 0;
}
@ -371,10 +370,9 @@ static int kirin_pcie_get_gpio_enable(struct kirin_pcie *pcie,
if (ret < 0)
return 0;
if (ret > MAX_PCI_SLOTS) {
dev_err(dev, "Too many GPIO clock requests!\n");
return -EINVAL;
}
if (ret > MAX_PCI_SLOTS)
return dev_err_probe(dev, -EINVAL,
"Too many GPIO clock requests!\n");
pcie->n_gpio_clkreq = ret;
@ -420,17 +418,16 @@ static int kirin_pcie_parse_port(struct kirin_pcie *pcie,
"unable to get a valid reset gpio\n");
}
if (pcie->num_slots + 1 >= MAX_PCI_SLOTS) {
dev_err(dev, "Too many PCI slots!\n");
return -EINVAL;
}
if (pcie->num_slots + 1 >= MAX_PCI_SLOTS)
return dev_err_probe(dev, -EINVAL,
"Too many PCI slots!\n");
pcie->num_slots++;
ret = of_pci_get_devfn(child);
if (ret < 0) {
dev_err(dev, "failed to parse devfn: %d\n", ret);
return ret;
}
if (ret < 0)
return dev_err_probe(dev, ret,
"failed to parse devfn\n");
slot = PCI_SLOT(ret);
@ -452,7 +449,7 @@ static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie,
struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *child, *node = dev->of_node;
struct device_node *node = dev->of_node;
void __iomem *apb_base;
int ret;
@ -477,17 +474,13 @@ static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie,
return ret;
/* Parse OF children */
for_each_available_child_of_node(node, child) {
for_each_available_child_of_node_scoped(node, child) {
ret = kirin_pcie_parse_port(kirin_pcie, pdev, child);
if (ret)
goto put_node;
return ret;
}
return 0;
put_node:
of_node_put(child);
return ret;
}
static void kirin_pcie_sideband_dbi_w_mode(struct kirin_pcie *kirin_pcie,
@ -729,16 +722,9 @@ static int kirin_pcie_probe(struct platform_device *pdev)
struct dw_pcie *pci;
int ret;
if (!dev->of_node) {
dev_err(dev, "NULL node\n");
return -EINVAL;
}
data = of_device_get_match_data(dev);
if (!data) {
dev_err(dev, "OF data missing\n");
return -EINVAL;
}
if (!data)
return dev_err_probe(dev, -EINVAL, "OF data missing\n");
kirin_pcie = devm_kzalloc(dev, sizeof(struct kirin_pcie), GFP_KERNEL);
if (!kirin_pcie)


@ -48,7 +48,7 @@
#define PARF_DBI_BASE_ADDR_HI 0x354
#define PARF_SLV_ADDR_SPACE_SIZE 0x358
#define PARF_SLV_ADDR_SPACE_SIZE_HI 0x35c
#define PARF_NO_SNOOP_OVERIDE 0x3d4
#define PARF_NO_SNOOP_OVERRIDE 0x3d4
#define PARF_ATU_BASE_ADDR 0x634
#define PARF_ATU_BASE_ADDR_HI 0x638
#define PARF_SRIS_MODE 0x644
@ -89,9 +89,9 @@
#define PARF_DEBUG_INT_CFG_BUS_MASTER_EN BIT(2)
#define PARF_DEBUG_INT_RADM_PM_TURNOFF BIT(3)
/* PARF_NO_SNOOP_OVERIDE register fields */
#define WR_NO_SNOOP_OVERIDE_EN BIT(1)
#define RD_NO_SNOOP_OVERIDE_EN BIT(3)
/* PARF_NO_SNOOP_OVERRIDE register fields */
#define WR_NO_SNOOP_OVERRIDE_EN BIT(1)
#define RD_NO_SNOOP_OVERRIDE_EN BIT(3)
/* PARF_DEVICE_TYPE register fields */
#define PARF_DEVICE_TYPE_EP 0x0
@ -529,8 +529,8 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
writel_relaxed(val, pcie_ep->parf + PARF_LTSSM);
if (pcie_ep->cfg && pcie_ep->cfg->override_no_snoop)
writel_relaxed(WR_NO_SNOOP_OVERIDE_EN | RD_NO_SNOOP_OVERIDE_EN,
pcie_ep->parf + PARF_NO_SNOOP_OVERIDE);
writel_relaxed(WR_NO_SNOOP_OVERRIDE_EN | RD_NO_SNOOP_OVERRIDE_EN,
pcie_ep->parf + PARF_NO_SNOOP_OVERRIDE);
return 0;
@ -825,6 +825,10 @@ static const struct pci_epc_features qcom_pcie_epc_features = {
.msi_capable = true,
.msix_capable = false,
.align = SZ_4K,
.bar[BAR_0] = { .only_64bit = true, },
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_2] = { .only_64bit = true, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
};
static const struct pci_epc_features *
@ -933,6 +937,7 @@ static const struct of_device_id qcom_pcie_ep_match[] = {
{ .compatible = "qcom,sa8775p-pcie-ep", .data = &cfg_1_34_0},
{ .compatible = "qcom,sdx55-pcie-ep", },
{ .compatible = "qcom,sm8450-pcie-ep", },
{ .compatible = "qcom,sar2130p-pcie-ep", },
{ }
};
MODULE_DEVICE_TABLE(of, qcom_pcie_ep_match);


@ -61,7 +61,7 @@
#define PARF_DBI_BASE_ADDR_V2_HI 0x354
#define PARF_SLV_ADDR_SPACE_SIZE_V2 0x358
#define PARF_SLV_ADDR_SPACE_SIZE_V2_HI 0x35c
#define PARF_NO_SNOOP_OVERIDE 0x3d4
#define PARF_NO_SNOOP_OVERRIDE 0x3d4
#define PARF_ATU_BASE_ADDR 0x634
#define PARF_ATU_BASE_ADDR_HI 0x638
#define PARF_DEVICE_TYPE 0x1000
@ -135,9 +135,9 @@
#define PARF_INT_ALL_LINK_UP BIT(13)
#define PARF_INT_MSI_DEV_0_7 GENMASK(30, 23)
/* PARF_NO_SNOOP_OVERIDE register fields */
#define WR_NO_SNOOP_OVERIDE_EN BIT(1)
#define RD_NO_SNOOP_OVERIDE_EN BIT(3)
/* PARF_NO_SNOOP_OVERRIDE register fields */
#define WR_NO_SNOOP_OVERRIDE_EN BIT(1)
#define RD_NO_SNOOP_OVERRIDE_EN BIT(3)
/* PARF_DEVICE_TYPE register fields */
#define DEVICE_TYPE_RC 0x4
@ -1007,8 +1007,8 @@ static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
const struct qcom_pcie_cfg *pcie_cfg = pcie->cfg;
if (pcie_cfg->override_no_snoop)
writel(WR_NO_SNOOP_OVERIDE_EN | RD_NO_SNOOP_OVERIDE_EN,
pcie->parf + PARF_NO_SNOOP_OVERIDE);
writel(WR_NO_SNOOP_OVERRIDE_EN | RD_NO_SNOOP_OVERRIDE_EN,
pcie->parf + PARF_NO_SNOOP_OVERRIDE);
qcom_pcie_clear_aspm_l0s(pcie->pci);
qcom_pcie_clear_hpc(pcie->pci);


@ -1356,7 +1356,7 @@ static struct pci_ops hv_pcifront_ops = {
*
* If the PF driver wishes to initiate communication, it can "invalidate" one or
* more of the first 64 blocks. This invalidation is delivered via a callback
* supplied by the VF driver by this driver.
* supplied to the VF driver by this driver.
*
* No protocol is implied, except that supplied by the PF and VF drivers.
*/


@ -1422,7 +1422,7 @@ static void mvebu_pcie_powerdown(struct mvebu_pcie_port *port)
}
/*
* devm_of_pci_get_host_bridge_resources() only sets up translateable resources,
* devm_of_pci_get_host_bridge_resources() only sets up translatable resources,
* so we need extra resource setup parsing our special DT properties encoding
* the MEM and IO apertures.
*/


@ -2106,47 +2106,39 @@ static int tegra_pcie_get_regulators(struct tegra_pcie *pcie, u32 lane_mask)
static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
{
struct device *dev = pcie->dev;
struct device_node *np = dev->of_node, *port;
struct device_node *np = dev->of_node;
const struct tegra_pcie_soc *soc = pcie->soc;
u32 lanes = 0, mask = 0;
unsigned int lane = 0;
int err;
/* parse root ports */
for_each_child_of_node(np, port) {
for_each_child_of_node_scoped(np, port) {
struct tegra_pcie_port *rp;
unsigned int index;
u32 value;
char *label;
err = of_pci_get_devfn(port);
if (err < 0) {
dev_err(dev, "failed to parse address: %d\n", err);
goto err_node_put;
}
if (err < 0)
return dev_err_probe(dev, err, "failed to parse address\n");
index = PCI_SLOT(err);
if (index < 1 || index > soc->num_ports) {
dev_err(dev, "invalid port number: %d\n", index);
err = -EINVAL;
goto err_node_put;
}
if (index < 1 || index > soc->num_ports)
return dev_err_probe(dev, -EINVAL,
"invalid port number: %d\n", index);
index--;
err = of_property_read_u32(port, "nvidia,num-lanes", &value);
if (err < 0) {
dev_err(dev, "failed to parse # of lanes: %d\n",
err);
goto err_node_put;
}
if (err < 0)
return dev_err_probe(dev, err,
"failed to parse # of lanes\n");
if (value > 16) {
dev_err(dev, "invalid # of lanes: %u\n", value);
err = -EINVAL;
goto err_node_put;
}
if (value > 16)
return dev_err_probe(dev, -EINVAL,
"invalid # of lanes: %u\n", value);
lanes |= value << (index << 3);
@ -2159,16 +2151,12 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
lane += value;
rp = devm_kzalloc(dev, sizeof(*rp), GFP_KERNEL);
if (!rp) {
err = -ENOMEM;
goto err_node_put;
}
if (!rp)
return -ENOMEM;
err = of_address_to_resource(port, 0, &rp->regs);
if (err < 0) {
dev_err(dev, "failed to parse address: %d\n", err);
goto err_node_put;
}
if (err < 0)
return dev_err_probe(dev, err, "failed to parse address\n");
INIT_LIST_HEAD(&rp->list);
rp->index = index;
@ -2177,16 +2165,12 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
rp->np = port;
rp->base = devm_pci_remap_cfg_resource(dev, &rp->regs);
if (IS_ERR(rp->base)) {
err = PTR_ERR(rp->base);
goto err_node_put;
}
if (IS_ERR(rp->base))
return PTR_ERR(rp->base);
label = devm_kasprintf(dev, GFP_KERNEL, "pex-reset-%u", index);
if (!label) {
err = -ENOMEM;
goto err_node_put;
}
if (!label)
return -ENOMEM;
/*
* Returns -ENOENT if reset-gpios property is not populated
@ -2199,34 +2183,26 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
GPIOD_OUT_LOW,
label);
if (IS_ERR(rp->reset_gpio)) {
if (PTR_ERR(rp->reset_gpio) == -ENOENT) {
if (PTR_ERR(rp->reset_gpio) == -ENOENT)
rp->reset_gpio = NULL;
} else {
dev_err(dev, "failed to get reset GPIO: %ld\n",
PTR_ERR(rp->reset_gpio));
err = PTR_ERR(rp->reset_gpio);
goto err_node_put;
}
else
return dev_err_probe(dev, PTR_ERR(rp->reset_gpio),
"failed to get reset GPIO\n");
}
list_add_tail(&rp->list, &pcie->ports);
}
err = tegra_pcie_get_xbar_config(pcie, lanes, &pcie->xbar_config);
if (err < 0) {
dev_err(dev, "invalid lane configuration\n");
return err;
}
if (err < 0)
return dev_err_probe(dev, err,
"invalid lane configuration\n");
err = tegra_pcie_get_regulators(pcie, mask);
if (err < 0)
return err;
return 0;
err_node_put:
of_node_put(port);
return err;
}
/*


@ -204,7 +204,7 @@ static int thunder_ecam_config_read(struct pci_bus *bus, unsigned int devfn,
v = readl(addr);
if (v & 0xff00)
pr_err("Bad MSIX cap header: %08x\n", v);
pr_err("Bad MSI-X cap header: %08x\n", v);
v |= 0xbc00; /* next capability is EA at 0xbc */
set_val(v, where, size, val);
return PCIBIOS_SUCCESSFUL;


@ -154,7 +154,7 @@ static void xgene_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
* X-Gene v1 only has 16 MSI GIC IRQs for 2048 MSI vectors. To maintain
* the expected behaviour of .set_affinity for each MSI interrupt, the 16
* MSI GIC IRQs are statically allocated to 8 X-Gene v1 cores (2 GIC IRQs
* for each core). The MSI vector is moved fom 1 MSI GIC IRQ to another
* for each core). The MSI vector is moved from 1 MSI GIC IRQ to another
* MSI GIC IRQ to steer its MSI interrupt to correct X-Gene v1 core. As a
* consequence, the total MSI vectors that X-Gene v1 supports will be
* reduced to 256 (2048/8) vectors.


@ -6,6 +6,7 @@
* Description: Altera PCIe host controller driver
*/
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irqchip/chained_irq.h>
@ -77,9 +78,25 @@
#define S10_TLP_FMTTYPE_CFGWR0 0x45
#define S10_TLP_FMTTYPE_CFGWR1 0x44
#define AGLX_RP_CFG_ADDR(pcie, reg) (((pcie)->hip_base) + (reg))
#define AGLX_RP_SECONDARY(pcie) \
readb(AGLX_RP_CFG_ADDR(pcie, PCI_SECONDARY_BUS))
#define AGLX_BDF_REG 0x00002004
#define AGLX_ROOT_PORT_IRQ_STATUS 0x14c
#define AGLX_ROOT_PORT_IRQ_ENABLE 0x150
#define CFG_AER BIT(4)
#define AGLX_CFG_TARGET GENMASK(13, 12)
#define AGLX_CFG_TARGET_TYPE0 0
#define AGLX_CFG_TARGET_TYPE1 1
#define AGLX_CFG_TARGET_LOCAL_2000 2
#define AGLX_CFG_TARGET_LOCAL_3000 3
enum altera_pcie_version {
ALTERA_PCIE_V1 = 0,
ALTERA_PCIE_V2,
ALTERA_PCIE_V3,
};
struct altera_pcie {
@ -102,6 +119,11 @@ struct altera_pcie_ops {
int size, u32 *value);
int (*rp_write_cfg)(struct altera_pcie *pcie, u8 busno,
int where, int size, u32 value);
int (*ep_read_cfg)(struct altera_pcie *pcie, u8 busno,
unsigned int devfn, int where, int size, u32 *value);
int (*ep_write_cfg)(struct altera_pcie *pcie, u8 busno,
unsigned int devfn, int where, int size, u32 value);
void (*rp_isr)(struct irq_desc *desc);
};
struct altera_pcie_data {
@ -112,6 +134,9 @@ struct altera_pcie_data {
u32 cfgrd1;
u32 cfgwr0;
u32 cfgwr1;
u32 port_conf_offset;
u32 port_irq_status_offset;
u32 port_irq_enable_offset;
};
struct tlp_rp_regpair_t {
@ -131,6 +156,28 @@ static inline u32 cra_readl(struct altera_pcie *pcie, const u32 reg)
return readl_relaxed(pcie->cra_base + reg);
}
static inline void cra_writew(struct altera_pcie *pcie, const u32 value,
const u32 reg)
{
writew_relaxed(value, pcie->cra_base + reg);
}
static inline u32 cra_readw(struct altera_pcie *pcie, const u32 reg)
{
return readw_relaxed(pcie->cra_base + reg);
}
static inline void cra_writeb(struct altera_pcie *pcie, const u32 value,
const u32 reg)
{
writeb_relaxed(value, pcie->cra_base + reg);
}
static inline u32 cra_readb(struct altera_pcie *pcie, const u32 reg)
{
return readb_relaxed(pcie->cra_base + reg);
}
static bool altera_pcie_link_up(struct altera_pcie *pcie)
{
return !!((cra_readl(pcie, RP_LTSSM) & RP_LTSSM_MASK) == LTSSM_L0);
@ -145,11 +192,20 @@ static bool s10_altera_pcie_link_up(struct altera_pcie *pcie)
return !!(readw(addr) & PCI_EXP_LNKSTA_DLLLA);
}
static bool aglx_altera_pcie_link_up(struct altera_pcie *pcie)
{
void __iomem *addr = AGLX_RP_CFG_ADDR(pcie,
pcie->pcie_data->cap_offset +
PCI_EXP_LNKSTA);
return (readw_relaxed(addr) & PCI_EXP_LNKSTA_DLLLA);
}
/*
* Altera PCIe port uses BAR0 of RC's configuration space as the translation
* from PCI bus to native BUS. Entire DDR region is mapped into PCIe space
* using these registers, so it can be reached by DMA from EP devices.
* This BAR0 will also access to MSI vector when receiving MSI/MSIX interrupt
* This BAR0 will also access to MSI vector when receiving MSI/MSI-X interrupt
* from EP devices, eventually triggering an interrupt to the GIC. The bridge's BAR0
* should be hidden during enumeration to avoid the sizing and resource
* allocation by PCIe core.
@ -425,6 +481,103 @@ static int s10_rp_write_cfg(struct altera_pcie *pcie, u8 busno,
return PCIBIOS_SUCCESSFUL;
}
static int aglx_rp_read_cfg(struct altera_pcie *pcie, int where,
int size, u32 *value)
{
void __iomem *addr = AGLX_RP_CFG_ADDR(pcie, where);
switch (size) {
case 1:
*value = readb_relaxed(addr);
break;
case 2:
*value = readw_relaxed(addr);
break;
default:
*value = readl_relaxed(addr);
break;
}
/*
 * The interrupt PIN is not programmed in hardware, so report INTA: a
 * 1-byte read of PCI_INTERRUPT_PIN returns 0x01, and a wider read
 * starting at PCI_INTERRUPT_LINE gets the PIN byte (bits 15:8) set
 * to 0x01.
 */
if (where == PCI_INTERRUPT_PIN && size == 1 && !(*value))
*value = 0x01;
else if (where == PCI_INTERRUPT_LINE && !(*value & 0xff00))
*value |= 0x0100;
return PCIBIOS_SUCCESSFUL;
}
static int aglx_rp_write_cfg(struct altera_pcie *pcie, u8 busno,
int where, int size, u32 value)
{
void __iomem *addr = AGLX_RP_CFG_ADDR(pcie, where);
switch (size) {
case 1:
writeb_relaxed(value, addr);
break;
case 2:
writew_relaxed(value, addr);
break;
default:
writel_relaxed(value, addr);
break;
}
/*
* Monitor changes to PCI_PRIMARY_BUS register on Root Port
* and update local copy of root bus number accordingly.
*/
if (busno == pcie->root_bus_nr && where == PCI_PRIMARY_BUS)
pcie->root_bus_nr = value & 0xff;
return PCIBIOS_SUCCESSFUL;
}
static int aglx_ep_write_cfg(struct altera_pcie *pcie, u8 busno,
unsigned int devfn, int where, int size, u32 value)
{
cra_writel(pcie, ((busno << 8) | devfn), AGLX_BDF_REG);
if (busno > AGLX_RP_SECONDARY(pcie))
where |= FIELD_PREP(AGLX_CFG_TARGET, AGLX_CFG_TARGET_TYPE1);
switch (size) {
case 1:
cra_writeb(pcie, value, where);
break;
case 2:
cra_writew(pcie, value, where);
break;
default:
cra_writel(pcie, value, where);
break;
}
return PCIBIOS_SUCCESSFUL;
}
static int aglx_ep_read_cfg(struct altera_pcie *pcie, u8 busno,
unsigned int devfn, int where, int size, u32 *value)
{
cra_writel(pcie, ((busno << 8) | devfn), AGLX_BDF_REG);
if (busno > AGLX_RP_SECONDARY(pcie))
where |= FIELD_PREP(AGLX_CFG_TARGET, AGLX_CFG_TARGET_TYPE1);
switch (size) {
case 1:
*value = cra_readb(pcie, where);
break;
case 2:
*value = cra_readw(pcie, where);
break;
default:
*value = cra_readl(pcie, where);
break;
}
return PCIBIOS_SUCCESSFUL;
}
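/*
 * For reference: the bus immediately below the Root Port is reached with
 * Type 0 config TLPs (AGLX_CFG_TARGET_TYPE0, which is the all-zero
 * default), while buses further downstream need Type 1 so that
 * intermediate bridges forward the access -- hence the busno >
 * AGLX_RP_SECONDARY() test above.
 */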
static int _altera_pcie_cfg_read(struct altera_pcie *pcie, u8 busno,
unsigned int devfn, int where, int size,
u32 *value)
@ -437,6 +590,10 @@ static int _altera_pcie_cfg_read(struct altera_pcie *pcie, u8 busno,
return pcie->pcie_data->ops->rp_read_cfg(pcie, where,
size, value);
if (pcie->pcie_data->ops->ep_read_cfg)
return pcie->pcie_data->ops->ep_read_cfg(pcie, busno, devfn,
where, size, value);
switch (size) {
case 1:
byte_en = 1 << (where & 3);
@ -481,6 +638,10 @@ static int _altera_pcie_cfg_write(struct altera_pcie *pcie, u8 busno,
return pcie->pcie_data->ops->rp_write_cfg(pcie, busno,
where, size, value);
if (pcie->pcie_data->ops->ep_write_cfg)
return pcie->pcie_data->ops->ep_write_cfg(pcie, busno, devfn,
where, size, value);
switch (size) {
case 1:
data32 = (value & 0xff) << shift;
@ -659,7 +820,32 @@ static void altera_pcie_isr(struct irq_desc *desc)
dev_err_ratelimited(dev, "unexpected IRQ, INT%d\n", bit);
}
}
chained_irq_exit(chip, desc);
}
static void aglx_isr(struct irq_desc *desc)
{
struct irq_chip *chip = irq_desc_get_chip(desc);
struct altera_pcie *pcie;
struct device *dev;
u32 status;
int ret;
chained_irq_enter(chip, desc);
pcie = irq_desc_get_handler_data(desc);
dev = &pcie->pdev->dev;
status = readl(pcie->hip_base + pcie->pcie_data->port_conf_offset +
pcie->pcie_data->port_irq_status_offset);
if (status & CFG_AER) {
writel(CFG_AER, (pcie->hip_base + pcie->pcie_data->port_conf_offset +
pcie->pcie_data->port_irq_status_offset));
ret = generic_handle_domain_irq(pcie->irq_domain, 0);
if (ret)
dev_err_ratelimited(dev, "unexpected IRQ %d\n", pcie->irq);
}
chained_irq_exit(chip, desc);
}
@ -694,9 +880,9 @@ static int altera_pcie_parse_dt(struct altera_pcie *pcie)
if (IS_ERR(pcie->cra_base))
return PTR_ERR(pcie->cra_base);
if (pcie->pcie_data->version == ALTERA_PCIE_V2) {
pcie->hip_base =
devm_platform_ioremap_resource_byname(pdev, "Hip");
if (pcie->pcie_data->version == ALTERA_PCIE_V2 ||
pcie->pcie_data->version == ALTERA_PCIE_V3) {
pcie->hip_base = devm_platform_ioremap_resource_byname(pdev, "Hip");
if (IS_ERR(pcie->hip_base))
return PTR_ERR(pcie->hip_base);
}
@ -706,7 +892,7 @@ static int altera_pcie_parse_dt(struct altera_pcie *pcie)
if (pcie->irq < 0)
return pcie->irq;
irq_set_chained_handler_and_data(pcie->irq, altera_pcie_isr, pcie);
irq_set_chained_handler_and_data(pcie->irq, pcie->pcie_data->ops->rp_isr, pcie);
return 0;
}
@ -719,6 +905,7 @@ static const struct altera_pcie_ops altera_pcie_ops_1_0 = {
.tlp_read_pkt = tlp_read_packet,
.tlp_write_pkt = tlp_write_packet,
.get_link_status = altera_pcie_link_up,
.rp_isr = altera_pcie_isr,
};
static const struct altera_pcie_ops altera_pcie_ops_2_0 = {
@ -727,6 +914,16 @@ static const struct altera_pcie_ops altera_pcie_ops_2_0 = {
.get_link_status = s10_altera_pcie_link_up,
.rp_read_cfg = s10_rp_read_cfg,
.rp_write_cfg = s10_rp_write_cfg,
.rp_isr = altera_pcie_isr,
};
static const struct altera_pcie_ops altera_pcie_ops_3_0 = {
.rp_read_cfg = aglx_rp_read_cfg,
.rp_write_cfg = aglx_rp_write_cfg,
.get_link_status = aglx_altera_pcie_link_up,
.ep_read_cfg = aglx_ep_read_cfg,
.ep_write_cfg = aglx_ep_write_cfg,
.rp_isr = aglx_isr,
};
static const struct altera_pcie_data altera_pcie_1_0_data = {
@ -749,11 +946,44 @@ static const struct altera_pcie_data altera_pcie_2_0_data = {
.cfgwr1 = S10_TLP_FMTTYPE_CFGWR1,
};
static const struct altera_pcie_data altera_pcie_3_0_f_tile_data = {
.ops = &altera_pcie_ops_3_0,
.version = ALTERA_PCIE_V3,
.cap_offset = 0x70,
.port_conf_offset = 0x14000,
.port_irq_status_offset = AGLX_ROOT_PORT_IRQ_STATUS,
.port_irq_enable_offset = AGLX_ROOT_PORT_IRQ_ENABLE,
};
static const struct altera_pcie_data altera_pcie_3_0_p_tile_data = {
.ops = &altera_pcie_ops_3_0,
.version = ALTERA_PCIE_V3,
.cap_offset = 0x70,
.port_conf_offset = 0x104000,
.port_irq_status_offset = AGLX_ROOT_PORT_IRQ_STATUS,
.port_irq_enable_offset = AGLX_ROOT_PORT_IRQ_ENABLE,
};
static const struct altera_pcie_data altera_pcie_3_0_r_tile_data = {
.ops = &altera_pcie_ops_3_0,
.version = ALTERA_PCIE_V3,
.cap_offset = 0x70,
.port_conf_offset = 0x1300,
.port_irq_status_offset = 0x0,
.port_irq_enable_offset = 0x4,
};
static const struct of_device_id altera_pcie_of_match[] = {
{.compatible = "altr,pcie-root-port-1.0",
.data = &altera_pcie_1_0_data },
{.compatible = "altr,pcie-root-port-2.0",
.data = &altera_pcie_2_0_data },
{.compatible = "altr,pcie-root-port-3.0-f-tile",
.data = &altera_pcie_3_0_f_tile_data },
{.compatible = "altr,pcie-root-port-3.0-p-tile",
.data = &altera_pcie_3_0_p_tile_data },
{.compatible = "altr,pcie-root-port-3.0-r-tile",
.data = &altera_pcie_3_0_r_tile_data },
{},
};
@ -791,11 +1021,18 @@ static int altera_pcie_probe(struct platform_device *pdev)
return ret;
}
/* clear all interrupts */
cra_writel(pcie, P2A_INT_STS_ALL, P2A_INT_STATUS);
/* enable all interrupts */
cra_writel(pcie, P2A_INT_ENA_ALL, P2A_INT_ENABLE);
altera_pcie_host_init(pcie);
if (pcie->pcie_data->version == ALTERA_PCIE_V1 ||
pcie->pcie_data->version == ALTERA_PCIE_V2) {
/* clear all interrupts */
cra_writel(pcie, P2A_INT_STS_ALL, P2A_INT_STATUS);
/* enable all interrupts */
cra_writel(pcie, P2A_INT_ENA_ALL, P2A_INT_ENABLE);
altera_pcie_host_init(pcie);
} else if (pcie->pcie_data->version == ALTERA_PCIE_V3) {
writel(CFG_AER,
pcie->hip_base + pcie->pcie_data->port_conf_offset +
pcie->pcie_data->port_irq_enable_offset);
}
bridge->sysdata = pcie;
bridge->busnr = pcie->root_bus_nr;


@ -732,7 +732,6 @@ static int apple_pcie_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct platform_device *platform = to_platform_device(dev);
struct device_node *of_port;
struct apple_pcie *pcie;
int ret;
@ -755,11 +754,10 @@ static int apple_pcie_init(struct pci_config_window *cfg)
if (ret)
return ret;
for_each_child_of_node(dev->of_node, of_port) {
for_each_child_of_node_scoped(dev->of_node, of_port) {
ret = apple_pcie_setup_port(pcie, of_port);
if (ret) {
dev_err(pcie->dev, "Port %pOF setup fail: %d\n", of_port, ret);
of_node_put(of_port);
return ret;
}
}


@ -40,7 +40,7 @@
/* Broadcom STB PCIe Register Offsets */
#define PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1 0x0188
#define PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1_ENDIAN_MODE_BAR2_MASK 0xc
#define PCIE_RC_CFG_VENDOR_SPCIFIC_REG1_LITTLE_ENDIAN 0x0
#define PCIE_RC_CFG_VENDOR_SPECIFIC_REG1_LITTLE_ENDIAN 0x0
#define PCIE_RC_CFG_PRIV1_ID_VAL3 0x043c
#define PCIE_RC_CFG_PRIV1_ID_VAL3_CLASS_CODE_MASK 0xffffff
@ -55,6 +55,10 @@
#define PCIE_RC_DL_MDIO_WR_DATA 0x1104
#define PCIE_RC_DL_MDIO_RD_DATA 0x1108
#define PCIE_RC_PL_PHY_CTL_15 0x184c
#define PCIE_RC_PL_PHY_CTL_15_DIS_PLL_PD_MASK 0x400000
#define PCIE_RC_PL_PHY_CTL_15_PM_CLK_PERIOD_MASK 0xff
#define PCIE_MISC_MISC_CTRL 0x4008
#define PCIE_MISC_MISC_CTRL_PCIE_RCB_64B_MODE_MASK 0x80
#define PCIE_MISC_MISC_CTRL_PCIE_RCB_MPS_MODE_MASK 0x400
@ -146,9 +150,6 @@
#define MSI_INT_MASK_SET 0x10
#define MSI_INT_MASK_CLR 0x14
#define PCIE_EXT_CFG_DATA 0x8000
#define PCIE_EXT_CFG_INDEX 0x9000
#define PCIE_RGR1_SW_INIT_1_PERST_MASK 0x1
#define PCIE_RGR1_SW_INIT_1_PERST_SHIFT 0x0
@ -174,8 +175,9 @@
#define MDIO_PORT0 0x0
#define MDIO_DATA_MASK 0x7fffffff
#define MDIO_PORT_MASK 0xf0000
#define MDIO_PORT_EXT_MASK 0x200000
#define MDIO_REGAD_MASK 0xffff
#define MDIO_CMD_MASK 0xfff00000
#define MDIO_CMD_MASK 0x00100000
#define MDIO_CMD_READ 0x1
#define MDIO_CMD_WRITE 0x0
#define MDIO_DATA_DONE_MASK 0x80000000
@ -191,11 +193,11 @@
#define SSC_STATUS_PLL_LOCK_MASK 0x800
#define PCIE_BRCM_MAX_MEMC 3
#define IDX_ADDR(pcie) ((pcie)->reg_offsets[EXT_CFG_INDEX])
#define DATA_ADDR(pcie) ((pcie)->reg_offsets[EXT_CFG_DATA])
#define PCIE_RGR1_SW_INIT_1(pcie) ((pcie)->reg_offsets[RGR1_SW_INIT_1])
#define HARD_DEBUG(pcie) ((pcie)->reg_offsets[PCIE_HARD_DEBUG])
#define INTR2_CPU_BASE(pcie) ((pcie)->reg_offsets[PCIE_INTR2_CPU_BASE])
#define IDX_ADDR(pcie) ((pcie)->cfg->offsets[EXT_CFG_INDEX])
#define DATA_ADDR(pcie) ((pcie)->cfg->offsets[EXT_CFG_DATA])
#define PCIE_RGR1_SW_INIT_1(pcie) ((pcie)->cfg->offsets[RGR1_SW_INIT_1])
#define HARD_DEBUG(pcie) ((pcie)->cfg->offsets[PCIE_HARD_DEBUG])
#define INTR2_CPU_BASE(pcie) ((pcie)->cfg->offsets[PCIE_INTR2_CPU_BASE])
/* Rescal registers */
#define PCIE_DVT_PMU_PCIE_PHY_CTRL 0xc700
@ -234,13 +236,24 @@ struct inbound_win {
u64 cpu_addr;
};
/*
* The RESCAL block is tied to PCIe controller #1, regardless of the number of
* controllers, and turning off PCIe controller #1 prevents access to the RESCAL
* register blocks, therefore no other controller can access this register
* space, and depending upon the bus fabric we may get a timeout (UBUS/GISB),
* or a hang (AXI).
*/
#define CFG_QUIRK_AVOID_BRIDGE_SHUTDOWN BIT(0)
struct pcie_cfg_data {
const int *offsets;
const enum pcie_soc_base soc_base;
const bool has_phy;
const u32 quirks;
u8 num_inbound_wins;
int (*perst_set)(struct brcm_pcie *pcie, u32 val);
int (*bridge_sw_init_set)(struct brcm_pcie *pcie, u32 val);
int (*post_setup)(struct brcm_pcie *pcie);
};
struct subdev_regulators {
@ -276,8 +289,6 @@ struct brcm_pcie {
int gen;
u64 msi_target_addr;
struct brcm_msi *msi;
const int *reg_offsets;
enum pcie_soc_base soc_base;
struct reset_control *rescal;
struct reset_control *perst_reset;
struct reset_control *bridge_reset;
@ -285,17 +296,14 @@ struct brcm_pcie {
int num_memc;
u64 memc_size[PCIE_BRCM_MAX_MEMC];
u32 hw_rev;
int (*perst_set)(struct brcm_pcie *pcie, u32 val);
int (*bridge_sw_init_set)(struct brcm_pcie *pcie, u32 val);
struct subdev_regulators *sr;
bool ep_wakeup_capable;
bool has_phy;
u8 num_inbound_wins;
const struct pcie_cfg_data *cfg;
};
static inline bool is_bmips(const struct brcm_pcie *pcie)
{
return pcie->soc_base == BCM7435 || pcie->soc_base == BCM7425;
return pcie->cfg->soc_base == BCM7435 || pcie->cfg->soc_base == BCM7425;
}
/*
@ -309,8 +317,8 @@ static int brcm_pcie_encode_ibar_size(u64 size)
if (log2_in >= 12 && log2_in <= 15)
/* Covers 4KB to 32KB (inclusive) */
return (log2_in - 12) + 0x1c;
else if (log2_in >= 16 && log2_in <= 35)
/* Covers 64KB to 32GB, (inclusive) */
else if (log2_in >= 16 && log2_in <= 36)
/* Covers 64KB to 64GB, (inclusive) */
return log2_in - 15;
/* Something is awry so disable */
return 0;
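/*
 * Worked examples for the encoding above: a 1 GB window has
 * log2(size) = 30, which lands in the 64 KB..64 GB branch and encodes
 * as 30 - 15 = 0xf; a 16 KB window (log2 = 14) encodes as
 * (14 - 12) + 0x1c = 0x1e. Sizes outside both ranges return 0, which
 * disables the window.
 */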
@ -320,6 +328,7 @@ static u32 brcm_pcie_mdio_form_pkt(int port, int regad, int cmd)
{
u32 pkt = 0;
pkt |= FIELD_PREP(MDIO_PORT_EXT_MASK, port >> 4);
pkt |= FIELD_PREP(MDIO_PORT_MASK, port);
pkt |= FIELD_PREP(MDIO_REGAD_MASK, regad);
pkt |= FIELD_PREP(MDIO_CMD_MASK, cmd);
@ -403,12 +412,12 @@ static int brcm_pcie_set_ssc(struct brcm_pcie *pcie)
static void brcm_pcie_set_gen(struct brcm_pcie *pcie, int gen)
{
u16 lnkctl2 = readw(pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCTL2);
u32 lnkcap = readl(pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCAP);
u32 lnkcap = readl(pcie->base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
lnkcap = (lnkcap & ~PCI_EXP_LNKCAP_SLS) | gen;
writel(lnkcap, pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCAP);
u32p_replace_bits(&lnkcap, gen, PCI_EXP_LNKCAP_SLS);
writel(lnkcap, pcie->base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
lnkctl2 = (lnkctl2 & ~0xf) | gen;
u16p_replace_bits(&lnkctl2, gen, PCI_EXP_LNKCTL2_TLS);
writew(lnkctl2, pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCTL2);
}
@ -550,7 +559,7 @@ static int brcm_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
return hwirq;
for (i = 0; i < nr_irqs; i++)
irq_domain_set_info(domain, virq + i, hwirq + i,
irq_domain_set_info(domain, virq + i, (irq_hw_number_t)hwirq + i,
&brcm_msi_bottom_irq_chip, domain->host_data,
handle_edge_irq, NULL, NULL);
return 0;
@ -717,8 +726,8 @@ static void __iomem *brcm_pcie_map_bus(struct pci_bus *bus,
/* For devices, write to the config space index register */
idx = PCIE_ECAM_OFFSET(bus->number, devfn, 0);
writel(idx, pcie->base + PCIE_EXT_CFG_INDEX);
return base + PCIE_EXT_CFG_DATA + PCIE_ECAM_REG(where);
writel(idx, base + IDX_ADDR(pcie));
return base + DATA_ADDR(pcie) + PCIE_ECAM_REG(where);
}
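/*
 * Schematic of the index/data access pattern above (illustrative, using
 * the driver's own macros): the ECAM-style index written to IDX_ADDR()
 * selects the target function, after which the 4 KB window at
 * DATA_ADDR() aliases that function's config space:
 *
 *	writel(PCIE_ECAM_OFFSET(bus, devfn, 0), base + IDX_ADDR(pcie));
 *	val = readl(base + DATA_ADDR(pcie) + PCIE_ECAM_REG(where));
 *
 * PCIE_ECAM_OFFSET() packs the bus number into bits 27:20 and devfn
 * into bits 19:12, matching the ECAM address layout.
 */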
static void __iomem *brcm7425_pcie_map_bus(struct pci_bus *bus,
@ -821,6 +830,39 @@ static int brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val)
return 0;
}
static int brcm_pcie_post_setup_bcm2712(struct brcm_pcie *pcie)
{
static const u16 data[] = { 0x50b9, 0xbda1, 0x0094, 0x97b4, 0x5030,
0x5030, 0x0007 };
static const u8 regs[] = { 0x16, 0x17, 0x18, 0x19, 0x1b, 0x1c, 0x1e };
int ret, i;
u32 tmp;
/* Allow a 54MHz (xosc) refclk source */
ret = brcm_pcie_mdio_write(pcie->base, MDIO_PORT0, SET_ADDR_OFFSET, 0x1600);
if (ret < 0)
return ret;
for (i = 0; i < ARRAY_SIZE(regs); i++) {
ret = brcm_pcie_mdio_write(pcie->base, MDIO_PORT0, regs[i], data[i]);
if (ret < 0)
return ret;
}
usleep_range(100, 200);
/*
* Set L1SS sub-state timers to avoid lengthy state transitions;
* the PM clock period is 18.52 ns (1/54 MHz, rounded down).
*/
tmp = readl(pcie->base + PCIE_RC_PL_PHY_CTL_15);
tmp &= ~PCIE_RC_PL_PHY_CTL_15_PM_CLK_PERIOD_MASK;
tmp |= 0x12;
writel(tmp, pcie->base + PCIE_RC_PL_PHY_CTL_15);
return 0;
}
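/*
 * For reference, the 0x12 programmed into PM_CLK_PERIOD above is the
 * refclk period in whole nanoseconds: 1 / 54 MHz = 18.52 ns, rounded
 * down to 18 = 0x12.
 */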
static void add_inbound_win(struct inbound_win *b, u8 *count, u64 size,
u64 cpu_addr, u64 pci_offset)
{
@ -855,7 +897,7 @@ static int brcm_pcie_get_inbound_wins(struct brcm_pcie *pcie,
* security considerations, and is not implemented in our modern
* SoCs.
*/
if (pcie->soc_base != BCM7712)
if (pcie->cfg->soc_base != BCM7712)
add_inbound_win(b++, &n, 0, 0, 0);
resource_list_for_each_entry(entry, &bridge->dma_ranges) {
@ -872,10 +914,10 @@ static int brcm_pcie_get_inbound_wins(struct brcm_pcie *pcie,
* That being said, each BARs size must still be a power of
* two.
*/
if (pcie->soc_base == BCM7712)
if (pcie->cfg->soc_base == BCM7712)
add_inbound_win(b++, &n, size, cpu_start, pcie_start);
if (n > pcie->num_inbound_wins)
if (n > pcie->cfg->num_inbound_wins)
break;
}
@ -889,7 +931,7 @@ static int brcm_pcie_get_inbound_wins(struct brcm_pcie *pcie,
* that enables multiple memory controllers. As such, it can return
* now w/o doing special configuration.
*/
if (pcie->soc_base == BCM7712)
if (pcie->cfg->soc_base == BCM7712)
return n;
ret = of_property_read_variable_u64_array(pcie->np, "brcm,scb-sizes", pcie->memc_size, 1,
@ -1012,7 +1054,7 @@ static void set_inbound_win_registers(struct brcm_pcie *pcie,
* 7712:
* All of their BARs need to be set.
*/
if (pcie->soc_base == BCM7712) {
if (pcie->cfg->soc_base == BCM7712) {
/* BUS remap register settings */
reg_offset = brcm_ubus_reg_offset(i);
tmp = lower_32_bits(cpu_addr) & ~0xfff;
@ -1036,15 +1078,15 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
int memc, ret;
/* Reset the bridge */
ret = pcie->bridge_sw_init_set(pcie, 1);
ret = pcie->cfg->bridge_sw_init_set(pcie, 1);
if (ret)
return ret;
/* Ensure that PERST# is asserted; some bootloaders may deassert it. */
if (pcie->soc_base == BCM2711) {
ret = pcie->perst_set(pcie, 1);
if (pcie->cfg->soc_base == BCM2711) {
ret = pcie->cfg->perst_set(pcie, 1);
if (ret) {
pcie->bridge_sw_init_set(pcie, 0);
pcie->cfg->bridge_sw_init_set(pcie, 0);
return ret;
}
}
@ -1052,7 +1094,7 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
usleep_range(100, 200);
/* Take the bridge out of reset */
ret = pcie->bridge_sw_init_set(pcie, 0);
ret = pcie->cfg->bridge_sw_init_set(pcie, 0);
if (ret)
return ret;
@ -1072,9 +1114,9 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
*/
if (is_bmips(pcie))
burst = 0x1; /* 256 bytes */
else if (pcie->soc_base == BCM2711)
else if (pcie->cfg->soc_base == BCM2711)
burst = 0x0; /* 128 bytes */
else if (pcie->soc_base == BCM7278)
else if (pcie->cfg->soc_base == BCM7278)
burst = 0x3; /* 512 bytes */
else
burst = 0x2; /* 512 bytes */
@ -1180,10 +1222,16 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
/* PCIe->SCB endian mode for inbound window */
tmp = readl(base + PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1);
u32p_replace_bits(&tmp, PCIE_RC_CFG_VENDOR_SPCIFIC_REG1_LITTLE_ENDIAN,
u32p_replace_bits(&tmp, PCIE_RC_CFG_VENDOR_SPECIFIC_REG1_LITTLE_ENDIAN,
PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1_ENDIAN_MODE_BAR2_MASK);
writel(tmp, base + PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1);
if (pcie->cfg->post_setup) {
ret = pcie->cfg->post_setup(pcie);
if (ret < 0)
return ret;
}
return 0;
}
@ -1199,7 +1247,7 @@ static void brcm_extend_rbus_timeout(struct brcm_pcie *pcie)
u32 timeout_us = 4000000; /* 4 seconds, our setting for L1SS */
/* 7712 does not have this (RGR1) timer */
if (pcie->soc_base == BCM7712)
if (pcie->cfg->soc_base == BCM7712)
return;
/* Each unit in the timeout register is 1/216,000,000 seconds */
@ -1276,8 +1324,12 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
bool ssc_good = false;
int ret, i;
/* Limit the generation if specified */
if (pcie->gen)
brcm_pcie_set_gen(pcie, pcie->gen);
/* Unassert the fundamental reset */
ret = pcie->perst_set(pcie, 0);
ret = pcie->cfg->perst_set(pcie, 0);
if (ret)
return ret;
@ -1302,9 +1354,6 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
brcm_config_clkreq(pcie);
if (pcie->gen)
brcm_pcie_set_gen(pcie, pcie->gen);
if (pcie->ssc) {
ret = brcm_pcie_set_ssc(pcie);
if (ret == 0)
@ -1367,7 +1416,8 @@ static int brcm_pcie_add_bus(struct pci_bus *bus)
ret = regulator_bulk_get(dev, sr->num_supplies, sr->supplies);
if (ret) {
dev_info(dev, "No regulators for downstream device\n");
dev_info(dev, "Did not get regulators, err=%d\n", ret);
pcie->sr = NULL;
goto no_regulators;
}
@ -1390,7 +1440,7 @@ static void brcm_pcie_remove_bus(struct pci_bus *bus)
struct subdev_regulators *sr = pcie->sr;
struct device *dev = &bus->dev;
if (!sr)
if (!sr || !bus->parent || !pci_is_root_bus(bus->parent))
return;
if (regulator_bulk_disable(sr->num_supplies, sr->supplies))
@ -1463,12 +1513,12 @@ static int brcm_phy_cntl(struct brcm_pcie *pcie, const int start)
static inline int brcm_phy_start(struct brcm_pcie *pcie)
{
return pcie->has_phy ? brcm_phy_cntl(pcie, 1) : 0;
return pcie->cfg->has_phy ? brcm_phy_cntl(pcie, 1) : 0;
}
static inline int brcm_phy_stop(struct brcm_pcie *pcie)
{
return pcie->has_phy ? brcm_phy_cntl(pcie, 0) : 0;
return pcie->cfg->has_phy ? brcm_phy_cntl(pcie, 0) : 0;
}
static int brcm_pcie_turn_off(struct brcm_pcie *pcie)
@ -1479,7 +1529,7 @@ static int brcm_pcie_turn_off(struct brcm_pcie *pcie)
if (brcm_pcie_link_up(pcie))
brcm_pcie_enter_l23(pcie);
/* Assert fundamental reset */
ret = pcie->perst_set(pcie, 1);
ret = pcie->cfg->perst_set(pcie, 1);
if (ret)
return ret;
@ -1493,8 +1543,9 @@ static int brcm_pcie_turn_off(struct brcm_pcie *pcie)
u32p_replace_bits(&tmp, 1, PCIE_MISC_HARD_PCIE_HARD_DEBUG_SERDES_IDDQ_MASK);
writel(tmp, base + HARD_DEBUG(pcie));
/* Shutdown PCIe bridge */
ret = pcie->bridge_sw_init_set(pcie, 1);
if (!(pcie->cfg->quirks & CFG_QUIRK_AVOID_BRIDGE_SHUTDOWN))
/* Shutdown PCIe bridge */
ret = pcie->cfg->bridge_sw_init_set(pcie, 1);
return ret;
}
@ -1582,7 +1633,7 @@ static int brcm_pcie_resume_noirq(struct device *dev)
goto err_reset;
/* Take bridge out of reset so we can access the SERDES reg */
pcie->bridge_sw_init_set(pcie, 0);
pcie->cfg->bridge_sw_init_set(pcie, 0);
/* SERDES_IDDQ = 0 */
tmp = readl(base + HARD_DEBUG(pcie));
@ -1660,7 +1711,7 @@ static void brcm_pcie_remove(struct platform_device *pdev)
static const int pcie_offsets[] = {
[RGR1_SW_INIT_1] = 0x9210,
[EXT_CFG_INDEX] = 0x9000,
[EXT_CFG_DATA] = 0x9004,
[EXT_CFG_DATA] = 0x8000,
[PCIE_HARD_DEBUG] = 0x4204,
[PCIE_INTR2_CPU_BASE] = 0x4300,
};
@ -1668,7 +1719,7 @@ static const int pcie_offsets[] = {
static const int pcie_offsets_bcm7278[] = {
[RGR1_SW_INIT_1] = 0xc010,
[EXT_CFG_INDEX] = 0x9000,
[EXT_CFG_DATA] = 0x9004,
[EXT_CFG_DATA] = 0x8000,
[PCIE_HARD_DEBUG] = 0x4204,
[PCIE_INTR2_CPU_BASE] = 0x4300,
};
@ -1682,8 +1733,9 @@ static const int pcie_offsets_bcm7425[] = {
};
static const int pcie_offsets_bcm7712[] = {
[RGR1_SW_INIT_1] = 0x9210,
[EXT_CFG_INDEX] = 0x9000,
[EXT_CFG_DATA] = 0x9004,
[EXT_CFG_DATA] = 0x8000,
[PCIE_HARD_DEBUG] = 0x4304,
[PCIE_INTR2_CPU_BASE] = 0x4400,
};
@ -1704,6 +1756,16 @@ static const struct pcie_cfg_data bcm2711_cfg = {
.num_inbound_wins = 3,
};
static const struct pcie_cfg_data bcm2712_cfg = {
.offsets = pcie_offsets_bcm7712,
.soc_base = BCM7712,
.perst_set = brcm_pcie_perst_set_7278,
.bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic,
.post_setup = brcm_pcie_post_setup_bcm2712,
.quirks = CFG_QUIRK_AVOID_BRIDGE_SHUTDOWN,
.num_inbound_wins = 10,
};
static const struct pcie_cfg_data bcm4908_cfg = {
.offsets = pcie_offsets,
.soc_base = BCM4908,
@ -1755,6 +1817,7 @@ static const struct pcie_cfg_data bcm7712_cfg = {
static const struct of_device_id brcm_pcie_match[] = {
{ .compatible = "brcm,bcm2711-pcie", .data = &bcm2711_cfg },
{ .compatible = "brcm,bcm2712-pcie", .data = &bcm2712_cfg },
{ .compatible = "brcm,bcm4908-pcie", .data = &bcm4908_cfg },
{ .compatible = "brcm,bcm7211-pcie", .data = &generic_cfg },
{ .compatible = "brcm,bcm7216-pcie", .data = &bcm7216_cfg },
@ -1784,7 +1847,7 @@ static struct pci_ops brcm7425_pcie_ops = {
static int brcm_pcie_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node, *msi_np;
struct device_node *np = pdev->dev.of_node;
struct pci_host_bridge *bridge;
const struct pcie_cfg_data *data;
struct brcm_pcie *pcie;
@ -1803,12 +1866,7 @@ static int brcm_pcie_probe(struct platform_device *pdev)
pcie = pci_host_bridge_priv(bridge);
pcie->dev = &pdev->dev;
pcie->np = np;
pcie->reg_offsets = data->offsets;
pcie->soc_base = data->soc_base;
pcie->perst_set = data->perst_set;
pcie->bridge_sw_init_set = data->bridge_sw_init_set;
pcie->has_phy = data->has_phy;
pcie->num_inbound_wins = data->num_inbound_wins;
pcie->cfg = data;
pcie->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(pcie->base))
@ -1843,7 +1901,7 @@ static int brcm_pcie_probe(struct platform_device *pdev)
if (ret)
return dev_err_probe(&pdev->dev, ret, "could not enable clock\n");
pcie->bridge_sw_init_set(pcie, 0);
pcie->cfg->bridge_sw_init_set(pcie, 0);
if (pcie->swinit_reset) {
ret = reset_control_assert(pcie->swinit_reset);
@ -1882,22 +1940,29 @@ static int brcm_pcie_probe(struct platform_device *pdev)
goto fail;
pcie->hw_rev = readl(pcie->base + PCIE_MISC_REVISION);
if (pcie->soc_base == BCM4908 && pcie->hw_rev >= BRCM_PCIE_HW_REV_3_20) {
if (pcie->cfg->soc_base == BCM4908 &&
pcie->hw_rev >= BRCM_PCIE_HW_REV_3_20) {
dev_err(pcie->dev, "hardware revision with unsupported PERST# setup\n");
ret = -ENODEV;
goto fail;
}
msi_np = of_parse_phandle(pcie->np, "msi-parent", 0);
if (pci_msi_enabled() && msi_np == pcie->np) {
ret = brcm_pcie_enable_msi(pcie);
if (pci_msi_enabled()) {
struct device_node *msi_np = of_parse_phandle(pcie->np, "msi-parent", 0);
if (msi_np == pcie->np)
ret = brcm_pcie_enable_msi(pcie);
of_node_put(msi_np);
if (ret) {
dev_err(pcie->dev, "probe of internal MSI failed");
goto fail;
}
}
bridge->ops = pcie->soc_base == BCM7425 ? &brcm7425_pcie_ops : &brcm_pcie_ops;
bridge->ops = pcie->cfg->soc_base == BCM7425 ?
&brcm7425_pcie_ops : &brcm_pcie_ops;
bridge->sysdata = pcie;
platform_set_drvdata(pdev, pcie);
@ -1940,3 +2005,4 @@ module_platform_driver(brcm_pcie_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Broadcom STB PCIe RC driver");
MODULE_AUTHOR("Broadcom");
MODULE_SOFTDEP("pre: irq_bcm2712_mip");


@ -15,6 +15,7 @@
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of_device.h>
@ -24,6 +25,7 @@
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include "../pci.h"
@ -352,7 +354,8 @@ static int mtk_pcie_set_trans_table(struct mtk_gen3_pcie *pcie,
dev_dbg(pcie->dev, "set %s trans window[%d]: cpu_addr = %#llx, pci_addr = %#llx, size = %#llx\n",
range_type, *num, (unsigned long long)cpu_addr,
(unsigned long long)pci_addr, (unsigned long long)table_size);
(unsigned long long)pci_addr,
(unsigned long long)table_size);
cpu_addr += table_size;
pci_addr += table_size;
@ -887,7 +890,8 @@ static int mtk_pcie_parse_port(struct mtk_gen3_pcie *pcie)
for (i = 0; i < num_resets; i++)
pcie->phy_resets[i].id = pcie->soc->phy_resets.id[i];
ret = devm_reset_control_bulk_get_optional_shared(dev, num_resets, pcie->phy_resets);
ret = devm_reset_control_bulk_get_optional_shared(dev, num_resets,
pcie->phy_resets);
if (ret) {
dev_err(dev, "failed to get PHY bulk reset\n");
return ret;
@ -917,22 +921,27 @@ static int mtk_pcie_parse_port(struct mtk_gen3_pcie *pcie)
return pcie->num_clks;
}
ret = of_property_read_u32(dev->of_node, "num-lanes", &num_lanes);
if (ret == 0) {
if (num_lanes == 0 || num_lanes > 16 || (num_lanes != 1 && num_lanes % 2))
ret = of_property_read_u32(dev->of_node, "num-lanes", &num_lanes);
if (ret == 0) {
if (num_lanes == 0 || num_lanes > 16 ||
(num_lanes != 1 && num_lanes % 2))
dev_warn(dev, "invalid num-lanes, using controller defaults\n");
else
else
pcie->num_lanes = num_lanes;
}
}
return 0;
}
static int mtk_pcie_en7581_power_up(struct mtk_gen3_pcie *pcie)
{
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
struct device *dev = pcie->dev;
struct resource_entry *entry;
struct regmap *pbus_regmap;
u32 val, args[2], size;
resource_size_t addr;
int err;
u32 val;
/*
* The controller may have been left out of reset by the bootloader
@ -940,11 +949,30 @@ static int mtk_pcie_en7581_power_up(struct mtk_gen3_pcie *pcie)
*/
reset_control_bulk_assert(pcie->soc->phy_resets.num_resets,
pcie->phy_resets);
reset_control_assert(pcie->mac_reset);
/* Wait for the time needed for the reset line assertion to complete. */
msleep(PCIE_EN7581_RESET_TIME_MS);
/*
* Configure PBus base address and base address mask to allow the
* hw to detect whether a given address is accessible to the PCIe controller.
*/
pbus_regmap = syscon_regmap_lookup_by_phandle_args(dev->of_node,
"mediatek,pbus-csr",
ARRAY_SIZE(args),
args);
if (IS_ERR(pbus_regmap))
return PTR_ERR(pbus_regmap);
entry = resource_list_first_type(&host->windows, IORESOURCE_MEM);
if (!entry)
return -ENODEV;
addr = entry->res->start - entry->offset;
regmap_write(pbus_regmap, args[0], lower_32_bits(addr));
size = lower_32_bits(resource_size(entry->res));
regmap_write(pbus_regmap, args[1], GENMASK(31, __fls(size)));
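A sketch of the mask the GENMASK(31, __fls(size)) expression above produces, assuming a power-of-two window: address bits above the window size take part in the match, everything below is an offset inside it.
#include <stdint.h>
#include <stdio.h>
int main(void)
{
	uint32_t size = 0x10000000;			/* say, a 256 MB window */
	unsigned int fls = 31 - __builtin_clz(size);	/* __fls(): highest set bit */
	uint32_t mask = ~(uint32_t)0 << fls;		/* GENMASK(31, fls) */

	printf("size %#x -> mask %#x\n", size, mask);	/* mask 0xf0000000 */
	return 0;
}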
/*
* Unlike the other MediaTek Gen3 controllers, the Airoha EN7581
* requires PHY initialization and power-on before PHY reset deassert.
@ -961,7 +989,8 @@ static int mtk_pcie_en7581_power_up(struct mtk_gen3_pcie *pcie)
goto err_phy_on;
}
err = reset_control_bulk_deassert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);
err = reset_control_bulk_deassert(pcie->soc->phy_resets.num_resets,
pcie->phy_resets);
if (err) {
dev_err(dev, "failed to deassert PHYs\n");
goto err_phy_deassert;
@ -1006,7 +1035,8 @@ static int mtk_pcie_en7581_power_up(struct mtk_gen3_pcie *pcie)
err_clk_prepare_enable:
pm_runtime_put_sync(dev);
pm_runtime_disable(dev);
reset_control_bulk_assert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);
reset_control_bulk_assert(pcie->soc->phy_resets.num_resets,
pcie->phy_resets);
err_phy_deassert:
phy_power_off(pcie->phy);
err_phy_on:
@ -1030,7 +1060,8 @@ static int mtk_pcie_power_up(struct mtk_gen3_pcie *pcie)
usleep_range(PCIE_MTK_RESET_TIME_US, 2 * PCIE_MTK_RESET_TIME_US);
/* PHY power on and enable pipe clock */
err = reset_control_bulk_deassert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);
err = reset_control_bulk_deassert(pcie->soc->phy_resets.num_resets,
pcie->phy_resets);
if (err) {
dev_err(dev, "failed to deassert PHYs\n");
return err;
@ -1070,7 +1101,8 @@ err_clk_init:
err_phy_on:
phy_exit(pcie->phy);
err_phy_init:
reset_control_bulk_assert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);
reset_control_bulk_assert(pcie->soc->phy_resets.num_resets,
pcie->phy_resets);
return err;
}
@ -1085,7 +1117,8 @@ static void mtk_pcie_power_down(struct mtk_gen3_pcie *pcie)
phy_power_off(pcie->phy);
phy_exit(pcie->phy);
reset_control_bulk_assert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);
reset_control_bulk_assert(pcie->soc->phy_resets.num_resets,
pcie->phy_resets);
}
static int mtk_pcie_get_controller_max_link_speed(struct mtk_gen3_pcie *pcie)
@ -1112,7 +1145,8 @@ static int mtk_pcie_setup(struct mtk_gen3_pcie *pcie)
* Deassert the line in order to avoid unbalance in deassert_count
* counter since the bulk is shared.
*/
reset_control_bulk_deassert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);
reset_control_bulk_deassert(pcie->soc->phy_resets.num_resets,
pcie->phy_resets);
/* Don't touch the hardware registers before power up */
err = pcie->soc->power_up(pcie);


@ -1041,24 +1041,22 @@ err_free_ck:
static int mtk_pcie_setup(struct mtk_pcie *pcie)
{
struct device *dev = pcie->dev;
struct device_node *node = dev->of_node, *child;
struct device_node *node = dev->of_node;
struct mtk_pcie_port *port, *tmp;
int err, slot;
slot = of_get_pci_domain_nr(dev->of_node);
if (slot < 0) {
for_each_available_child_of_node(node, child) {
for_each_available_child_of_node_scoped(node, child) {
err = of_pci_get_devfn(child);
if (err < 0) {
dev_err(dev, "failed to get devfn: %d\n", err);
goto error_put_node;
}
if (err < 0)
return dev_err_probe(dev, err, "failed to get devfn\n");
slot = PCI_SLOT(err);
err = mtk_pcie_parse_port(pcie, child, slot);
if (err)
goto error_put_node;
return err;
}
} else {
err = mtk_pcie_parse_port(pcie, node, slot);
@ -1079,9 +1077,6 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
mtk_pcie_subsys_powerdown(pcie);
return 0;
error_put_node:
of_node_put(child);
return err;
}
static int mtk_pcie_probe(struct platform_device *pdev)


@ -258,30 +258,25 @@ static int mt7621_pcie_parse_dt(struct mt7621_pcie *pcie)
{
struct device *dev = pcie->dev;
struct platform_device *pdev = to_platform_device(dev);
struct device_node *node = dev->of_node, *child;
struct device_node *node = dev->of_node;
int err;
pcie->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(pcie->base))
return PTR_ERR(pcie->base);
for_each_available_child_of_node(node, child) {
for_each_available_child_of_node_scoped(node, child) {
int slot;
err = of_pci_get_devfn(child);
if (err < 0) {
of_node_put(child);
dev_err(dev, "failed to parse devfn: %d\n", err);
return err;
}
if (err < 0)
return dev_err_probe(dev, err, "failed to parse devfn\n");
slot = PCI_SLOT(err);
err = mt7621_pcie_parse_port(pcie, child, slot);
if (err) {
of_node_put(child);
if (err)
return err;
}
}
return 0;
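The _scoped iterators used above rely on the compiler's cleanup attribute, so the node reference is dropped on every exit path and the explicit of_node_put() calls disappear. A userspace sketch of the mechanism, with put_node() as a hypothetical stand-in for of_node_put():
#include <stdio.h>
#include <stdlib.h>
static void put_node(char **p)
{
	free(*p);			/* stand-in for of_node_put() */
	puts("reference dropped");
}
int main(void)
{
	for (int i = 0; i < 3; i++) {
		char *child __attribute__((cleanup(put_node))) = malloc(16);

		if (!child || i == 1)
			continue;	/* put_node() still runs; no manual put */
		child[0] = 'x';		/* "use" the node */
	}
	return 0;
}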


@ -178,8 +178,8 @@ static int rcar_pcie_config_access(struct rcar_pcie_host *host,
* space, it's generally only accessible when in endpoint mode.
* When in root complex mode, the controller is unable to target
* itself with either type 0 or type 1 accesses, and indeed, any
* controller initiated target transfer to its own config space
* result in a completer abort.
* controller-initiated target transfer to its own config space
* results in a completer abort.
*
* Each channel effectively only supports a single device, but as
* the same channel <-> device access works for any PCI_SLOT()
@ -775,7 +775,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
if (err)
return err;
/* Two irqs are for MSI, but they are also used for non-MSI irqs */
/* Two IRQs are for MSI, but they are also used for non-MSI IRQs */
err = devm_request_irq(dev, msi->irq1, rcar_pcie_msi_irq,
IRQF_SHARED | IRQF_NO_THREAD,
rcar_msi_bottom_chip.name, host);
@ -792,7 +792,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
goto err;
}
/* disable all MSIs */
/* Disable all MSIs */
rcar_pci_write_reg(pcie, 0, PCIEMSIIER);
/*
@ -892,6 +892,7 @@ static int rcar_pcie_inbound_ranges(struct rcar_pcie *pcie,
dev_err(pcie->dev, "Failed to map inbound regions!\n");
return -EINVAL;
}
/*
* If the size of the range is larger than the alignment of
* the start address, we have to use multiple entries to
@ -903,6 +904,7 @@ static int rcar_pcie_inbound_ranges(struct rcar_pcie *pcie,
size = min(size, alignment);
}
/* Hardware supports max 4GiB inbound region */
size = min(size, 1ULL << 32);


@ -367,7 +367,7 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
}
}
rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID,
rockchip_pcie_write(rockchip, PCI_VENDOR_ID_ROCKCHIP,
PCIE_CORE_CONFIG_VENDOR);
rockchip_pcie_write(rockchip,
PCI_CLASS_BRIDGE_PCI_NORMAL << 8,


@ -200,7 +200,6 @@
#define AXI_WRAPPER_NOR_MSG 0xc
#define PCIE_RC_SEND_PME_OFF 0x11960
#define ROCKCHIP_VENDOR_ID 0x1d87
#define PCIE_LINK_IS_L2(x) \
(((x) & PCIE_CLIENT_DEBUG_LTSSM_MASK) == PCIE_CLIENT_DEBUG_LTSSM_L2)
#define PCIE_LINK_TRAINING_DONE(x) \


@ -84,6 +84,7 @@ enum xilinx_cpm_version {
CPM,
CPM5,
CPM5_HOST1,
CPM5NC_HOST,
};
/**
@ -478,6 +479,9 @@ static void xilinx_cpm_pcie_init_port(struct xilinx_cpm_pcie *port)
{
const struct xilinx_cpm_variant *variant = port->variant;
if (variant->version == CPM5NC_HOST)
return;
if (cpm_pcie_link_up(port))
dev_info(port->dev, "PCIe Link is UP\n");
else
@ -538,7 +542,8 @@ static int xilinx_cpm_pcie_parse_dt(struct xilinx_cpm_pcie *port,
if (IS_ERR(port->cfg))
return PTR_ERR(port->cfg);
if (port->variant->version == CPM5) {
if (port->variant->version == CPM5 ||
port->variant->version == CPM5_HOST1) {
port->reg_base = devm_platform_ioremap_resource_byname(pdev,
"cpm_csr");
if (IS_ERR(port->reg_base))
@ -578,28 +583,34 @@ static int xilinx_cpm_pcie_probe(struct platform_device *pdev)
port->dev = dev;
err = xilinx_cpm_pcie_init_irq_domain(port);
if (err)
return err;
port->variant = of_device_get_match_data(dev);
if (port->variant->version != CPM5NC_HOST) {
err = xilinx_cpm_pcie_init_irq_domain(port);
if (err)
return err;
}
bus = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
if (!bus)
return -ENODEV;
port->variant = of_device_get_match_data(dev);
if (!bus) {
err = -ENODEV;
goto err_free_irq_domains;
}
err = xilinx_cpm_pcie_parse_dt(port, bus->res);
if (err) {
dev_err(dev, "Parsing DT failed\n");
goto err_parse_dt;
goto err_free_irq_domains;
}
xilinx_cpm_pcie_init_port(port);
err = xilinx_cpm_setup_irq(port);
if (err) {
dev_err(dev, "Failed to set up interrupts\n");
goto err_setup_irq;
if (port->variant->version != CPM5NC_HOST) {
err = xilinx_cpm_setup_irq(port);
if (err) {
dev_err(dev, "Failed to set up interrupts\n");
goto err_setup_irq;
}
}
bridge->sysdata = port->cfg;
@ -612,11 +623,13 @@ static int xilinx_cpm_pcie_probe(struct platform_device *pdev)
return 0;
err_host_bridge:
xilinx_cpm_free_interrupts(port);
if (port->variant->version != CPM5NC_HOST)
xilinx_cpm_free_interrupts(port);
err_setup_irq:
pci_ecam_free(port->cfg);
err_parse_dt:
xilinx_cpm_free_irq_domains(port);
err_free_irq_domains:
if (port->variant->version != CPM5NC_HOST)
xilinx_cpm_free_irq_domains(port);
return err;
}
@ -639,6 +652,10 @@ static const struct xilinx_cpm_variant cpm5_host1 = {
.ir_enable = XILINX_CPM_PCIE1_IR_ENABLE,
};
static const struct xilinx_cpm_variant cpm5n_host = {
.version = CPM5NC_HOST,
};
static const struct of_device_id xilinx_cpm_pcie_of_match[] = {
{
.compatible = "xlnx,versal-cpm-host-1.00",
@ -652,6 +669,10 @@ static const struct of_device_id xilinx_cpm_pcie_of_match[] = {
.compatible = "xlnx,versal-cpm5-host1",
.data = &cpm5_host1,
},
{
.compatible = "xlnx,versal-cpm5nc-host",
.data = &cpm5n_host,
},
{}
};


@ -127,7 +127,7 @@ struct vmd_irq_list {
struct vmd_dev {
struct pci_dev *dev;
spinlock_t cfg_lock;
raw_spinlock_t cfg_lock;
void __iomem *cfgbar;
int msix_count;
@ -393,7 +393,7 @@ static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
if (!addr)
return -EFAULT;
spin_lock_irqsave(&vmd->cfg_lock, flags);
raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
switch (len) {
case 1:
*value = readb(addr);
@ -408,7 +408,7 @@ static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
ret = -EINVAL;
break;
}
spin_unlock_irqrestore(&vmd->cfg_lock, flags);
raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
return ret;
}
@ -428,7 +428,7 @@ static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg,
if (!addr)
return -EFAULT;
spin_lock_irqsave(&vmd->cfg_lock, flags);
raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
switch (len) {
case 1:
writeb(value, addr);
@ -446,7 +446,7 @@ static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg,
ret = -EINVAL;
break;
}
spin_unlock_irqrestore(&vmd->cfg_lock, flags);
raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
return ret;
}
@ -1029,7 +1029,7 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
vmd->first_vec = 1;
spin_lock_init(&vmd->cfg_lock);
raw_spin_lock_init(&vmd->cfg_lock);
pci_set_drvdata(dev, vmd);
err = vmd_enable_domain(vmd, features);
if (err)


@ -40,7 +40,7 @@
* Legacy struct storing the addresses of whole mapped BARs.
*/
struct pcim_iomap_devres {
void __iomem *table[PCI_STD_NUM_BARS];
void __iomem *table[PCI_NUM_RESOURCES];
};
/* Used to restore the old INTx state on driver detach. */
@ -577,7 +577,7 @@ static int pcim_add_mapping_to_legacy_table(struct pci_dev *pdev,
{
void __iomem **legacy_iomap_table;
if (bar >= PCI_STD_NUM_BARS)
if (!pci_bar_index_is_valid(bar))
return -EINVAL;
legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev);
@ -622,7 +622,7 @@ static void pcim_remove_bar_from_legacy_table(struct pci_dev *pdev, int bar)
{
void __iomem **legacy_iomap_table;
if (bar >= PCI_STD_NUM_BARS)
if (!pci_bar_index_is_valid(bar))
return;
legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev);
@ -655,6 +655,9 @@ void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen)
void __iomem *mapping;
struct pcim_addr_devres *res;
if (!pci_bar_index_is_valid(bar))
return NULL;
res = pcim_addr_devres_alloc(pdev);
if (!res)
return NULL;
@ -722,6 +725,9 @@ void __iomem *pcim_iomap_region(struct pci_dev *pdev, int bar,
int ret;
struct pcim_addr_devres *res;
if (!pci_bar_index_is_valid(bar))
return IOMEM_ERR_PTR(-EINVAL);
res = pcim_addr_devres_alloc(pdev);
if (!res)
return IOMEM_ERR_PTR(-ENOMEM);
@ -823,6 +829,9 @@ static int _pcim_request_region(struct pci_dev *pdev, int bar, const char *name,
int ret;
struct pcim_addr_devres *res;
if (!pci_bar_index_is_valid(bar))
return -EINVAL;
res = pcim_addr_devres_alloc(pdev);
if (!res)
return -ENOMEM;
@ -991,6 +1000,9 @@ void __iomem *pcim_iomap_range(struct pci_dev *pdev, int bar,
void __iomem *mapping;
struct pcim_addr_devres *res;
if (!pci_bar_index_is_valid(bar))
return IOMEM_ERR_PTR(-EINVAL);
res = pcim_addr_devres_alloc(pdev);
if (!res)
return IOMEM_ERR_PTR(-ENOMEM);
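With the devres table covering all resources instead of only the six standard BARs, the bound check moves into a helper. A sketch assuming PCI_NUM_RESOURCES spans the 6 BARs, the ROM and the bridge windows (17 entries on common configs):
#include <stdbool.h>
#include <stdio.h>
/* Assumed bound: 6 standard BARs + ROM + bridge windows = 17 entries. */
#define NUM_RESOURCES_DEMO 17
static bool bar_index_is_valid_demo(int bar)
{
	return bar >= 0 && bar < NUM_RESOURCES_DEMO;
}
int main(void)
{
	printf("%d %d %d\n", bar_index_is_valid_demo(-1),
	       bar_index_is_valid_demo(6),	/* ROM: valid now, rejected before */
	       bar_index_is_valid_demo(17));
	return 0;
}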


@ -14,15 +14,17 @@
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/jiffies.h>
#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/pci-doe.h>
#include <linux/sysfs.h>
#include <linux/workqueue.h>
#include "pci.h"
#define PCI_DOE_PROTOCOL_DISCOVERY 0
#define PCI_DOE_FEATURE_DISCOVERY 0
/* Timeout of 1 second from 6.30.2 Operation, PCI Spec r6.0 */
#define PCI_DOE_TIMEOUT HZ
@ -43,22 +45,27 @@
*
* @pdev: PCI device this mailbox belongs to
* @cap_offset: Capability offset
* @prots: Array of protocols supported (encoded as long values)
* @feats: Array of features supported (encoded as long values)
* @wq: Wait queue for work item
* @work_queue: Queue of pci_doe_work items
* @flags: Bit array of PCI_DOE_FLAG_* flags
* @sysfs_attrs: Array of sysfs device attributes
*/
struct pci_doe_mb {
struct pci_dev *pdev;
u16 cap_offset;
struct xarray prots;
struct xarray feats;
wait_queue_head_t wq;
struct workqueue_struct *work_queue;
unsigned long flags;
#ifdef CONFIG_SYSFS
struct device_attribute *sysfs_attrs;
#endif
};
struct pci_doe_protocol {
struct pci_doe_feature {
u16 vid;
u8 type;
};
@ -66,7 +73,7 @@ struct pci_doe_protocol {
/**
* struct pci_doe_task - represents a single query/response
*
* @prot: DOE Protocol
* @feat: DOE Feature
* @request_pl: The request payload
* @request_pl_sz: Size of the request payload (bytes)
* @response_pl: The response payload
@ -78,7 +85,7 @@ struct pci_doe_protocol {
* @doe_mb: Used internally by the mailbox
*/
struct pci_doe_task {
struct pci_doe_protocol prot;
struct pci_doe_feature feat;
const __le32 *request_pl;
size_t request_pl_sz;
__le32 *response_pl;
@ -92,6 +99,152 @@ struct pci_doe_task {
struct pci_doe_mb *doe_mb;
};
#ifdef CONFIG_SYSFS
static ssize_t doe_discovery_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return sysfs_emit(buf, "0001:00\n");
}
static DEVICE_ATTR_RO(doe_discovery);
static struct attribute *pci_doe_sysfs_feature_attrs[] = {
&dev_attr_doe_discovery.attr,
NULL
};
static bool pci_doe_features_sysfs_group_visible(struct kobject *kobj)
{
struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
return !xa_empty(&pdev->doe_mbs);
}
DEFINE_SIMPLE_SYSFS_GROUP_VISIBLE(pci_doe_features_sysfs)
const struct attribute_group pci_doe_sysfs_group = {
.name = "doe_features",
.attrs = pci_doe_sysfs_feature_attrs,
.is_visible = SYSFS_GROUP_VISIBLE(pci_doe_features_sysfs),
};
static ssize_t pci_doe_sysfs_feature_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return sysfs_emit(buf, "%s\n", attr->attr.name);
}
static void pci_doe_sysfs_feature_remove(struct pci_dev *pdev,
struct pci_doe_mb *doe_mb)
{
struct device_attribute *attrs = doe_mb->sysfs_attrs;
struct device *dev = &pdev->dev;
unsigned long i;
void *entry;
if (!attrs)
return;
doe_mb->sysfs_attrs = NULL;
xa_for_each(&doe_mb->feats, i, entry) {
if (attrs[i].show)
sysfs_remove_file_from_group(&dev->kobj, &attrs[i].attr,
pci_doe_sysfs_group.name);
kfree(attrs[i].attr.name);
}
kfree(attrs);
}
static int pci_doe_sysfs_feature_populate(struct pci_dev *pdev,
struct pci_doe_mb *doe_mb)
{
struct device *dev = &pdev->dev;
struct device_attribute *attrs;
unsigned long num_features = 0;
unsigned long vid, type;
unsigned long i;
void *entry;
int ret;
xa_for_each(&doe_mb->feats, i, entry)
num_features++;
attrs = kcalloc(num_features, sizeof(*attrs), GFP_KERNEL);
if (!attrs) {
pci_warn(pdev, "Failed allocating the device_attribute array\n");
return -ENOMEM;
}
doe_mb->sysfs_attrs = attrs;
xa_for_each(&doe_mb->feats, i, entry) {
sysfs_attr_init(&attrs[i].attr);
vid = xa_to_value(entry) >> 8;
type = xa_to_value(entry) & 0xFF;
if (vid == PCI_VENDOR_ID_PCI_SIG &&
type == PCI_DOE_FEATURE_DISCOVERY) {
/*
* DOE Discovery, manually displayed by
* `dev_attr_doe_discovery`
*/
continue;
}
attrs[i].attr.name = kasprintf(GFP_KERNEL,
"%04lx:%02lx", vid, type);
if (!attrs[i].attr.name) {
ret = -ENOMEM;
pci_warn(pdev, "Failed allocating the attribute name\n");
goto fail;
}
attrs[i].attr.mode = 0444;
attrs[i].show = pci_doe_sysfs_feature_show;
ret = sysfs_add_file_to_group(&dev->kobj, &attrs[i].attr,
pci_doe_sysfs_group.name);
if (ret) {
attrs[i].show = NULL;
if (ret != -EEXIST) {
pci_warn(pdev, "Failed adding %s to sysfs group\n",
attrs[i].attr.name);
goto fail;
} else
kfree(attrs[i].attr.name);
}
}
return 0;
fail:
pci_doe_sysfs_feature_remove(pdev, doe_mb);
return ret;
}
void pci_doe_sysfs_teardown(struct pci_dev *pdev)
{
struct pci_doe_mb *doe_mb;
unsigned long index;
xa_for_each(&pdev->doe_mbs, index, doe_mb)
pci_doe_sysfs_feature_remove(pdev, doe_mb);
}
void pci_doe_sysfs_init(struct pci_dev *pdev)
{
struct pci_doe_mb *doe_mb;
unsigned long index;
int ret;
xa_for_each(&pdev->doe_mbs, index, doe_mb) {
ret = pci_doe_sysfs_feature_populate(pdev, doe_mb);
if (ret)
return;
}
}
#endif
static int pci_doe_wait(struct pci_doe_mb *doe_mb, unsigned long timeout)
{
if (wait_event_timeout(doe_mb->wq,
@ -183,8 +336,8 @@ static int pci_doe_send_req(struct pci_doe_mb *doe_mb,
length = 0;
/* Write DOE Header */
val = FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_VID, task->prot.vid) |
FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, task->prot.type);
val = FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_VID, task->feat.vid) |
FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, task->feat.type);
pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, val);
pci_write_config_dword(pdev, offset + PCI_DOE_WRITE,
FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH,
@ -229,12 +382,12 @@ static int pci_doe_recv_resp(struct pci_doe_mb *doe_mb, struct pci_doe_task *tas
int i = 0;
u32 val;
/* Read the first dword to get the protocol */
/* Read the first dword to get the feature */
pci_read_config_dword(pdev, offset + PCI_DOE_READ, &val);
if ((FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_VID, val) != task->prot.vid) ||
(FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, val) != task->prot.type)) {
dev_err_ratelimited(&pdev->dev, "[%x] expected [VID, Protocol] = [%04x, %02x], got [%04x, %02x]\n",
doe_mb->cap_offset, task->prot.vid, task->prot.type,
if ((FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_VID, val) != task->feat.vid) ||
(FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, val) != task->feat.type)) {
dev_err_ratelimited(&pdev->dev, "[%x] expected [VID, Feature] = [%04x, %02x], got [%04x, %02x]\n",
doe_mb->cap_offset, task->feat.vid, task->feat.type,
FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_VID, val),
FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, val));
return -EIO;
@ -396,7 +549,7 @@ static void pci_doe_task_complete(struct pci_doe_task *task)
}
static int pci_doe_discovery(struct pci_doe_mb *doe_mb, u8 capver, u8 *index, u16 *vid,
u8 *protocol)
u8 *feature)
{
u32 request_pl = FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX,
*index) |
@ -407,7 +560,7 @@ static int pci_doe_discovery(struct pci_doe_mb *doe_mb, u8 capver, u8 *index, u1
u32 response_pl;
int rc;
rc = pci_doe(doe_mb, PCI_VENDOR_ID_PCI_SIG, PCI_DOE_PROTOCOL_DISCOVERY,
rc = pci_doe(doe_mb, PCI_VENDOR_ID_PCI_SIG, PCI_DOE_FEATURE_DISCOVERY,
&request_pl_le, sizeof(request_pl_le),
&response_pl_le, sizeof(response_pl_le));
if (rc < 0)
@ -418,7 +571,7 @@ static int pci_doe_discovery(struct pci_doe_mb *doe_mb, u8 capver, u8 *index, u1
response_pl = le32_to_cpu(response_pl_le);
*vid = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_VID, response_pl);
*protocol = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_PROTOCOL,
*feature = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_TYPE,
response_pl);
*index = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_NEXT_INDEX,
response_pl);
@ -426,12 +579,12 @@ static int pci_doe_discovery(struct pci_doe_mb *doe_mb, u8 capver, u8 *index, u1
return 0;
}
static void *pci_doe_xa_prot_entry(u16 vid, u8 prot)
static void *pci_doe_xa_feat_entry(u16 vid, u8 type)
{
return xa_mk_value((vid << 8) | prot);
return xa_mk_value((vid << 8) | type);
}
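A sketch of the tagged-value encoding pci_doe_xa_feat_entry() stores in the xarray: a 16-bit vendor ID in the high bits and an 8-bit feature type in the low byte, kept as a value rather than a pointer. The vendor/type pair here is only an example:
#include <stdint.h>
#include <stdio.h>
int main(void)
{
	uint16_t vid = 0x0001;			/* PCI-SIG */
	uint8_t type = 0x01;			/* e.g. CMA/SPDM */
	uint32_t entry = ((uint32_t)vid << 8) | type;

	printf("entry %#x -> vid %#x, type %#x\n",
	       entry, entry >> 8, entry & 0xff);
	return 0;
}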
static int pci_doe_cache_protocols(struct pci_doe_mb *doe_mb)
static int pci_doe_cache_features(struct pci_doe_mb *doe_mb)
{
u8 index = 0;
u8 xa_idx = 0;
@ -442,19 +595,19 @@ static int pci_doe_cache_protocols(struct pci_doe_mb *doe_mb)
do {
int rc;
u16 vid;
u8 prot;
u8 type;
rc = pci_doe_discovery(doe_mb, PCI_EXT_CAP_VER(hdr), &index,
&vid, &prot);
&vid, &type);
if (rc)
return rc;
pci_dbg(doe_mb->pdev,
"[%x] Found protocol %d vid: %x prot: %x\n",
doe_mb->cap_offset, xa_idx, vid, prot);
"[%x] Found feature %d vid: %x type: %x\n",
doe_mb->cap_offset, xa_idx, vid, type);
rc = xa_insert(&doe_mb->prots, xa_idx++,
pci_doe_xa_prot_entry(vid, prot), GFP_KERNEL);
rc = xa_insert(&doe_mb->feats, xa_idx++,
pci_doe_xa_feat_entry(vid, type), GFP_KERNEL);
if (rc)
return rc;
} while (index);
@ -478,7 +631,7 @@ static void pci_doe_cancel_tasks(struct pci_doe_mb *doe_mb)
* @pdev: PCI device to create the DOE mailbox for
* @cap_offset: Offset of the DOE mailbox
*
* Create a single mailbox object to manage the mailbox protocol at the
* Create a single mailbox object to manage the mailbox feature at the
* cap_offset specified.
*
* RETURNS: created mailbox object on success
@ -497,7 +650,7 @@ static struct pci_doe_mb *pci_doe_create_mb(struct pci_dev *pdev,
doe_mb->pdev = pdev;
doe_mb->cap_offset = cap_offset;
init_waitqueue_head(&doe_mb->wq);
xa_init(&doe_mb->prots);
xa_init(&doe_mb->feats);
doe_mb->work_queue = alloc_ordered_workqueue("%s %s DOE [%x]", 0,
dev_bus_name(&pdev->dev),
@ -520,11 +673,11 @@ static struct pci_doe_mb *pci_doe_create_mb(struct pci_dev *pdev,
/*
* The state machine and the mailbox should be in sync now;
* Use the mailbox to query protocols.
* Use the mailbox to query features.
*/
rc = pci_doe_cache_protocols(doe_mb);
rc = pci_doe_cache_features(doe_mb);
if (rc) {
pci_err(pdev, "[%x] failed to cache protocols : %d\n",
pci_err(pdev, "[%x] failed to cache features : %d\n",
doe_mb->cap_offset, rc);
goto err_cancel;
}
@ -533,7 +686,7 @@ static struct pci_doe_mb *pci_doe_create_mb(struct pci_dev *pdev,
err_cancel:
pci_doe_cancel_tasks(doe_mb);
xa_destroy(&doe_mb->prots);
xa_destroy(&doe_mb->feats);
err_destroy_wq:
destroy_workqueue(doe_mb->work_queue);
err_free:
@ -551,31 +704,31 @@ err_free:
static void pci_doe_destroy_mb(struct pci_doe_mb *doe_mb)
{
pci_doe_cancel_tasks(doe_mb);
xa_destroy(&doe_mb->prots);
xa_destroy(&doe_mb->feats);
destroy_workqueue(doe_mb->work_queue);
kfree(doe_mb);
}
/**
* pci_doe_supports_prot() - Return if the DOE instance supports the given
* protocol
* pci_doe_supports_feat() - Return if the DOE instance supports the given
* feature
* @doe_mb: DOE mailbox capability to query
* @vid: Protocol Vendor ID
* @type: Protocol type
* @vid: Feature Vendor ID
* @type: Feature type
*
* RETURNS: True if the DOE mailbox supports the protocol specified
* RETURNS: True if the DOE mailbox supports the feature specified
*/
static bool pci_doe_supports_prot(struct pci_doe_mb *doe_mb, u16 vid, u8 type)
static bool pci_doe_supports_feat(struct pci_doe_mb *doe_mb, u16 vid, u8 type)
{
unsigned long index;
void *entry;
/* The discovery protocol must always be supported */
if (vid == PCI_VENDOR_ID_PCI_SIG && type == PCI_DOE_PROTOCOL_DISCOVERY)
/* The discovery feature must always be supported */
if (vid == PCI_VENDOR_ID_PCI_SIG && type == PCI_DOE_FEATURE_DISCOVERY)
return true;
xa_for_each(&doe_mb->prots, index, entry)
if (entry == pci_doe_xa_prot_entry(vid, type))
xa_for_each(&doe_mb->feats, index, entry)
if (entry == pci_doe_xa_feat_entry(vid, type))
return true;
return false;
@ -603,7 +756,7 @@ static bool pci_doe_supports_prot(struct pci_doe_mb *doe_mb, u16 vid, u8 type)
static int pci_doe_submit_task(struct pci_doe_mb *doe_mb,
struct pci_doe_task *task)
{
if (!pci_doe_supports_prot(doe_mb, task->prot.vid, task->prot.type))
if (!pci_doe_supports_feat(doe_mb, task->feat.vid, task->feat.type))
return -EINVAL;
if (test_bit(PCI_DOE_FLAG_DEAD, &doe_mb->flags))
@ -649,8 +802,8 @@ int pci_doe(struct pci_doe_mb *doe_mb, u16 vendor, u8 type,
{
DECLARE_COMPLETION_ONSTACK(c);
struct pci_doe_task task = {
.prot.vid = vendor,
.prot.type = type,
.feat.vid = vendor,
.feat.type = type,
.request_pl = request,
.request_pl_sz = request_sz,
.response_pl = response,
@ -677,7 +830,7 @@ EXPORT_SYMBOL_GPL(pci_doe);
* @vendor: Vendor ID
* @type: Data Object Type
*
* Find first DOE mailbox of a PCI device which supports the given protocol.
* Find first DOE mailbox of a PCI device which supports the given feature.
*
* RETURNS: Pointer to the DOE mailbox or NULL if none was found.
*/
@ -688,7 +841,7 @@ struct pci_doe_mb *pci_find_doe_mailbox(struct pci_dev *pdev, u16 vendor,
unsigned long index;
xa_for_each(&pdev->doe_mbs, index, doe_mb)
if (pci_doe_supports_prot(doe_mb, vendor, type))
if (pci_doe_supports_feat(doe_mb, vendor, type))
return doe_mb;
return NULL;


@ -26,7 +26,7 @@ config PCI_ENDPOINT_CONFIGFS
help
This will enable the configfs entry that can be used to
configure the endpoint function and used to bind the
function with a endpoint controller.
function with an endpoint controller.
source "drivers/pci/endpoint/functions/Kconfig"


@ -125,7 +125,7 @@ static const struct pci_epf_mhi_ep_info sm8450_info = {
static struct pci_epf_header sa8775p_header = {
.vendorid = PCI_VENDOR_ID_QCOM,
.deviceid = 0x0306, /* FIXME: Update deviceid for sa8775p EP */
.deviceid = 0x0116,
.baseclass_code = PCI_CLASS_OTHERS,
.interrupt_pin = PCI_INTERRUPT_INTA,
};


@ -45,6 +45,9 @@
#define TIMER_RESOLUTION 1
#define CAP_UNALIGNED_ACCESS BIT(0)
#define CAP_MSI BIT(1)
#define CAP_MSIX BIT(2)
#define CAP_INTX BIT(3)
static struct workqueue_struct *kpcitest_workqueue;
@ -66,17 +69,17 @@ struct pci_epf_test {
};
struct pci_epf_test_reg {
u32 magic;
u32 command;
u32 status;
u64 src_addr;
u64 dst_addr;
u32 size;
u32 checksum;
u32 irq_type;
u32 irq_number;
u32 flags;
u32 caps;
__le32 magic;
__le32 command;
__le32 status;
__le64 src_addr;
__le64 dst_addr;
__le32 size;
__le32 checksum;
__le32 irq_type;
__le32 irq_number;
__le32 flags;
__le32 caps;
} __packed;
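The switch to __le32/__le64 plus explicit accessors encodes the contract that the shared test BAR is little-endian regardless of the host CPU. A userspace sketch, with le32_to_cpu_demo() as a hypothetical stand-in for le32_to_cpu():
#include <stdint.h>
#include <stdio.h>
static uint32_t le32_to_cpu_demo(uint32_t le)
{
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	return __builtin_bswap32(le);	/* big-endian host must swap */
#else
	return le;			/* little-endian host: identity */
#endif
}
int main(void)
{
	uint32_t raw = 0x04030201;	/* bytes 01 02 03 04 as stored in the BAR */

	printf("%#x\n", le32_to_cpu_demo(raw));
	return 0;
}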
static struct pci_epf_header test_header = {
@ -324,13 +327,17 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
struct pci_epc_map src_map, dst_map;
u64 src_addr = reg->src_addr;
u64 dst_addr = reg->dst_addr;
size_t copy_size = reg->size;
u64 src_addr = le64_to_cpu(reg->src_addr);
u64 dst_addr = le64_to_cpu(reg->dst_addr);
size_t orig_size, copy_size;
ssize_t map_size = 0;
u32 flags = le32_to_cpu(reg->flags);
u32 status = 0;
void *copy_buf = NULL, *buf;
if (reg->flags & FLAG_USE_DMA) {
orig_size = copy_size = le32_to_cpu(reg->size);
if (flags & FLAG_USE_DMA) {
if (!dma_has_cap(DMA_MEMCPY, epf_test->dma_chan_tx->device->cap_mask)) {
dev_err(dev, "DMA controller doesn't support MEMCPY\n");
ret = -EINVAL;
@ -350,7 +357,7 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
src_addr, copy_size, &src_map);
if (ret) {
dev_err(dev, "Failed to map source address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
status = STATUS_SRC_ADDR_INVALID;
goto free_buf;
}
@ -358,7 +365,7 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
dst_addr, copy_size, &dst_map);
if (ret) {
dev_err(dev, "Failed to map destination address\n");
reg->status = STATUS_DST_ADDR_INVALID;
status = STATUS_DST_ADDR_INVALID;
pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no,
&src_map);
goto free_buf;
@ -367,7 +374,7 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
map_size = min_t(size_t, dst_map.pci_size, src_map.pci_size);
ktime_get_ts64(&start);
if (reg->flags & FLAG_USE_DMA) {
if (flags & FLAG_USE_DMA) {
ret = pci_epf_test_data_transfer(epf_test,
dst_map.phys_addr, src_map.phys_addr,
map_size, 0, DMA_MEM_TO_MEM);
@ -391,8 +398,8 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
map_size = 0;
}
pci_epf_test_print_rate(epf_test, "COPY", reg->size, &start,
&end, reg->flags & FLAG_USE_DMA);
pci_epf_test_print_rate(epf_test, "COPY", orig_size, &start, &end,
flags & FLAG_USE_DMA);
unmap:
if (map_size) {
@ -405,9 +412,10 @@ free_buf:
set_status:
if (!ret)
reg->status |= STATUS_COPY_SUCCESS;
status |= STATUS_COPY_SUCCESS;
else
reg->status |= STATUS_COPY_FAIL;
status |= STATUS_COPY_FAIL;
reg->status = cpu_to_le32(status);
}
static void pci_epf_test_read(struct pci_epf_test *epf_test,
@ -423,9 +431,14 @@ static void pci_epf_test_read(struct pci_epf_test *epf_test,
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
struct device *dma_dev = epf->epc->dev.parent;
u64 src_addr = reg->src_addr;
size_t src_size = reg->size;
u64 src_addr = le64_to_cpu(reg->src_addr);
size_t orig_size, src_size;
ssize_t map_size = 0;
u32 flags = le32_to_cpu(reg->flags);
u32 checksum = le32_to_cpu(reg->checksum);
u32 status = 0;
orig_size = src_size = le32_to_cpu(reg->size);
src_buf = kzalloc(src_size, GFP_KERNEL);
if (!src_buf) {
@ -439,12 +452,12 @@ static void pci_epf_test_read(struct pci_epf_test *epf_test,
src_addr, src_size, &map);
if (ret) {
dev_err(dev, "Failed to map address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
status = STATUS_SRC_ADDR_INVALID;
goto free_buf;
}
map_size = map.pci_size;
if (reg->flags & FLAG_USE_DMA) {
if (flags & FLAG_USE_DMA) {
dst_phys_addr = dma_map_single(dma_dev, buf, map_size,
DMA_FROM_DEVICE);
if (dma_mapping_error(dma_dev, dst_phys_addr)) {
@ -481,11 +494,11 @@ static void pci_epf_test_read(struct pci_epf_test *epf_test,
map_size = 0;
}
pci_epf_test_print_rate(epf_test, "READ", reg->size, &start,
&end, reg->flags & FLAG_USE_DMA);
pci_epf_test_print_rate(epf_test, "READ", orig_size, &start, &end,
flags & FLAG_USE_DMA);
crc32 = crc32_le(~0, src_buf, reg->size);
if (crc32 != reg->checksum)
crc32 = crc32_le(~0, src_buf, orig_size);
if (crc32 != checksum)
ret = -EIO;
unmap:
@ -497,9 +510,10 @@ free_buf:
set_status:
if (!ret)
reg->status |= STATUS_READ_SUCCESS;
status |= STATUS_READ_SUCCESS;
else
reg->status |= STATUS_READ_FAIL;
status |= STATUS_READ_FAIL;
reg->status = cpu_to_le32(status);
}
static void pci_epf_test_write(struct pci_epf_test *epf_test,
@ -514,9 +528,13 @@ static void pci_epf_test_write(struct pci_epf_test *epf_test,
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
struct device *dma_dev = epf->epc->dev.parent;
u64 dst_addr = reg->dst_addr;
size_t dst_size = reg->size;
u64 dst_addr = le64_to_cpu(reg->dst_addr);
size_t orig_size, dst_size;
ssize_t map_size = 0;
u32 flags = le32_to_cpu(reg->flags);
u32 status = 0;
orig_size = dst_size = le32_to_cpu(reg->size);
dst_buf = kzalloc(dst_size, GFP_KERNEL);
if (!dst_buf) {
@ -524,7 +542,7 @@ static void pci_epf_test_write(struct pci_epf_test *epf_test,
goto set_status;
}
get_random_bytes(dst_buf, dst_size);
reg->checksum = crc32_le(~0, dst_buf, dst_size);
reg->checksum = cpu_to_le32(crc32_le(~0, dst_buf, dst_size));
buf = dst_buf;
while (dst_size) {
@ -532,12 +550,12 @@ static void pci_epf_test_write(struct pci_epf_test *epf_test,
dst_addr, dst_size, &map);
if (ret) {
dev_err(dev, "Failed to map address\n");
reg->status = STATUS_DST_ADDR_INVALID;
status = STATUS_DST_ADDR_INVALID;
goto free_buf;
}
map_size = map.pci_size;
if (reg->flags & FLAG_USE_DMA) {
if (flags & FLAG_USE_DMA) {
src_phys_addr = dma_map_single(dma_dev, buf, map_size,
DMA_TO_DEVICE);
if (dma_mapping_error(dma_dev, src_phys_addr)) {
@ -576,8 +594,8 @@ static void pci_epf_test_write(struct pci_epf_test *epf_test,
map_size = 0;
}
pci_epf_test_print_rate(epf_test, "WRITE", reg->size, &start,
&end, reg->flags & FLAG_USE_DMA);
pci_epf_test_print_rate(epf_test, "WRITE", orig_size, &start, &end,
flags & FLAG_USE_DMA);
/*
* wait 1 ms in order for the write to complete. Without this delay L3
@ -594,9 +612,10 @@ free_buf:
set_status:
if (!ret)
reg->status |= STATUS_WRITE_SUCCESS;
status |= STATUS_WRITE_SUCCESS;
else
reg->status |= STATUS_WRITE_FAIL;
status |= STATUS_WRITE_FAIL;
reg->status = cpu_to_le32(status);
}
static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test,
@ -605,39 +624,42 @@ static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test,
struct pci_epf *epf = epf_test->epf;
struct device *dev = &epf->dev;
struct pci_epc *epc = epf->epc;
u32 status = reg->status | STATUS_IRQ_RAISED;
u32 status = le32_to_cpu(reg->status);
u32 irq_number = le32_to_cpu(reg->irq_number);
u32 irq_type = le32_to_cpu(reg->irq_type);
int count;
/*
* Set the status before raising the IRQ to ensure that the host sees
* the updated value when it gets the IRQ.
*/
WRITE_ONCE(reg->status, status);
status |= STATUS_IRQ_RAISED;
WRITE_ONCE(reg->status, cpu_to_le32(status));
switch (reg->irq_type) {
switch (irq_type) {
case IRQ_TYPE_INTX:
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_IRQ_INTX, 0);
break;
case IRQ_TYPE_MSI:
count = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no);
if (reg->irq_number > count || count <= 0) {
if (irq_number > count || count <= 0) {
dev_err(dev, "Invalid MSI IRQ number %d / %d\n",
reg->irq_number, count);
irq_number, count);
return;
}
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_IRQ_MSI, reg->irq_number);
PCI_IRQ_MSI, irq_number);
break;
case IRQ_TYPE_MSIX:
count = pci_epc_get_msix(epc, epf->func_no, epf->vfunc_no);
if (reg->irq_number > count || count <= 0) {
dev_err(dev, "Invalid MSIX IRQ number %d / %d\n",
reg->irq_number, count);
if (irq_number > count || count <= 0) {
dev_err(dev, "Invalid MSI-X IRQ number %d / %d\n",
irq_number, count);
return;
}
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_IRQ_MSIX, reg->irq_number);
PCI_IRQ_MSIX, irq_number);
break;
default:
dev_err(dev, "Failed to raise IRQ, unknown type\n");
@ -654,21 +676,22 @@ static void pci_epf_test_cmd_handler(struct work_struct *work)
struct device *dev = &epf->dev;
enum pci_barno test_reg_bar = epf_test->test_reg_bar;
struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
u32 irq_type = le32_to_cpu(reg->irq_type);
command = READ_ONCE(reg->command);
command = le32_to_cpu(READ_ONCE(reg->command));
if (!command)
goto reset_handler;
WRITE_ONCE(reg->command, 0);
WRITE_ONCE(reg->status, 0);
if ((READ_ONCE(reg->flags) & FLAG_USE_DMA) &&
if ((le32_to_cpu(READ_ONCE(reg->flags)) & FLAG_USE_DMA) &&
!epf_test->dma_supported) {
dev_err(dev, "Cannot transfer data using DMA\n");
goto reset_handler;
}
if (reg->irq_type > IRQ_TYPE_MSIX) {
if (irq_type > IRQ_TYPE_MSIX) {
dev_err(dev, "Failed to detect IRQ type\n");
goto reset_handler;
}
@ -718,6 +741,7 @@ static int pci_epf_test_set_bar(struct pci_epf *epf)
if (ret) {
pci_epf_free_space(epf, epf_test->reg[bar], bar,
PRIMARY_INTERFACE);
epf_test->reg[bar] = NULL;
dev_err(dev, "Failed to set BAR%d\n", bar);
if (bar == test_reg_bar)
return ret;
@ -753,6 +777,15 @@ static void pci_epf_test_set_capabilities(struct pci_epf *epf)
if (epc->ops->align_addr)
caps |= CAP_UNALIGNED_ACCESS;
if (epf_test->epc_features->msi_capable)
caps |= CAP_MSI;
if (epf_test->epc_features->msix_capable)
caps |= CAP_MSIX;
if (epf_test->epc_features->intx_capable)
caps |= CAP_INTX;
reg->caps = cpu_to_le32(caps);
}
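A sketch of how a host-side test might decode the caps word advertised above, using the bit positions defined earlier in this file:
#include <stdint.h>
#include <stdio.h>
#define CAP_UNALIGNED_ACCESS	(1u << 0)
#define CAP_MSI			(1u << 1)
#define CAP_MSIX		(1u << 2)
#define CAP_INTX		(1u << 3)
int main(void)
{
	uint32_t caps = CAP_MSI | CAP_INTX;	/* as read from the test BAR */

	printf("msi=%d msix=%d intx=%d\n",
	       !!(caps & CAP_MSI), !!(caps & CAP_MSIX), !!(caps & CAP_INTX));
	return 0;
}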
@ -909,6 +942,7 @@ static void pci_epf_test_free_space(struct pci_epf *epf)
pci_epf_free_space(epf, epf_test->reg[bar], bar,
PRIMARY_INTERFACE);
epf_test->reg[bar] = NULL;
}
}


@ -25,13 +25,6 @@ static void devm_pci_epc_release(struct device *dev, void *res)
pci_epc_destroy(epc);
}
static int devm_pci_epc_match(struct device *dev, void *res, void *match_data)
{
struct pci_epc **epc = res;
return *epc == match_data;
}
/**
* pci_epc_put() - release the PCI endpoint controller
* @epc: epc returned by pci_epc_get()
@ -609,6 +602,10 @@ int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
if (!epc_features)
return -EINVAL;
if (epc_features->bar[bar].type == BAR_RESIZABLE &&
(epf_bar->size < SZ_1M || (u64)epf_bar->size > (SZ_128G * 1024)))
return -EINVAL;
if (epc_features->bar[bar].type == BAR_FIXED &&
(epc_features->bar[bar].fixed_size != epf_bar->size))
return -EINVAL;
@ -634,6 +631,33 @@ int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
}
EXPORT_SYMBOL_GPL(pci_epc_set_bar);
/**
* pci_epc_bar_size_to_rebar_cap() - convert a size to the representation used
* by the Resizable BAR Capability Register
* @size: the size to convert
* @cap: where to store the result
*
* Returns 0 on success and a negative error code in case of error.
*/
int pci_epc_bar_size_to_rebar_cap(size_t size, u32 *cap)
{
/*
* As per PCIe r6.0, sec 7.8.6.2, min size for a resizable BAR is 1 MB,
* thus disallow a requested BAR size smaller than 1 MB.
* Disallow a requested BAR size larger than 128 TB.
*/
if (size < SZ_1M || (u64)size > (SZ_128G * 1024))
return -EINVAL;
*cap = ilog2(size) - ilog2(SZ_1M);
/* Sizes in REBAR_CAP start at BIT(4). */
*cap = BIT(*cap + 4);
return 0;
}
EXPORT_SYMBOL_GPL(pci_epc_bar_size_to_rebar_cap);
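A worked example of the conversion above, assuming ilog2() semantics: an 8 MB BAR is 2^23 bytes, which sits 3 sizes above the 1 MB minimum, and the capability register keeps one bit per size starting at BIT(4):
#include <stdint.h>
#include <stdio.h>
int main(void)
{
	uint64_t size = 8ULL << 20;				/* 8 MB */
	unsigned int shift = 63 - __builtin_clzll(size);	/* ilog2(size) = 23 */
	uint32_t cap = 1u << (shift - 20 + 4);			/* ilog2(SZ_1M) = 20 */

	printf("8 MB -> cap %#x\n", cap);			/* BIT(7) = 0x80 */
	return 0;
}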
/**
* pci_epc_write_header() - write standard configuration header
* @epc: the EPC device to which the configuration header should be written
@ -931,24 +955,6 @@ void pci_epc_destroy(struct pci_epc *epc)
}
EXPORT_SYMBOL_GPL(pci_epc_destroy);
/**
* devm_pci_epc_destroy() - destroy the EPC device
* @dev: device that wants to destroy the EPC
* @epc: the EPC device that has to be destroyed
*
* Invoke to destroy the devres associated with this
* pci_epc and destroy the EPC device.
*/
void devm_pci_epc_destroy(struct device *dev, struct pci_epc *epc)
{
int r;
r = devres_release(dev, devm_pci_epc_release, devm_pci_epc_match,
epc);
dev_WARN_ONCE(dev, r, "couldn't find PCI EPC resource\n");
}
EXPORT_SYMBOL_GPL(devm_pci_epc_destroy);
static void pci_epc_release(struct device *dev)
{
kfree(to_pci_epc(dev));

View File

@ -274,6 +274,10 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
if (size < 128)
size = 128;
/* According to PCIe base spec, min size for a resizable BAR is 1 MB. */
if (epc_features->bar[bar].type == BAR_RESIZABLE && size < SZ_1M)
size = SZ_1M;
if (epc_features->bar[bar].type == BAR_FIXED && bar_fixed_size) {
if (size > bar_fixed_size) {
dev_err(&epf->dev,


@ -97,7 +97,7 @@ config HOTPLUG_PCI_CPCI_ZT5550
tristate "Ziatech ZT5550 CompactPCI Hotplug driver"
depends on HOTPLUG_PCI_CPCI && X86
help
Say Y here if you have an Performance Technologies (formerly Intel,
Say Y here if you have a Performance Technologies (formerly Intel,
formerly just Ziatech) Ziatech ZT5550 CompactPCI system card.
To compile this driver as a module, choose M here: the


@ -44,8 +44,6 @@ struct cpci_hp_controller_ops {
int (*enable_irq)(void);
int (*disable_irq)(void);
int (*check_irq)(void *dev_id);
u8 (*get_power)(struct slot *slot);
int (*set_power)(struct slot *slot, int value);
};
struct cpci_hp_controller {


@ -71,13 +71,10 @@ static int
enable_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = to_slot(hotplug_slot);
int retval = 0;
dbg("%s - physical_slot = %s", __func__, slot_name(slot));
if (controller->ops->set_power)
retval = controller->ops->set_power(slot, 1);
return retval;
return 0;
}
static int
@ -109,12 +106,6 @@ disable_slot(struct hotplug_slot *hotplug_slot)
}
cpci_led_on(slot);
if (controller->ops->set_power) {
retval = controller->ops->set_power(slot, 0);
if (retval)
goto disable_error;
}
slot->adapter_status = 0;
if (slot->extracting) {
@ -129,11 +120,7 @@ disable_error:
static u8
cpci_get_power_status(struct slot *slot)
{
u8 power = 1;
if (controller->ops->get_power)
power = controller->ops->get_power(slot);
return power;
return 1;
}
static int


@ -14,18 +14,16 @@
* Scott Murray <scottm@somanetworks.com>
*/
#include <linux/module.h> /* try_module_get & module_put */
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/list.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include <linux/pagemap.h>
#include <linux/init.h>
#include <linux/mount.h>
#include <linux/namei.h>
#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/pci_hotplug.h>
#include <linux/uaccess.h>
@ -42,20 +40,14 @@
/* local variables */
static bool debug;
static LIST_HEAD(pci_hotplug_slot_list);
static DEFINE_MUTEX(pci_hp_mutex);
/* Weee, fun with macros... */
#define GET_STATUS(name, type) \
static int get_##name(struct hotplug_slot *slot, type *value) \
{ \
const struct hotplug_slot_ops *ops = slot->ops; \
int retval = 0; \
if (!try_module_get(slot->owner)) \
return -ENODEV; \
if (ops->get_##name) \
retval = ops->get_##name(slot, value); \
module_put(slot->owner); \
return retval; \
}
@ -88,10 +80,6 @@ static ssize_t power_write_file(struct pci_slot *pci_slot, const char *buf,
power = (u8)(lpower & 0xff);
dbg("power = %d\n", power);
if (!try_module_get(slot->owner)) {
retval = -ENODEV;
goto exit;
}
switch (power) {
case 0:
if (slot->ops->disable_slot)
@ -107,9 +95,7 @@ static ssize_t power_write_file(struct pci_slot *pci_slot, const char *buf,
err("Illegal value specified for power\n");
retval = -EINVAL;
}
module_put(slot->owner);
exit:
if (retval)
return retval;
return count;
@ -146,15 +132,9 @@ static ssize_t attention_write_file(struct pci_slot *pci_slot, const char *buf,
attention = (u8)(lattention & 0xff);
dbg(" - attention = %d\n", attention);
if (!try_module_get(slot->owner)) {
retval = -ENODEV;
goto exit;
}
if (ops->set_attention_status)
retval = ops->set_attention_status(slot, attention);
module_put(slot->owner);
exit:
if (retval)
return retval;
return count;
@ -212,15 +192,9 @@ static ssize_t test_write_file(struct pci_slot *pci_slot, const char *buf,
test = (u32)(ltest & 0xffffffff);
dbg("test = %d\n", test);
if (!try_module_get(slot->owner)) {
retval = -ENODEV;
goto exit;
}
if (slot->ops->hardware_test)
retval = slot->ops->hardware_test(slot, test);
module_put(slot->owner);
exit:
if (retval)
return retval;
return count;
@ -231,12 +205,8 @@ static struct pci_slot_attribute hotplug_slot_attr_test = {
.store = test_write_file
};
static bool has_power_file(struct pci_slot *pci_slot)
static bool has_power_file(struct hotplug_slot *slot)
{
struct hotplug_slot *slot = pci_slot->hotplug;
if ((!slot) || (!slot->ops))
return false;
if ((slot->ops->enable_slot) ||
(slot->ops->disable_slot) ||
(slot->ops->get_power_status))
@ -244,87 +214,79 @@ static bool has_power_file(struct pci_slot *pci_slot)
return false;
}
static bool has_attention_file(struct pci_slot *pci_slot)
static bool has_attention_file(struct hotplug_slot *slot)
{
struct hotplug_slot *slot = pci_slot->hotplug;
if ((!slot) || (!slot->ops))
return false;
if ((slot->ops->set_attention_status) ||
(slot->ops->get_attention_status))
return true;
return false;
}
static bool has_latch_file(struct pci_slot *pci_slot)
static bool has_latch_file(struct hotplug_slot *slot)
{
struct hotplug_slot *slot = pci_slot->hotplug;
if ((!slot) || (!slot->ops))
return false;
if (slot->ops->get_latch_status)
return true;
return false;
}
static bool has_adapter_file(struct pci_slot *pci_slot)
static bool has_adapter_file(struct hotplug_slot *slot)
{
struct hotplug_slot *slot = pci_slot->hotplug;
if ((!slot) || (!slot->ops))
return false;
if (slot->ops->get_adapter_status)
return true;
return false;
}
static bool has_test_file(struct pci_slot *pci_slot)
static bool has_test_file(struct hotplug_slot *slot)
{
struct hotplug_slot *slot = pci_slot->hotplug;
if ((!slot) || (!slot->ops))
return false;
if (slot->ops->hardware_test)
return true;
return false;
}
static int fs_add_slot(struct pci_slot *pci_slot)
static int fs_add_slot(struct hotplug_slot *slot, struct pci_slot *pci_slot)
{
struct kobject *kobj;
int retval = 0;
/* Create symbolic link to the hotplug driver module */
pci_hp_create_module_link(pci_slot);
kobj = kset_find_obj(module_kset, slot->mod_name);
if (kobj) {
retval = sysfs_create_link(&pci_slot->kobj, kobj, "module");
if (retval)
dev_err(&pci_slot->bus->dev,
"Error creating sysfs link (%d)\n", retval);
kobject_put(kobj);
}
if (has_power_file(pci_slot)) {
if (has_power_file(slot)) {
retval = sysfs_create_file(&pci_slot->kobj,
&hotplug_slot_attr_power.attr);
if (retval)
goto exit_power;
}
if (has_attention_file(pci_slot)) {
if (has_attention_file(slot)) {
retval = sysfs_create_file(&pci_slot->kobj,
&hotplug_slot_attr_attention.attr);
if (retval)
goto exit_attention;
}
if (has_latch_file(pci_slot)) {
if (has_latch_file(slot)) {
retval = sysfs_create_file(&pci_slot->kobj,
&hotplug_slot_attr_latch.attr);
if (retval)
goto exit_latch;
}
if (has_adapter_file(pci_slot)) {
if (has_adapter_file(slot)) {
retval = sysfs_create_file(&pci_slot->kobj,
&hotplug_slot_attr_presence.attr);
if (retval)
goto exit_adapter;
}
if (has_test_file(pci_slot)) {
if (has_test_file(slot)) {
retval = sysfs_create_file(&pci_slot->kobj,
&hotplug_slot_attr_test.attr);
if (retval)
@ -334,56 +296,45 @@ static int fs_add_slot(struct pci_slot *pci_slot)
goto exit;
exit_test:
if (has_adapter_file(pci_slot))
if (has_adapter_file(slot))
sysfs_remove_file(&pci_slot->kobj,
&hotplug_slot_attr_presence.attr);
exit_adapter:
if (has_latch_file(pci_slot))
if (has_latch_file(slot))
sysfs_remove_file(&pci_slot->kobj, &hotplug_slot_attr_latch.attr);
exit_latch:
if (has_attention_file(pci_slot))
if (has_attention_file(slot))
sysfs_remove_file(&pci_slot->kobj,
&hotplug_slot_attr_attention.attr);
exit_attention:
if (has_power_file(pci_slot))
if (has_power_file(slot))
sysfs_remove_file(&pci_slot->kobj, &hotplug_slot_attr_power.attr);
exit_power:
pci_hp_remove_module_link(pci_slot);
sysfs_remove_link(&pci_slot->kobj, "module");
exit:
return retval;
}
static void fs_remove_slot(struct pci_slot *pci_slot)
static void fs_remove_slot(struct hotplug_slot *slot, struct pci_slot *pci_slot)
{
if (has_power_file(pci_slot))
if (has_power_file(slot))
sysfs_remove_file(&pci_slot->kobj, &hotplug_slot_attr_power.attr);
if (has_attention_file(pci_slot))
if (has_attention_file(slot))
sysfs_remove_file(&pci_slot->kobj,
&hotplug_slot_attr_attention.attr);
if (has_latch_file(pci_slot))
if (has_latch_file(slot))
sysfs_remove_file(&pci_slot->kobj, &hotplug_slot_attr_latch.attr);
if (has_adapter_file(pci_slot))
if (has_adapter_file(slot))
sysfs_remove_file(&pci_slot->kobj,
&hotplug_slot_attr_presence.attr);
if (has_test_file(pci_slot))
if (has_test_file(slot))
sysfs_remove_file(&pci_slot->kobj, &hotplug_slot_attr_test.attr);
pci_hp_remove_module_link(pci_slot);
}
static struct hotplug_slot *get_slot_from_name(const char *name)
{
struct hotplug_slot *slot;
list_for_each_entry(slot, &pci_hotplug_slot_list, slot_list) {
if (strcmp(hotplug_slot_name(slot), name) == 0)
return slot;
}
return NULL;
sysfs_remove_link(&pci_slot->kobj, "module");
}
/**
@ -476,18 +427,19 @@ EXPORT_SYMBOL_GPL(__pci_hp_initialize);
*/
int pci_hp_add(struct hotplug_slot *slot)
{
struct pci_slot *pci_slot = slot->pci_slot;
struct pci_slot *pci_slot;
int result;
result = fs_add_slot(pci_slot);
if (WARN_ON(!slot))
return -EINVAL;
pci_slot = slot->pci_slot;
result = fs_add_slot(slot, pci_slot);
if (result)
return result;
kobject_uevent(&pci_slot->kobj, KOBJ_ADD);
mutex_lock(&pci_hp_mutex);
list_add(&slot->slot_list, &pci_hotplug_slot_list);
mutex_unlock(&pci_hp_mutex);
dbg("Added slot %s to the list\n", hotplug_slot_name(slot));
return 0;
}
EXPORT_SYMBOL_GPL(pci_hp_add);
@ -514,22 +466,10 @@ EXPORT_SYMBOL_GPL(pci_hp_deregister);
*/
void pci_hp_del(struct hotplug_slot *slot)
{
struct hotplug_slot *temp;
if (WARN_ON(!slot))
return;
mutex_lock(&pci_hp_mutex);
temp = get_slot_from_name(hotplug_slot_name(slot));
if (WARN_ON(temp != slot)) {
mutex_unlock(&pci_hp_mutex);
return;
}
list_del(&slot->slot_list);
mutex_unlock(&pci_hp_mutex);
dbg("Removed slot %s from the list\n", hotplug_slot_name(slot));
fs_remove_slot(slot->pci_slot);
fs_remove_slot(slot, slot->pci_slot);
}
EXPORT_SYMBOL_GPL(pci_hp_del);

View File

@ -286,9 +286,12 @@ static int pciehp_suspend(struct pcie_device *dev)
static bool pciehp_device_replaced(struct controller *ctrl)
{
struct pci_dev *pdev __free(pci_dev_put);
struct pci_dev *pdev __free(pci_dev_put) = NULL;
u32 reg;
if (pci_dev_is_disconnected(ctrl->pcie->port))
return false;
pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
if (!pdev)
return true;

View File

@ -292,7 +292,7 @@ int pciehp_check_link_status(struct controller *ctrl)
{
struct pci_dev *pdev = ctrl_dev(ctrl);
bool found;
u16 lnk_status;
u16 lnk_status, linksta2;
if (!pcie_wait_for_link(pdev, true)) {
ctrl_info(ctrl, "Slot(%s): No link\n", slot_name(ctrl));
@ -319,7 +319,8 @@ int pciehp_check_link_status(struct controller *ctrl)
return -1;
}
__pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status);
pcie_capability_read_word(pdev, PCI_EXP_LNKSTA2, &linksta2);
__pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status, linksta2);
if (!found) {
ctrl_info(ctrl, "Slot(%s): No device found\n",
@ -430,7 +431,7 @@ void pciehp_get_latch_status(struct controller *ctrl, u8 *status)
* removed immediately after the check so the caller may need to take
* this into account.
*
* It the hotplug controller itself is not available anymore returns
* If the hotplug controller itself is not available anymore returns
* %-ENODEV.
*/
int pciehp_card_present(struct controller *ctrl)
@ -842,7 +843,9 @@ void pcie_enable_interrupt(struct controller *ctrl)
{
u16 mask;
mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
mask = PCI_EXP_SLTCTL_DLLSCE;
if (!pciehp_poll_mode)
mask |= PCI_EXP_SLTCTL_HPIE;
pcie_write_cmd(ctrl, mask, mask);
}

View File

@ -33,24 +33,8 @@ extern bool shpchp_poll_mode;
extern int shpchp_poll_time;
extern bool shpchp_debug;
#define dbg(format, arg...) \
do { \
if (shpchp_debug) \
printk(KERN_DEBUG "%s: " format, MY_NAME, ## arg); \
} while (0)
#define err(format, arg...) \
printk(KERN_ERR "%s: " format, MY_NAME, ## arg)
#define info(format, arg...) \
printk(KERN_INFO "%s: " format, MY_NAME, ## arg)
#define warn(format, arg...) \
printk(KERN_WARNING "%s: " format, MY_NAME, ## arg)
#define ctrl_dbg(ctrl, format, arg...) \
do { \
if (shpchp_debug) \
pci_printk(KERN_DEBUG, ctrl->pci_dev, \
format, ## arg); \
} while (0)
pci_dbg(ctrl->pci_dev, format, ## arg)
#define ctrl_err(ctrl, format, arg...) \
pci_err(ctrl->pci_dev, format, ## arg)
#define ctrl_info(ctrl, format, arg...) \

View File

@ -22,7 +22,6 @@
#include "shpchp.h"
/* Global variables */
bool shpchp_debug;
bool shpchp_poll_mode;
int shpchp_poll_time;
@ -33,10 +32,8 @@ int shpchp_poll_time;
MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESC);
module_param(shpchp_debug, bool, 0644);
module_param(shpchp_poll_mode, bool, 0644);
module_param(shpchp_poll_time, int, 0644);
MODULE_PARM_DESC(shpchp_debug, "Debugging mode enabled or not");
MODULE_PARM_DESC(shpchp_poll_mode, "Using polling mechanism for hot-plug events or not");
MODULE_PARM_DESC(shpchp_poll_time, "Polling mechanism frequency, in seconds");
@ -324,20 +321,12 @@ static struct pci_driver shpc_driver = {
static int __init shpcd_init(void)
{
int retval;
retval = pci_register_driver(&shpc_driver);
dbg("%s: pci_register_driver = %d\n", __func__, retval);
info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
return retval;
return pci_register_driver(&shpc_driver);
}
static void __exit shpcd_cleanup(void)
{
dbg("unload_shpchpd()\n");
pci_unregister_driver(&shpc_driver);
info(DRIVER_DESC " version: " DRIVER_VERSION " unloaded\n");
}
module_init(shpcd_init);

View File

@ -675,7 +675,7 @@ static int shpc_get_cur_bus_speed(struct controller *ctrl)
out:
bus->cur_bus_speed = bus_speed;
dbg("Current bus speed = %d\n", bus_speed);
ctrl_dbg(ctrl, "Current bus speed = %d\n", bus_speed);
return retval;
}

View File

@ -9,6 +9,8 @@
#include <linux/export.h>
#include "pci.h" /* for pci_bar_index_is_valid() */
/**
* pci_iomap_range - create a virtual mapping cookie for a PCI BAR
* @dev: PCI device that owns the BAR
@ -33,12 +35,19 @@ void __iomem *pci_iomap_range(struct pci_dev *dev,
unsigned long offset,
unsigned long maxlen)
{
resource_size_t start = pci_resource_start(dev, bar);
resource_size_t len = pci_resource_len(dev, bar);
unsigned long flags = pci_resource_flags(dev, bar);
resource_size_t start, len;
unsigned long flags;
if (!pci_bar_index_is_valid(bar))
return NULL;
start = pci_resource_start(dev, bar);
len = pci_resource_len(dev, bar);
flags = pci_resource_flags(dev, bar);
if (len <= offset || !start)
return NULL;
len -= offset;
start += offset;
if (maxlen && len > maxlen)
@ -77,16 +86,20 @@ void __iomem *pci_iomap_wc_range(struct pci_dev *dev,
unsigned long offset,
unsigned long maxlen)
{
resource_size_t start = pci_resource_start(dev, bar);
resource_size_t len = pci_resource_len(dev, bar);
unsigned long flags = pci_resource_flags(dev, bar);
resource_size_t start, len;
unsigned long flags;
if (flags & IORESOURCE_IO)
if (!pci_bar_index_is_valid(bar))
return NULL;
start = pci_resource_start(dev, bar);
len = pci_resource_len(dev, bar);
flags = pci_resource_flags(dev, bar);
if (len <= offset || !start)
return NULL;
if (flags & IORESOURCE_IO)
return NULL;
len -= offset;
start += offset;

View File

@ -285,23 +285,16 @@ const struct attribute_group sriov_vf_dev_attr_group = {
.is_visible = sriov_vf_attrs_are_visible,
};
int pci_iov_add_virtfn(struct pci_dev *dev, int id)
static struct pci_dev *pci_iov_scan_device(struct pci_dev *dev, int id,
struct pci_bus *bus)
{
int i;
int rc = -ENOMEM;
u64 size;
struct pci_dev *virtfn;
struct resource *res;
struct pci_sriov *iov = dev->sriov;
struct pci_bus *bus;
bus = virtfn_add_bus(dev->bus, pci_iov_virtfn_bus(dev, id));
if (!bus)
goto failed;
struct pci_dev *virtfn;
int rc;
virtfn = pci_alloc_dev(bus);
if (!virtfn)
goto failed0;
return ERR_PTR(-ENOMEM);
virtfn->devfn = pci_iov_virtfn_devfn(dev, id);
virtfn->vendor = dev->vendor;
@ -314,8 +307,35 @@ int pci_iov_add_virtfn(struct pci_dev *dev, int id)
pci_read_vf_config_common(virtfn);
rc = pci_setup_device(virtfn);
if (rc)
goto failed1;
if (rc) {
pci_dev_put(dev);
pci_bus_put(virtfn->bus);
kfree(virtfn);
return ERR_PTR(rc);
}
return virtfn;
}
int pci_iov_add_virtfn(struct pci_dev *dev, int id)
{
struct pci_bus *bus;
struct pci_dev *virtfn;
struct resource *res;
int rc, i;
u64 size;
bus = virtfn_add_bus(dev->bus, pci_iov_virtfn_bus(dev, id));
if (!bus) {
rc = -ENOMEM;
goto failed;
}
virtfn = pci_iov_scan_device(dev, id, bus);
if (IS_ERR(virtfn)) {
rc = PTR_ERR(virtfn);
goto failed0;
}
virtfn->dev.parent = dev->dev.parent;
virtfn->multifunction = 0;
@ -952,7 +972,7 @@ void pci_iov_remove(struct pci_dev *dev)
void pci_iov_update_resource(struct pci_dev *dev, int resno)
{
struct pci_sriov *iov = dev->is_physfn ? dev->sriov : NULL;
struct resource *res = dev->resource + resno;
struct resource *res = pci_resource_n(dev, resno);
int vf_bar = resno - PCI_IOV_RESOURCES;
struct pci_bus_region region;
u16 cmd;

View File

@ -162,7 +162,7 @@ struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,
EXPORT_SYMBOL_GPL(pci_msix_alloc_irq_at);
/**
* pci_msix_free_irq - Free an interrupt on a PCI/MSIX interrupt domain
* pci_msix_free_irq - Free an interrupt on a PCI/MSI-X interrupt domain
*
* @dev: The PCI device to operate on
* @map: A struct msi_map describing the interrupt to free

View File

@ -455,9 +455,9 @@ failed:
* @out_irq: structure of_phandle_args filled by this function
*
* This function resolves the PCI interrupt for a given PCI device. If a
* device-node exists for a given pci_dev, it will use normal OF tree
* device node exists for a given pci_dev, it will use normal OF tree
* walking. If not, it will implement standard swizzling and walk up the
* PCI tree until an device-node is found, at which point it will finish
* PCI tree until a device node is found, at which point it will finish
* resolving using the OF tree walking.
*/
static int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
@ -517,13 +517,16 @@ static int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *
}
/*
* Ok, we have found a parent with a device-node, hand over to
* Ok, we have found a parent with a device node, hand over to
* the OF parsing code.
*
* We build a unit address from the linux device to be used for
* resolution. Note that we use the linux bus number which may
* not match your firmware bus numbering.
*
* Fortunately, in most cases, interrupt-map-mask doesn't
* include the bus number as part of the matching.
*
* You should still be careful about that though if you intend
* to rely on this function (you ship a firmware that doesn't
* create device nodes for all PCI devices).
@ -653,8 +656,8 @@ void of_pci_remove_node(struct pci_dev *pdev)
np = pci_device_to_OF_node(pdev);
if (!np || !of_node_check_flag(np, OF_DYNAMIC))
return;
pdev->dev.of_node = NULL;
device_remove_of_node(&pdev->dev);
of_changeset_revert(np->data);
of_changeset_destroy(np->data);
of_node_put(np);
@ -711,11 +714,18 @@ void of_pci_make_dev_node(struct pci_dev *pdev)
goto out_free_node;
np->data = cset;
pdev->dev.of_node = np;
ret = device_add_of_node(&pdev->dev, np);
if (ret)
goto out_revert_cset;
kfree(name);
return;
out_revert_cset:
np->data = NULL;
of_changeset_revert(cset);
out_free_node:
of_node_put(np);
out_destroy_cset:
@ -724,7 +734,112 @@ out_destroy_cset:
out_free_name:
kfree(name);
}
#endif
void of_pci_remove_host_bridge_node(struct pci_host_bridge *bridge)
{
struct device_node *np;
np = pci_bus_to_OF_node(bridge->bus);
if (!np || !of_node_check_flag(np, OF_DYNAMIC))
return;
device_remove_of_node(&bridge->bus->dev);
device_remove_of_node(&bridge->dev);
of_changeset_revert(np->data);
of_changeset_destroy(np->data);
of_node_put(np);
}
void of_pci_make_host_bridge_node(struct pci_host_bridge *bridge)
{
struct device_node *np = NULL;
struct of_changeset *cset;
const char *name;
int ret;
/*
* If there is already a device tree node linked to the PCI bus handled
* by this bridge (i.e. the PCI root bus), nothing to do.
*/
if (pci_bus_to_OF_node(bridge->bus))
return;
/*
* The root bus has no node. Check that the host bridge has no node
* too
*/
if (bridge->dev.of_node) {
dev_err(&bridge->dev, "PCI host bridge of_node already set");
return;
}
/* Check if there is a DT root node to attach the created node */
if (!of_root) {
pr_err("of_root node is NULL, cannot create PCI host bridge node\n");
return;
}
name = kasprintf(GFP_KERNEL, "pci@%x,%x", pci_domain_nr(bridge->bus),
bridge->bus->number);
if (!name)
return;
cset = kmalloc(sizeof(*cset), GFP_KERNEL);
if (!cset)
goto out_free_name;
of_changeset_init(cset);
np = of_changeset_create_node(cset, of_root, name);
if (!np)
goto out_destroy_cset;
ret = of_pci_add_host_bridge_properties(bridge, cset, np);
if (ret)
goto out_free_node;
/*
* This of_node will be added to an existing device. The of_node parent
* is the root OF node and so this node will be handled by the platform
* bus. Avoid any new device creation.
*/
of_node_set_flag(np, OF_POPULATED);
np->fwnode.dev = &bridge->dev;
fwnode_dev_initialized(&np->fwnode, true);
ret = of_changeset_apply(cset);
if (ret)
goto out_free_node;
np->data = cset;
/* Add the of_node to host bridge and the root bus */
ret = device_add_of_node(&bridge->dev, np);
if (ret)
goto out_revert_cset;
ret = device_add_of_node(&bridge->bus->dev, np);
if (ret)
goto out_remove_bridge_dev_of_node;
kfree(name);
return;
out_remove_bridge_dev_of_node:
device_remove_of_node(&bridge->dev);
out_revert_cset:
np->data = NULL;
of_changeset_revert(cset);
out_free_node:
of_node_put(np);
out_destroy_cset:
of_changeset_destroy(cset);
kfree(cset);
out_free_name:
kfree(name);
}
#endif /* CONFIG_PCI_DYNAMIC_OF_NODES */
/**
* of_pci_supply_present() - Check if the power supply is present for the PCI

View File

@ -54,9 +54,13 @@ enum of_pci_prop_compatible {
static void of_pci_set_address(struct pci_dev *pdev, u32 *prop, u64 addr,
u32 reg_num, u32 flags, bool reloc)
{
prop[0] = FIELD_PREP(OF_PCI_ADDR_FIELD_BUS, pdev->bus->number) |
FIELD_PREP(OF_PCI_ADDR_FIELD_DEV, PCI_SLOT(pdev->devfn)) |
FIELD_PREP(OF_PCI_ADDR_FIELD_FUNC, PCI_FUNC(pdev->devfn));
if (pdev) {
prop[0] = FIELD_PREP(OF_PCI_ADDR_FIELD_BUS, pdev->bus->number) |
FIELD_PREP(OF_PCI_ADDR_FIELD_DEV, PCI_SLOT(pdev->devfn)) |
FIELD_PREP(OF_PCI_ADDR_FIELD_FUNC, PCI_FUNC(pdev->devfn));
} else
prop[0] = 0;
prop[0] |= flags | reg_num;
if (!reloc) {
prop[0] |= OF_PCI_ADDR_FIELD_NONRELOC;
@ -65,7 +69,7 @@ static void of_pci_set_address(struct pci_dev *pdev, u32 *prop, u64 addr,
}
}
static int of_pci_get_addr_flags(struct resource *res, u32 *flags)
static int of_pci_get_addr_flags(const struct resource *res, u32 *flags)
{
u32 ss;
@ -390,3 +394,106 @@ int of_pci_add_properties(struct pci_dev *pdev, struct of_changeset *ocs,
return 0;
}
static bool of_pci_is_range_resource(const struct resource *res, u32 *flags)
{
if (!(resource_type(res) & IORESOURCE_MEM) &&
!(resource_type(res) & IORESOURCE_MEM_64))
return false;
if (of_pci_get_addr_flags(res, flags))
return false;
return true;
}
static int of_pci_host_bridge_prop_ranges(struct pci_host_bridge *bridge,
struct of_changeset *ocs,
struct device_node *np)
{
struct resource_entry *window;
unsigned int ranges_sz = 0;
unsigned int n_range = 0;
struct resource *res;
int n_addr_cells;
u32 *ranges;
u64 val64;
u32 flags;
int ret;
n_addr_cells = of_n_addr_cells(np);
if (n_addr_cells <= 0 || n_addr_cells > 2)
return -EINVAL;
resource_list_for_each_entry(window, &bridge->windows) {
res = window->res;
if (!of_pci_is_range_resource(res, &flags))
continue;
n_range++;
}
if (!n_range)
return 0;
ranges = kcalloc(n_range,
(OF_PCI_ADDRESS_CELLS + OF_PCI_SIZE_CELLS +
n_addr_cells) * sizeof(*ranges),
GFP_KERNEL);
if (!ranges)
return -ENOMEM;
resource_list_for_each_entry(window, &bridge->windows) {
res = window->res;
if (!of_pci_is_range_resource(res, &flags))
continue;
/* PCI bus address */
val64 = res->start;
of_pci_set_address(NULL, &ranges[ranges_sz],
val64 - window->offset, 0, flags, false);
ranges_sz += OF_PCI_ADDRESS_CELLS;
/* Host bus address */
if (n_addr_cells == 2)
ranges[ranges_sz++] = upper_32_bits(val64);
ranges[ranges_sz++] = lower_32_bits(val64);
/* Size */
val64 = resource_size(res);
ranges[ranges_sz] = upper_32_bits(val64);
ranges[ranges_sz + 1] = lower_32_bits(val64);
ranges_sz += OF_PCI_SIZE_CELLS;
}
ret = of_changeset_add_prop_u32_array(ocs, np, "ranges", ranges,
ranges_sz);
kfree(ranges);
return ret;
}
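Each entry emitted above packs three groups of cells: the PCI address built by of_pci_set_address() (3 cells in the standard OF PCI binding), the parent (CPU) address in n_addr_cells cells, and a 2-cell size. As a purely illustrative example, assuming n_addr_cells == 2 and a hypothetical 256 MB 32-bit non-prefetchable window at PCI address 0x40000000 mapped at CPU address 0xa0000000, one range entry would be laid out roughly as:

	0x82000000 0x00000000 0x40000000	/* PCI: flags cell + 64-bit address */
	0x00000000 0xa0000000			/* parent (CPU) address */
	0x00000000 0x10000000			/* size */

(The flags cell value is illustrative; it assumes the 32-bit memory space code plus the non-relocatable bit that of_pci_set_address() sets for non-relocatable entries.)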
int of_pci_add_host_bridge_properties(struct pci_host_bridge *bridge,
struct of_changeset *ocs,
struct device_node *np)
{
int ret;
ret = of_changeset_add_prop_string(ocs, np, "device_type", "pci");
if (ret)
return ret;
ret = of_changeset_add_prop_u32(ocs, np, "#address-cells",
OF_PCI_ADDRESS_CELLS);
if (ret)
return ret;
ret = of_changeset_add_prop_u32(ocs, np, "#size-cells",
OF_PCI_SIZE_CELLS);
if (ret)
return ret;
ret = of_pci_host_bridge_prop_ranges(bridge, ocs, np);
if (ret)
return ret;
return 0;
}

View File

@ -1257,6 +1257,10 @@ static int pci_create_resource_files(struct pci_dev *pdev)
int i;
int retval;
/* Skip devices with non-mappable BARs */
if (pdev->non_mappable_bars)
return 0;
/* Expose the PCI resources from this device as files */
for (i = 0; i < PCI_STD_NUM_BARS; i++) {
@ -1556,7 +1560,7 @@ static ssize_t __resource_resize_store(struct device *dev, int n,
return -EINVAL;
device_lock(dev);
if (dev->driver) {
if (dev->driver || pci_num_vf(pdev)) {
ret = -EBUSY;
goto unlock;
}
@ -1578,7 +1582,7 @@ static ssize_t __resource_resize_store(struct device *dev, int n,
pci_remove_resource_files(pdev);
for (i = 0; i < PCI_STD_NUM_BARS; i++) {
for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) {
if (pci_resource_len(pdev, i) &&
pci_resource_flags(pdev, i) == flags)
pci_release_resource(pdev, i);
@ -1804,6 +1808,9 @@ const struct attribute_group *pci_dev_attr_groups[] = {
#endif
#ifdef CONFIG_PCIEASPM
&aspm_ctrl_attr_group,
#endif
#ifdef CONFIG_PCI_DOE
&pci_doe_sysfs_group,
#endif
NULL,
};

View File

@ -954,8 +954,10 @@ struct pci_acs {
};
static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
const char *p, u16 mask, u16 flags)
const char *p, const u16 acs_mask, const u16 acs_flags)
{
u16 flags = acs_flags;
u16 mask = acs_mask;
char *delimit;
int ret = 0;
@ -963,7 +965,7 @@ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
return;
while (*p) {
if (!mask) {
if (!acs_mask) {
/* Check for ACS flags */
delimit = strstr(p, "@");
if (delimit) {
@ -971,6 +973,8 @@ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
u32 shift = 0;
end = delimit - p - 1;
mask = 0;
flags = 0;
while (end > -1) {
if (*(p + end) == '0') {
@ -1027,10 +1031,14 @@ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
pci_dbg(dev, "ACS mask = %#06x\n", mask);
pci_dbg(dev, "ACS flags = %#06x\n", flags);
pci_dbg(dev, "ACS control = %#06x\n", caps->ctrl);
pci_dbg(dev, "ACS fw_ctrl = %#06x\n", caps->fw_ctrl);
/* If mask is 0 then we copy the bit from the firmware setting. */
caps->ctrl = (caps->ctrl & ~mask) | (caps->fw_ctrl & mask);
caps->ctrl |= flags;
/*
* For mask bits that are 0, copy them from the firmware setting
* and apply flags for all the mask bits that are 1.
*/
caps->ctrl = (caps->fw_ctrl & ~mask) | (flags & mask);
pci_info(dev, "Configured ACS to %#06x\n", caps->ctrl);
}
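The rewritten merge makes the precedence explicit: wherever a mask bit is 0 the firmware value survives, and wherever it is 1 the user-requested flag wins. A minimal kernel-style sketch of the bit math, using hypothetical register values:

	/* Hypothetical values: firmware left bits 0 and 2 set; the user
	 * masks bits 0-1 and requests only bit 1 to be set. */
	u16 fw_ctrl = 0x0005, mask = 0x0003, flags = 0x0002;
	u16 ctrl;

	ctrl = (fw_ctrl & ~mask) | (flags & mask);
	/* ctrl == 0x0006: bit 2 kept from firmware, bit 1 applied,
	 * bit 0 cleared because it is masked but not requested. */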
@ -1871,7 +1879,7 @@ static void pci_restore_rebar_state(struct pci_dev *pdev)
unsigned int pos, nbars, i;
u32 ctrl;
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_REBAR);
pos = pdev->rebar_cap;
if (!pos)
return;
@ -1884,7 +1892,7 @@ static void pci_restore_rebar_state(struct pci_dev *pdev)
pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX;
res = pdev->resource + bar_idx;
res = pci_resource_n(pdev, bar_idx);
size = pci_rebar_bytes_to_size(resource_size(res));
ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
ctrl |= FIELD_PREP(PCI_REBAR_CTRL_BAR_SIZE, size);
@ -3023,7 +3031,7 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
* @bridge: Bridge to check
*
* This function checks if it is possible to move the bridge to D3.
* Currently we only allow D3 for recent enough PCIe ports and Thunderbolt.
* Currently we only allow D3 for some PCIe ports and for Thunderbolt.
*/
bool pci_bridge_d3_possible(struct pci_dev *bridge)
{
@ -3067,10 +3075,10 @@ bool pci_bridge_d3_possible(struct pci_dev *bridge)
return false;
/*
* It should be safe to put PCIe ports from 2015 or newer
* to D3.
* Out of caution, we only allow PCIe ports from 2015 or newer
* into D3 on x86.
*/
if (dmi_get_bios_year() >= 2015)
if (!IS_ENABLED(CONFIG_X86) || dmi_get_bios_year() >= 2015)
return true;
break;
}
@ -3718,6 +3726,11 @@ void pci_acs_init(struct pci_dev *dev)
pci_enable_acs(dev);
}
void pci_rebar_init(struct pci_dev *pdev)
{
pdev->rebar_cap = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_REBAR);
}
/**
* pci_rebar_find_pos - find position of resize ctrl reg for BAR
* @pdev: PCI device
@ -3732,7 +3745,7 @@ static int pci_rebar_find_pos(struct pci_dev *pdev, int bar)
unsigned int pos, nbars, i;
u32 ctrl;
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_REBAR);
pos = pdev->rebar_cap;
if (!pos)
return -ENOTSUPP;
@ -3757,7 +3770,7 @@ static int pci_rebar_find_pos(struct pci_dev *pdev, int bar)
* @bar: BAR to query
*
* Get the possible sizes of a resizable BAR as bitmask defined in the spec
* (bit 0=1MB, bit 19=512GB). Returns 0 if BAR isn't resizable.
* (bit 0=1MB, bit 31=128TB). Returns 0 if BAR isn't resizable.
*/
u32 pci_rebar_get_possible_sizes(struct pci_dev *pdev, int bar)
{
@ -3805,7 +3818,7 @@ int pci_rebar_get_current_size(struct pci_dev *pdev, int bar)
* pci_rebar_set_size - set a new size for a BAR
* @pdev: PCI device
* @bar: BAR to set size to
* @size: new size as defined in the spec (0=1MB, 19=512GB)
* @size: new size as defined in the spec (0=1MB, 31=128TB)
*
* Set the new size of a BAR as defined in the spec.
* Returns zero if resizing was successful, error code otherwise.
@ -3921,6 +3934,9 @@ EXPORT_SYMBOL(pci_enable_atomic_ops_to_root);
*/
void pci_release_region(struct pci_dev *pdev, int bar)
{
if (!pci_bar_index_is_valid(bar))
return;
/*
* This is done for backwards compatibility, because the old PCI devres
* API had a mode in which the function became managed if it had been
@ -3965,6 +3981,9 @@ EXPORT_SYMBOL(pci_release_region);
static int __pci_request_region(struct pci_dev *pdev, int bar,
const char *name, int exclusive)
{
if (!pci_bar_index_is_valid(bar))
return -EINVAL;
if (pci_is_managed(pdev)) {
if (exclusive == IORESOURCE_EXCLUSIVE)
return pcim_request_region_exclusive(pdev, bar, name);
@ -4766,7 +4785,7 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
/*
* PCIe r4.0 sec 6.6.1, a component must enter LTSSM Detect within 20ms,
* after which we should expect an link active if the reset was
* after which we should expect the link to be active if the reset was
* successful. If so, software must wait a minimum 100ms before sending
* configuration requests to devices downstream this port.
*
@ -5230,6 +5249,7 @@ const struct pci_reset_fn_method pci_reset_fn_methods[] = {
int __pci_reset_function_locked(struct pci_dev *dev)
{
int i, m, rc;
const struct pci_reset_fn_method *method;
might_sleep();
@ -5246,9 +5266,13 @@ int __pci_reset_function_locked(struct pci_dev *dev)
if (!m)
return -ENOTTY;
rc = pci_reset_fn_methods[m].reset_fn(dev, PCI_RESET_DO_RESET);
method = &pci_reset_fn_methods[m];
pci_dbg(dev, "reset via %s\n", method->name);
rc = method->reset_fn(dev, PCI_RESET_DO_RESET);
if (!rc)
return 0;
pci_dbg(dev, "%s failed with %d\n", method->name, rc);
if (rc != -ENOTTY)
return rc;
}
@ -5405,6 +5429,8 @@ static bool pci_bus_resettable(struct pci_bus *bus)
return false;
list_for_each_entry(dev, &bus->devices, bus_list) {
if (!pci_reset_supported(dev))
return false;
if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
(dev->subordinate && !pci_bus_resettable(dev->subordinate)))
return false;
@ -5481,6 +5507,8 @@ static bool pci_slot_resettable(struct pci_slot *slot)
list_for_each_entry(dev, &slot->bus->devices, bus_list) {
if (!dev->slot || dev->slot != slot)
continue;
if (!pci_reset_supported(dev))
return false;
if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
(dev->subordinate && !pci_bus_resettable(dev->subordinate)))
return false;
@ -6190,21 +6218,25 @@ void __pcie_print_link_status(struct pci_dev *dev, bool verbose)
enum pci_bus_speed speed, speed_cap;
struct pci_dev *limiting_dev = NULL;
u32 bw_avail, bw_cap;
char *flit_mode = "";
bw_cap = pcie_bandwidth_capable(dev, &speed_cap, &width_cap);
bw_avail = pcie_bandwidth_available(dev, &limiting_dev, &speed, &width);
if (dev->bus && dev->bus->flit_mode)
flit_mode = ", in Flit mode";
if (bw_avail >= bw_cap && verbose)
pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth (%s x%d link)\n",
pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth (%s x%d link)%s\n",
bw_cap / 1000, bw_cap % 1000,
pci_speed_string(speed_cap), width_cap);
pci_speed_string(speed_cap), width_cap, flit_mode);
else if (bw_avail < bw_cap)
pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n",
pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)%s\n",
bw_avail / 1000, bw_avail % 1000,
pci_speed_string(speed), width,
limiting_dev ? pci_name(limiting_dev) : "<unknown>",
bw_cap / 1000, bw_cap % 1000,
pci_speed_string(speed_cap), width_cap);
pci_speed_string(speed_cap), width_cap, flit_mode);
}
/**

View File

@ -167,6 +167,22 @@ static inline void pci_wakeup_event(struct pci_dev *dev)
pm_wakeup_event(&dev->dev, 100);
}
/**
* pci_bar_index_is_valid - Check whether a BAR index is within valid range
* @bar: BAR index
*
* Protects against overflowing &struct pci_dev.resource array.
*
* Return: true for valid index, false otherwise.
*/
static inline bool pci_bar_index_is_valid(int bar)
{
if (bar >= 0 && bar < PCI_NUM_RESOURCES)
return true;
return false;
}
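This is the guard adopted by pci_iomap_range(), pci_release_region() and __pci_request_region() in the hunks above. A minimal sketch of the caller pattern, with a hypothetical helper name:

	static void __iomem *demo_map_bar(struct pci_dev *pdev, int bar)
	{
		/* Reject an index that would run past pdev->resource[] */
		if (!pci_bar_index_is_valid(bar))
			return NULL;

		return pci_ioremap_bar(pdev, bar);
	}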
static inline bool pci_has_subordinate(struct pci_dev *pci_dev)
{
return !!(pci_dev->subordinate);
@ -253,6 +269,7 @@ extern const struct attribute_group *pci_dev_groups[];
extern const struct attribute_group *pci_dev_attr_groups[];
extern const struct attribute_group *pcibus_groups[];
extern const struct attribute_group *pci_bus_groups[];
extern const struct attribute_group pci_doe_sysfs_group;
#else
static inline int pci_create_sysfs_dev_files(struct pci_dev *pdev) { return 0; }
static inline void pci_remove_sysfs_dev_files(struct pci_dev *pdev) { }
@ -266,6 +283,8 @@ extern unsigned long pci_hotplug_io_size;
extern unsigned long pci_hotplug_mmio_size;
extern unsigned long pci_hotplug_mmio_pref_size;
extern unsigned long pci_hotplug_bus_size;
extern unsigned long pci_cardbus_io_size;
extern unsigned long pci_cardbus_mem_size;
/**
* pci_match_one_device - Tell if a PCI device structure has a matching
@ -309,6 +328,10 @@ enum pci_bar_type {
struct device *pci_get_host_bridge_device(struct pci_dev *dev);
void pci_put_host_bridge_device(struct device *dev);
unsigned int pci_rescan_bus_bridge_resize(struct pci_dev *bridge);
int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type);
int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align);
int pci_configure_extended_tags(struct pci_dev *dev, void *ign);
bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl,
int rrs_timeout);
@ -333,6 +356,29 @@ void pci_walk_bus_locked(struct pci_bus *top,
void *userdata);
const char *pci_resource_name(struct pci_dev *dev, unsigned int i);
bool pci_resource_is_optional(const struct pci_dev *dev, int resno);
/**
* pci_resource_num - Reverse lookup resource number from device resources
* @dev: PCI device
* @res: Resource to lookup index for (MUST be a @dev's resource)
*
* Perform reverse lookup to determine the resource number for @res within
* @dev resource array. NOTE: The caller is responsible for ensuring @res is
* among @dev's resources!
*
* Returns: resource number.
*/
static inline int pci_resource_num(const struct pci_dev *dev,
const struct resource *res)
{
int resno = res - &dev->resource[0];
/* Passing a resource that is not among dev's resources? */
WARN_ON_ONCE(resno >= PCI_NUM_RESOURCES);
return resno;
}
void pci_reassigndev_resource_alignment(struct pci_dev *dev);
void pci_disable_bridge_window(struct pci_dev *dev);
@ -406,9 +452,10 @@ const char *pci_speed_string(enum pci_bus_speed speed);
void __pcie_print_link_status(struct pci_dev *dev, bool verbose);
void pcie_report_downtraining(struct pci_dev *dev);
static inline void __pcie_update_link_speed(struct pci_bus *bus, u16 linksta)
static inline void __pcie_update_link_speed(struct pci_bus *bus, u16 linksta, u16 linksta2)
{
bus->cur_bus_speed = pcie_link_speed[linksta & PCI_EXP_LNKSTA_CLS];
bus->flit_mode = (linksta2 & PCI_EXP_LNKSTA2_FLIT) ? 1 : 0;
}
void pcie_update_link_speed(struct pci_bus *bus);
@ -456,6 +503,14 @@ static inline void pci_npem_create(struct pci_dev *dev) { }
static inline void pci_npem_remove(struct pci_dev *dev) { }
#endif
#if defined(CONFIG_PCI_DOE) && defined(CONFIG_SYSFS)
void pci_doe_sysfs_init(struct pci_dev *pci_dev);
void pci_doe_sysfs_teardown(struct pci_dev *pdev);
#else
static inline void pci_doe_sysfs_init(struct pci_dev *pdev) { }
static inline void pci_doe_sysfs_teardown(struct pci_dev *pdev) { }
#endif
/**
* pci_dev_set_io_state - Set the new error state if possible.
*
@ -553,7 +608,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
unsigned int tlp_len, struct pcie_tlp_log *log);
unsigned int tlp_len, bool flit,
struct pcie_tlp_log *log);
unsigned int aer_tlp_log_len(struct pci_dev *dev, u32 aercc);
void pcie_print_tlp_log(const struct pci_dev *dev,
const struct pcie_tlp_log *log, const char *pfx);
@ -632,6 +688,10 @@ void pci_iov_update_resource(struct pci_dev *dev, int resno);
resource_size_t pci_sriov_resource_alignment(struct pci_dev *dev, int resno);
void pci_restore_iov_state(struct pci_dev *dev);
int pci_iov_bus_range(struct pci_bus *bus);
static inline bool pci_resource_is_iov(int resno)
{
return resno >= PCI_IOV_RESOURCES && resno <= PCI_IOV_RESOURCE_END;
}
extern const struct attribute_group sriov_pf_dev_attr_group;
extern const struct attribute_group sriov_vf_dev_attr_group;
#else
@ -641,12 +701,21 @@ static inline int pci_iov_init(struct pci_dev *dev)
}
static inline void pci_iov_release(struct pci_dev *dev) { }
static inline void pci_iov_remove(struct pci_dev *dev) { }
static inline void pci_iov_update_resource(struct pci_dev *dev, int resno) { }
static inline resource_size_t pci_sriov_resource_alignment(struct pci_dev *dev,
int resno)
{
return 0;
}
static inline void pci_restore_iov_state(struct pci_dev *dev) { }
static inline int pci_iov_bus_range(struct pci_bus *bus)
{
return 0;
}
static inline bool pci_resource_is_iov(int resno)
{
return false;
}
#endif /* CONFIG_PCI_IOV */
#ifdef CONFIG_PCIE_TPH
@ -680,12 +749,10 @@ unsigned long pci_cardbus_resource_alignment(struct resource *);
static inline resource_size_t pci_resource_alignment(struct pci_dev *dev,
struct resource *res)
{
#ifdef CONFIG_PCI_IOV
int resno = res - dev->resource;
int resno = pci_resource_num(dev, res);
if (resno >= PCI_IOV_RESOURCES && resno <= PCI_IOV_RESOURCE_END)
if (pci_resource_is_iov(resno))
return pci_sriov_resource_alignment(dev, resno);
#endif
if (dev->class >> 8 == PCI_CLASS_BRIDGE_CARDBUS)
return pci_cardbus_resource_alignment(res);
return resource_alignment(res);
@ -799,6 +866,7 @@ static inline int acpi_get_rc_resources(struct device *dev, const char *hid,
}
#endif
void pci_rebar_init(struct pci_dev *pdev);
int pci_rebar_get_current_size(struct pci_dev *pdev, int bar);
int pci_rebar_set_size(struct pci_dev *pdev, int bar, int size);
static inline u64 pci_rebar_size_to_bytes(int size)
@ -876,9 +944,16 @@ void of_pci_make_dev_node(struct pci_dev *pdev);
void of_pci_remove_node(struct pci_dev *pdev);
int of_pci_add_properties(struct pci_dev *pdev, struct of_changeset *ocs,
struct device_node *np);
void of_pci_make_host_bridge_node(struct pci_host_bridge *bridge);
void of_pci_remove_host_bridge_node(struct pci_host_bridge *bridge);
int of_pci_add_host_bridge_properties(struct pci_host_bridge *bridge,
struct of_changeset *ocs,
struct device_node *np);
#else
static inline void of_pci_make_dev_node(struct pci_dev *pdev) { }
static inline void of_pci_remove_node(struct pci_dev *pdev) { }
static inline void of_pci_make_host_bridge_node(struct pci_host_bridge *bridge) { }
static inline void of_pci_remove_host_bridge_node(struct pci_host_bridge *bridge) { }
#endif
#ifdef CONFIG_PCIEAER

View File

@ -2,7 +2,7 @@
/*
* Implement the AER root port service driver. The driver registers an IRQ
* handler. When a root port triggers an AER interrupt, the IRQ handler
* collects root port status and schedules work.
* collects Root Port status and schedules work.
*
* Copyright (C) 2006 Intel Corp.
* Tom Long Nguyen (tom.l.nguyen@intel.com)
@ -17,6 +17,7 @@
#include <linux/bitops.h>
#include <linux/cper.h>
#include <linux/dev_printk.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/sched.h>
@ -35,6 +36,9 @@
#include "../pci.h"
#include "portdrv.h"
#define aer_printk(level, pdev, fmt, arg...) \
dev_printk(level, &(pdev)->dev, fmt, ##arg)
#define AER_ERROR_SOURCES_MAX 128
#define AER_MAX_TYPEOF_COR_ERRS 16 /* as per PCI_ERR_COR_STATUS */
@ -56,9 +60,9 @@ struct aer_stats {
/*
* Fields for all AER capable devices. They indicate the errors
* "as seen by this device". Note that this may mean that if an
* end point is causing problems, the AER counters may increment
* at its link partner (e.g. root port) because the errors will be
* "seen" by the link partner and not the problematic end point
* Endpoint is causing problems, the AER counters may increment
* at its link partner (e.g. Root Port) because the errors will be
* "seen" by the link partner and not the problematic Endpoint
* itself (which may report all counters as 0 as it never saw any
* problems).
*/
@ -76,10 +80,10 @@ struct aer_stats {
u64 dev_total_nonfatal_errs;
/*
* Fields for Root ports & root complex event collectors only, these
* Fields for Root Ports & Root Complex Event Collectors only; these
* indicate the total number of ERR_COR, ERR_FATAL, and ERR_NONFATAL
* messages received by the root port / event collector, INCLUDING the
* ones that are generated internally (by the rootport itself)
* messages received by the Root Port / Event Collector, INCLUDING the
* ones that are generated internally (by the Root Port itself)
*/
u64 rootport_total_cor_errs;
u64 rootport_total_fatal_errs;
@ -138,7 +142,7 @@ static const char * const ecrc_policy_str[] = {
* enable_ecrc_checking - enable PCIe ECRC checking for a device
* @dev: the PCI device
*
* Returns 0 on success, or negative on failure.
* Return: 0 on success, or negative on failure.
*/
static int enable_ecrc_checking(struct pci_dev *dev)
{
@ -159,10 +163,10 @@ static int enable_ecrc_checking(struct pci_dev *dev)
}
/**
* disable_ecrc_checking - disables PCIe ECRC checking for a device
* disable_ecrc_checking - disable PCIe ECRC checking for a device
* @dev: the PCI device
*
* Returns 0 on success, or negative on failure.
* Return: 0 on success, or negative on failure.
*/
static int disable_ecrc_checking(struct pci_dev *dev)
{
@ -283,10 +287,10 @@ void pci_aer_clear_fatal_status(struct pci_dev *dev)
* pci_aer_raw_clear_status - Clear AER error registers.
* @dev: the PCI device
*
* Clearing AER error status registers unconditionally, regardless of
* Clear AER error status registers unconditionally, regardless of
* whether they're owned by firmware or the OS.
*
* Returns 0 on success, or negative on failure.
* Return: 0 on success, or negative on failure.
*/
int pci_aer_raw_clear_status(struct pci_dev *dev)
{
@ -378,8 +382,8 @@ void pci_aer_init(struct pci_dev *dev)
/*
* We save/restore PCI_ERR_UNCOR_MASK, PCI_ERR_UNCOR_SEVER,
* PCI_ERR_COR_MASK, and PCI_ERR_CAP. Root and Root Complex Event
* Collectors also implement PCI_ERR_ROOT_COMMAND (PCIe r5.0, sec
* 7.8.4).
* Collectors also implement PCI_ERR_ROOT_COMMAND (PCIe r6.0, sec
* 7.8.4.9).
*/
n = pcie_cap_has_rtctl(dev) ? 5 : 4;
pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_ERR, sizeof(u32) * n);
@ -686,7 +690,7 @@ static void __aer_print_error(struct pci_dev *dev,
if (!errmsg)
errmsg = "Unknown Error Bit";
pci_printk(level, dev, " [%2d] %-22s%s\n", i, errmsg,
aer_printk(level, dev, " [%2d] %-22s%s\n", i, errmsg,
info->first_error == i ? " (First)" : "");
}
pci_dev_aer_stats_incr(dev, info);
@ -709,11 +713,11 @@ void aer_print_error(struct pci_dev *dev, struct aer_err_info *info)
level = (info->severity == AER_CORRECTABLE) ? KERN_WARNING : KERN_ERR;
pci_printk(level, dev, "PCIe Bus Error: severity=%s, type=%s, (%s)\n",
aer_printk(level, dev, "PCIe Bus Error: severity=%s, type=%s, (%s)\n",
aer_error_severity_string[info->severity],
aer_error_layer[layer], aer_agent_string[agent]);
pci_printk(level, dev, " device [%04x:%04x] error status/mask=%08x/%08x\n",
aer_printk(level, dev, " device [%04x:%04x] error status/mask=%08x/%08x\n",
dev->vendor, dev->device, info->status, info->mask);
__aer_print_error(dev, info);
@ -825,8 +829,8 @@ static bool is_error_source(struct pci_dev *dev, struct aer_err_info *e_info)
u16 reg16;
/*
* When bus id is equal to 0, it might be a bad id
* reported by root port.
* When bus ID is equal to 0, it might be a bad ID
* reported by Root Port.
*/
if ((PCI_BUS_NUM(e_info->id) != 0) &&
!(dev->bus->bus_flags & PCI_BUS_FLAGS_NO_AERSID)) {
@ -834,15 +838,15 @@ static bool is_error_source(struct pci_dev *dev, struct aer_err_info *e_info)
if (e_info->id == pci_dev_id(dev))
return true;
/* Continue id comparing if there is no multiple error */
/* Continue ID comparing if there is no multiple error */
if (!e_info->multi_error_valid)
return false;
}
/*
* When either
* 1) bus id is equal to 0. Some ports might lose the bus
* id of error source id;
* 1) bus ID is equal to 0. Some ports might lose the bus
* ID of error source id;
* 2) bus flag PCI_BUS_FLAGS_NO_AERSID is set
* 3) There are multiple errors and prior ID comparing fails;
* We check AER status registers to find possible reporter.
@ -894,9 +898,9 @@ static int find_device_iter(struct pci_dev *dev, void *data)
/**
* find_source_device - search through device hierarchy for source device
* @parent: pointer to Root Port pci_dev data structure
* @e_info: including detailed error information such like id
* @e_info: including detailed error information such as ID
*
* Return true if found.
* Return: true if found.
*
* Invoked by DPC when error is detected at the Root Port.
* Caller of this function must set id, severity, and multi_error_valid of
@ -938,9 +942,9 @@ static bool find_source_device(struct pci_dev *parent,
/**
* pci_aer_unmask_internal_errors - unmask internal errors
* @dev: pointer to the pcie_dev data structure
* @dev: pointer to the pci_dev data structure
*
* Unmasks internal errors in the Uncorrectable and Correctable Error
* Unmask internal errors in the Uncorrectable and Correctable Error
* Mask registers.
*
* Note: AER must be enabled and supported by the device which must be
@ -1003,7 +1007,7 @@ static int cxl_rch_handle_error_iter(struct pci_dev *dev, void *data)
if (!is_cxl_mem_dev(dev) || !cxl_error_is_native(dev))
return 0;
/* protect dev->driver */
/* Protect dev->driver */
device_lock(&dev->dev);
err_handler = dev->driver ? dev->driver->err_handler : NULL;
@ -1195,10 +1199,10 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
/**
* aer_get_device_error_info - read error status from dev and store it to info
* @dev: pointer to the device expected to have a error record
* @dev: pointer to the device expected to have an error record
* @info: pointer to structure to store the error record
*
* Return 1 on success, 0 on error.
* Return: 1 on success, 0 on error.
*
* Note that @info is reused among all error devices. Clear fields properly.
*/
@ -1245,6 +1249,7 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG,
aer + PCI_ERR_PREFIX_LOG,
aer_tlp_log_len(dev, aercc),
aercc & PCI_ERR_CAP_TLP_LOG_FLIT,
&info->tlp);
}
}
@ -1256,7 +1261,7 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
{
int i;
/* Report all before handle them, not to lost records by reset etc. */
/* Report all before handling them, to not lose records by reset etc. */
for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
if (aer_get_device_error_info(e_info->dev[i], e_info))
aer_print_error(e_info->dev[i], e_info);
@ -1268,8 +1273,8 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
}
/**
* aer_isr_one_error - consume an error detected by root port
* @rpc: pointer to the root port which holds an error
* aer_isr_one_error - consume an error detected by Root Port
* @rpc: pointer to the Root Port which holds an error
* @e_src: pointer to an error source
*/
static void aer_isr_one_error(struct aer_rpc *rpc,
@ -1319,11 +1324,11 @@ static void aer_isr_one_error(struct aer_rpc *rpc,
}
/**
* aer_isr - consume errors detected by root port
* aer_isr - consume errors detected by Root Port
* @irq: IRQ assigned to Root Port
* @context: pointer to Root Port data structure
*
* Invoked, as DPC, when root port records new detected error
* Invoked, as DPC, when Root Port records new detected error
*/
static irqreturn_t aer_isr(int irq, void *context)
{
@ -1383,7 +1388,7 @@ static void aer_disable_irq(struct pci_dev *pdev)
int aer = pdev->aer_cap;
u32 reg32;
/* Disable Root's interrupt in response to error messages */
/* Disable Root Port's interrupt in response to error messages */
pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32);
@ -1583,9 +1588,9 @@ static struct pcie_port_service_driver aerdriver = {
};
/**
* pcie_aer_init - register AER root service driver
* pcie_aer_init - register AER service driver
*
* Invoked when AER root service driver is loaded.
* Invoked when AER service driver is loaded.
*/
int __init pcie_aer_init(void)
{

View File

@ -1270,16 +1270,16 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
parent_link = link->parent;
/*
* link->downstream is a pointer to the pci_dev of function 0. If
* we remove that function, the pci_dev is about to be deallocated,
* so we can't use link->downstream again. Free the link state to
* avoid this.
* Free the parent link state, no later than function 0 (i.e.
* link->downstream) being removed.
*
* If we're removing a non-0 function, it's possible we could
* retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends
* programming the same ASPM Control value for all functions of
* multi-function devices, so disable ASPM for all of them.
* Do not free the link state any earlier. If function 0 is a
* switch upstream port, this link state is parent_link to all
* subordinate ones.
*/
if (pdev != link->downstream)
goto out;
pcie_config_aspm_link(link, 0);
list_del(&link->sibling);
free_link_state(link);
@ -1290,6 +1290,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
pcie_config_aspm_path(parent_link);
}
out:
mutex_unlock(&aspm_lock);
up_read(&pci_bus_sem);
}

View File

@ -113,7 +113,7 @@ static u16 pcie_bwctrl_select_speed(struct pci_dev *port, enum pci_bus_speed spe
up_read(&pci_bus_sem);
}
if (!supported_speeds)
return PCI_EXP_LNKCAP2_SLS_2_5GB;
supported_speeds = PCI_EXP_LNKCAP2_SLS_2_5GB;
return pcie_supported_speeds2target_speed(supported_speeds & desired_speeds);
}
@ -294,6 +294,10 @@ static int pcie_bwnotif_probe(struct pcie_device *srv)
struct pci_dev *port = srv->port;
int ret;
/* Can happen if we run out of bus numbers during enumeration. */
if (!port->subordinate)
return -ENODEV;
struct pcie_bwctrl_data *data = devm_kzalloc(&srv->device,
sizeof(*data), GFP_KERNEL);
if (!data)

View File

@ -219,7 +219,9 @@ static void dpc_process_rp_pio_error(struct pci_dev *pdev)
goto clear_status;
pcie_read_tlp_log(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG,
cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG,
dpc_tlp_log_len(pdev), &tlp_log);
dpc_tlp_log_len(pdev),
pdev->subordinate->flit_mode,
&tlp_log);
pcie_print_tlp_log(pdev, &tlp_log, dev_fmt(""));
if (pdev->dpc_rp_log_size < PCIE_STD_NUM_TLP_HEADERLOG + 1)
@ -398,11 +400,21 @@ void pci_dpc_init(struct pci_dev *pdev)
/* Quirks may set dpc_rp_log_size if device or firmware is buggy */
if (!pdev->dpc_rp_log_size) {
u16 flags;
int ret;
ret = pcie_capability_read_word(pdev, PCI_EXP_FLAGS, &flags);
if (ret)
return;
pdev->dpc_rp_log_size =
FIELD_GET(PCI_EXP_DPC_RP_PIO_LOG_SIZE, cap);
if (FIELD_GET(PCI_EXP_FLAGS_FLIT, flags))
pdev->dpc_rp_log_size += FIELD_GET(PCI_EXP_DPC_RP_PIO_LOG_SIZE4,
cap) << 4;
if (pdev->dpc_rp_log_size < PCIE_STD_NUM_TLP_HEADERLOG ||
pdev->dpc_rp_log_size > PCIE_STD_NUM_TLP_HEADERLOG + 1 +
PCIE_STD_MAX_TLP_PREFIXLOG) {
pdev->dpc_rp_log_size > PCIE_STD_MAX_TLP_HEADERLOG + 1) {
pci_err(pdev, "RP PIO log size %u is invalid\n",
pdev->dpc_rp_log_size);
pdev->dpc_rp_log_size = 0;
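A worked reading of the new Flit-mode computation, with hypothetical capability values: if the PCI_EXP_DPC_RP_PIO_LOG_SIZE field reads 0x5 and PCI_EXP_DPC_RP_PIO_LOG_SIZE4 reads 0x1 on a link in Flit mode, the driver records 0x5 + (0x1 << 4) = 21 DWORDs of RP PIO log.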

View File

@ -228,10 +228,12 @@ static int get_port_device_capability(struct pci_dev *dev)
/*
* Disable hot-plug interrupts in case they have been enabled
* by the BIOS and the hot-plug service driver is not loaded.
* by the BIOS and the hot-plug service driver won't be loaded
* to handle them.
*/
pcie_capability_clear_word(dev, PCI_EXP_SLTCTL,
PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE);
if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
pcie_capability_clear_word(dev, PCI_EXP_SLTCTL,
PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE);
}
#ifdef CONFIG_PCIEAER

View File

@ -7,6 +7,7 @@
#include <linux/aer.h>
#include <linux/array_size.h>
#include <linux/bitfield.h>
#include <linux/pci.h>
#include <linux/string.h>
@ -21,6 +22,9 @@
*/
unsigned int aer_tlp_log_len(struct pci_dev *dev, u32 aercc)
{
if (aercc & PCI_ERR_CAP_TLP_LOG_FLIT)
return FIELD_GET(PCI_ERR_CAP_TLP_LOG_SIZE, aercc);
return PCIE_STD_NUM_TLP_HEADERLOG +
((aercc & PCI_ERR_CAP_PREFIX_LOG_PRESENT) ?
dev->eetlp_prefix_max : 0);
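A worked reading, with hypothetical AERCC values: with PCI_ERR_CAP_TLP_LOG_FLIT set and a PCI_ERR_CAP_TLP_LOG_SIZE field of 5, the function now reports 5 DWORDs taken straight from the capability; with the Flit bit clear it keeps the classic 4 header DWORDs (PCIE_STD_NUM_TLP_HEADERLOG) plus dev->eetlp_prefix_max prefix DWORDs when PCI_ERR_CAP_PREFIX_LOG_PRESENT is set.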
@ -49,6 +53,7 @@ unsigned int dpc_tlp_log_len(struct pci_dev *dev)
* @where: PCI Config offset of TLP Header Log
* @where2: PCI Config offset of TLP Prefix Log
* @tlp_len: TLP Log length (Header Log + TLP Prefix Log in DWORDs)
* @flit: TLP Logged in Flit mode
* @log: TLP Log structure to fill
*
* Fill @log from TLP Header Log registers, e.g., AER or DPC.
@ -56,28 +61,34 @@ unsigned int dpc_tlp_log_len(struct pci_dev *dev)
* Return: 0 on success and filled TLP Log structure, <0 on error.
*/
int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
unsigned int tlp_len, struct pcie_tlp_log *log)
unsigned int tlp_len, bool flit, struct pcie_tlp_log *log)
{
unsigned int i;
int off, ret;
u32 *to;
if (tlp_len > ARRAY_SIZE(log->dw))
tlp_len = ARRAY_SIZE(log->dw);
memset(log, 0, sizeof(*log));
for (i = 0; i < tlp_len; i++) {
if (i < PCIE_STD_NUM_TLP_HEADERLOG) {
if (i < PCIE_STD_NUM_TLP_HEADERLOG)
off = where + i * 4;
to = &log->dw[i];
} else {
else
off = where2 + (i - PCIE_STD_NUM_TLP_HEADERLOG) * 4;
to = &log->prefix[i - PCIE_STD_NUM_TLP_HEADERLOG];
}
ret = pci_read_config_dword(dev, off, to);
ret = pci_read_config_dword(dev, off, &log->dw[i]);
if (ret)
return pcibios_err_to_errno(ret);
}
/*
* Hard-code non-Flit mode to 4 DWORDs, for now. The exact length
* can only be known if the TLP is parsed.
*/
log->header_len = flit ? tlp_len : 4;
log->flit = flit;
return 0;
}
@ -94,22 +105,31 @@ int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
void pcie_print_tlp_log(const struct pci_dev *dev,
const struct pcie_tlp_log *log, const char *pfx)
{
char buf[11 * (PCIE_STD_NUM_TLP_HEADERLOG + ARRAY_SIZE(log->prefix)) +
sizeof(EE_PREFIX_STR)];
/* EE_PREFIX_STR fits the extended DW space needed for the Flit mode */
char buf[11 * PCIE_STD_MAX_TLP_HEADERLOG + 1];
unsigned int i;
int len;
len = scnprintf(buf, sizeof(buf), "%#010x %#010x %#010x %#010x",
log->dw[0], log->dw[1], log->dw[2], log->dw[3]);
if (log->prefix[0])
len += scnprintf(buf + len, sizeof(buf) - len, EE_PREFIX_STR);
for (i = 0; i < ARRAY_SIZE(log->prefix); i++) {
if (!log->prefix[i])
break;
len += scnprintf(buf + len, sizeof(buf) - len,
" %#010x", log->prefix[i]);
if (log->flit) {
for (i = PCIE_STD_NUM_TLP_HEADERLOG; i < log->header_len; i++) {
len += scnprintf(buf + len, sizeof(buf) - len,
" %#010x", log->dw[i]);
}
} else {
if (log->prefix[0])
len += scnprintf(buf + len, sizeof(buf) - len,
EE_PREFIX_STR);
for (i = 0; i < ARRAY_SIZE(log->prefix); i++) {
if (!log->prefix[i])
break;
len += scnprintf(buf + len, sizeof(buf) - len,
" %#010x", log->prefix[i]);
}
}
pci_err(dev, "%sTLP Header: %s\n", pfx, buf);
pci_err(dev, "%sTLP Header%s: %s\n", pfx,
log->flit ? " (Flit)" : "", buf);
}
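Sizing note on the buffer above: each DWORD renders as at most 11 bytes (a separating space plus the 10 characters of "%#010x"), so 11 * PCIE_STD_MAX_TLP_HEADERLOG plus 1 for the terminating NUL covers the worst Flit-mode case, and, per the in-code comment, the slack also absorbs EE_PREFIX_STR in the non-Flit path.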

View File

@ -9,6 +9,8 @@
#include <linux/pci.h>
#include <linux/msi.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pci_hotplug.h>
#include <linux/slab.h>
#include <linux/module.h>
@ -789,10 +791,11 @@ EXPORT_SYMBOL_GPL(pci_speed_string);
void pcie_update_link_speed(struct pci_bus *bus)
{
struct pci_dev *bridge = bus->self;
u16 linksta;
u16 linksta, linksta2;
pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &linksta);
__pcie_update_link_speed(bus, linksta);
pcie_capability_read_word(bridge, PCI_EXP_LNKSTA2, &linksta2);
__pcie_update_link_speed(bus, linksta, linksta2);
}
EXPORT_SYMBOL_GPL(pcie_update_link_speed);
@ -954,6 +957,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
resource_size_t offset, next_offset;
LIST_HEAD(resources);
struct resource *res, *next_res;
bool bus_registered = false;
char addr[64], *fmt;
const char *name;
int err;
@ -996,10 +1000,9 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
/* Temporarily move resources off the list */
list_splice_init(&bridge->windows, &resources);
err = device_add(&bridge->dev);
if (err) {
put_device(&bridge->dev);
if (err)
goto free;
}
bus->bridge = get_device(&bridge->dev);
device_enable_async_suspend(bus->bridge);
pci_set_bus_of_node(bus);
@ -1018,6 +1021,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
name = dev_name(&bus->dev);
err = device_register(&bus->dev);
bus_registered = true;
if (err)
goto unregister;
@ -1095,6 +1099,8 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
dev_info(&bus->dev, "root bus resource %pR%s\n", res, addr);
}
of_pci_make_host_bridge_node(bridge);
down_write(&pci_bus_sem);
list_add_tail(&bus->node, &pci_root_buses);
up_write(&pci_bus_sem);
@ -1104,12 +1110,15 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
unregister:
put_device(&bridge->dev);
device_del(&bridge->dev);
free:
#ifdef CONFIG_PCI_DOMAINS_GENERIC
pci_bus_release_domain_nr(parent, bus->domain_nr);
#endif
kfree(bus);
if (bus_registered)
put_device(&bus->dev);
else
kfree(bus);
return err;
}
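The bus_registered flag encodes a general driver-core rule: once device_register() has been called, the embedded kobject owns the allocation, so the only valid cleanup is put_device(), even when device_register() itself returned an error; the release() callback then does the freeing. A generic sketch of the rule, with hypothetical foo names:

	struct foo {
		struct device dev;
	};

	static void foo_release(struct device *dev)
	{
		kfree(container_of(dev, struct foo, dev));
	}

	static int foo_add(struct foo *f)
	{
		int err;

		f->dev.release = foo_release;
		err = device_register(&f->dev);
		if (err) {
			/* Never kfree(f) here: the kobject core holds the
			 * reference and foo_release() will free it. */
			put_device(&f->dev);
			return err;
		}
		return 0;
	}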
@ -1218,7 +1227,10 @@ static struct pci_bus *pci_alloc_child_bus(struct pci_bus *parent,
add_dev:
pci_set_bus_msi_domain(child);
ret = device_register(&child->dev);
WARN_ON(ret < 0);
if (WARN_ON(ret < 0)) {
put_device(&child->dev);
return NULL;
}
pcibios_add_bus(child);
@ -1374,8 +1386,6 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
pci_write_config_word(dev, PCI_BRIDGE_CONTROL,
bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
pci_enable_rrs_sv(dev);
if ((secondary || subordinate) && !pcibios_assign_all_busses() &&
!is_cardbus && !broken) {
unsigned int cmax, buses;
@ -1616,6 +1626,11 @@ void set_pcie_port_type(struct pci_dev *pdev)
pdev->pcie_cap = pos;
pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, &reg16);
pdev->pcie_flags_reg = reg16;
type = pci_pcie_type(pdev);
if (type == PCI_EXP_TYPE_ROOT_PORT)
pci_enable_rrs_sv(pdev);
pci_read_config_dword(pdev, pos + PCI_EXP_DEVCAP, &pdev->devcap);
pdev->pcie_mpss = FIELD_GET(PCI_EXP_DEVCAP_PAYLOAD, pdev->devcap);
@ -1632,7 +1647,6 @@ void set_pcie_port_type(struct pci_dev *pdev)
* correctly so detect impossible configurations here and correct
* the port type accordingly.
*/
type = pci_pcie_type(pdev);
if (type == PCI_EXP_TYPE_DOWNSTREAM) {
/*
* If pdev claims to be downstream port but the parent
@ -2494,6 +2508,36 @@ bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
}
EXPORT_SYMBOL(pci_bus_read_dev_vendor_id);
static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, int devfn)
{
struct pci_host_bridge *host = pci_find_host_bridge(bus);
struct platform_device *pdev;
struct device_node *np;
np = of_pci_find_child_device(dev_of_node(&bus->dev), devfn);
if (!np || of_find_device_by_node(np))
return NULL;
/*
* First check whether the pwrctrl device really needs to be created or
* not. This is decided based on at least one of the power supplies
* being defined in the devicetree node of the device.
*/
if (!of_pci_supply_present(np)) {
pr_debug("PCI/pwrctrl: Skipping OF node: %s\n", np->name);
return NULL;
}
/* Now create the pwrctrl device */
pdev = of_platform_device_create(np, NULL, &host->dev);
if (!pdev) {
pr_err("PCI/pwrctrl: Failed to create pwrctrl device for node: %s\n", np->name);
return NULL;
}
return pdev;
}
/*
* Read the config data for a PCI device, sanity-check it,
* and fill in the dev structure.
@ -2503,6 +2547,15 @@ static struct pci_dev *pci_scan_device(struct pci_bus *bus, int devfn)
struct pci_dev *dev;
u32 l;
/*
* Create pwrctrl device (if required) for the PCI device to handle the
* power state. If the pwrctrl device is created, then skip scanning
* further as the pwrctrl core will rescan the bus after powering on
* the device.
*/
if (pci_pwrctrl_create_device(bus, devfn))
return NULL;
if (!pci_bus_read_dev_vendor_id(bus, devfn, &l, 60*1000))
return NULL;
@ -2565,6 +2618,7 @@ static void pci_init_capabilities(struct pci_dev *dev)
pci_rcec_init(dev); /* Root Complex Event Collector */
pci_doe_init(dev); /* Data Object Exchange */
pci_tph_init(dev); /* TLP Processing Hints */
pci_rebar_init(dev); /* Resizable BAR */
pcie_report_downtraining(dev);
pci_init_reset_methods(dev);
@ -2662,6 +2716,8 @@ void pci_device_add(struct pci_dev *dev, struct pci_bus *bus)
WARN_ON(ret < 0);
pci_npem_create(dev);
pci_doe_sysfs_init(dev);
}
struct pci_dev *pci_scan_single_device(struct pci_bus *bus, int devfn)

View File

@ -251,6 +251,10 @@ static int proc_bus_pci_mmap(struct file *file, struct vm_area_struct *vma)
security_locked_down(LOCKDOWN_PCI_ACCESS))
return -EPERM;
/* Skip devices with non-mappable BARs */
if (dev->non_mappable_bars)
return -EINVAL;
if (fpriv->mmap_state == pci_mmap_io) {
if (!arch_can_pci_mmap_io())
return -EINVAL;

View File

@ -10,3 +10,14 @@ config PCI_PWRCTL_PWRSEQ
tristate
select POWER_SEQUENCING
select PCI_PWRCTL
config PCI_PWRCTL_SLOT
tristate "PCI Power Control driver for PCI slots"
select PCI_PWRCTL
help
Say Y here to enable the PCI Power Control driver to control the power
state of PCI slots.
This is a generic driver that controls the power state of different
PCI slots. The voltage regulators powering the rails of the PCI slots
are expected to be defined in the devicetree node of the PCI bridge.
