author    | Linus Torvalds <torvalds@linux-foundation.org> | 2018-08-16 00:04:25 +0200
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2018-08-16 00:04:25 +0200
commit    | 9a76aba02a37718242d7cdc294f0a3901928aa57 (patch)
tree      | 2040d038f85d2120f21af83b0793efd5af1864e3 /drivers/net/ethernet/stmicro/stmmac
parent    | x86: i8259: Add missing include file (diff)
parent    | bpf: test: fix spelling mistake "REUSEEPORT" -> "REUSEPORT" (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
"Highlights:
- Gustavo A. R. Silva keeps working on the implicit switch fallthru
changes.
- Support 802.11ax High-Efficiency wireless in cfg80211 et al. From
Luca Coelho.
- Re-enable ASPM in r8169, from Kai-Heng Feng.
- Add virtual XFRM interfaces, which avoids all of the limitations of
existing IPSEC tunnels. From Steffen Klassert.
- Convert GRO over to use a hash table, so that when we have many
flows active we don't traverse a long list during accumulation.
- Many new self tests for routing, TC, tunnels, etc. Too many
contributors to mention them all, but I'm really happy to keep
seeing this stuff.
- Hardware timestamping support for dpaa_eth/fsl-fman from Yangbo Lu.
- Lots of cleanups and fixes in L2TP code from Guillaume Nault.
- Add IPSEC offload support to netdevsim, from Shannon Nelson.
- Add support for slotting with non-uniform distribution to netem
packet scheduler, from Yousuk Seung.
- Add UDP GSO support to mlx5e, from Boris Pismenny.
- Support offloading of Team LAG in NFP, from John Hurley.
- Allow configuring TX queue selection based upon RX queue, from
Amritha Nambiar.
- Support ethtool ring size configuration in aquantia, from Anton
Mikaev.
- Support DSCP and flowlabel per-transport in SCTP, from Xin Long.
- Support list based batching and stack traversal of SKBs; this is
very exciting work. From Edward Cree.
- Busyloop optimizations in vhost_net, from Toshiaki Makita.
- Introduce the ETF qdisc, which allows time-based transmissions. IGB
can offload this in hardware. From Vinicius Costa Gomes (a usage sketch follows this list).
- Add parameter support to devlink, from Moshe Shemesh.
- Several multiplication and division optimizations for BPF JIT in
nfp driver, from Jiong Wang.
- Lots of preparatory work to make more of the packet scheduler layer
lockless, when possible, from Vlad Buslov.
- Add ACK filter and NAT awareness to sch_cake packet scheduler, from
Toke Høiland-Jørgensen.
- Support regions and region snapshots in devlink, from Alex Vesker.
- Allow attaching XDP programs to both HW and SW at the same time on
a given device, with initial support in nfp. From Jakub Kicinski.
- Add TLS RX offload and support in mlx5, from Ilya Lesokhin.
- Use PHYLIB in r8169 driver, from Heiner Kallweit.
- All sorts of changes to support Spectrum 2 in mlxsw driver, from
Ido Schimmel.
- PTP support in mv88e6xxx DSA driver, from Andrew Lunn.
- Make TCP_USER_TIMEOUT socket option more accurate, from Jon
Maxwell.
- Support for templates in packet scheduler classifier, from Jiri
Pirko.
- IPV6 support in RDS, from Ka-Cheong Poon.
- Native tproxy support in nf_tables, from Máté Eckl.
- Maintain IP fragment queue in an rbtree, but optimize properly for
in-order frags. From Peter Oskolkov.
- Improve handling of ACKs on hole repairs, from Yuchung Cheng"
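The ETF qdisc item above pairs with a per-packet launch-time API on the transmit socket. The sketch below is illustrative only and assumes the SO_TXTIME/SCM_TXTIME socket option and struct sock_txtime from <linux/net_tstamp.h> that accompany the ETF qdisc; names and constants should be checked against the installed kernel headers rather than taken from this merge log.

#include <linux/net_tstamp.h>   /* struct sock_txtime, SO_TXTIME (assumed) */
#include <stdint.h>
#include <string.h>
#include <time.h>               /* CLOCK_TAI */
#include <sys/socket.h>
#include <sys/uio.h>

/* Opt a (connected) socket into per-packet launch times on CLOCK_TAI. */
static int enable_txtime(int fd)
{
	struct sock_txtime cfg = { .clockid = CLOCK_TAI, .flags = 0 };

	return setsockopt(fd, SOL_SOCKET, SO_TXTIME, &cfg, sizeof(cfg));
}

/* Queue one datagram carrying an absolute transmit time in nanoseconds
 * (CLOCK_TAI); an ETF qdisc on the egress queue releases it at that time.
 */
static ssize_t send_at(int fd, const void *buf, size_t len, uint64_t txtime_ns)
{
	char control[CMSG_SPACE(sizeof(txtime_ns))] = { 0 };
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = control, .msg_controllen = sizeof(control),
	};
	struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

	cm->cmsg_level = SOL_SOCKET;
	cm->cmsg_type = SCM_TXTIME;
	cm->cmsg_len = CMSG_LEN(sizeof(txtime_ns));
	memcpy(CMSG_DATA(cm), &txtime_ns, sizeof(txtime_ns));

	return sendmsg(fd, &msg, 0);
}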
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1996 commits)
bpf: test: fix spelling mistake "REUSEEPORT" -> "REUSEPORT"
hv/netvsc: Fix NULL dereference at single queue mode fallback
net: filter: mark expected switch fall-through
xen-netfront: fix warn message as irq device name has '/'
cxgb4: Add new T5 PCI device ids 0x50af and 0x50b0
net: dsa: mv88e6xxx: missing unlock on error path
rds: fix building with IPV6=m
inet/connection_sock: prefer _THIS_IP_ to current_text_addr
net: dsa: mv88e6xxx: bitwise vs logical bug
net: sock_diag: Fix spectre v1 gadget in __sock_diag_cmd()
ieee802154: hwsim: using right kind of iteration
net: hns3: Add vlan filter setting by ethtool command -K
net: hns3: Set tx ring' tc info when netdev is up
net: hns3: Remove tx ring BD len register in hns3_enet
net: hns3: Fix desc num set to default when setting channel
net: hns3: Fix for phy link issue when using marvell phy driver
net: hns3: Fix for information of phydev lost problem when down/up
net: hns3: Fix for command format parsing error in hclge_is_all_function_id_zero
net: hns3: Add support for serdes loopback selftest
bnxt_en: take coredump_record structure off stack
...
Diffstat (limited to 'drivers/net/ethernet/stmicro/stmmac')
18 files changed, 1682 insertions, 37 deletions
diff --git a/drivers/net/ethernet/stmicro/stmmac/Makefile b/drivers/net/ethernet/stmicro/stmmac/Makefile
index 68e9e2640c62..99967a80a8c8 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Makefile
+++ b/drivers/net/ethernet/stmicro/stmmac/Makefile
@@ -5,7 +5,8 @@ stmmac-objs:= stmmac_main.o stmmac_ethtool.o stmmac_mdio.o ring_mode.o \
	      dwmac100_core.o dwmac100_dma.o enh_desc.o norm_desc.o \
	      mmc_core.o stmmac_hwtstamp.o stmmac_ptp.o dwmac4_descs.o \
	      dwmac4_dma.o dwmac4_lib.o dwmac4_core.o dwmac5.o hwif.o \
-	      stmmac_tc.o $(stmmac-y)
+	      stmmac_tc.o dwxgmac2_core.o dwxgmac2_dma.o dwxgmac2_descs.o \
+	      $(stmmac-y)

 # Ordering matters. Generic driver must be last.
 obj-$(CONFIG_STMMAC_PLATFORM) += stmmac-platform.o

diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
index 78fd0f8b8e81..1854f270ad66 100644
--- a/drivers/net/ethernet/stmicro/stmmac/common.h
+++ b/drivers/net/ethernet/stmicro/stmmac/common.h
@@ -36,12 +36,14 @@
 #include "mmc.h"

 /* Synopsys Core versions */
-#define DWMAC_CORE_3_40 0x34
-#define DWMAC_CORE_3_50 0x35
-#define DWMAC_CORE_4_00 0x40
-#define DWMAC_CORE_4_10 0x41
-#define DWMAC_CORE_5_00 0x50
-#define DWMAC_CORE_5_10 0x51
+#define DWMAC_CORE_3_40 0x34
+#define DWMAC_CORE_3_50 0x35
+#define DWMAC_CORE_4_00 0x40
+#define DWMAC_CORE_4_10 0x41
+#define DWMAC_CORE_5_00 0x50
+#define DWMAC_CORE_5_10 0x51
+#define DWXGMAC_CORE_2_10 0x21
+
 #define STMMAC_CHAN0 0 /* Always supported and default for all chips */

 /* These need to be power of two, and >= 4 */
@@ -398,6 +400,8 @@ struct mac_link {
	u32 speed10;
	u32 speed100;
	u32 speed1000;
+	u32 speed2500;
+	u32 speed10000;
	u32 duplex;
 };

@@ -439,6 +443,7 @@ struct stmmac_rx_routing {
 int dwmac100_setup(struct stmmac_priv *priv);
 int dwmac1000_setup(struct stmmac_priv *priv);
 int dwmac4_setup(struct stmmac_priv *priv);
+int dwxgmac2_setup(struct stmmac_priv *priv);

 void stmmac_set_mac_addr(void __iomem *ioaddr, u8 addr[6],
			 unsigned int high, unsigned int low);

diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
index 3304095c934c..fad503820e04 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
@@ -78,6 +78,8 @@ static const struct of_device_id dwmac_generic_match[] = {
	{ .compatible = "snps,dwmac-4.00"},
	{ .compatible = "snps,dwmac-4.10a"},
	{ .compatible = "snps,dwmac"},
+	{ .compatible = "snps,dwxgmac-2.10"},
+	{ .compatible = "snps,dwxgmac"},
	{ }
 };
 MODULE_DEVICE_TABLE(of, dwmac_generic_match);

diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
index f08625a02cea..7b923362ee55 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
@@ -61,6 +61,7 @@ struct rk_priv_data {
	struct clk *mac_clk_tx;
	struct clk *clk_mac_ref;
	struct clk *clk_mac_refout;
+	struct clk *clk_mac_speed;
	struct clk *aclk_mac;
	struct clk *pclk_mac;
	struct clk *clk_phy;
@@ -83,6 +84,64 @@ struct rk_priv_data {
	 (((tx) ? soc##_GMAC_TXCLK_DLY_ENABLE : soc##_GMAC_TXCLK_DLY_DISABLE) | \
	  ((rx) ?
soc##_GMAC_RXCLK_DLY_ENABLE : soc##_GMAC_RXCLK_DLY_DISABLE)) +#define PX30_GRF_GMAC_CON1 0x0904 + +/* PX30_GRF_GMAC_CON1 */ +#define PX30_GMAC_PHY_INTF_SEL_RMII (GRF_CLR_BIT(4) | GRF_CLR_BIT(5) | \ + GRF_BIT(6)) +#define PX30_GMAC_SPEED_10M GRF_CLR_BIT(2) +#define PX30_GMAC_SPEED_100M GRF_BIT(2) + +static void px30_set_to_rmii(struct rk_priv_data *bsp_priv) +{ + struct device *dev = &bsp_priv->pdev->dev; + + if (IS_ERR(bsp_priv->grf)) { + dev_err(dev, "%s: Missing rockchip,grf property\n", __func__); + return; + } + + regmap_write(bsp_priv->grf, PX30_GRF_GMAC_CON1, + PX30_GMAC_PHY_INTF_SEL_RMII); +} + +static void px30_set_rmii_speed(struct rk_priv_data *bsp_priv, int speed) +{ + struct device *dev = &bsp_priv->pdev->dev; + int ret; + + if (IS_ERR(bsp_priv->clk_mac_speed)) { + dev_err(dev, "%s: Missing clk_mac_speed clock\n", __func__); + return; + } + + if (speed == 10) { + regmap_write(bsp_priv->grf, PX30_GRF_GMAC_CON1, + PX30_GMAC_SPEED_10M); + + ret = clk_set_rate(bsp_priv->clk_mac_speed, 2500000); + if (ret) + dev_err(dev, "%s: set clk_mac_speed rate 2500000 failed: %d\n", + __func__, ret); + } else if (speed == 100) { + regmap_write(bsp_priv->grf, PX30_GRF_GMAC_CON1, + PX30_GMAC_SPEED_100M); + + ret = clk_set_rate(bsp_priv->clk_mac_speed, 25000000); + if (ret) + dev_err(dev, "%s: set clk_mac_speed rate 25000000 failed: %d\n", + __func__, ret); + + } else { + dev_err(dev, "unknown speed value for RMII! speed=%d", speed); + } +} + +static const struct rk_gmac_ops px30_ops = { + .set_to_rmii = px30_set_to_rmii, + .set_rmii_speed = px30_set_rmii_speed, +}; + #define RK3128_GRF_MAC_CON0 0x0168 #define RK3128_GRF_MAC_CON1 0x016c @@ -1042,6 +1101,10 @@ static int rk_gmac_clk_init(struct plat_stmmacenet_data *plat) } } + bsp_priv->clk_mac_speed = devm_clk_get(dev, "clk_mac_speed"); + if (IS_ERR(bsp_priv->clk_mac_speed)) + dev_err(dev, "cannot get clock %s\n", "clk_mac_speed"); + if (bsp_priv->clock_input) { dev_info(dev, "clock input from PHY\n"); } else { @@ -1094,6 +1157,9 @@ static int gmac_clk_enable(struct rk_priv_data *bsp_priv, bool enable) if (!IS_ERR(bsp_priv->mac_clk_tx)) clk_prepare_enable(bsp_priv->mac_clk_tx); + if (!IS_ERR(bsp_priv->clk_mac_speed)) + clk_prepare_enable(bsp_priv->clk_mac_speed); + /** * if (!IS_ERR(bsp_priv->clk_mac)) * clk_prepare_enable(bsp_priv->clk_mac); @@ -1118,6 +1184,8 @@ static int gmac_clk_enable(struct rk_priv_data *bsp_priv, bool enable) clk_disable_unprepare(bsp_priv->pclk_mac); clk_disable_unprepare(bsp_priv->mac_clk_tx); + + clk_disable_unprepare(bsp_priv->clk_mac_speed); /** * if (!IS_ERR(bsp_priv->clk_mac)) * clk_disable_unprepare(bsp_priv->clk_mac); @@ -1414,6 +1482,7 @@ static int rk_gmac_resume(struct device *dev) static SIMPLE_DEV_PM_OPS(rk_gmac_pm_ops, rk_gmac_suspend, rk_gmac_resume); static const struct of_device_id rk_gmac_dwmac_match[] = { + { .compatible = "rockchip,px30-gmac", .data = &px30_ops }, { .compatible = "rockchip,rk3128-gmac", .data = &rk3128_ops }, { .compatible = "rockchip,rk3228-gmac", .data = &rk3228_ops }, { .compatible = "rockchip,rk3288-gmac", .data = &rk3288_ops }, diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c index 65bc3556bd8f..edb6053bd980 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c @@ -407,6 +407,19 @@ static void dwmac4_enable_tso(void __iomem *ioaddr, bool en, u32 chan) } } +static void dwmac4_qmode(void __iomem *ioaddr, u32 channel, u8 qmode) +{ + u32 mtl_tx_op = 
readl(ioaddr + MTL_CHAN_TX_OP_MODE(channel)); + + mtl_tx_op &= ~MTL_OP_MODE_TXQEN_MASK; + if (qmode != MTL_QUEUE_AVB) + mtl_tx_op |= MTL_OP_MODE_TXQEN; + else + mtl_tx_op |= MTL_OP_MODE_TXQEN_AV; + + writel(mtl_tx_op, ioaddr + MTL_CHAN_TX_OP_MODE(channel)); +} + static void dwmac4_set_bfsize(void __iomem *ioaddr, int bfsize, u32 chan) { u32 value = readl(ioaddr + DMA_CHAN_RX_CONTROL(chan)); @@ -441,6 +454,7 @@ const struct stmmac_dma_ops dwmac4_dma_ops = { .set_rx_tail_ptr = dwmac4_set_rx_tail_ptr, .set_tx_tail_ptr = dwmac4_set_tx_tail_ptr, .enable_tso = dwmac4_enable_tso, + .qmode = dwmac4_qmode, .set_bfsize = dwmac4_set_bfsize, }; @@ -468,5 +482,6 @@ const struct stmmac_dma_ops dwmac410_dma_ops = { .set_rx_tail_ptr = dwmac4_set_rx_tail_ptr, .set_tx_tail_ptr = dwmac4_set_tx_tail_ptr, .enable_tso = dwmac4_enable_tso, + .qmode = dwmac4_qmode, .set_bfsize = dwmac4_set_bfsize, }; diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h new file mode 100644 index 000000000000..0a80fa25afe3 --- /dev/null +++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h @@ -0,0 +1,228 @@ +// SPDX-License-Identifier: (GPL-2.0 OR MIT) +/* + * Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. + * stmmac XGMAC definitions. + */ + +#ifndef __STMMAC_DWXGMAC2_H__ +#define __STMMAC_DWXGMAC2_H__ + +#include "common.h" + +/* Misc */ +#define XGMAC_JUMBO_LEN 16368 + +/* MAC Registers */ +#define XGMAC_TX_CONFIG 0x00000000 +#define XGMAC_CONFIG_SS_OFF 29 +#define XGMAC_CONFIG_SS_MASK GENMASK(30, 29) +#define XGMAC_CONFIG_SS_10000 (0x0 << XGMAC_CONFIG_SS_OFF) +#define XGMAC_CONFIG_SS_2500 (0x2 << XGMAC_CONFIG_SS_OFF) +#define XGMAC_CONFIG_SS_1000 (0x3 << XGMAC_CONFIG_SS_OFF) +#define XGMAC_CONFIG_SARC GENMASK(22, 20) +#define XGMAC_CONFIG_SARC_SHIFT 20 +#define XGMAC_CONFIG_JD BIT(16) +#define XGMAC_CONFIG_TE BIT(0) +#define XGMAC_CORE_INIT_TX (XGMAC_CONFIG_JD) +#define XGMAC_RX_CONFIG 0x00000004 +#define XGMAC_CONFIG_ARPEN BIT(31) +#define XGMAC_CONFIG_GPSL GENMASK(29, 16) +#define XGMAC_CONFIG_GPSL_SHIFT 16 +#define XGMAC_CONFIG_S2KP BIT(11) +#define XGMAC_CONFIG_IPC BIT(9) +#define XGMAC_CONFIG_JE BIT(8) +#define XGMAC_CONFIG_WD BIT(7) +#define XGMAC_CONFIG_GPSLCE BIT(6) +#define XGMAC_CONFIG_CST BIT(2) +#define XGMAC_CONFIG_ACS BIT(1) +#define XGMAC_CONFIG_RE BIT(0) +#define XGMAC_CORE_INIT_RX 0 +#define XGMAC_PACKET_FILTER 0x00000008 +#define XGMAC_FILTER_RA BIT(31) +#define XGMAC_FILTER_PM BIT(4) +#define XGMAC_FILTER_HMC BIT(2) +#define XGMAC_FILTER_PR BIT(0) +#define XGMAC_HASH_TABLE(x) (0x00000010 + (x) * 4) +#define XGMAC_RXQ_CTRL0 0x000000a0 +#define XGMAC_RXQEN(x) GENMASK((x) * 2 + 1, (x) * 2) +#define XGMAC_RXQEN_SHIFT(x) ((x) * 2) +#define XGMAC_RXQ_CTRL2 0x000000a8 +#define XGMAC_RXQ_CTRL3 0x000000ac +#define XGMAC_PSRQ(x) GENMASK((x) * 8 + 7, (x) * 8) +#define XGMAC_PSRQ_SHIFT(x) ((x) * 8) +#define XGMAC_INT_STATUS 0x000000b0 +#define XGMAC_PMTIS BIT(4) +#define XGMAC_INT_EN 0x000000b4 +#define XGMAC_TSIE BIT(12) +#define XGMAC_LPIIE BIT(5) +#define XGMAC_PMTIE BIT(4) +#define XGMAC_INT_DEFAULT_EN (XGMAC_LPIIE | XGMAC_PMTIE | XGMAC_TSIE) +#define XGMAC_Qx_TX_FLOW_CTRL(x) (0x00000070 + (x) * 4) +#define XGMAC_PT GENMASK(31, 16) +#define XGMAC_PT_SHIFT 16 +#define XGMAC_TFE BIT(1) +#define XGMAC_RX_FLOW_CTRL 0x00000090 +#define XGMAC_RFE BIT(0) +#define XGMAC_PMT 0x000000c0 +#define XGMAC_GLBLUCAST BIT(9) +#define XGMAC_RWKPKTEN BIT(2) +#define XGMAC_MGKPKTEN BIT(1) +#define XGMAC_PWRDWN BIT(0) +#define XGMAC_HW_FEATURE0 0x0000011c +#define 
XGMAC_HWFEAT_SAVLANINS BIT(27) +#define XGMAC_HWFEAT_RXCOESEL BIT(16) +#define XGMAC_HWFEAT_TXCOESEL BIT(14) +#define XGMAC_HWFEAT_TSSEL BIT(12) +#define XGMAC_HWFEAT_AVSEL BIT(11) +#define XGMAC_HWFEAT_RAVSEL BIT(10) +#define XGMAC_HWFEAT_ARPOFFSEL BIT(9) +#define XGMAC_HWFEAT_MGKSEL BIT(7) +#define XGMAC_HWFEAT_RWKSEL BIT(6) +#define XGMAC_HWFEAT_GMIISEL BIT(1) +#define XGMAC_HW_FEATURE1 0x00000120 +#define XGMAC_HWFEAT_TSOEN BIT(18) +#define XGMAC_HWFEAT_TXFIFOSIZE GENMASK(10, 6) +#define XGMAC_HWFEAT_RXFIFOSIZE GENMASK(4, 0) +#define XGMAC_HW_FEATURE2 0x00000124 +#define XGMAC_HWFEAT_PPSOUTNUM GENMASK(26, 24) +#define XGMAC_HWFEAT_TXCHCNT GENMASK(21, 18) +#define XGMAC_HWFEAT_RXCHCNT GENMASK(15, 12) +#define XGMAC_HWFEAT_TXQCNT GENMASK(9, 6) +#define XGMAC_HWFEAT_RXQCNT GENMASK(3, 0) +#define XGMAC_MDIO_ADDR 0x00000200 +#define XGMAC_MDIO_DATA 0x00000204 +#define XGMAC_MDIO_C22P 0x00000220 +#define XGMAC_ADDR0_HIGH 0x00000300 +#define XGMAC_AE BIT(31) +#define XGMAC_DCS GENMASK(19, 16) +#define XGMAC_DCS_SHIFT 16 +#define XGMAC_ADDR0_LOW 0x00000304 +#define XGMAC_ARP_ADDR 0x00000c10 +#define XGMAC_TIMESTAMP_STATUS 0x00000d20 +#define XGMAC_TXTSC BIT(15) +#define XGMAC_TXTIMESTAMP_NSEC 0x00000d30 +#define XGMAC_TXTSSTSLO GENMASK(30, 0) +#define XGMAC_TXTIMESTAMP_SEC 0x00000d34 + +/* MTL Registers */ +#define XGMAC_MTL_OPMODE 0x00001000 +#define XGMAC_ETSALG GENMASK(6, 5) +#define XGMAC_WRR (0x0 << 5) +#define XGMAC_WFQ (0x1 << 5) +#define XGMAC_DWRR (0x2 << 5) +#define XGMAC_RAA BIT(2) +#define XGMAC_MTL_INT_STATUS 0x00001020 +#define XGMAC_MTL_RXQ_DMA_MAP0 0x00001030 +#define XGMAC_MTL_RXQ_DMA_MAP1 0x00001034 +#define XGMAC_QxMDMACH(x) GENMASK((x) * 8 + 3, (x) * 8) +#define XGMAC_QxMDMACH_SHIFT(x) ((x) * 8) +#define XGMAC_MTL_TXQ_OPMODE(x) (0x00001100 + (0x80 * (x))) +#define XGMAC_TQS GENMASK(25, 16) +#define XGMAC_TQS_SHIFT 16 +#define XGMAC_TTC GENMASK(6, 4) +#define XGMAC_TTC_SHIFT 4 +#define XGMAC_TXQEN GENMASK(3, 2) +#define XGMAC_TXQEN_SHIFT 2 +#define XGMAC_TSF BIT(1) +#define XGMAC_MTL_RXQ_OPMODE(x) (0x00001140 + (0x80 * (x))) +#define XGMAC_RQS GENMASK(25, 16) +#define XGMAC_RQS_SHIFT 16 +#define XGMAC_EHFC BIT(7) +#define XGMAC_RSF BIT(5) +#define XGMAC_RTC GENMASK(1, 0) +#define XGMAC_RTC_SHIFT 0 +#define XGMAC_MTL_QINTEN(x) (0x00001170 + (0x80 * (x))) +#define XGMAC_RXOIE BIT(16) +#define XGMAC_MTL_QINT_STATUS(x) (0x00001174 + (0x80 * (x))) +#define XGMAC_RXOVFIS BIT(16) +#define XGMAC_ABPSIS BIT(1) +#define XGMAC_TXUNFIS BIT(0) + +/* DMA Registers */ +#define XGMAC_DMA_MODE 0x00003000 +#define XGMAC_SWR BIT(0) +#define XGMAC_DMA_SYSBUS_MODE 0x00003004 +#define XGMAC_WR_OSR_LMT GENMASK(29, 24) +#define XGMAC_WR_OSR_LMT_SHIFT 24 +#define XGMAC_RD_OSR_LMT GENMASK(21, 16) +#define XGMAC_RD_OSR_LMT_SHIFT 16 +#define XGMAC_EN_LPI BIT(15) +#define XGMAC_LPI_XIT_PKT BIT(14) +#define XGMAC_AAL BIT(12) +#define XGMAC_BLEN GENMASK(7, 1) +#define XGMAC_BLEN256 BIT(7) +#define XGMAC_BLEN128 BIT(6) +#define XGMAC_BLEN64 BIT(5) +#define XGMAC_BLEN32 BIT(4) +#define XGMAC_BLEN16 BIT(3) +#define XGMAC_BLEN8 BIT(2) +#define XGMAC_BLEN4 BIT(1) +#define XGMAC_UNDEF BIT(0) +#define XGMAC_DMA_CH_CONTROL(x) (0x00003100 + (0x80 * (x))) +#define XGMAC_PBLx8 BIT(16) +#define XGMAC_DMA_CH_TX_CONTROL(x) (0x00003104 + (0x80 * (x))) +#define XGMAC_TxPBL GENMASK(21, 16) +#define XGMAC_TxPBL_SHIFT 16 +#define XGMAC_TSE BIT(12) +#define XGMAC_OSP BIT(4) +#define XGMAC_TXST BIT(0) +#define XGMAC_DMA_CH_RX_CONTROL(x) (0x00003108 + (0x80 * (x))) +#define XGMAC_RxPBL GENMASK(21, 16) +#define 
XGMAC_RxPBL_SHIFT 16 +#define XGMAC_RXST BIT(0) +#define XGMAC_DMA_CH_TxDESC_LADDR(x) (0x00003114 + (0x80 * (x))) +#define XGMAC_DMA_CH_RxDESC_LADDR(x) (0x0000311c + (0x80 * (x))) +#define XGMAC_DMA_CH_TxDESC_TAIL_LPTR(x) (0x00003124 + (0x80 * (x))) +#define XGMAC_DMA_CH_RxDESC_TAIL_LPTR(x) (0x0000312c + (0x80 * (x))) +#define XGMAC_DMA_CH_TxDESC_RING_LEN(x) (0x00003130 + (0x80 * (x))) +#define XGMAC_DMA_CH_RxDESC_RING_LEN(x) (0x00003134 + (0x80 * (x))) +#define XGMAC_DMA_CH_INT_EN(x) (0x00003138 + (0x80 * (x))) +#define XGMAC_NIE BIT(15) +#define XGMAC_AIE BIT(14) +#define XGMAC_RBUE BIT(7) +#define XGMAC_RIE BIT(6) +#define XGMAC_TIE BIT(0) +#define XGMAC_DMA_INT_DEFAULT_EN (XGMAC_NIE | XGMAC_AIE | XGMAC_RBUE | \ + XGMAC_RIE | XGMAC_TIE) +#define XGMAC_DMA_CH_Rx_WATCHDOG(x) (0x0000313c + (0x80 * (x))) +#define XGMAC_RWT GENMASK(7, 0) +#define XGMAC_DMA_CH_STATUS(x) (0x00003160 + (0x80 * (x))) +#define XGMAC_NIS BIT(15) +#define XGMAC_AIS BIT(14) +#define XGMAC_FBE BIT(12) +#define XGMAC_RBU BIT(7) +#define XGMAC_RI BIT(6) +#define XGMAC_TPS BIT(1) +#define XGMAC_TI BIT(0) + +/* Descriptors */ +#define XGMAC_TDES2_IOC BIT(31) +#define XGMAC_TDES2_TTSE BIT(30) +#define XGMAC_TDES2_B2L GENMASK(29, 16) +#define XGMAC_TDES2_B2L_SHIFT 16 +#define XGMAC_TDES2_B1L GENMASK(13, 0) +#define XGMAC_TDES3_OWN BIT(31) +#define XGMAC_TDES3_CTXT BIT(30) +#define XGMAC_TDES3_FD BIT(29) +#define XGMAC_TDES3_LD BIT(28) +#define XGMAC_TDES3_CPC GENMASK(27, 26) +#define XGMAC_TDES3_CPC_SHIFT 26 +#define XGMAC_TDES3_TCMSSV BIT(26) +#define XGMAC_TDES3_THL GENMASK(22, 19) +#define XGMAC_TDES3_THL_SHIFT 19 +#define XGMAC_TDES3_TSE BIT(18) +#define XGMAC_TDES3_CIC GENMASK(17, 16) +#define XGMAC_TDES3_CIC_SHIFT 16 +#define XGMAC_TDES3_TPL GENMASK(17, 0) +#define XGMAC_TDES3_FL GENMASK(14, 0) +#define XGMAC_RDES3_OWN BIT(31) +#define XGMAC_RDES3_CTXT BIT(30) +#define XGMAC_RDES3_IOC BIT(30) +#define XGMAC_RDES3_LD BIT(28) +#define XGMAC_RDES3_CDA BIT(27) +#define XGMAC_RDES3_ES BIT(15) +#define XGMAC_RDES3_PL GENMASK(13, 0) +#define XGMAC_RDES3_TSD BIT(6) +#define XGMAC_RDES3_TSA BIT(4) + +#endif /* __STMMAC_DWXGMAC2_H__ */ diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c new file mode 100644 index 000000000000..d182f82f7b58 --- /dev/null +++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c @@ -0,0 +1,371 @@ +// SPDX-License-Identifier: (GPL-2.0 OR MIT) +/* + * Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. + * stmmac XGMAC support. 
+ */ + +#include "stmmac.h" +#include "dwxgmac2.h" + +static void dwxgmac2_core_init(struct mac_device_info *hw, + struct net_device *dev) +{ + void __iomem *ioaddr = hw->pcsr; + int mtu = dev->mtu; + u32 tx, rx; + + tx = readl(ioaddr + XGMAC_TX_CONFIG); + rx = readl(ioaddr + XGMAC_RX_CONFIG); + + tx |= XGMAC_CORE_INIT_TX; + rx |= XGMAC_CORE_INIT_RX; + + if (mtu >= 9000) { + rx |= XGMAC_CONFIG_GPSLCE; + rx |= XGMAC_JUMBO_LEN << XGMAC_CONFIG_GPSL_SHIFT; + rx |= XGMAC_CONFIG_WD; + } else if (mtu > 2000) { + rx |= XGMAC_CONFIG_JE; + } else if (mtu > 1500) { + rx |= XGMAC_CONFIG_S2KP; + } + + if (hw->ps) { + tx |= XGMAC_CONFIG_TE; + tx &= ~hw->link.speed_mask; + + switch (hw->ps) { + case SPEED_10000: + tx |= hw->link.speed10000; + break; + case SPEED_2500: + tx |= hw->link.speed2500; + break; + case SPEED_1000: + default: + tx |= hw->link.speed1000; + break; + } + } + + writel(tx, ioaddr + XGMAC_TX_CONFIG); + writel(rx, ioaddr + XGMAC_RX_CONFIG); + writel(XGMAC_INT_DEFAULT_EN, ioaddr + XGMAC_INT_EN); +} + +static void dwxgmac2_set_mac(void __iomem *ioaddr, bool enable) +{ + u32 tx = readl(ioaddr + XGMAC_TX_CONFIG); + u32 rx = readl(ioaddr + XGMAC_RX_CONFIG); + + if (enable) { + tx |= XGMAC_CONFIG_TE; + rx |= XGMAC_CONFIG_RE; + } else { + tx &= ~XGMAC_CONFIG_TE; + rx &= ~XGMAC_CONFIG_RE; + } + + writel(tx, ioaddr + XGMAC_TX_CONFIG); + writel(rx, ioaddr + XGMAC_RX_CONFIG); +} + +static int dwxgmac2_rx_ipc(struct mac_device_info *hw) +{ + void __iomem *ioaddr = hw->pcsr; + u32 value; + + value = readl(ioaddr + XGMAC_RX_CONFIG); + if (hw->rx_csum) + value |= XGMAC_CONFIG_IPC; + else + value &= ~XGMAC_CONFIG_IPC; + writel(value, ioaddr + XGMAC_RX_CONFIG); + + return !!(readl(ioaddr + XGMAC_RX_CONFIG) & XGMAC_CONFIG_IPC); +} + +static void dwxgmac2_rx_queue_enable(struct mac_device_info *hw, u8 mode, + u32 queue) +{ + void __iomem *ioaddr = hw->pcsr; + u32 value; + + value = readl(ioaddr + XGMAC_RXQ_CTRL0) & ~XGMAC_RXQEN(queue); + if (mode == MTL_QUEUE_AVB) + value |= 0x1 << XGMAC_RXQEN_SHIFT(queue); + else if (mode == MTL_QUEUE_DCB) + value |= 0x2 << XGMAC_RXQEN_SHIFT(queue); + writel(value, ioaddr + XGMAC_RXQ_CTRL0); +} + +static void dwxgmac2_rx_queue_prio(struct mac_device_info *hw, u32 prio, + u32 queue) +{ + void __iomem *ioaddr = hw->pcsr; + u32 value, reg; + + reg = (queue < 4) ? 
XGMAC_RXQ_CTRL2 : XGMAC_RXQ_CTRL3; + + value = readl(ioaddr + reg); + value &= ~XGMAC_PSRQ(queue); + value |= (prio << XGMAC_PSRQ_SHIFT(queue)) & XGMAC_PSRQ(queue); + + writel(value, ioaddr + reg); +} + +static void dwxgmac2_prog_mtl_rx_algorithms(struct mac_device_info *hw, + u32 rx_alg) +{ + void __iomem *ioaddr = hw->pcsr; + u32 value; + + value = readl(ioaddr + XGMAC_MTL_OPMODE); + value &= ~XGMAC_RAA; + + switch (rx_alg) { + case MTL_RX_ALGORITHM_SP: + break; + case MTL_RX_ALGORITHM_WSP: + value |= XGMAC_RAA; + break; + default: + break; + } + + writel(value, ioaddr + XGMAC_MTL_OPMODE); +} + +static void dwxgmac2_prog_mtl_tx_algorithms(struct mac_device_info *hw, + u32 tx_alg) +{ + void __iomem *ioaddr = hw->pcsr; + u32 value; + + value = readl(ioaddr + XGMAC_MTL_OPMODE); + value &= ~XGMAC_ETSALG; + + switch (tx_alg) { + case MTL_TX_ALGORITHM_WRR: + value |= XGMAC_WRR; + break; + case MTL_TX_ALGORITHM_WFQ: + value |= XGMAC_WFQ; + break; + case MTL_TX_ALGORITHM_DWRR: + value |= XGMAC_DWRR; + break; + default: + break; + } + + writel(value, ioaddr + XGMAC_MTL_OPMODE); +} + +static void dwxgmac2_map_mtl_to_dma(struct mac_device_info *hw, u32 queue, + u32 chan) +{ + void __iomem *ioaddr = hw->pcsr; + u32 value, reg; + + reg = (queue < 4) ? XGMAC_MTL_RXQ_DMA_MAP0 : XGMAC_MTL_RXQ_DMA_MAP1; + + value = readl(ioaddr + reg); + value &= ~XGMAC_QxMDMACH(queue); + value |= (chan << XGMAC_QxMDMACH_SHIFT(queue)) & XGMAC_QxMDMACH(queue); + + writel(value, ioaddr + reg); +} + +static int dwxgmac2_host_irq_status(struct mac_device_info *hw, + struct stmmac_extra_stats *x) +{ + void __iomem *ioaddr = hw->pcsr; + u32 stat, en; + + en = readl(ioaddr + XGMAC_INT_EN); + stat = readl(ioaddr + XGMAC_INT_STATUS); + + stat &= en; + + if (stat & XGMAC_PMTIS) { + x->irq_receive_pmt_irq_n++; + readl(ioaddr + XGMAC_PMT); + } + + return 0; +} + +static int dwxgmac2_host_mtl_irq_status(struct mac_device_info *hw, u32 chan) +{ + void __iomem *ioaddr = hw->pcsr; + int ret = 0; + u32 status; + + status = readl(ioaddr + XGMAC_MTL_INT_STATUS); + if (status & BIT(chan)) { + u32 chan_status = readl(ioaddr + XGMAC_MTL_QINT_STATUS(chan)); + + if (chan_status & XGMAC_RXOVFIS) + ret |= CORE_IRQ_MTL_RX_OVERFLOW; + + writel(~0x0, ioaddr + XGMAC_MTL_QINT_STATUS(chan)); + } + + return ret; +} + +static void dwxgmac2_flow_ctrl(struct mac_device_info *hw, unsigned int duplex, + unsigned int fc, unsigned int pause_time, + u32 tx_cnt) +{ + void __iomem *ioaddr = hw->pcsr; + u32 i; + + if (fc & FLOW_RX) + writel(XGMAC_RFE, ioaddr + XGMAC_RX_FLOW_CTRL); + if (fc & FLOW_TX) { + for (i = 0; i < tx_cnt; i++) { + u32 value = XGMAC_TFE; + + if (duplex) + value |= pause_time << XGMAC_PT_SHIFT; + + writel(value, ioaddr + XGMAC_Qx_TX_FLOW_CTRL(i)); + } + } +} + +static void dwxgmac2_pmt(struct mac_device_info *hw, unsigned long mode) +{ + void __iomem *ioaddr = hw->pcsr; + u32 val = 0x0; + + if (mode & WAKE_MAGIC) + val |= XGMAC_PWRDWN | XGMAC_MGKPKTEN; + if (mode & WAKE_UCAST) + val |= XGMAC_PWRDWN | XGMAC_GLBLUCAST | XGMAC_RWKPKTEN; + if (val) { + u32 cfg = readl(ioaddr + XGMAC_RX_CONFIG); + cfg |= XGMAC_CONFIG_RE; + writel(cfg, ioaddr + XGMAC_RX_CONFIG); + } + + writel(val, ioaddr + XGMAC_PMT); +} + +static void dwxgmac2_set_umac_addr(struct mac_device_info *hw, + unsigned char *addr, unsigned int reg_n) +{ + void __iomem *ioaddr = hw->pcsr; + u32 value; + + value = (addr[5] << 8) | addr[4]; + writel(value | XGMAC_AE, ioaddr + XGMAC_ADDR0_HIGH); + + value = (addr[3] << 24) | (addr[2] << 16) | (addr[1] << 8) | addr[0]; + writel(value, ioaddr 
+ XGMAC_ADDR0_LOW); +} + +static void dwxgmac2_get_umac_addr(struct mac_device_info *hw, + unsigned char *addr, unsigned int reg_n) +{ + void __iomem *ioaddr = hw->pcsr; + u32 hi_addr, lo_addr; + + /* Read the MAC address from the hardware */ + hi_addr = readl(ioaddr + XGMAC_ADDR0_HIGH); + lo_addr = readl(ioaddr + XGMAC_ADDR0_LOW); + + /* Extract the MAC address from the high and low words */ + addr[0] = lo_addr & 0xff; + addr[1] = (lo_addr >> 8) & 0xff; + addr[2] = (lo_addr >> 16) & 0xff; + addr[3] = (lo_addr >> 24) & 0xff; + addr[4] = hi_addr & 0xff; + addr[5] = (hi_addr >> 8) & 0xff; +} + +static void dwxgmac2_set_filter(struct mac_device_info *hw, + struct net_device *dev) +{ + void __iomem *ioaddr = (void __iomem *)dev->base_addr; + u32 value = XGMAC_FILTER_RA; + + if (dev->flags & IFF_PROMISC) { + value |= XGMAC_FILTER_PR; + } else if ((dev->flags & IFF_ALLMULTI) || + (netdev_mc_count(dev) > HASH_TABLE_SIZE)) { + value |= XGMAC_FILTER_PM; + writel(~0x0, ioaddr + XGMAC_HASH_TABLE(0)); + writel(~0x0, ioaddr + XGMAC_HASH_TABLE(1)); + } + + writel(value, ioaddr + XGMAC_PACKET_FILTER); +} + +const struct stmmac_ops dwxgmac210_ops = { + .core_init = dwxgmac2_core_init, + .set_mac = dwxgmac2_set_mac, + .rx_ipc = dwxgmac2_rx_ipc, + .rx_queue_enable = dwxgmac2_rx_queue_enable, + .rx_queue_prio = dwxgmac2_rx_queue_prio, + .tx_queue_prio = NULL, + .rx_queue_routing = NULL, + .prog_mtl_rx_algorithms = dwxgmac2_prog_mtl_rx_algorithms, + .prog_mtl_tx_algorithms = dwxgmac2_prog_mtl_tx_algorithms, + .set_mtl_tx_queue_weight = NULL, + .map_mtl_to_dma = dwxgmac2_map_mtl_to_dma, + .config_cbs = NULL, + .dump_regs = NULL, + .host_irq_status = dwxgmac2_host_irq_status, + .host_mtl_irq_status = dwxgmac2_host_mtl_irq_status, + .flow_ctrl = dwxgmac2_flow_ctrl, + .pmt = dwxgmac2_pmt, + .set_umac_addr = dwxgmac2_set_umac_addr, + .get_umac_addr = dwxgmac2_get_umac_addr, + .set_eee_mode = NULL, + .reset_eee_mode = NULL, + .set_eee_timer = NULL, + .set_eee_pls = NULL, + .pcs_ctrl_ane = NULL, + .pcs_rane = NULL, + .pcs_get_adv_lp = NULL, + .debug = NULL, + .set_filter = dwxgmac2_set_filter, +}; + +int dwxgmac2_setup(struct stmmac_priv *priv) +{ + struct mac_device_info *mac = priv->hw; + + dev_info(priv->device, "\tXGMAC2\n"); + + priv->dev->priv_flags |= IFF_UNICAST_FLT; + mac->pcsr = priv->ioaddr; + mac->multicast_filter_bins = priv->plat->multicast_filter_bins; + mac->unicast_filter_entries = priv->plat->unicast_filter_entries; + mac->mcast_bits_log2 = 0; + + if (mac->multicast_filter_bins) + mac->mcast_bits_log2 = ilog2(mac->multicast_filter_bins); + + mac->link.duplex = 0; + mac->link.speed10 = 0; + mac->link.speed100 = 0; + mac->link.speed1000 = XGMAC_CONFIG_SS_1000; + mac->link.speed2500 = XGMAC_CONFIG_SS_2500; + mac->link.speed10000 = XGMAC_CONFIG_SS_10000; + mac->link.speed_mask = XGMAC_CONFIG_SS_MASK; + + mac->mii.addr = XGMAC_MDIO_ADDR; + mac->mii.data = XGMAC_MDIO_DATA; + mac->mii.addr_shift = 16; + mac->mii.addr_mask = GENMASK(20, 16); + mac->mii.reg_shift = 0; + mac->mii.reg_mask = GENMASK(15, 0); + mac->mii.clk_csr_shift = 19; + mac->mii.clk_csr_mask = GENMASK(21, 19); + + return 0; +} diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c new file mode 100644 index 000000000000..1d858fdec997 --- /dev/null +++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c @@ -0,0 +1,280 @@ +// SPDX-License-Identifier: (GPL-2.0 OR MIT) +/* + * Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. + * stmmac XGMAC support. 
+ */ + +#include <linux/stmmac.h> +#include "common.h" +#include "dwxgmac2.h" + +static int dwxgmac2_get_tx_status(void *data, struct stmmac_extra_stats *x, + struct dma_desc *p, void __iomem *ioaddr) +{ + unsigned int tdes3 = le32_to_cpu(p->des3); + int ret = tx_done; + + if (unlikely(tdes3 & XGMAC_TDES3_OWN)) + return tx_dma_own; + if (likely(!(tdes3 & XGMAC_TDES3_LD))) + return tx_not_ls; + + return ret; +} + +static int dwxgmac2_get_rx_status(void *data, struct stmmac_extra_stats *x, + struct dma_desc *p) +{ + unsigned int rdes3 = le32_to_cpu(p->des3); + int ret = good_frame; + + if (unlikely(rdes3 & XGMAC_RDES3_OWN)) + return dma_own; + if (likely(!(rdes3 & XGMAC_RDES3_LD))) + return discard_frame; + if (unlikely(rdes3 & XGMAC_RDES3_ES)) + ret = discard_frame; + + return ret; +} + +static int dwxgmac2_get_tx_len(struct dma_desc *p) +{ + return (le32_to_cpu(p->des2) & XGMAC_TDES2_B1L); +} + +static int dwxgmac2_get_tx_owner(struct dma_desc *p) +{ + return (le32_to_cpu(p->des3) & XGMAC_TDES3_OWN) > 0; +} + +static void dwxgmac2_set_tx_owner(struct dma_desc *p) +{ + p->des3 |= cpu_to_le32(XGMAC_TDES3_OWN); +} + +static void dwxgmac2_set_rx_owner(struct dma_desc *p, int disable_rx_ic) +{ + p->des3 = cpu_to_le32(XGMAC_RDES3_OWN); + + if (!disable_rx_ic) + p->des3 |= cpu_to_le32(XGMAC_RDES3_IOC); +} + +static int dwxgmac2_get_tx_ls(struct dma_desc *p) +{ + return (le32_to_cpu(p->des3) & XGMAC_RDES3_LD) > 0; +} + +static int dwxgmac2_get_rx_frame_len(struct dma_desc *p, int rx_coe) +{ + return (le32_to_cpu(p->des3) & XGMAC_RDES3_PL); +} + +static void dwxgmac2_enable_tx_timestamp(struct dma_desc *p) +{ + p->des2 |= cpu_to_le32(XGMAC_TDES2_TTSE); +} + +static int dwxgmac2_get_tx_timestamp_status(struct dma_desc *p) +{ + return 0; /* Not supported */ +} + +static inline void dwxgmac2_get_timestamp(void *desc, u32 ats, u64 *ts) +{ + struct dma_desc *p = (struct dma_desc *)desc; + u64 ns = 0; + + ns += le32_to_cpu(p->des1) * 1000000000ULL; + ns += le32_to_cpu(p->des0); + + *ts = ns; +} + +static int dwxgmac2_rx_check_timestamp(void *desc) +{ + struct dma_desc *p = (struct dma_desc *)desc; + unsigned int rdes3 = le32_to_cpu(p->des3); + bool desc_valid, ts_valid; + + desc_valid = !(rdes3 & XGMAC_RDES3_OWN) && (rdes3 & XGMAC_RDES3_CTXT); + ts_valid = !(rdes3 & XGMAC_RDES3_TSD) && (rdes3 & XGMAC_RDES3_TSA); + + if (likely(desc_valid && ts_valid)) + return 0; + return -EINVAL; +} + +static int dwxgmac2_get_rx_timestamp_status(void *desc, void *next_desc, + u32 ats) +{ + struct dma_desc *p = (struct dma_desc *)desc; + unsigned int rdes3 = le32_to_cpu(p->des3); + int ret = -EBUSY; + + if (likely(rdes3 & XGMAC_RDES3_CDA)) { + ret = dwxgmac2_rx_check_timestamp(next_desc); + if (ret) + return ret; + } + + return ret; +} + +static void dwxgmac2_init_rx_desc(struct dma_desc *p, int disable_rx_ic, + int mode, int end) +{ + dwxgmac2_set_rx_owner(p, disable_rx_ic); +} + +static void dwxgmac2_init_tx_desc(struct dma_desc *p, int mode, int end) +{ + p->des0 = 0; + p->des1 = 0; + p->des2 = 0; + p->des3 = 0; +} + +static void dwxgmac2_prepare_tx_desc(struct dma_desc *p, int is_fs, int len, + bool csum_flag, int mode, bool tx_own, + bool ls, unsigned int tot_pkt_len) +{ + unsigned int tdes3 = le32_to_cpu(p->des3); + + p->des2 |= cpu_to_le32(len & XGMAC_TDES2_B1L); + + tdes3 = tot_pkt_len & XGMAC_TDES3_FL; + if (is_fs) + tdes3 |= XGMAC_TDES3_FD; + else + tdes3 &= ~XGMAC_TDES3_FD; + + if (csum_flag) + tdes3 |= 0x3 << XGMAC_TDES3_CIC_SHIFT; + else + tdes3 &= ~XGMAC_TDES3_CIC; + + if (ls) + tdes3 |= XGMAC_TDES3_LD; + 
else + tdes3 &= ~XGMAC_TDES3_LD; + + /* Finally set the OWN bit. Later the DMA will start! */ + if (tx_own) + tdes3 |= XGMAC_TDES3_OWN; + + if (is_fs && tx_own) + /* When the own bit, for the first frame, has to be set, all + * descriptors for the same frame has to be set before, to + * avoid race condition. + */ + dma_wmb(); + + p->des3 = cpu_to_le32(tdes3); +} + +static void dwxgmac2_prepare_tso_tx_desc(struct dma_desc *p, int is_fs, + int len1, int len2, bool tx_own, + bool ls, unsigned int tcphdrlen, + unsigned int tcppayloadlen) +{ + unsigned int tdes3 = le32_to_cpu(p->des3); + + if (len1) + p->des2 |= cpu_to_le32(len1 & XGMAC_TDES2_B1L); + if (len2) + p->des2 |= cpu_to_le32((len2 << XGMAC_TDES2_B2L_SHIFT) & + XGMAC_TDES2_B2L); + if (is_fs) { + tdes3 |= XGMAC_TDES3_FD | XGMAC_TDES3_TSE; + tdes3 |= (tcphdrlen << XGMAC_TDES3_THL_SHIFT) & + XGMAC_TDES3_THL; + tdes3 |= tcppayloadlen & XGMAC_TDES3_TPL; + } else { + tdes3 &= ~XGMAC_TDES3_FD; + } + + if (ls) + tdes3 |= XGMAC_TDES3_LD; + else + tdes3 &= ~XGMAC_TDES3_LD; + + /* Finally set the OWN bit. Later the DMA will start! */ + if (tx_own) + tdes3 |= XGMAC_TDES3_OWN; + + if (is_fs && tx_own) + /* When the own bit, for the first frame, has to be set, all + * descriptors for the same frame has to be set before, to + * avoid race condition. + */ + dma_wmb(); + + p->des3 = cpu_to_le32(tdes3); +} + +static void dwxgmac2_release_tx_desc(struct dma_desc *p, int mode) +{ + p->des0 = 0; + p->des1 = 0; + p->des2 = 0; + p->des3 = 0; +} + +static void dwxgmac2_set_tx_ic(struct dma_desc *p) +{ + p->des2 |= cpu_to_le32(XGMAC_TDES2_IOC); +} + +static void dwxgmac2_set_mss(struct dma_desc *p, unsigned int mss) +{ + p->des0 = 0; + p->des1 = 0; + p->des2 = cpu_to_le32(mss); + p->des3 = cpu_to_le32(XGMAC_TDES3_CTXT | XGMAC_TDES3_TCMSSV); +} + +static void dwxgmac2_get_addr(struct dma_desc *p, unsigned int *addr) +{ + *addr = le32_to_cpu(p->des0); +} + +static void dwxgmac2_set_addr(struct dma_desc *p, dma_addr_t addr) +{ + p->des0 = cpu_to_le32(addr); + p->des1 = 0; +} + +static void dwxgmac2_clear(struct dma_desc *p) +{ + p->des0 = 0; + p->des1 = 0; + p->des2 = 0; + p->des3 = 0; +} + +const struct stmmac_desc_ops dwxgmac210_desc_ops = { + .tx_status = dwxgmac2_get_tx_status, + .rx_status = dwxgmac2_get_rx_status, + .get_tx_len = dwxgmac2_get_tx_len, + .get_tx_owner = dwxgmac2_get_tx_owner, + .set_tx_owner = dwxgmac2_set_tx_owner, + .set_rx_owner = dwxgmac2_set_rx_owner, + .get_tx_ls = dwxgmac2_get_tx_ls, + .get_rx_frame_len = dwxgmac2_get_rx_frame_len, + .enable_tx_timestamp = dwxgmac2_enable_tx_timestamp, + .get_tx_timestamp_status = dwxgmac2_get_tx_timestamp_status, + .get_rx_timestamp_status = dwxgmac2_get_rx_timestamp_status, + .get_timestamp = dwxgmac2_get_timestamp, + .set_tx_ic = dwxgmac2_set_tx_ic, + .prepare_tx_desc = dwxgmac2_prepare_tx_desc, + .prepare_tso_tx_desc = dwxgmac2_prepare_tso_tx_desc, + .release_tx_desc = dwxgmac2_release_tx_desc, + .init_rx_desc = dwxgmac2_init_rx_desc, + .init_tx_desc = dwxgmac2_init_tx_desc, + .set_mss = dwxgmac2_set_mss, + .get_addr = dwxgmac2_get_addr, + .set_addr = dwxgmac2_set_addr, + .clear = dwxgmac2_clear, +}; diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c new file mode 100644 index 000000000000..20909036e002 --- /dev/null +++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c @@ -0,0 +1,411 @@ +// SPDX-License-Identifier: (GPL-2.0 OR MIT) +/* + * Copyright (c) 2018 Synopsys, Inc. and/or its affiliates. + * stmmac XGMAC support. 
+ */ + +#include <linux/iopoll.h> +#include "stmmac.h" +#include "dwxgmac2.h" + +static int dwxgmac2_dma_reset(void __iomem *ioaddr) +{ + u32 value = readl(ioaddr + XGMAC_DMA_MODE); + + /* DMA SW reset */ + writel(value | XGMAC_SWR, ioaddr + XGMAC_DMA_MODE); + + return readl_poll_timeout(ioaddr + XGMAC_DMA_MODE, value, + !(value & XGMAC_SWR), 0, 100000); +} + +static void dwxgmac2_dma_init(void __iomem *ioaddr, + struct stmmac_dma_cfg *dma_cfg, int atds) +{ + u32 value = readl(ioaddr + XGMAC_DMA_SYSBUS_MODE); + + if (dma_cfg->aal) + value |= XGMAC_AAL; + + writel(value, ioaddr + XGMAC_DMA_SYSBUS_MODE); +} + +static void dwxgmac2_dma_init_chan(void __iomem *ioaddr, + struct stmmac_dma_cfg *dma_cfg, u32 chan) +{ + u32 value = readl(ioaddr + XGMAC_DMA_CH_CONTROL(chan)); + + if (dma_cfg->pblx8) + value |= XGMAC_PBLx8; + + writel(value, ioaddr + XGMAC_DMA_CH_CONTROL(chan)); + writel(XGMAC_DMA_INT_DEFAULT_EN, ioaddr + XGMAC_DMA_CH_INT_EN(chan)); +} + +static void dwxgmac2_dma_init_rx_chan(void __iomem *ioaddr, + struct stmmac_dma_cfg *dma_cfg, + u32 dma_rx_phy, u32 chan) +{ + u32 rxpbl = dma_cfg->rxpbl ?: dma_cfg->pbl; + u32 value; + + value = readl(ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan)); + value &= ~XGMAC_RxPBL; + value |= (rxpbl << XGMAC_RxPBL_SHIFT) & XGMAC_RxPBL; + writel(value, ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan)); + + writel(dma_rx_phy, ioaddr + XGMAC_DMA_CH_RxDESC_LADDR(chan)); +} + +static void dwxgmac2_dma_init_tx_chan(void __iomem *ioaddr, + struct stmmac_dma_cfg *dma_cfg, + u32 dma_tx_phy, u32 chan) +{ + u32 txpbl = dma_cfg->txpbl ?: dma_cfg->pbl; + u32 value; + + value = readl(ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan)); + value &= ~XGMAC_TxPBL; + value |= (txpbl << XGMAC_TxPBL_SHIFT) & XGMAC_TxPBL; + value |= XGMAC_OSP; + writel(value, ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan)); + + writel(dma_tx_phy, ioaddr + XGMAC_DMA_CH_TxDESC_LADDR(chan)); +} + +static void dwxgmac2_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi) +{ + u32 value = readl(ioaddr + XGMAC_DMA_SYSBUS_MODE); + int i; + + if (axi->axi_lpi_en) + value |= XGMAC_EN_LPI; + if (axi->axi_xit_frm) + value |= XGMAC_LPI_XIT_PKT; + + value &= ~XGMAC_WR_OSR_LMT; + value |= (axi->axi_wr_osr_lmt << XGMAC_WR_OSR_LMT_SHIFT) & + XGMAC_WR_OSR_LMT; + + value &= ~XGMAC_RD_OSR_LMT; + value |= (axi->axi_rd_osr_lmt << XGMAC_RD_OSR_LMT_SHIFT) & + XGMAC_RD_OSR_LMT; + + value &= ~XGMAC_BLEN; + for (i = 0; i < AXI_BLEN; i++) { + if (axi->axi_blen[i]) + value &= ~XGMAC_UNDEF; + + switch (axi->axi_blen[i]) { + case 256: + value |= XGMAC_BLEN256; + break; + case 128: + value |= XGMAC_BLEN128; + break; + case 64: + value |= XGMAC_BLEN64; + break; + case 32: + value |= XGMAC_BLEN32; + break; + case 16: + value |= XGMAC_BLEN16; + break; + case 8: + value |= XGMAC_BLEN8; + break; + case 4: + value |= XGMAC_BLEN4; + break; + } + } + + writel(value, ioaddr + XGMAC_DMA_SYSBUS_MODE); +} + +static void dwxgmac2_dma_rx_mode(void __iomem *ioaddr, int mode, + u32 channel, int fifosz, u8 qmode) +{ + u32 value = readl(ioaddr + XGMAC_MTL_RXQ_OPMODE(channel)); + unsigned int rqs = fifosz / 256 - 1; + + if (mode == SF_DMA_MODE) { + value |= XGMAC_RSF; + } else { + value &= ~XGMAC_RSF; + value &= ~XGMAC_RTC; + + if (mode <= 64) + value |= 0x0 << XGMAC_RTC_SHIFT; + else if (mode <= 96) + value |= 0x2 << XGMAC_RTC_SHIFT; + else + value |= 0x3 << XGMAC_RTC_SHIFT; + } + + value &= ~XGMAC_RQS; + value |= (rqs << XGMAC_RQS_SHIFT) & XGMAC_RQS; + + writel(value, ioaddr + XGMAC_MTL_RXQ_OPMODE(channel)); + + /* Enable MTL RX overflow */ + value = readl(ioaddr + 
XGMAC_MTL_QINTEN(channel)); + writel(value | XGMAC_RXOIE, ioaddr + XGMAC_MTL_QINTEN(channel)); +} + +static void dwxgmac2_dma_tx_mode(void __iomem *ioaddr, int mode, + u32 channel, int fifosz, u8 qmode) +{ + u32 value = readl(ioaddr + XGMAC_MTL_TXQ_OPMODE(channel)); + unsigned int tqs = fifosz / 256 - 1; + + if (mode == SF_DMA_MODE) { + value |= XGMAC_TSF; + } else { + value &= ~XGMAC_TSF; + value &= ~XGMAC_TTC; + + if (mode <= 64) + value |= 0x0 << XGMAC_TTC_SHIFT; + else if (mode <= 96) + value |= 0x2 << XGMAC_TTC_SHIFT; + else if (mode <= 128) + value |= 0x3 << XGMAC_TTC_SHIFT; + else if (mode <= 192) + value |= 0x4 << XGMAC_TTC_SHIFT; + else if (mode <= 256) + value |= 0x5 << XGMAC_TTC_SHIFT; + else if (mode <= 384) + value |= 0x6 << XGMAC_TTC_SHIFT; + else + value |= 0x7 << XGMAC_TTC_SHIFT; + } + + value &= ~XGMAC_TXQEN; + if (qmode != MTL_QUEUE_AVB) + value |= 0x2 << XGMAC_TXQEN_SHIFT; + else + value |= 0x1 << XGMAC_TXQEN_SHIFT; + + value &= ~XGMAC_TQS; + value |= (tqs << XGMAC_TQS_SHIFT) & XGMAC_TQS; + + writel(value, ioaddr + XGMAC_MTL_TXQ_OPMODE(channel)); +} + +static void dwxgmac2_enable_dma_irq(void __iomem *ioaddr, u32 chan) +{ + writel(XGMAC_DMA_INT_DEFAULT_EN, ioaddr + XGMAC_DMA_CH_INT_EN(chan)); +} + +static void dwxgmac2_disable_dma_irq(void __iomem *ioaddr, u32 chan) +{ + writel(0, ioaddr + XGMAC_DMA_CH_INT_EN(chan)); +} + +static void dwxgmac2_dma_start_tx(void __iomem *ioaddr, u32 chan) +{ + u32 value; + + value = readl(ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan)); + value |= XGMAC_TXST; + writel(value, ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan)); + + value = readl(ioaddr + XGMAC_TX_CONFIG); + value |= XGMAC_CONFIG_TE; + writel(value, ioaddr + XGMAC_TX_CONFIG); +} + +static void dwxgmac2_dma_stop_tx(void __iomem *ioaddr, u32 chan) +{ + u32 value; + + value = readl(ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan)); + value &= ~XGMAC_TXST; + writel(value, ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan)); + + value = readl(ioaddr + XGMAC_TX_CONFIG); + value &= ~XGMAC_CONFIG_TE; + writel(value, ioaddr + XGMAC_TX_CONFIG); +} + +static void dwxgmac2_dma_start_rx(void __iomem *ioaddr, u32 chan) +{ + u32 value; + + value = readl(ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan)); + value |= XGMAC_RXST; + writel(value, ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan)); + + value = readl(ioaddr + XGMAC_RX_CONFIG); + value |= XGMAC_CONFIG_RE; + writel(value, ioaddr + XGMAC_RX_CONFIG); +} + +static void dwxgmac2_dma_stop_rx(void __iomem *ioaddr, u32 chan) +{ + u32 value; + + value = readl(ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan)); + value &= ~XGMAC_RXST; + writel(value, ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan)); + + value = readl(ioaddr + XGMAC_RX_CONFIG); + value &= ~XGMAC_CONFIG_RE; + writel(value, ioaddr + XGMAC_RX_CONFIG); +} + +static int dwxgmac2_dma_interrupt(void __iomem *ioaddr, + struct stmmac_extra_stats *x, u32 chan) +{ + u32 intr_status = readl(ioaddr + XGMAC_DMA_CH_STATUS(chan)); + int ret = 0; + + /* ABNORMAL interrupts */ + if (unlikely(intr_status & XGMAC_AIS)) { + if (unlikely(intr_status & XGMAC_TPS)) { + x->tx_process_stopped_irq++; + ret |= tx_hard_error; + } + if (unlikely(intr_status & XGMAC_FBE)) { + x->fatal_bus_error_irq++; + ret |= tx_hard_error; + } + } + + /* TX/RX NORMAL interrupts */ + if (likely(intr_status & XGMAC_NIS)) { + x->normal_irq_n++; + + if (likely(intr_status & XGMAC_RI)) { + u32 value = readl(ioaddr + XGMAC_DMA_CH_INT_EN(chan)); + if (likely(value & XGMAC_RIE)) { + x->rx_normal_irq_n++; + ret |= handle_rx; + } + } + if (likely(intr_status & XGMAC_TI)) { + x->tx_normal_irq_n++; + ret |= 
handle_tx; + } + } + + /* Clear interrupts */ + writel(~0x0, ioaddr + XGMAC_DMA_CH_STATUS(chan)); + + return ret; +} + +static void dwxgmac2_get_hw_feature(void __iomem *ioaddr, + struct dma_features *dma_cap) +{ + u32 hw_cap; + + /* MAC HW feature 0 */ + hw_cap = readl(ioaddr + XGMAC_HW_FEATURE0); + dma_cap->rx_coe = (hw_cap & XGMAC_HWFEAT_RXCOESEL) >> 16; + dma_cap->tx_coe = (hw_cap & XGMAC_HWFEAT_TXCOESEL) >> 14; + dma_cap->atime_stamp = (hw_cap & XGMAC_HWFEAT_TSSEL) >> 12; + dma_cap->av = (hw_cap & XGMAC_HWFEAT_AVSEL) >> 11; + dma_cap->av &= (hw_cap & XGMAC_HWFEAT_RAVSEL) >> 10; + dma_cap->pmt_magic_frame = (hw_cap & XGMAC_HWFEAT_MGKSEL) >> 7; + dma_cap->pmt_remote_wake_up = (hw_cap & XGMAC_HWFEAT_RWKSEL) >> 6; + dma_cap->mbps_1000 = (hw_cap & XGMAC_HWFEAT_GMIISEL) >> 1; + + /* MAC HW feature 1 */ + hw_cap = readl(ioaddr + XGMAC_HW_FEATURE1); + dma_cap->tsoen = (hw_cap & XGMAC_HWFEAT_TSOEN) >> 18; + dma_cap->tx_fifo_size = + 128 << ((hw_cap & XGMAC_HWFEAT_TXFIFOSIZE) >> 6); + dma_cap->rx_fifo_size = + 128 << ((hw_cap & XGMAC_HWFEAT_RXFIFOSIZE) >> 0); + + /* MAC HW feature 2 */ + hw_cap = readl(ioaddr + XGMAC_HW_FEATURE2); + dma_cap->pps_out_num = (hw_cap & XGMAC_HWFEAT_PPSOUTNUM) >> 24; + dma_cap->number_tx_channel = + ((hw_cap & XGMAC_HWFEAT_TXCHCNT) >> 18) + 1; + dma_cap->number_rx_channel = + ((hw_cap & XGMAC_HWFEAT_RXCHCNT) >> 12) + 1; + dma_cap->number_tx_queues = + ((hw_cap & XGMAC_HWFEAT_TXQCNT) >> 6) + 1; + dma_cap->number_rx_queues = + ((hw_cap & XGMAC_HWFEAT_RXQCNT) >> 0) + 1; +} + +static void dwxgmac2_rx_watchdog(void __iomem *ioaddr, u32 riwt, u32 nchan) +{ + u32 i; + + for (i = 0; i < nchan; i++) + writel(riwt & XGMAC_RWT, ioaddr + XGMAC_DMA_CH_Rx_WATCHDOG(i)); +} + +static void dwxgmac2_set_rx_ring_len(void __iomem *ioaddr, u32 len, u32 chan) +{ + writel(len, ioaddr + XGMAC_DMA_CH_RxDESC_RING_LEN(chan)); +} + +static void dwxgmac2_set_tx_ring_len(void __iomem *ioaddr, u32 len, u32 chan) +{ + writel(len, ioaddr + XGMAC_DMA_CH_TxDESC_RING_LEN(chan)); +} + +static void dwxgmac2_set_rx_tail_ptr(void __iomem *ioaddr, u32 ptr, u32 chan) +{ + writel(ptr, ioaddr + XGMAC_DMA_CH_RxDESC_TAIL_LPTR(chan)); +} + +static void dwxgmac2_set_tx_tail_ptr(void __iomem *ioaddr, u32 ptr, u32 chan) +{ + writel(ptr, ioaddr + XGMAC_DMA_CH_TxDESC_TAIL_LPTR(chan)); +} + +static void dwxgmac2_enable_tso(void __iomem *ioaddr, bool en, u32 chan) +{ + u32 value = readl(ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan)); + + if (en) + value |= XGMAC_TSE; + else + value &= ~XGMAC_TSE; + + writel(value, ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan)); +} + +static void dwxgmac2_set_bfsize(void __iomem *ioaddr, int bfsize, u32 chan) +{ + u32 value; + + value = readl(ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan)); + value |= bfsize << 1; + writel(value, ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan)); +} + +const struct stmmac_dma_ops dwxgmac210_dma_ops = { + .reset = dwxgmac2_dma_reset, + .init = dwxgmac2_dma_init, + .init_chan = dwxgmac2_dma_init_chan, + .init_rx_chan = dwxgmac2_dma_init_rx_chan, + .init_tx_chan = dwxgmac2_dma_init_tx_chan, + .axi = dwxgmac2_dma_axi, + .dump_regs = NULL, + .dma_rx_mode = dwxgmac2_dma_rx_mode, + .dma_tx_mode = dwxgmac2_dma_tx_mode, + .enable_dma_irq = dwxgmac2_enable_dma_irq, + .disable_dma_irq = dwxgmac2_disable_dma_irq, + .start_tx = dwxgmac2_dma_start_tx, + .stop_tx = dwxgmac2_dma_stop_tx, + .start_rx = dwxgmac2_dma_start_rx, + .stop_rx = dwxgmac2_dma_stop_rx, + .dma_interrupt = dwxgmac2_dma_interrupt, + .get_hw_feature = dwxgmac2_get_hw_feature, + .rx_watchdog = dwxgmac2_rx_watchdog, + 
.set_rx_ring_len = dwxgmac2_set_rx_ring_len, + .set_tx_ring_len = dwxgmac2_set_tx_ring_len, + .set_rx_tail_ptr = dwxgmac2_set_rx_tail_ptr, + .set_tx_tail_ptr = dwxgmac2_set_tx_tail_ptr, + .enable_tso = dwxgmac2_enable_tso, + .set_bfsize = dwxgmac2_set_bfsize, +}; diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.c b/drivers/net/ethernet/stmicro/stmmac/hwif.c index 1f50e83cafb2..357309a6d6a5 100644 --- a/drivers/net/ethernet/stmicro/stmmac/hwif.c +++ b/drivers/net/ethernet/stmicro/stmmac/hwif.c @@ -72,6 +72,7 @@ static int stmmac_dwmac4_quirks(struct stmmac_priv *priv) static const struct stmmac_hwif_entry { bool gmac; bool gmac4; + bool xgmac; u32 min_id; const struct stmmac_regs_off regs; const void *desc; @@ -87,6 +88,7 @@ static const struct stmmac_hwif_entry { { .gmac = false, .gmac4 = false, + .xgmac = false, .min_id = 0, .regs = { .ptp_off = PTP_GMAC3_X_OFFSET, @@ -103,6 +105,7 @@ static const struct stmmac_hwif_entry { }, { .gmac = true, .gmac4 = false, + .xgmac = false, .min_id = 0, .regs = { .ptp_off = PTP_GMAC3_X_OFFSET, @@ -119,6 +122,7 @@ static const struct stmmac_hwif_entry { }, { .gmac = false, .gmac4 = true, + .xgmac = false, .min_id = 0, .regs = { .ptp_off = PTP_GMAC4_OFFSET, @@ -135,6 +139,7 @@ static const struct stmmac_hwif_entry { }, { .gmac = false, .gmac4 = true, + .xgmac = false, .min_id = DWMAC_CORE_4_00, .regs = { .ptp_off = PTP_GMAC4_OFFSET, @@ -151,6 +156,7 @@ static const struct stmmac_hwif_entry { }, { .gmac = false, .gmac4 = true, + .xgmac = false, .min_id = DWMAC_CORE_4_10, .regs = { .ptp_off = PTP_GMAC4_OFFSET, @@ -167,6 +173,7 @@ static const struct stmmac_hwif_entry { }, { .gmac = false, .gmac4 = true, + .xgmac = false, .min_id = DWMAC_CORE_5_10, .regs = { .ptp_off = PTP_GMAC4_OFFSET, @@ -180,11 +187,29 @@ static const struct stmmac_hwif_entry { .tc = &dwmac510_tc_ops, .setup = dwmac4_setup, .quirks = NULL, - } + }, { + .gmac = false, + .gmac4 = false, + .xgmac = true, + .min_id = DWXGMAC_CORE_2_10, + .regs = { + .ptp_off = PTP_XGMAC_OFFSET, + .mmc_off = 0, + }, + .desc = &dwxgmac210_desc_ops, + .dma = &dwxgmac210_dma_ops, + .mac = &dwxgmac210_ops, + .hwtimestamp = &stmmac_ptp, + .mode = NULL, + .tc = NULL, + .setup = dwxgmac2_setup, + .quirks = NULL, + }, }; int stmmac_hwif_init(struct stmmac_priv *priv) { + bool needs_xgmac = priv->plat->has_xgmac; bool needs_gmac4 = priv->plat->has_gmac4; bool needs_gmac = priv->plat->has_gmac; const struct stmmac_hwif_entry *entry; @@ -195,7 +220,7 @@ int stmmac_hwif_init(struct stmmac_priv *priv) if (needs_gmac) { id = stmmac_get_id(priv, GMAC_VERSION); - } else if (needs_gmac4) { + } else if (needs_gmac4 || needs_xgmac) { id = stmmac_get_id(priv, GMAC4_VERSION); } else { id = 0; @@ -229,6 +254,8 @@ int stmmac_hwif_init(struct stmmac_priv *priv) continue; if (needs_gmac4 ^ entry->gmac4) continue; + if (needs_xgmac ^ entry->xgmac) + continue; /* Use synopsys_id var because some setups can override this */ if (priv->synopsys_id < entry->min_id) continue; diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h index fe8b536b13f8..92b8944f26e3 100644 --- a/drivers/net/ethernet/stmicro/stmmac/hwif.h +++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h @@ -183,6 +183,7 @@ struct stmmac_dma_ops { void (*set_rx_tail_ptr)(void __iomem *ioaddr, u32 tail_ptr, u32 chan); void (*set_tx_tail_ptr)(void __iomem *ioaddr, u32 tail_ptr, u32 chan); void (*enable_tso)(void __iomem *ioaddr, bool en, u32 chan); + void (*qmode)(void __iomem *ioaddr, u32 channel, u8 qmode); void (*set_bfsize)(void 
__iomem *ioaddr, int bfsize, u32 chan); }; @@ -236,6 +237,8 @@ struct stmmac_dma_ops { stmmac_do_void_callback(__priv, dma, set_tx_tail_ptr, __args) #define stmmac_enable_tso(__priv, __args...) \ stmmac_do_void_callback(__priv, dma, enable_tso, __args) +#define stmmac_dma_qmode(__priv, __args...) \ + stmmac_do_void_callback(__priv, dma, qmode, __args) #define stmmac_set_dma_bfsize(__priv, __args...) \ stmmac_do_void_callback(__priv, dma, set_bfsize, __args) @@ -444,17 +447,22 @@ struct stmmac_mode_ops { struct stmmac_priv; struct tc_cls_u32_offload; +struct tc_cbs_qopt_offload; struct stmmac_tc_ops { int (*init)(struct stmmac_priv *priv); int (*setup_cls_u32)(struct stmmac_priv *priv, struct tc_cls_u32_offload *cls); + int (*setup_cbs)(struct stmmac_priv *priv, + struct tc_cbs_qopt_offload *qopt); }; #define stmmac_tc_init(__priv, __args...) \ stmmac_do_callback(__priv, tc, init, __args) #define stmmac_tc_setup_cls_u32(__priv, __args...) \ stmmac_do_callback(__priv, tc, setup_cls_u32, __args) +#define stmmac_tc_setup_cbs(__priv, __args...) \ + stmmac_do_callback(__priv, tc, setup_cbs, __args) struct stmmac_regs_off { u32 ptp_off; @@ -471,6 +479,9 @@ extern const struct stmmac_ops dwmac410_ops; extern const struct stmmac_dma_ops dwmac410_dma_ops; extern const struct stmmac_ops dwmac510_ops; extern const struct stmmac_tc_ops dwmac510_tc_ops; +extern const struct stmmac_ops dwxgmac210_ops; +extern const struct stmmac_dma_ops dwxgmac210_dma_ops; +extern const struct stmmac_desc_ops dwxgmac210_desc_ops; #define GMAC_VERSION 0x00000020 /* GMAC CORE Version */ #define GMAC4_VERSION 0x00000110 /* GMAC4+ CORE Version */ diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index ef6a8d39db2f..ff1ffb46198a 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -51,6 +51,7 @@ #include <linux/reset.h> #include <linux/of_mdio.h> #include "dwmac1000.h" +#include "dwxgmac2.h" #include "hwif.h" #define STMMAC_ALIGN(x) __ALIGN_KERNEL(x, SMP_CACHE_BYTES) @@ -262,6 +263,21 @@ static void stmmac_clk_csr_set(struct stmmac_priv *priv) else priv->clk_csr = 0; } + + if (priv->plat->has_xgmac) { + if (clk_rate > 400000000) + priv->clk_csr = 0x5; + else if (clk_rate > 350000000) + priv->clk_csr = 0x4; + else if (clk_rate > 300000000) + priv->clk_csr = 0x3; + else if (clk_rate > 250000000) + priv->clk_csr = 0x2; + else if (clk_rate > 150000000) + priv->clk_csr = 0x1; + else + priv->clk_csr = 0x0; + } } static void print_pkt(unsigned char *buf, int len) @@ -498,7 +514,7 @@ static void stmmac_get_rx_hwtstamp(struct stmmac_priv *priv, struct dma_desc *p, if (!priv->hwts_rx_en) return; /* For GMAC4, the valid timestamp is from CTX next desc. 
*/ - if (priv->plat->has_gmac4) + if (priv->plat->has_gmac4 || priv->plat->has_xgmac) desc = np; /* Check if timestamp is available */ @@ -540,6 +556,9 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr) u32 ts_event_en = 0; u32 value = 0; u32 sec_inc; + bool xmac; + + xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac; if (!(priv->dma_cap.time_stamp || priv->adv_ts)) { netdev_alert(priv->dev, "No support for HW time stamping\n"); @@ -575,7 +594,7 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr) /* PTP v1, UDP, any kind of event packet */ config.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT; /* take time stamp for all event messages */ - if (priv->plat->has_gmac4) + if (xmac) snap_type_sel = PTP_GMAC4_TCR_SNAPTYPSEL_1; else snap_type_sel = PTP_TCR_SNAPTYPSEL_1; @@ -610,7 +629,7 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr) config.rx_filter = HWTSTAMP_FILTER_PTP_V2_L4_EVENT; ptp_v2 = PTP_TCR_TSVER2ENA; /* take time stamp for all event messages */ - if (priv->plat->has_gmac4) + if (xmac) snap_type_sel = PTP_GMAC4_TCR_SNAPTYPSEL_1; else snap_type_sel = PTP_TCR_SNAPTYPSEL_1; @@ -647,7 +666,7 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr) config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT; ptp_v2 = PTP_TCR_TSVER2ENA; /* take time stamp for all event messages */ - if (priv->plat->has_gmac4) + if (xmac) snap_type_sel = PTP_GMAC4_TCR_SNAPTYPSEL_1; else snap_type_sel = PTP_TCR_SNAPTYPSEL_1; @@ -718,7 +737,7 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr) /* program Sub Second Increment reg */ stmmac_config_sub_second_increment(priv, priv->ptpaddr, priv->plat->clk_ptp_rate, - priv->plat->has_gmac4, &sec_inc); + xmac, &sec_inc); temp = div_u64(1000000000ULL, sec_inc); /* Store sub second increment and flags for later use */ @@ -755,12 +774,14 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr) */ static int stmmac_init_ptp(struct stmmac_priv *priv) { + bool xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac; + if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp)) return -EOPNOTSUPP; priv->adv_ts = 0; - /* Check if adv_ts can be enabled for dwmac 4.x core */ - if (priv->plat->has_gmac4 && priv->dma_cap.atime_stamp) + /* Check if adv_ts can be enabled for dwmac 4.x / xgmac core */ + if (xmac && priv->dma_cap.atime_stamp) priv->adv_ts = 1; /* Dwmac 3.x core with extend_desc can support adv_ts */ else if (priv->extend_desc && priv->dma_cap.atime_stamp) @@ -2173,6 +2194,12 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv) return ret; } + /* DMA Configuration */ + stmmac_dma_init(priv, priv->ioaddr, priv->plat->dma_cfg, atds); + + if (priv->plat->axi) + stmmac_axi(priv, priv->ioaddr, priv->plat->axi); + /* DMA RX Channel Configuration */ for (chan = 0; chan < rx_channels_count; chan++) { rx_q = &priv->rx_queue[chan]; @@ -2203,12 +2230,6 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv) for (chan = 0; chan < dma_csr_ch; chan++) stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan); - /* DMA Configuration */ - stmmac_dma_init(priv, priv->ioaddr, priv->plat->dma_cfg, atds); - - if (priv->plat->axi) - stmmac_axi(priv, priv->ioaddr, priv->plat->axi); - return ret; } @@ -2526,9 +2547,6 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp) netdev_warn(priv->dev, "%s: failed debugFS registration\n", __func__); #endif - /* Start the ball rolling... 
-    stmmac_start_all_dma(priv);
-
     priv->tx_lpi_timer = STMMAC_DEFAULT_TWT_LS;
 
     if (priv->use_riwt) {
@@ -2549,6 +2567,9 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
             stmmac_enable_tso(priv, priv->ioaddr, 1, chan);
     }
 
+    /* Start the ball rolling... */
+    stmmac_start_all_dma(priv);
+
     return 0;
 }
 
@@ -3305,6 +3326,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
     int coe = priv->hw->rx_csum;
     unsigned int next_entry;
     unsigned int count = 0;
+    bool xmac;
+
+    xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
 
     if (netif_msg_rx_status(priv)) {
         void *rx_head;
@@ -3406,7 +3430,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
              * in case of GMAC4 because it needs
              * to refill the used descriptors, always.
              */
-            if (unlikely(!priv->plat->has_gmac4 &&
+            if (unlikely(!xmac &&
                          ((frame_len < priv->rx_copybreak) ||
                          stmmac_rx_threshold_count(rx_q)))) {
                 skb = netdev_alloc_skb_ip_align(priv->dev,
@@ -3642,7 +3666,9 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
     u32 tx_cnt = priv->plat->tx_queues_to_use;
     u32 queues_count;
     u32 queue;
+    bool xmac;
 
+    xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
     queues_count = (rx_cnt > tx_cnt) ? rx_cnt : tx_cnt;
 
     if (priv->irq_wake)
@@ -3661,7 +3687,7 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
         return IRQ_HANDLED;
 
     /* To handle GMAC own interrupts */
-    if ((priv->plat->has_gmac) || (priv->plat->has_gmac4)) {
+    if ((priv->plat->has_gmac) || xmac) {
         int status = stmmac_host_irq_status(priv, priv->hw, &priv->xstats);
         int mtl_status;
 
@@ -3778,7 +3804,7 @@ static int stmmac_setup_tc_block(struct stmmac_priv *priv,
     switch (f->command) {
     case TC_BLOCK_BIND:
         return tcf_block_cb_register(f->block, stmmac_setup_tc_block_cb,
-                priv, priv);
+                priv, priv, f->extack);
     case TC_BLOCK_UNBIND:
         tcf_block_cb_unregister(f->block, stmmac_setup_tc_block_cb, priv);
         return 0;
@@ -3795,6 +3821,8 @@ static int stmmac_setup_tc(struct net_device *ndev, enum tc_setup_type type,
     switch (type) {
     case TC_SETUP_BLOCK:
         return stmmac_setup_tc_block(priv, type_data);
+    case TC_SETUP_QDISC_CBS:
+        return stmmac_tc_setup_cbs(priv, priv, type_data);
     default:
         return -EOPNOTSUPP;
     }
@@ -4267,6 +4295,8 @@ int stmmac_dvr_probe(struct device *device,
     ndev->min_mtu = ETH_ZLEN - ETH_HLEN;
     if ((priv->plat->enh_desc) || (priv->synopsys_id >= DWMAC_CORE_4_00))
         ndev->max_mtu = JUMBO_LEN;
+    else if (priv->plat->has_xgmac)
+        ndev->max_mtu = XGMAC_JUMBO_LEN;
     else
         ndev->max_mtu = SKB_MAX_HEAD(NET_SKB_PAD + NET_IP_ALIGN);
     /* Will not overwrite ndev->max_mtu if plat->maxmtu > ndev->max_mtu
@@ -4288,7 +4318,8 @@ int stmmac_dvr_probe(struct device *device,
      * has to be disable and this can be done by passing the
      * riwt_off field from the platform.
      */
-    if ((priv->synopsys_id >= DWMAC_CORE_3_50) && (!priv->plat->riwt_off)) {
+    if (((priv->synopsys_id >= DWMAC_CORE_3_50) ||
+        (priv->plat->has_xgmac)) && (!priv->plat->riwt_off)) {
         priv->use_riwt = 1;
         dev_info(priv->device,
                  "Enable RX Mitigation via HW Watchdog Timer\n");
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
index 5df1a608e566..b72ef171477e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
@@ -29,6 +29,7 @@
 #include <linux/phy.h>
 #include <linux/slab.h>
 
+#include "dwxgmac2.h"
 #include "stmmac.h"
 
 #define MII_BUSY 0x00000001
@@ -39,6 +40,115 @@
 #define MII_GMAC4_WRITE (1 << MII_GMAC4_GOC_SHIFT)
 #define MII_GMAC4_READ (3 << MII_GMAC4_GOC_SHIFT)
 
+/* XGMAC defines */
+#define MII_XGMAC_SADDR BIT(18)
+#define MII_XGMAC_CMD_SHIFT 16
+#define MII_XGMAC_WRITE (1 << MII_XGMAC_CMD_SHIFT)
+#define MII_XGMAC_READ (3 << MII_XGMAC_CMD_SHIFT)
+#define MII_XGMAC_BUSY BIT(22)
+#define MII_XGMAC_MAX_C22ADDR 3
+#define MII_XGMAC_C22P_MASK GENMASK(MII_XGMAC_MAX_C22ADDR, 0)
+
+static int stmmac_xgmac2_c22_format(struct stmmac_priv *priv, int phyaddr,
+                                    int phyreg, u32 *hw_addr)
+{
+    unsigned int mii_data = priv->hw->mii.data;
+    u32 tmp;
+
+    /* HW does not support C22 addr >= 4 */
+    if (phyaddr > MII_XGMAC_MAX_C22ADDR)
+        return -ENODEV;
+    /* Wait until any existing MII operation is complete */
+    if (readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+                           !(tmp & MII_XGMAC_BUSY), 100, 10000))
+        return -EBUSY;
+
+    /* Set port as Clause 22 */
+    tmp = readl(priv->ioaddr + XGMAC_MDIO_C22P);
+    tmp &= ~MII_XGMAC_C22P_MASK;
+    tmp |= BIT(phyaddr);
+    writel(tmp, priv->ioaddr + XGMAC_MDIO_C22P);
+
+    *hw_addr = (phyaddr << 16) | (phyreg & 0x1f);
+    return 0;
+}
+
+static int stmmac_xgmac2_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
+{
+    struct net_device *ndev = bus->priv;
+    struct stmmac_priv *priv = netdev_priv(ndev);
+    unsigned int mii_address = priv->hw->mii.addr;
+    unsigned int mii_data = priv->hw->mii.data;
+    u32 tmp, addr, value = MII_XGMAC_BUSY;
+    int ret;
+
+    if (phyreg & MII_ADDR_C45) {
+        return -EOPNOTSUPP;
+    } else {
+        ret = stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr);
+        if (ret)
+            return ret;
+    }
+
+    value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift)
+        & priv->hw->mii.clk_csr_mask;
+    value |= MII_XGMAC_SADDR | MII_XGMAC_READ;
+
+    /* Wait until any existing MII operation is complete */
+    if (readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+                           !(tmp & MII_XGMAC_BUSY), 100, 10000))
+        return -EBUSY;
+
+    /* Set the MII address register to read */
+    writel(addr, priv->ioaddr + mii_address);
+    writel(value, priv->ioaddr + mii_data);
+
+    /* Wait until any existing MII operation is complete */
+    if (readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+                           !(tmp & MII_XGMAC_BUSY), 100, 10000))
+        return -EBUSY;
+
+    /* Read the data from the MII data register */
+    return readl(priv->ioaddr + mii_data) & GENMASK(15, 0);
+}
+
+static int stmmac_xgmac2_mdio_write(struct mii_bus *bus, int phyaddr,
+                                    int phyreg, u16 phydata)
+{
+    struct net_device *ndev = bus->priv;
+    struct stmmac_priv *priv = netdev_priv(ndev);
+    unsigned int mii_address = priv->hw->mii.addr;
+    unsigned int mii_data = priv->hw->mii.data;
+    u32 addr, tmp, value = MII_XGMAC_BUSY;
+    int ret;
+
+    if (phyreg & MII_ADDR_C45) {
+        return -EOPNOTSUPP;
+    } else {
+        ret = stmmac_xgmac2_c22_format(priv, phyaddr, phyreg, &addr);
+        if (ret)
+            return ret;
+    }
+
+    value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift)
+        & priv->hw->mii.clk_csr_mask;
+    value |= phydata | MII_XGMAC_SADDR;
+    value |= MII_XGMAC_WRITE;
+
+    /* Wait until any existing MII operation is complete */
+    if (readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+                           !(tmp & MII_XGMAC_BUSY), 100, 10000))
+        return -EBUSY;
+
+    /* Set the MII address register to write */
+    writel(addr, priv->ioaddr + mii_address);
+    writel(value, priv->ioaddr + mii_data);
+
+    /* Wait until any existing MII operation is complete */
+    return readl_poll_timeout(priv->ioaddr + mii_data, tmp,
+                              !(tmp & MII_XGMAC_BUSY), 100, 10000);
+}
+
 /**
  * stmmac_mdio_read
  * @bus: points to the mii_bus structure
@@ -205,7 +315,7 @@ int stmmac_mdio_register(struct net_device *ndev)
     struct stmmac_mdio_bus_data *mdio_bus_data = priv->plat->mdio_bus_data;
     struct device_node *mdio_node = priv->plat->mdio_node;
     struct device *dev = ndev->dev.parent;
-    int addr, found;
+    int addr, found, max_addr;
 
     if (!mdio_bus_data)
         return 0;
@@ -223,8 +333,23 @@ int stmmac_mdio_register(struct net_device *ndev)
 #endif
 
     new_bus->name = "stmmac";
-    new_bus->read = &stmmac_mdio_read;
-    new_bus->write = &stmmac_mdio_write;
+
+    if (priv->plat->has_xgmac) {
+        new_bus->read = &stmmac_xgmac2_mdio_read;
+        new_bus->write = &stmmac_xgmac2_mdio_write;
+
+        /* Right now only C22 phys are supported */
+        max_addr = MII_XGMAC_MAX_C22ADDR + 1;
+
+        /* Check if DT specified an unsupported phy addr */
+        if (priv->plat->phy_addr > MII_XGMAC_MAX_C22ADDR)
+            dev_err(dev, "Unsupported phy_addr (max=%d)\n",
+                    MII_XGMAC_MAX_C22ADDR);
+    } else {
+        new_bus->read = &stmmac_mdio_read;
+        new_bus->write = &stmmac_mdio_write;
+        max_addr = PHY_MAX_ADDR;
+    }
+
     new_bus->reset = &stmmac_mdio_reset;
     snprintf(new_bus->id, MII_BUS_ID_SIZE, "%s-%x",
@@ -243,7 +368,7 @@ int stmmac_mdio_register(struct net_device *ndev)
         goto bus_register_done;
 
     found = 0;
-    for (addr = 0; addr < PHY_MAX_ADDR; addr++) {
+    for (addr = 0; addr < max_addr; addr++) {
         struct phy_device *phydev = mdiobus_get_phy(new_bus, addr);
 
         if (!phydev)
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
index 6a393b16a1fc..c54a50dbd5ac 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
@@ -303,7 +303,7 @@ static void stmmac_pci_remove(struct pci_dev *pdev)
     pci_disable_device(pdev);
 }
 
-static int stmmac_pci_suspend(struct device *dev)
+static int __maybe_unused stmmac_pci_suspend(struct device *dev)
 {
     struct pci_dev *pdev = to_pci_dev(dev);
     int ret;
@@ -321,7 +321,7 @@ static int stmmac_pci_suspend(struct device *dev)
     return 0;
 }
 
-static int stmmac_pci_resume(struct device *dev)
+static int __maybe_unused stmmac_pci_resume(struct device *dev)
 {
     struct pci_dev *pdev = to_pci_dev(dev);
     int ret;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
index 72da77b94ecd..3609c7b696c7 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
@@ -486,6 +486,12 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
         plat->force_sf_dma_mode = 1;
     }
 
+    if (of_device_is_compatible(np, "snps,dwxgmac")) {
+        plat->has_xgmac = 1;
+        plat->pmt = 1;
+        plat->tso_en = of_property_read_bool(np, "snps,tso");
+    }
+
     dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*dma_cfg),
                            GFP_KERNEL);
     if (!dma_cfg) {
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
index 0cb0e39a2be9..2293e21f789f 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
@@ -71,6 +71,9 @@ static int stmmac_adjust_time(struct ptp_clock_info *ptp, s64 delta)
     u32 sec, nsec;
     u32 quotient, reminder;
     int neg_adj = 0;
+    bool xmac;
+
+    xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
 
     if (delta < 0) {
         neg_adj = 1;
@@ -82,8 +85,7 @@ static int stmmac_adjust_time(struct ptp_clock_info *ptp, s64 delta)
     nsec = reminder;
 
     spin_lock_irqsave(&priv->ptp_lock, flags);
-    stmmac_adjust_systime(priv, priv->ptpaddr, sec, nsec, neg_adj,
-            priv->plat->has_gmac4);
+    stmmac_adjust_systime(priv, priv->ptpaddr, sec, nsec, neg_adj, xmac);
     spin_unlock_irqrestore(&priv->ptp_lock, flags);
 
     return 0;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
index f4b31d69f60e..ecccf895fd7e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
@@ -21,6 +21,7 @@
 #ifndef __STMMAC_PTP_H__
 #define __STMMAC_PTP_H__
 
+#define PTP_XGMAC_OFFSET 0xd00
 #define PTP_GMAC4_OFFSET 0xb00
 #define PTP_GMAC3_X_OFFSET 0x700
 
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
index 2258cd8cc844..1a96dd9c1091 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
@@ -289,7 +289,67 @@ static int tc_init(struct stmmac_priv *priv)
     return 0;
 }
 
+static int tc_setup_cbs(struct stmmac_priv *priv,
+                        struct tc_cbs_qopt_offload *qopt)
+{
+    u32 tx_queues_count = priv->plat->tx_queues_to_use;
+    u32 queue = qopt->queue;
+    u32 ptr, speed_div;
+    u32 mode_to_use;
+    u64 value;
+    int ret;
+
+    /* Queue 0 is not AVB capable */
+    if (queue <= 0 || queue >= tx_queues_count)
+        return -EINVAL;
+    if (priv->speed != SPEED_100 && priv->speed != SPEED_1000)
+        return -EOPNOTSUPP;
+
+    mode_to_use = priv->plat->tx_queues_cfg[queue].mode_to_use;
+    if (mode_to_use == MTL_QUEUE_DCB && qopt->enable) {
+        ret = stmmac_dma_qmode(priv, priv->ioaddr, queue, MTL_QUEUE_AVB);
+        if (ret)
+            return ret;
+
+        priv->plat->tx_queues_cfg[queue].mode_to_use = MTL_QUEUE_AVB;
+    } else if (!qopt->enable) {
+        return stmmac_dma_qmode(priv, priv->ioaddr, queue, MTL_QUEUE_DCB);
+    }
+
+    /* Port Transmit Rate and Speed Divider */
+    ptr = (priv->speed == SPEED_100) ? 4 : 8;
+    speed_div = (priv->speed == SPEED_100) ? 100000 : 1000000;
+
+    /* Final adjustments for HW */
+    value = div_s64(qopt->idleslope * 1024ll * ptr, speed_div);
+    priv->plat->tx_queues_cfg[queue].idle_slope = value & GENMASK(31, 0);
+
+    value = div_s64(-qopt->sendslope * 1024ll * ptr, speed_div);
+    priv->plat->tx_queues_cfg[queue].send_slope = value & GENMASK(31, 0);
+
+    value = qopt->hicredit * 1024ll * 8;
+    priv->plat->tx_queues_cfg[queue].high_credit = value & GENMASK(31, 0);
+
+    value = qopt->locredit * 1024ll * 8;
+    priv->plat->tx_queues_cfg[queue].low_credit = value & GENMASK(31, 0);
+
+    ret = stmmac_config_cbs(priv, priv->hw,
+                            priv->plat->tx_queues_cfg[queue].send_slope,
+                            priv->plat->tx_queues_cfg[queue].idle_slope,
+                            priv->plat->tx_queues_cfg[queue].high_credit,
+                            priv->plat->tx_queues_cfg[queue].low_credit,
+                            queue);
+    if (ret)
+        return ret;
+
+    dev_info(priv->device, "CBS queue %d: send %d, idle %d, hi %d, lo %d\n",
+             queue, qopt->sendslope, qopt->idleslope,
+             qopt->hicredit, qopt->locredit);
+    return 0;
+}
+
 const struct stmmac_tc_ops dwmac510_tc_ops = {
     .init = tc_init,
     .setup_cls_u32 = tc_setup_cls_u32,
+    .setup_cbs = tc_setup_cbs,
 };
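
For readers tracing the new CBS offload path, the scaling that tc_setup_cbs() applies to the qdisc parameters can be checked in isolation. The stand-alone C sketch below is not part of the patch; it only mirrors the arithmetic shown above, with made-up example values (idleslope/sendslope in kbit/s, hicredit/locredit in bytes, as the CBS qdisc offload passes them) and assuming a 1 Gbit/s link.

#include <stdint.h>
#include <stdio.h>

/* Stand-alone illustration (not driver code): reproduce the CBS scaling
 * from tc_setup_cbs() for one hypothetical configuration.
 */
int main(void)
{
    int64_t idleslope = 20000;   /* example: reserve 20 Mbit/s (kbit/s) */
    int64_t sendslope = -980000; /* example: idleslope - 1 Gbit/s port rate */
    int64_t hicredit = 100;      /* example credits in bytes */
    int64_t locredit = -100;
    int speed = 1000;            /* link speed in Mbit/s (SPEED_1000) */

    /* Port Transmit Rate and Speed Divider, as in the patch */
    int64_t ptr = (speed == 100) ? 4 : 8;
    int64_t speed_div = (speed == 100) ? 100000 : 1000000;

    /* Same conversions as tc_setup_cbs(), truncated to 32 bits */
    uint32_t idle_slope = (uint32_t)((idleslope * 1024 * ptr) / speed_div);
    uint32_t send_slope = (uint32_t)((-sendslope * 1024 * ptr) / speed_div);
    uint32_t high_credit = (uint32_t)(hicredit * 1024 * 8);
    uint32_t low_credit = (uint32_t)(locredit * 1024 * 8);

    printf("idle_slope=%u send_slope=%u high_credit=0x%x low_credit=0x%x\n",
           (unsigned)idle_slope, (unsigned)send_slope,
           (unsigned)high_credit, (unsigned)low_credit);
    return 0;
}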