author | Linus Torvalds <torvalds@linux-foundation.org> | 2020-06-04 01:27:18 +0200 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2020-06-04 01:27:18 +0200 |
commit | cb8e59cc87201af93dfbb6c3dccc8fcad72a09c2 (patch) | |
tree | a334db9022f89654b777bbce8c4c6632e65b9031 /drivers/net/ethernet/freescale | |
parent | Merge branch 'uaccess.comedi' of git://git.kernel.org/pub/scm/linux/kernel/gi... (diff) | |
parent | selftests: net: ip_defrag: ignore EPERM (diff) | |
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from David Miller:
1) Allow setting Bluetooth L2CAP modes via socket option, from Luiz
Augusto von Dentz.
2) Add GSO partial support to igc, from Sasha Neftin.
3) Several cleanups and improvements to r8169 from Heiner Kallweit.
4) Add IF_OPER_TESTING link state and use it when ethtool triggers a
device self-test. From Andrew Lunn.
5) Start moving away from custom driver versions, use the globally
defined kernel version instead, from Leon Romanovsky.
6) Support GRO via gro_cells in the DSA layer, from Alexander Lobakin.
7) Allow hard IRQ deferral during NAPI, from Eric Dumazet.
8) Add SR-IOV and VF support to hinic, from Luo bin.
9) Support Media Redundancy Protocol (MRP) in the bridging code, from
Horatiu Vultur.
10) Support netmap in the nft_nat code, from Pablo Neira Ayuso.
11) Allow UDPv6 encapsulation of ESP in the ipsec code, from Sabrina
Dubroca. Also add ipv6 support for espintcp.
12) Lots of ReST conversions of the networking documentation, from Mauro
Carvalho Chehab.
13) Support configuration of ethtool rxnfc flows in bcmgenet driver,
from Doug Berger.
14) Allow dumping the cgroup id and filtering by it in the inet_diag
code, from Dmitry Yakunin.
15) Add infrastructure to export netlink attribute policies to
userspace, from Johannes Berg.
16) Several optimizations to sch_fq scheduler, from Eric Dumazet.
17) Fall back to the default qdisc if qdisc init fails, because
otherwise a packet scheduler init failure will make a device
inoperative. From Jesper Dangaard Brouer.
18) Several RISCV bpf jit optimizations, from Luke Nelson.
19) Correct the return type of the ->ndo_start_xmit() method in several
drivers; it's netdev_tx_t, but many drivers were using 'int'. From
Yunjian Wang. See the signature sketch after this list.
20) Add an ethtool interface for PHY master/slave config, from Oleksij
Rempel.
21) Add BPF iterators, from Yonghong Song.
22) Add cable test infrastructure, including ethtool interfaces, from
Andrew Lunn. The Marvell PHY driver is the first to support this
facility.
23) Remove zero-length arrays all over, from Gustavo A. R. Silva.
24) Calculate and maintain an explicit frame size in XDP, from Jesper
Dangaard Brouer.
25) Add CAP_BPF, from Alexei Starovoitov.
26) Support terse dumps in the packet scheduler, from Vlad Buslov.
27) Support XDP_TX bulking in dpaa2 driver, from Ioana Ciornei.
28) Add devm_register_netdev(), a device-managed variant of
register_netdev(), from Bartosz Golaszewski. See the usage sketch
after this list.
29) Minimize qdisc resets, from Cong Wang.
30) Get rid of kernel_getsockopt and kernel_setsockopt in order to
eliminate set_fs/get_fs calls. From Christoph Hellwig.
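Two editor-added sketches follow for items 19 and 28. They are not code
from this merge, and all foo_* names are hypothetical.

For item 19: ->ndo_start_xmit() is declared to return netdev_tx_t, so a
driver returning a plain 'int' relies on implicit conversion. A minimal
sketch of the correct shape:

```c
#include <linux/netdevice.h>

/* Hypothetical driver: the xmit hook must return netdev_tx_t. */
static netdev_tx_t foo_start_xmit(struct sk_buff *skb,
				  struct net_device *ndev)
{
	/* ... hand the skb to the hardware queue here ... */
	return NETDEV_TX_OK;	/* or NETDEV_TX_BUSY so the qdisc requeues */
}

static const struct net_device_ops foo_netdev_ops = {
	.ndo_start_xmit	= foo_start_xmit,
};
```

For item 28: devm_register_netdev() ties unregistration to the parent
device's lifetime, so the driver's remove path needs no explicit
unregister_netdev(). A probe-routine sketch, assuming a device-managed
netdev allocation and a hypothetical foo_priv structure:

```c
#include <linux/etherdevice.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	struct net_device *ndev;

	/* devm_register_netdev() expects the netdev itself to be
	 * device-managed, hence devm_alloc_etherdev() here.
	 */
	ndev = devm_alloc_etherdev(&pdev->dev, sizeof(struct foo_priv));
	if (!ndev)
		return -ENOMEM;

	ndev->netdev_ops = &foo_netdev_ops;
	SET_NETDEV_DEV(ndev, &pdev->dev);

	return devm_register_netdev(&pdev->dev, ndev);
}
```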
* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2517 commits)
selftests: net: ip_defrag: ignore EPERM
net_failover: fixed rollback in net_failover_open()
Revert "tipc: Fix potential tipc_aead refcnt leak in tipc_crypto_rcv"
Revert "tipc: Fix potential tipc_node refcnt leak in tipc_rcv"
vmxnet3: allow rx flow hash ops only when rss is enabled
hinic: add set_channels ethtool_ops support
selftests/bpf: Add a default $(CXX) value
tools/bpf: Don't use $(COMPILE.c)
bpf, selftests: Use bpf_probe_read_kernel
s390/bpf: Use bcr 0,%0 as tail call nop filler
s390/bpf: Maintain 8-byte stack alignment
selftests/bpf: Fix verifier test
selftests/bpf: Fix sample_cnt shared between two threads
bpf, selftests: Adapt cls_redirect to call csum_level helper
bpf: Add csum_level helper for fixing up csum levels
bpf: Fix up bpf_skb_adjust_room helper's skb csum setting
sfc: add missing annotation for efx_ef10_try_update_nic_stats_vf()
crypto/chtls: IPv6 support for inline TLS
Crypto/chcr: Fixes a coccinile check error
Crypto/chcr: Fixes compilations warnings
...
Diffstat (limited to 'drivers/net/ethernet/freescale')
18 files changed, 2447 insertions(+), 230 deletions(-)
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index 6bfa7575af94..2972244e6eb0 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -2107,7 +2107,7 @@ workaround: /* Workaround for DPAA_A050385 requires data start to be aligned */ start = PTR_ALIGN(new_skb->data, DPAA_A050385_ALIGN); - if (start - new_skb->data != 0) + if (start - new_skb->data) skb_reserve(new_skb, start - new_skb->data); skb_put(new_skb, skb->len); diff --git a/drivers/net/ethernet/freescale/dpaa2/Kconfig b/drivers/net/ethernet/freescale/dpaa2/Kconfig index c6fb8e4021ac..feea797cde02 100644 --- a/drivers/net/ethernet/freescale/dpaa2/Kconfig +++ b/drivers/net/ethernet/freescale/dpaa2/Kconfig @@ -9,6 +9,16 @@ config FSL_DPAA2_ETH The driver manages network objects discovered on the Freescale MC bus. +if FSL_DPAA2_ETH +config FSL_DPAA2_ETH_DCB + bool "Data Center Bridging (DCB) Support" + default n + depends on DCB + help + Enable Priority-Based Flow Control (PFC) support for DPAA2 Ethernet + devices. +endif + config FSL_DPAA2_PTP_CLOCK tristate "Freescale DPAA2 PTP Clock" depends on FSL_DPAA2_ETH && PTP_1588_CLOCK_QORIQ diff --git a/drivers/net/ethernet/freescale/dpaa2/Makefile b/drivers/net/ethernet/freescale/dpaa2/Makefile index 69184ca3b7b9..6e7f33c956bf 100644 --- a/drivers/net/ethernet/freescale/dpaa2/Makefile +++ b/drivers/net/ethernet/freescale/dpaa2/Makefile @@ -7,6 +7,7 @@ obj-$(CONFIG_FSL_DPAA2_ETH) += fsl-dpaa2-eth.o obj-$(CONFIG_FSL_DPAA2_PTP_CLOCK) += fsl-dpaa2-ptp.o fsl-dpaa2-eth-objs := dpaa2-eth.o dpaa2-ethtool.o dpni.o dpaa2-mac.o dpmac.o +fsl-dpaa2-eth-${CONFIG_FSL_DPAA2_ETH_DCB} += dpaa2-eth-dcb.o fsl-dpaa2-eth-${CONFIG_DEBUG_FS} += dpaa2-eth-debugfs.o fsl-dpaa2-ptp-objs := dpaa2-ptp.o dprtc.o diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-dcb.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-dcb.c new file mode 100644 index 000000000000..83dee575c2fa --- /dev/null +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-dcb.c @@ -0,0 +1,150 @@ +// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) +/* Copyright 2020 NXP */ + +#include "dpaa2-eth.h" + +static int dpaa2_eth_dcbnl_ieee_getpfc(struct net_device *net_dev, + struct ieee_pfc *pfc) +{ + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + + if (!(priv->link_state.options & DPNI_LINK_OPT_PFC_PAUSE)) + return 0; + + memcpy(pfc, &priv->pfc, sizeof(priv->pfc)); + pfc->pfc_cap = dpaa2_eth_tc_count(priv); + + return 0; +} + +static inline bool is_prio_enabled(u8 pfc_en, u8 tc) +{ + return !!(pfc_en & (1 << tc)); +} + +static int set_pfc_cn(struct dpaa2_eth_priv *priv, u8 pfc_en) +{ + struct dpni_congestion_notification_cfg cfg = {0}; + int i, err; + + cfg.notification_mode = DPNI_CONG_OPT_FLOW_CONTROL; + cfg.units = DPNI_CONGESTION_UNIT_FRAMES; + cfg.message_iova = 0ULL; + cfg.message_ctx = 0ULL; + + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + if (is_prio_enabled(pfc_en, i)) { + cfg.threshold_entry = DPAA2_ETH_CN_THRESH_ENTRY(priv); + cfg.threshold_exit = DPAA2_ETH_CN_THRESH_EXIT(priv); + } else { + /* For priorities not set in the pfc_en mask, we leave + * the congestion thresholds at zero, which effectively + * disables generation of PFC frames for them + */ + cfg.threshold_entry = 0; + cfg.threshold_exit = 0; + } + + err = dpni_set_congestion_notification(priv->mc_io, 0, + priv->mc_token, + DPNI_QUEUE_RX, i, &cfg); + if (err) { + netdev_err(priv->net_dev, + "dpni_set_congestion_notification failed\n"); + 
return err; + } + } + + return 0; +} + +static int dpaa2_eth_dcbnl_ieee_setpfc(struct net_device *net_dev, + struct ieee_pfc *pfc) +{ + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + struct dpni_link_cfg link_cfg = {0}; + bool tx_pause; + int err; + + if (pfc->mbc || pfc->delay) + return -EOPNOTSUPP; + + /* If same PFC enabled mask, nothing to do */ + if (priv->pfc.pfc_en == pfc->pfc_en) + return 0; + + /* We allow PFC configuration even if it won't have any effect until + * general pause frames are enabled + */ + tx_pause = dpaa2_eth_tx_pause_enabled(priv->link_state.options); + if (!dpaa2_eth_rx_pause_enabled(priv->link_state.options) || !tx_pause) + netdev_warn(net_dev, "Pause support must be enabled in order for PFC to work!\n"); + + link_cfg.rate = priv->link_state.rate; + link_cfg.options = priv->link_state.options; + if (pfc->pfc_en) + link_cfg.options |= DPNI_LINK_OPT_PFC_PAUSE; + else + link_cfg.options &= ~DPNI_LINK_OPT_PFC_PAUSE; + err = dpni_set_link_cfg(priv->mc_io, 0, priv->mc_token, &link_cfg); + if (err) { + netdev_err(net_dev, "dpni_set_link_cfg failed\n"); + return err; + } + + /* Configure congestion notifications for the enabled priorities */ + err = set_pfc_cn(priv, pfc->pfc_en); + if (err) + return err; + + memcpy(&priv->pfc, pfc, sizeof(priv->pfc)); + priv->pfc_enabled = !!pfc->pfc_en; + + dpaa2_eth_set_rx_taildrop(priv, tx_pause, priv->pfc_enabled); + + return 0; +} + +static u8 dpaa2_eth_dcbnl_getdcbx(struct net_device *net_dev) +{ + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + + return priv->dcbx_mode; +} + +static u8 dpaa2_eth_dcbnl_setdcbx(struct net_device *net_dev, u8 mode) +{ + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + + return (mode != (priv->dcbx_mode)) ? 1 : 0; +} + +static u8 dpaa2_eth_dcbnl_getcap(struct net_device *net_dev, int capid, u8 *cap) +{ + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + + switch (capid) { + case DCB_CAP_ATTR_PFC: + *cap = true; + break; + case DCB_CAP_ATTR_PFC_TCS: + *cap = 1 << (dpaa2_eth_tc_count(priv) - 1); + break; + case DCB_CAP_ATTR_DCBX: + *cap = priv->dcbx_mode; + break; + default: + *cap = false; + break; + } + + return 0; +} + +const struct dcbnl_rtnl_ops dpaa2_eth_dcbnl_ops = { + .ieee_getpfc = dpaa2_eth_dcbnl_ieee_getpfc, + .ieee_setpfc = dpaa2_eth_dcbnl_ieee_setpfc, + .getdcbx = dpaa2_eth_dcbnl_getdcbx, + .setdcbx = dpaa2_eth_dcbnl_setdcbx, + .getcap = dpaa2_eth_dcbnl_getcap, +}; diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-debugfs.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-debugfs.c index a9afe46b837f..c453a23045c1 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-debugfs.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-debugfs.c @@ -81,8 +81,8 @@ static int dpaa2_dbg_fqs_show(struct seq_file *file, void *offset) int i, err; seq_printf(file, "FQ stats for %s:\n", priv->net_dev->name); - seq_printf(file, "%s%16s%16s%16s%16s\n", - "VFQID", "CPU", "Type", "Frames", "Pending frames"); + seq_printf(file, "%s%16s%16s%16s%16s%16s\n", + "VFQID", "CPU", "TC", "Type", "Frames", "Pending frames"); for (i = 0; i < priv->num_fqs; i++) { fq = &priv->fq[i]; @@ -90,9 +90,10 @@ static int dpaa2_dbg_fqs_show(struct seq_file *file, void *offset) if (err) fcnt = 0; - seq_printf(file, "%5d%16d%16s%16llu%16u\n", + seq_printf(file, "%5d%16d%16d%16s%16llu%16u\n", fq->fqid, fq->target_cpu, + fq->tc, fq_type_to_str(fq), fq->stats.frames, fcnt); @@ -127,16 +128,19 @@ static int dpaa2_dbg_ch_show(struct seq_file *file, void *offset) int i; seq_printf(file, "Channel 
stats for %s:\n", priv->net_dev->name); - seq_printf(file, "%s%16s%16s%16s%16s\n", - "CHID", "CPU", "Deq busy", "CDANs", "Buf count"); + seq_printf(file, "%s%16s%16s%16s%16s%16s%16s\n", + "CHID", "CPU", "Deq busy", "Frames", "CDANs", + "Avg Frm/CDAN", "Buf count"); for (i = 0; i < priv->num_channels; i++) { ch = priv->channel[i]; - seq_printf(file, "%4d%16d%16llu%16llu%16d\n", + seq_printf(file, "%4d%16d%16llu%16llu%16llu%16llu%16d\n", ch->ch_id, ch->nctx.desired_cpu, ch->stats.dequeue_portal_busy, + ch->stats.frames, ch->stats.cdan, + div64_u64(ch->stats.frames, ch->stats.cdan), ch->buf_count); } diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c index d97c320a2dc0..8fb48de5d18c 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) /* Copyright 2014-2016 Freescale Semiconductor Inc. - * Copyright 2016-2019 NXP + * Copyright 2016-2020 NXP */ #include <linux/init.h> #include <linux/module.h> @@ -244,13 +244,72 @@ static void xdp_release_buf(struct dpaa2_eth_priv *priv, ch->xdp.drop_cnt = 0; } -static int xdp_enqueue(struct dpaa2_eth_priv *priv, struct dpaa2_fd *fd, - void *buf_start, u16 queue_id) +static int dpaa2_eth_xdp_flush(struct dpaa2_eth_priv *priv, + struct dpaa2_eth_fq *fq, + struct dpaa2_eth_xdp_fds *xdp_fds) +{ + int total_enqueued = 0, retries = 0, enqueued; + struct dpaa2_eth_drv_stats *percpu_extras; + int num_fds, err, max_retries; + struct dpaa2_fd *fds; + + percpu_extras = this_cpu_ptr(priv->percpu_extras); + + /* try to enqueue all the FDs until the max number of retries is hit */ + fds = xdp_fds->fds; + num_fds = xdp_fds->num; + max_retries = num_fds * DPAA2_ETH_ENQUEUE_RETRIES; + while (total_enqueued < num_fds && retries < max_retries) { + err = priv->enqueue(priv, fq, &fds[total_enqueued], + 0, num_fds - total_enqueued, &enqueued); + if (err == -EBUSY) { + percpu_extras->tx_portal_busy += ++retries; + continue; + } + total_enqueued += enqueued; + } + xdp_fds->num = 0; + + return total_enqueued; +} + +static void xdp_tx_flush(struct dpaa2_eth_priv *priv, + struct dpaa2_eth_channel *ch, + struct dpaa2_eth_fq *fq) +{ + struct rtnl_link_stats64 *percpu_stats; + struct dpaa2_fd *fds; + int enqueued, i; + + percpu_stats = this_cpu_ptr(priv->percpu_stats); + + // enqueue the array of XDP_TX frames + enqueued = dpaa2_eth_xdp_flush(priv, fq, &fq->xdp_tx_fds); + + /* update statistics */ + percpu_stats->tx_packets += enqueued; + fds = fq->xdp_tx_fds.fds; + for (i = 0; i < enqueued; i++) { + percpu_stats->tx_bytes += dpaa2_fd_get_len(&fds[i]); + ch->stats.xdp_tx++; + } + for (i = enqueued; i < fq->xdp_tx_fds.num; i++) { + xdp_release_buf(priv, ch, dpaa2_fd_get_addr(&fds[i])); + percpu_stats->tx_errors++; + ch->stats.xdp_tx_err++; + } + fq->xdp_tx_fds.num = 0; +} + +static void xdp_enqueue(struct dpaa2_eth_priv *priv, + struct dpaa2_eth_channel *ch, + struct dpaa2_fd *fd, + void *buf_start, u16 queue_id) { - struct dpaa2_eth_fq *fq; struct dpaa2_faead *faead; + struct dpaa2_fd *dest_fd; + struct dpaa2_eth_fq *fq; u32 ctrl, frc; - int i, err; /* Mark the egress frame hardware annotation area as valid */ frc = dpaa2_fd_get_frc(fd); @@ -267,13 +326,13 @@ static int xdp_enqueue(struct dpaa2_eth_priv *priv, struct dpaa2_fd *fd, faead->conf_fqid = 0; fq = &priv->fq[queue_id]; - for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) { - err = priv->enqueue(priv, fq, fd, 0); - if (err != -EBUSY) - 
break; - } + dest_fd = &fq->xdp_tx_fds.fds[fq->xdp_tx_fds.num++]; + memcpy(dest_fd, fd, sizeof(*dest_fd)); - return err; + if (fq->xdp_tx_fds.num < DEV_MAP_BULK_SIZE) + return; + + xdp_tx_flush(priv, ch, fq); } static u32 run_xdp(struct dpaa2_eth_priv *priv, @@ -282,14 +341,11 @@ static u32 run_xdp(struct dpaa2_eth_priv *priv, struct dpaa2_fd *fd, void *vaddr) { dma_addr_t addr = dpaa2_fd_get_addr(fd); - struct rtnl_link_stats64 *percpu_stats; struct bpf_prog *xdp_prog; struct xdp_buff xdp; u32 xdp_act = XDP_PASS; int err; - percpu_stats = this_cpu_ptr(priv->percpu_stats); - rcu_read_lock(); xdp_prog = READ_ONCE(ch->xdp.prog); @@ -302,6 +358,9 @@ static u32 run_xdp(struct dpaa2_eth_priv *priv, xdp_set_data_meta_invalid(&xdp); xdp.rxq = &ch->xdp_rxq; + xdp.frame_sz = DPAA2_ETH_RX_BUF_RAW_SIZE - + (dpaa2_fd_get_offset(fd) - XDP_PACKET_HEADROOM); + xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp); /* xdp.data pointer may have changed */ @@ -312,16 +371,7 @@ static u32 run_xdp(struct dpaa2_eth_priv *priv, case XDP_PASS: break; case XDP_TX: - err = xdp_enqueue(priv, fd, vaddr, rx_fq->flowid); - if (err) { - xdp_release_buf(priv, ch, addr); - percpu_stats->tx_errors++; - ch->stats.xdp_tx_err++; - } else { - percpu_stats->tx_packets++; - percpu_stats->tx_bytes += dpaa2_fd_get_len(fd); - ch->stats.xdp_tx++; - } + xdp_enqueue(priv, ch, fd, vaddr, rx_fq->flowid); break; default: bpf_warn_invalid_xdp_action(xdp_act); @@ -337,7 +387,11 @@ static u32 run_xdp(struct dpaa2_eth_priv *priv, dma_unmap_page(priv->net_dev->dev.parent, addr, priv->rx_buf_size, DMA_BIDIRECTIONAL); ch->buf_count--; + + /* Allow redirect use of full headroom */ xdp.data_hard_start = vaddr; + xdp.frame_sz = DPAA2_ETH_RX_BUF_RAW_SIZE; + err = xdp_do_redirect(priv->net_dev, &xdp, xdp_prog); if (unlikely(err)) ch->stats.xdp_drop++; @@ -493,6 +547,7 @@ static int consume_frames(struct dpaa2_eth_channel *ch, return 0; fq->stats.frames += cleaned; + ch->stats.frames += cleaned; /* A dequeue operation only pulls frames from a single queue * into the store. Return the frame queue as an out param. @@ -847,7 +902,7 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev) * the Tx confirmation callback for this frame */ for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) { - err = priv->enqueue(priv, fq, &fd, prio); + err = priv->enqueue(priv, fq, &fd, prio, 1, NULL); if (err != -EBUSY) break; } @@ -1138,6 +1193,7 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget) int store_cleaned, work_done; struct list_head rx_list; int retries = 0; + u16 flowid; int err; ch = container_of(napi, struct dpaa2_eth_channel, napi); @@ -1160,6 +1216,7 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget) break; if (fq->type == DPAA2_RX_FQ) { rx_cleaned += store_cleaned; + flowid = fq->flowid; } else { txconf_cleaned += store_cleaned; /* We have a single Tx conf FQ on this channel */ @@ -1202,6 +1259,8 @@ out: if (ch->xdp.res & XDP_REDIRECT) xdp_do_flush_map(); + else if (rx_cleaned && ch->xdp.res & XDP_TX) + xdp_tx_flush(priv, ch, &priv->fq[flowid]); return work_done; } @@ -1228,31 +1287,67 @@ static void disable_ch_napi(struct dpaa2_eth_priv *priv) } } -static void dpaa2_eth_set_rx_taildrop(struct dpaa2_eth_priv *priv, bool enable) +void dpaa2_eth_set_rx_taildrop(struct dpaa2_eth_priv *priv, + bool tx_pause, bool pfc) { struct dpni_taildrop td = {0}; + struct dpaa2_eth_fq *fq; int i, err; - if (priv->rx_td_enabled == enable) - return; + /* FQ taildrop: threshold is in bytes, per frame queue. 
Enabled if + * flow control is disabled (as it might interfere with either the + * buffer pool depletion trigger for pause frames or with the group + * congestion trigger for PFC frames) + */ + td.enable = !tx_pause; + if (priv->rx_fqtd_enabled == td.enable) + goto set_cgtd; - td.enable = enable; - td.threshold = DPAA2_ETH_TAILDROP_THRESH; + td.threshold = DPAA2_ETH_FQ_TAILDROP_THRESH; + td.units = DPNI_CONGESTION_UNIT_BYTES; for (i = 0; i < priv->num_fqs; i++) { - if (priv->fq[i].type != DPAA2_RX_FQ) + fq = &priv->fq[i]; + if (fq->type != DPAA2_RX_FQ) continue; err = dpni_set_taildrop(priv->mc_io, 0, priv->mc_token, - DPNI_CP_QUEUE, DPNI_QUEUE_RX, 0, - priv->fq[i].flowid, &td); + DPNI_CP_QUEUE, DPNI_QUEUE_RX, + fq->tc, fq->flowid, &td); if (err) { netdev_err(priv->net_dev, - "dpni_set_taildrop() failed\n"); - break; + "dpni_set_taildrop(FQ) failed\n"); + return; + } + } + + priv->rx_fqtd_enabled = td.enable; + +set_cgtd: + /* Congestion group taildrop: threshold is in frames, per group + * of FQs belonging to the same traffic class + * Enabled if general Tx pause disabled or if PFCs are enabled + * (congestion group threhsold for PFC generation is lower than the + * CG taildrop threshold, so it won't interfere with it; we also + * want frames in non-PFC enabled traffic classes to be kept in check) + */ + td.enable = !tx_pause || (tx_pause && pfc); + if (priv->rx_cgtd_enabled == td.enable) + return; + + td.threshold = DPAA2_ETH_CG_TAILDROP_THRESH(priv); + td.units = DPNI_CONGESTION_UNIT_FRAMES; + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + err = dpni_set_taildrop(priv->mc_io, 0, priv->mc_token, + DPNI_CP_GROUP, DPNI_QUEUE_RX, + i, 0, &td); + if (err) { + netdev_err(priv->net_dev, + "dpni_set_taildrop(CG) failed\n"); + return; } } - priv->rx_td_enabled = enable; + priv->rx_cgtd_enabled = td.enable; } static int link_state_update(struct dpaa2_eth_priv *priv) @@ -1272,9 +1367,8 @@ static int link_state_update(struct dpaa2_eth_priv *priv) * Rx FQ taildrop configuration as well. We configure taildrop * only when pause frame generation is disabled. */ - tx_pause = !!(state.options & DPNI_LINK_OPT_PAUSE) ^ - !!(state.options & DPNI_LINK_OPT_ASYM_PAUSE); - dpaa2_eth_set_rx_taildrop(priv, !tx_pause); + tx_pause = dpaa2_eth_tx_pause_enabled(state.options); + dpaa2_eth_set_rx_taildrop(priv, tx_pause, priv->pfc_enabled); /* When we manage the MAC/PHY using phylink there is no need * to manually update the netif_carrier. @@ -1880,20 +1974,16 @@ static int dpaa2_eth_xdp(struct net_device *dev, struct netdev_bpf *xdp) return 0; } -static int dpaa2_eth_xdp_xmit_frame(struct net_device *net_dev, - struct xdp_frame *xdpf) +static int dpaa2_eth_xdp_create_fd(struct net_device *net_dev, + struct xdp_frame *xdpf, + struct dpaa2_fd *fd) { struct dpaa2_eth_priv *priv = netdev_priv(net_dev); struct device *dev = net_dev->dev.parent; - struct rtnl_link_stats64 *percpu_stats; - struct dpaa2_eth_drv_stats *percpu_extras; unsigned int needed_headroom; struct dpaa2_eth_swa *swa; - struct dpaa2_eth_fq *fq; - struct dpaa2_fd fd; void *buffer_start, *aligned_start; dma_addr_t addr; - int err, i; /* We require a minimum headroom to be able to transmit the frame. 
* Otherwise return an error and let the original net_device handle it @@ -1902,11 +1992,8 @@ static int dpaa2_eth_xdp_xmit_frame(struct net_device *net_dev, if (xdpf->headroom < needed_headroom) return -EINVAL; - percpu_stats = this_cpu_ptr(priv->percpu_stats); - percpu_extras = this_cpu_ptr(priv->percpu_extras); - /* Setup the FD fields */ - memset(&fd, 0, sizeof(fd)); + memset(fd, 0, sizeof(*fd)); /* Align FD address, if possible */ buffer_start = xdpf->data - needed_headroom; @@ -1924,32 +2011,14 @@ static int dpaa2_eth_xdp_xmit_frame(struct net_device *net_dev, addr = dma_map_single(dev, buffer_start, swa->xdp.dma_size, DMA_BIDIRECTIONAL); - if (unlikely(dma_mapping_error(dev, addr))) { - percpu_stats->tx_dropped++; + if (unlikely(dma_mapping_error(dev, addr))) return -ENOMEM; - } - - dpaa2_fd_set_addr(&fd, addr); - dpaa2_fd_set_offset(&fd, xdpf->data - buffer_start); - dpaa2_fd_set_len(&fd, xdpf->len); - dpaa2_fd_set_format(&fd, dpaa2_fd_single); - dpaa2_fd_set_ctrl(&fd, FD_CTRL_PTA); - - fq = &priv->fq[smp_processor_id() % dpaa2_eth_queue_count(priv)]; - for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) { - err = priv->enqueue(priv, fq, &fd, 0); - if (err != -EBUSY) - break; - } - percpu_extras->tx_portal_busy += i; - if (unlikely(err < 0)) { - percpu_stats->tx_errors++; - /* let the Rx device handle the cleanup */ - return err; - } - percpu_stats->tx_packets++; - percpu_stats->tx_bytes += dpaa2_fd_get_len(&fd); + dpaa2_fd_set_addr(fd, addr); + dpaa2_fd_set_offset(fd, xdpf->data - buffer_start); + dpaa2_fd_set_len(fd, xdpf->len); + dpaa2_fd_set_format(fd, dpaa2_fd_single); + dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA); return 0; } @@ -1957,8 +2026,12 @@ static int dpaa2_eth_xdp_xmit_frame(struct net_device *net_dev, static int dpaa2_eth_xdp_xmit(struct net_device *net_dev, int n, struct xdp_frame **frames, u32 flags) { - int drops = 0; - int i, err; + struct dpaa2_eth_priv *priv = netdev_priv(net_dev); + struct dpaa2_eth_xdp_fds *xdp_redirect_fds; + struct rtnl_link_stats64 *percpu_stats; + struct dpaa2_eth_fq *fq; + struct dpaa2_fd *fds; + int enqueued, i, err; if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) return -EINVAL; @@ -1966,17 +2039,31 @@ static int dpaa2_eth_xdp_xmit(struct net_device *net_dev, int n, if (!netif_running(net_dev)) return -ENETDOWN; - for (i = 0; i < n; i++) { - struct xdp_frame *xdpf = frames[i]; + fq = &priv->fq[smp_processor_id()]; + xdp_redirect_fds = &fq->xdp_redirect_fds; + fds = xdp_redirect_fds->fds; - err = dpaa2_eth_xdp_xmit_frame(net_dev, xdpf); - if (err) { - xdp_return_frame_rx_napi(xdpf); - drops++; - } + percpu_stats = this_cpu_ptr(priv->percpu_stats); + + /* create a FD for each xdp_frame in the list received */ + for (i = 0; i < n; i++) { + err = dpaa2_eth_xdp_create_fd(net_dev, frames[i], &fds[i]); + if (err) + break; } + xdp_redirect_fds->num = i; + + /* enqueue all the frame descriptors */ + enqueued = dpaa2_eth_xdp_flush(priv, fq, xdp_redirect_fds); + + /* update statistics */ + percpu_stats->tx_packets += enqueued; + for (i = 0; i < enqueued; i++) + percpu_stats->tx_bytes += dpaa2_fd_get_len(&fds[i]); + for (i = enqueued; i < n; i++) + xdp_return_frame_rx_napi(frames[i]); - return n - drops; + return enqueued; } static int update_xps(struct dpaa2_eth_priv *priv) @@ -2018,7 +2105,7 @@ static int dpaa2_eth_setup_tc(struct net_device *net_dev, int i; if (type != TC_SETUP_QDISC_MQPRIO) - return -EINVAL; + return -EOPNOTSUPP; mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS; num_queues = dpaa2_eth_queue_count(priv); @@ -2030,7 +2117,7 @@ static int 
dpaa2_eth_setup_tc(struct net_device *net_dev, if (num_tc > dpaa2_eth_tc_count(priv)) { netdev_err(net_dev, "Max %d traffic classes supported\n", dpaa2_eth_tc_count(priv)); - return -EINVAL; + return -EOPNOTSUPP; } if (!num_tc) { @@ -2355,7 +2442,7 @@ static void set_fq_affinity(struct dpaa2_eth_priv *priv) static void setup_fqs(struct dpaa2_eth_priv *priv) { - int i; + int i, j; /* We have one TxConf FQ per Tx flow. * The number of Tx and Rx queues is the same. @@ -2367,10 +2454,13 @@ static void setup_fqs(struct dpaa2_eth_priv *priv) priv->fq[priv->num_fqs++].flowid = (u16)i; } - for (i = 0; i < dpaa2_eth_queue_count(priv); i++) { - priv->fq[priv->num_fqs].type = DPAA2_RX_FQ; - priv->fq[priv->num_fqs].consume = dpaa2_eth_rx; - priv->fq[priv->num_fqs++].flowid = (u16)i; + for (j = 0; j < dpaa2_eth_tc_count(priv); j++) { + for (i = 0; i < dpaa2_eth_queue_count(priv); i++) { + priv->fq[priv->num_fqs].type = DPAA2_RX_FQ; + priv->fq[priv->num_fqs].consume = dpaa2_eth_rx; + priv->fq[priv->num_fqs].tc = (u8)j; + priv->fq[priv->num_fqs++].flowid = (u16)i; + } } /* For each FQ, decide on which core to process incoming frames */ @@ -2528,19 +2618,38 @@ static int set_buffer_layout(struct dpaa2_eth_priv *priv) static inline int dpaa2_eth_enqueue_qd(struct dpaa2_eth_priv *priv, struct dpaa2_eth_fq *fq, - struct dpaa2_fd *fd, u8 prio) + struct dpaa2_fd *fd, u8 prio, + u32 num_frames __always_unused, + int *frames_enqueued) { - return dpaa2_io_service_enqueue_qd(fq->channel->dpio, - priv->tx_qdid, prio, - fq->tx_qdbin, fd); + int err; + + err = dpaa2_io_service_enqueue_qd(fq->channel->dpio, + priv->tx_qdid, prio, + fq->tx_qdbin, fd); + if (!err && frames_enqueued) + *frames_enqueued = 1; + return err; } -static inline int dpaa2_eth_enqueue_fq(struct dpaa2_eth_priv *priv, - struct dpaa2_eth_fq *fq, - struct dpaa2_fd *fd, u8 prio) +static inline int dpaa2_eth_enqueue_fq_multiple(struct dpaa2_eth_priv *priv, + struct dpaa2_eth_fq *fq, + struct dpaa2_fd *fd, + u8 prio, u32 num_frames, + int *frames_enqueued) { - return dpaa2_io_service_enqueue_fq(fq->channel->dpio, - fq->tx_fqid[prio], fd); + int err; + + err = dpaa2_io_service_enqueue_multiple_fq(fq->channel->dpio, + fq->tx_fqid[prio], + fd, num_frames); + + if (err == 0) + return -EBUSY; + + if (frames_enqueued) + *frames_enqueued = err; + return 0; } static void set_enqueue_mode(struct dpaa2_eth_priv *priv) @@ -2549,7 +2658,7 @@ static void set_enqueue_mode(struct dpaa2_eth_priv *priv) DPNI_ENQUEUE_FQID_VER_MINOR) < 0) priv->enqueue = dpaa2_eth_enqueue_qd; else - priv->enqueue = dpaa2_eth_enqueue_fq; + priv->enqueue = dpaa2_eth_enqueue_fq_multiple; } static int set_pause(struct dpaa2_eth_priv *priv) @@ -2610,7 +2719,7 @@ static void update_tx_fqids(struct dpaa2_eth_priv *priv) } } - priv->enqueue = dpaa2_eth_enqueue_fq; + priv->enqueue = dpaa2_eth_enqueue_fq_multiple; return; @@ -2620,6 +2729,118 @@ out_err: priv->enqueue = dpaa2_eth_enqueue_qd; } +/* Configure ingress classification based on VLAN PCP */ +static int set_vlan_qos(struct dpaa2_eth_priv *priv) +{ + struct device *dev = priv->net_dev->dev.parent; + struct dpkg_profile_cfg kg_cfg = {0}; + struct dpni_qos_tbl_cfg qos_cfg = {0}; + struct dpni_rule_cfg key_params; + void *dma_mem, *key, *mask; + u8 key_size = 2; /* VLAN TCI field */ + int i, pcp, err; + + /* VLAN-based classification only makes sense if we have multiple + * traffic classes. 
+ * Also, we need to extract just the 3-bit PCP field from the VLAN + * header and we can only do that by using a mask + */ + if (dpaa2_eth_tc_count(priv) == 1 || !dpaa2_eth_fs_mask_enabled(priv)) { + dev_dbg(dev, "VLAN-based QoS classification not supported\n"); + return -EOPNOTSUPP; + } + + dma_mem = kzalloc(DPAA2_CLASSIFIER_DMA_SIZE, GFP_KERNEL); + if (!dma_mem) + return -ENOMEM; + + kg_cfg.num_extracts = 1; + kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR; + kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_VLAN; + kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD; + kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_VLAN_TCI; + + err = dpni_prepare_key_cfg(&kg_cfg, dma_mem); + if (err) { + dev_err(dev, "dpni_prepare_key_cfg failed\n"); + goto out_free_tbl; + } + + /* set QoS table */ + qos_cfg.default_tc = 0; + qos_cfg.discard_on_miss = 0; + qos_cfg.key_cfg_iova = dma_map_single(dev, dma_mem, + DPAA2_CLASSIFIER_DMA_SIZE, + DMA_TO_DEVICE); + if (dma_mapping_error(dev, qos_cfg.key_cfg_iova)) { + dev_err(dev, "QoS table DMA mapping failed\n"); + err = -ENOMEM; + goto out_free_tbl; + } + + err = dpni_set_qos_table(priv->mc_io, 0, priv->mc_token, &qos_cfg); + if (err) { + dev_err(dev, "dpni_set_qos_table failed\n"); + goto out_unmap_tbl; + } + + /* Add QoS table entries */ + key = kzalloc(key_size * 2, GFP_KERNEL); + if (!key) { + err = -ENOMEM; + goto out_unmap_tbl; + } + mask = key + key_size; + *(__be16 *)mask = cpu_to_be16(VLAN_PRIO_MASK); + + key_params.key_iova = dma_map_single(dev, key, key_size * 2, + DMA_TO_DEVICE); + if (dma_mapping_error(dev, key_params.key_iova)) { + dev_err(dev, "Qos table entry DMA mapping failed\n"); + err = -ENOMEM; + goto out_free_key; + } + + key_params.mask_iova = key_params.key_iova + key_size; + key_params.key_size = key_size; + + /* We add rules for PCP-based distribution starting with highest + * priority (VLAN PCP = 7). 
If this DPNI doesn't have enough traffic + * classes to accommodate all priority levels, the lowest ones end up + * on TC 0 which was configured as default + */ + for (i = dpaa2_eth_tc_count(priv) - 1, pcp = 7; i >= 0; i--, pcp--) { + *(__be16 *)key = cpu_to_be16(pcp << VLAN_PRIO_SHIFT); + dma_sync_single_for_device(dev, key_params.key_iova, + key_size * 2, DMA_TO_DEVICE); + + err = dpni_add_qos_entry(priv->mc_io, 0, priv->mc_token, + &key_params, i, i); + if (err) { + dev_err(dev, "dpni_add_qos_entry failed\n"); + dpni_clear_qos_table(priv->mc_io, 0, priv->mc_token); + goto out_unmap_key; + } + } + + priv->vlan_cls_enabled = true; + + /* Table and key memory is not persistent, clean everything up after + * configuration is finished + */ +out_unmap_key: + dma_unmap_single(dev, key_params.key_iova, key_size * 2, DMA_TO_DEVICE); +out_free_key: + kfree(key); +out_unmap_tbl: + dma_unmap_single(dev, qos_cfg.key_cfg_iova, DPAA2_CLASSIFIER_DMA_SIZE, + DMA_TO_DEVICE); +out_free_tbl: + kfree(dma_mem); + + return err; +} + /* Configure the DPNI object this interface is associated with */ static int setup_dpni(struct fsl_mc_device *ls_dev) { @@ -2682,10 +2903,16 @@ static int setup_dpni(struct fsl_mc_device *ls_dev) goto close; } + err = set_vlan_qos(priv); + if (err && err != -EOPNOTSUPP) + goto close; + priv->cls_rules = devm_kzalloc(dev, sizeof(struct dpaa2_eth_cls_rule) * dpaa2_eth_fs_count(priv), GFP_KERNEL); - if (!priv->cls_rules) + if (!priv->cls_rules) { + err = -ENOMEM; goto close; + } return 0; @@ -2716,7 +2943,7 @@ static int setup_rx_flow(struct dpaa2_eth_priv *priv, int err; err = dpni_get_queue(priv->mc_io, 0, priv->mc_token, - DPNI_QUEUE_RX, 0, fq->flowid, &queue, &qid); + DPNI_QUEUE_RX, fq->tc, fq->flowid, &queue, &qid); if (err) { dev_err(dev, "dpni_get_queue(RX) failed\n"); return err; @@ -2729,7 +2956,7 @@ static int setup_rx_flow(struct dpaa2_eth_priv *priv, queue.destination.priority = 1; queue.user_context = (u64)(uintptr_t)fq; err = dpni_set_queue(priv->mc_io, 0, priv->mc_token, - DPNI_QUEUE_RX, 0, fq->flowid, + DPNI_QUEUE_RX, fq->tc, fq->flowid, DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST, &queue); if (err) { @@ -2738,6 +2965,10 @@ static int setup_rx_flow(struct dpaa2_eth_priv *priv, } /* xdp_rxq setup */ + /* only once for each channel */ + if (fq->tc > 0) + return 0; + err = xdp_rxq_info_reg(&fq->channel->xdp_rxq, priv->net_dev, fq->flowid); if (err) { @@ -2875,7 +3106,7 @@ static int config_legacy_hash_key(struct dpaa2_eth_priv *priv, dma_addr_t key) { struct device *dev = priv->net_dev->dev.parent; struct dpni_rx_tc_dist_cfg dist_cfg; - int err; + int i, err = 0; memset(&dist_cfg, 0, sizeof(dist_cfg)); @@ -2883,9 +3114,14 @@ static int config_legacy_hash_key(struct dpaa2_eth_priv *priv, dma_addr_t key) dist_cfg.dist_size = dpaa2_eth_queue_count(priv); dist_cfg.dist_mode = DPNI_DIST_MODE_HASH; - err = dpni_set_rx_tc_dist(priv->mc_io, 0, priv->mc_token, 0, &dist_cfg); - if (err) - dev_err(dev, "dpni_set_rx_tc_dist failed\n"); + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + err = dpni_set_rx_tc_dist(priv->mc_io, 0, priv->mc_token, + i, &dist_cfg); + if (err) { + dev_err(dev, "dpni_set_rx_tc_dist failed\n"); + break; + } + } return err; } @@ -2895,7 +3131,7 @@ static int config_hash_key(struct dpaa2_eth_priv *priv, dma_addr_t key) { struct device *dev = priv->net_dev->dev.parent; struct dpni_rx_dist_cfg dist_cfg; - int err; + int i, err = 0; memset(&dist_cfg, 0, sizeof(dist_cfg)); @@ -2903,9 +3139,15 @@ static int config_hash_key(struct dpaa2_eth_priv *priv, 
dma_addr_t key) dist_cfg.dist_size = dpaa2_eth_queue_count(priv); dist_cfg.enable = 1; - err = dpni_set_rx_hash_dist(priv->mc_io, 0, priv->mc_token, &dist_cfg); - if (err) - dev_err(dev, "dpni_set_rx_hash_dist failed\n"); + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + dist_cfg.tc = i; + err = dpni_set_rx_hash_dist(priv->mc_io, 0, priv->mc_token, + &dist_cfg); + if (err) { + dev_err(dev, "dpni_set_rx_hash_dist failed\n"); + break; + } + } return err; } @@ -2915,7 +3157,7 @@ static int config_cls_key(struct dpaa2_eth_priv *priv, dma_addr_t key) { struct device *dev = priv->net_dev->dev.parent; struct dpni_rx_dist_cfg dist_cfg; - int err; + int i, err = 0; memset(&dist_cfg, 0, sizeof(dist_cfg)); @@ -2923,9 +3165,15 @@ static int config_cls_key(struct dpaa2_eth_priv *priv, dma_addr_t key) dist_cfg.dist_size = dpaa2_eth_queue_count(priv); dist_cfg.enable = 1; - err = dpni_set_rx_fs_dist(priv->mc_io, 0, priv->mc_token, &dist_cfg); - if (err) - dev_err(dev, "dpni_set_rx_fs_dist failed\n"); + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + dist_cfg.tc = i; + err = dpni_set_rx_fs_dist(priv->mc_io, 0, priv->mc_token, + &dist_cfg); + if (err) { + dev_err(dev, "dpni_set_rx_fs_dist failed\n"); + break; + } + } return err; } @@ -3611,6 +3859,15 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev) if (err) goto err_alloc_rings; +#ifdef CONFIG_FSL_DPAA2_ETH_DCB + if (dpaa2_eth_has_pause_support(priv) && priv->vlan_cls_enabled) { + priv->dcbx_mode = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE; + net_dev->dcbnl_ops = &dpaa2_eth_dcbnl_ops; + } else { + dev_dbg(dev, "PFC not supported\n"); + } +#endif + err = setup_irqs(dpni_dev); if (err) { netdev_warn(net_dev, "Failed to set link interrupt, fall back to polling\n"); diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h index 13242bf5b427..2d7ada0f0dbd 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h @@ -1,11 +1,12 @@ /* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */ /* Copyright 2014-2016 Freescale Semiconductor Inc. - * Copyright 2016 NXP + * Copyright 2016-2020 NXP */ #ifndef __DPAA2_ETH_H #define __DPAA2_ETH_H +#include <linux/dcbnl.h> #include <linux/netdevice.h> #include <linux/if_vlan.h> #include <linux/fsl/mc.h> @@ -36,27 +37,46 @@ /* Convert L3 MTU to L2 MFL */ #define DPAA2_ETH_L2_MAX_FRM(mtu) ((mtu) + VLAN_ETH_HLEN) -/* Set the taildrop threshold (in bytes) to allow the enqueue of several jumbo - * frames in the Rx queues (length of the current frame is not - * taken into account when making the taildrop decision) +/* Set the taildrop threshold (in bytes) to allow the enqueue of a large + * enough number of jumbo frames in the Rx queues (length of the current + * frame is not taken into account when making the taildrop decision) */ -#define DPAA2_ETH_TAILDROP_THRESH (64 * 1024) +#define DPAA2_ETH_FQ_TAILDROP_THRESH (1024 * 1024) /* Maximum number of Tx confirmation frames to be processed * in a single NAPI call */ #define DPAA2_ETH_TXCONF_PER_NAPI 256 -/* Buffer quota per queue. Must be large enough such that for minimum sized - * frames taildrop kicks in before the bpool gets depleted, so we compute - * how many 64B frames fit inside the taildrop threshold and add a margin - * to accommodate the buffer refill delay. +/* Buffer qouta per channel. 
We want to keep in check number of ingress frames + * in flight: for small sized frames, congestion group taildrop may kick in + * first; for large sizes, Rx FQ taildrop threshold will ensure only a + * reasonable number of frames will be pending at any given time. + * Ingress frame drop due to buffer pool depletion should be a corner case only */ -#define DPAA2_ETH_MAX_FRAMES_PER_QUEUE (DPAA2_ETH_TAILDROP_THRESH / 64) -#define DPAA2_ETH_NUM_BUFS (DPAA2_ETH_MAX_FRAMES_PER_QUEUE + 256) +#define DPAA2_ETH_NUM_BUFS 1280 #define DPAA2_ETH_REFILL_THRESH \ (DPAA2_ETH_NUM_BUFS - DPAA2_ETH_BUFS_PER_CMD) +/* Congestion group taildrop threshold: number of frames allowed to accumulate + * at any moment in a group of Rx queues belonging to the same traffic class. + * Choose value such that we don't risk depleting the buffer pool before the + * taildrop kicks in + */ +#define DPAA2_ETH_CG_TAILDROP_THRESH(priv) \ + (1024 * dpaa2_eth_queue_count(priv) / dpaa2_eth_tc_count(priv)) + +/* Congestion group notification threshold: when this many frames accumulate + * on the Rx queues belonging to the same TC, the MAC is instructed to send + * PFC frames for that TC. + * When number of pending frames drops below exit threshold transmission of + * PFC frames is stopped. + */ +#define DPAA2_ETH_CN_THRESH_ENTRY(priv) \ + (DPAA2_ETH_CG_TAILDROP_THRESH(priv) / 2) +#define DPAA2_ETH_CN_THRESH_EXIT(priv) \ + (DPAA2_ETH_CN_THRESH_ENTRY(priv) * 3 / 4) + /* Maximum number of buffers that can be acquired/released through a single * QBMan command */ @@ -288,11 +308,15 @@ struct dpaa2_eth_ch_stats { __u64 xdp_tx; __u64 xdp_tx_err; __u64 xdp_redirect; + /* Must be last, does not show up in ethtool stats */ + __u64 frames; }; /* Maximum number of queues associated with a DPNI */ #define DPAA2_ETH_MAX_TCS 8 -#define DPAA2_ETH_MAX_RX_QUEUES 16 +#define DPAA2_ETH_MAX_RX_QUEUES_PER_TC 16 +#define DPAA2_ETH_MAX_RX_QUEUES \ + (DPAA2_ETH_MAX_RX_QUEUES_PER_TC * DPAA2_ETH_MAX_TCS) #define DPAA2_ETH_MAX_TX_QUEUES 16 #define DPAA2_ETH_MAX_QUEUES (DPAA2_ETH_MAX_RX_QUEUES + \ DPAA2_ETH_MAX_TX_QUEUES) @@ -308,6 +332,11 @@ enum dpaa2_eth_fq_type { struct dpaa2_eth_priv; +struct dpaa2_eth_xdp_fds { + struct dpaa2_fd fds[DEV_MAP_BULK_SIZE]; + ssize_t num; +}; + struct dpaa2_eth_fq { u32 fqid; u32 tx_qdbin; @@ -325,6 +354,9 @@ struct dpaa2_eth_fq { const struct dpaa2_fd *fd, struct dpaa2_eth_fq *fq); struct dpaa2_eth_fq_stats stats; + + struct dpaa2_eth_xdp_fds xdp_redirect_fds; + struct dpaa2_eth_xdp_fds xdp_tx_fds; }; struct dpaa2_eth_ch_xdp { @@ -371,7 +403,9 @@ struct dpaa2_eth_priv { struct dpaa2_eth_fq fq[DPAA2_ETH_MAX_QUEUES]; int (*enqueue)(struct dpaa2_eth_priv *priv, struct dpaa2_eth_fq *fq, - struct dpaa2_fd *fd, u8 prio); + struct dpaa2_fd *fd, u8 prio, + u32 num_frames, + int *frames_enqueued); u8 num_channels; struct dpaa2_eth_channel *channel[DPAA2_ETH_MAX_DPCONS]; @@ -402,7 +436,8 @@ struct dpaa2_eth_priv { struct dpaa2_eth_drv_stats __percpu *percpu_extras; u16 mc_token; - u8 rx_td_enabled; + u8 rx_fqtd_enabled; + u8 rx_cgtd_enabled; struct dpni_link_state link_state; bool do_link_poll; @@ -413,6 +448,12 @@ struct dpaa2_eth_priv { u64 rx_cls_fields; struct dpaa2_eth_cls_rule *cls_rules; u8 rx_cls_enabled; + u8 vlan_cls_enabled; + u8 pfc_enabled; +#ifdef CONFIG_FSL_DPAA2_ETH_DCB + u8 dcbx_mode; + struct ieee_pfc pfc; +#endif struct bpf_prog *xdp_prog; #ifdef CONFIG_DEBUG_FS struct dpaa2_debugfs dbg; @@ -495,6 +536,17 @@ enum dpaa2_eth_rx_dist { (dpaa2_eth_cmp_dpni_ver((priv), DPNI_PAUSE_VER_MAJOR, \ DPNI_PAUSE_VER_MINOR) >= 0) 
+static inline bool dpaa2_eth_tx_pause_enabled(u64 link_options) +{ + return !!(link_options & DPNI_LINK_OPT_PAUSE) ^ + !!(link_options & DPNI_LINK_OPT_ASYM_PAUSE); +} + +static inline bool dpaa2_eth_rx_pause_enabled(u64 link_options) +{ + return !!(link_options & DPNI_LINK_OPT_PAUSE); +} + static inline unsigned int dpaa2_eth_needed_headroom(struct dpaa2_eth_priv *priv, struct sk_buff *skb) @@ -534,4 +586,9 @@ int dpaa2_eth_cls_key_size(u64 key); int dpaa2_eth_cls_fld_off(int prot, int field); void dpaa2_eth_cls_trim_rule(void *key_mem, u64 fields); +void dpaa2_eth_set_rx_taildrop(struct dpaa2_eth_priv *priv, + bool tx_pause, bool pfc); + +extern const struct dcbnl_rtnl_ops dpaa2_eth_dcbnl_ops; + #endif /* __DPAA2_H */ diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c index b7141fdc279e..e88269fe3de7 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c @@ -130,9 +130,8 @@ static void dpaa2_eth_get_pauseparam(struct net_device *net_dev, return; } - pause->rx_pause = !!(link_options & DPNI_LINK_OPT_PAUSE); - pause->tx_pause = pause->rx_pause ^ - !!(link_options & DPNI_LINK_OPT_ASYM_PAUSE); + pause->rx_pause = dpaa2_eth_rx_pause_enabled(link_options); + pause->tx_pause = dpaa2_eth_tx_pause_enabled(link_options); pause->autoneg = AUTONEG_DISABLE; } @@ -277,7 +276,7 @@ static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev, /* Per-channel stats */ for (k = 0; k < priv->num_channels; k++) { ch_stats = &priv->channel[k]->stats; - for (j = 0; j < sizeof(*ch_stats) / sizeof(__u64); j++) + for (j = 0; j < sizeof(*ch_stats) / sizeof(__u64) - 1; j++) *((__u64 *)data + i + j) += *((__u64 *)ch_stats + j); } i += j; @@ -547,7 +546,7 @@ static int do_cls_rule(struct net_device *net_dev, dma_addr_t key_iova; u64 fields = 0; void *key_buf; - int err; + int i, err; if (fs->ring_cookie != RX_CLS_FLOW_DISC && fs->ring_cookie >= dpaa2_eth_queue_count(priv)) @@ -607,11 +606,18 @@ static int do_cls_rule(struct net_device *net_dev, fs_act.options |= DPNI_FS_OPT_DISCARD; else fs_act.flow_id = fs->ring_cookie; - err = dpni_add_fs_entry(priv->mc_io, 0, priv->mc_token, 0, - fs->location, &rule_cfg, &fs_act); - } else { - err = dpni_remove_fs_entry(priv->mc_io, 0, priv->mc_token, 0, - &rule_cfg); + } + for (i = 0; i < dpaa2_eth_tc_count(priv); i++) { + if (add) + err = dpni_add_fs_entry(priv->mc_io, 0, priv->mc_token, + i, fs->location, &rule_cfg, + &fs_act); + else + err = dpni_remove_fs_entry(priv->mc_io, 0, + priv->mc_token, i, + &rule_cfg); + if (err) + break; } dma_unmap_single(dev, key_iova, rule_cfg.key_size * 2, DMA_TO_DEVICE); diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h b/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h index d9b6918807af..fd069f67be9b 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h +++ b/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h @@ -59,6 +59,10 @@ #define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD(0x235) +#define DPNI_CMDID_SET_QOS_TBL DPNI_CMD(0x240) +#define DPNI_CMDID_ADD_QOS_ENT DPNI_CMD(0x241) +#define DPNI_CMDID_REMOVE_QOS_ENT DPNI_CMD(0x242) +#define DPNI_CMDID_CLR_QOS_TBL DPNI_CMD(0x243) #define DPNI_CMDID_ADD_FS_ENT DPNI_CMD(0x244) #define DPNI_CMDID_REMOVE_FS_ENT DPNI_CMD(0x245) #define DPNI_CMDID_CLR_FS_ENT DPNI_CMD(0x246) @@ -567,4 +571,59 @@ struct dpni_cmd_remove_fs_entry { __le64 mask_iova; }; +#define DPNI_DISCARD_ON_MISS_SHIFT 0 +#define DPNI_DISCARD_ON_MISS_SIZE 1 + +struct 
dpni_cmd_set_qos_table { + __le32 pad; + u8 default_tc; + /* only the LSB */ + u8 discard_on_miss; + __le16 pad1[21]; + __le64 key_cfg_iova; +}; + +struct dpni_cmd_add_qos_entry { + __le16 pad; + u8 tc_id; + u8 key_size; + __le16 index; + __le16 pad1; + __le64 key_iova; + __le64 mask_iova; +}; + +struct dpni_cmd_remove_qos_entry { + u8 pad[3]; + u8 key_size; + __le32 pad1; + __le64 key_iova; + __le64 mask_iova; +}; + +#define DPNI_DEST_TYPE_SHIFT 0 +#define DPNI_DEST_TYPE_SIZE 4 +#define DPNI_CONG_UNITS_SHIFT 4 +#define DPNI_CONG_UNITS_SIZE 2 + +struct dpni_cmd_set_congestion_notification { + /* cmd word 0 */ + u8 qtype; + u8 tc; + u8 pad[6]; + /* cmd word 1 */ + __le32 dest_id; + __le16 notification_mode; + u8 dest_priority; + /* from LSB: dest_type: 4 units:2 */ + u8 type_units; + /* cmd word 2 */ + __le64 message_iova; + /* cmd word 3 */ + __le64 message_ctx; + /* cmd word 4 */ + __le32 threshold_entry; + __le32 threshold_exit; +}; + #endif /* _FSL_DPNI_CMD_H */ diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni.c b/drivers/net/ethernet/freescale/dpaa2/dpni.c index dd54e6953aeb..6b479ba66465 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpni.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpni.c @@ -1355,6 +1355,52 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io, } /** + * dpni_set_congestion_notification() - Set traffic class congestion + * notification configuration + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @qtype: Type of queue - Rx, Tx and Tx confirm types are supported + * @tc_id: Traffic class selection (0-7) + * @cfg: Congestion notification configuration + * + * Return: '0' on Success; error code otherwise. + */ +int dpni_set_congestion_notification( + struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + enum dpni_queue_type qtype, + u8 tc_id, + const struct dpni_congestion_notification_cfg *cfg) +{ + struct dpni_cmd_set_congestion_notification *cmd_params; + struct fsl_mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = + mc_encode_cmd_header(DPNI_CMDID_SET_CONGESTION_NOTIFICATION, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_set_congestion_notification *)cmd.params; + cmd_params->qtype = qtype; + cmd_params->tc = tc_id; + cmd_params->dest_id = cpu_to_le32(cfg->dest_cfg.dest_id); + cmd_params->notification_mode = cpu_to_le16(cfg->notification_mode); + cmd_params->dest_priority = cfg->dest_cfg.priority; + dpni_set_field(cmd_params->type_units, DEST_TYPE, + cfg->dest_cfg.dest_type); + dpni_set_field(cmd_params->type_units, CONG_UNITS, cfg->units); + cmd_params->message_iova = cpu_to_le64(cfg->message_iova); + cmd_params->message_ctx = cpu_to_le64(cfg->message_ctx); + cmd_params->threshold_entry = cpu_to_le32(cfg->threshold_entry); + cmd_params->threshold_exit = cpu_to_le32(cfg->threshold_exit); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** * dpni_set_queue() - Set queue parameters * @mc_io: Pointer to MC portal's I/O object * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' @@ -1786,3 +1832,134 @@ int dpni_remove_fs_entry(struct fsl_mc_io *mc_io, /* send command to mc*/ return mc_send_command(mc_io, &cmd); } + +/** + * dpni_set_qos_table() - Set QoS mapping table + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @cfg: QoS table configuration + * + * This function and all QoS-related functions require that + 
*'max_tcs > 1' was set at DPNI creation. + * + * warning: Before calling this function, call dpkg_prepare_key_cfg() to + * prepare the key_cfg_iova parameter + * + * Return: '0' on Success; Error code otherwise. + */ +int dpni_set_qos_table(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_qos_tbl_cfg *cfg) +{ + struct dpni_cmd_set_qos_table *cmd_params; + struct fsl_mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QOS_TBL, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_set_qos_table *)cmd.params; + cmd_params->default_tc = cfg->default_tc; + cmd_params->key_cfg_iova = cpu_to_le64(cfg->key_cfg_iova); + dpni_set_field(cmd_params->discard_on_miss, DISCARD_ON_MISS, + cfg->discard_on_miss); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** + * dpni_add_qos_entry() - Add QoS mapping entry (to select a traffic class) + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @cfg: QoS rule to add + * @tc_id: Traffic class selection (0-7) + * @index: Location in the QoS table where to insert the entry. + * Only relevant if MASKING is enabled for QoS classification on + * this DPNI, it is ignored for exact match. + * + * Return: '0' on Success; Error code otherwise. + */ +int dpni_add_qos_entry(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_rule_cfg *cfg, + u8 tc_id, + u16 index) +{ + struct dpni_cmd_add_qos_entry *cmd_params; + struct fsl_mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_QOS_ENT, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_add_qos_entry *)cmd.params; + cmd_params->tc_id = tc_id; + cmd_params->key_size = cfg->key_size; + cmd_params->index = cpu_to_le16(index); + cmd_params->key_iova = cpu_to_le64(cfg->key_iova); + cmd_params->mask_iova = cpu_to_le64(cfg->mask_iova); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** + * dpni_remove_qos_entry() - Remove QoS mapping entry + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @cfg: QoS rule to remove + * + * Return: '0' on Success; Error code otherwise. + */ +int dpni_remove_qos_entry(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_rule_cfg *cfg) +{ + struct dpni_cmd_remove_qos_entry *cmd_params; + struct fsl_mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_QOS_ENT, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_remove_qos_entry *)cmd.params; + cmd_params->key_size = cfg->key_size; + cmd_params->key_iova = cpu_to_le64(cfg->key_iova); + cmd_params->mask_iova = cpu_to_le64(cfg->mask_iova); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** + * dpni_clear_qos_table() - Clear all QoS mapping entries + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * + * Following this function call, all frames are directed to + * the default traffic class (0) + * + * Return: '0' on Success; Error code otherwise. 
+ */ +int dpni_clear_qos_table(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token) +{ + struct fsl_mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_QOS_TBL, + cmd_flags, + token); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni.h b/drivers/net/ethernet/freescale/dpaa2/dpni.h index ee0711d06b3a..e874d8084142 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpni.h +++ b/drivers/net/ethernet/freescale/dpaa2/dpni.h @@ -514,6 +514,11 @@ int dpni_get_statistics(struct fsl_mc_io *mc_io, #define DPNI_LINK_OPT_ASYM_PAUSE 0x0000000000000008ULL /** + * Enable priority flow control pause frames + */ +#define DPNI_LINK_OPT_PFC_PAUSE 0x0000000000000010ULL + +/** * struct - Structure representing DPNI link configuration * @rate: Rate * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values @@ -716,6 +721,26 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, const struct dpni_rx_dist_cfg *cfg); /** + * struct dpni_qos_tbl_cfg - Structure representing QOS table configuration + * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with + * key extractions to be used as the QoS criteria by calling + * dpkg_prepare_key_cfg() + * @discard_on_miss: Set to '1' to discard frames in case of no match (miss); + * '0' to use the 'default_tc' in such cases + * @default_tc: Used in case of no-match and 'discard_on_miss'= 0 + */ +struct dpni_qos_tbl_cfg { + u64 key_cfg_iova; + int discard_on_miss; + u8 default_tc; +}; + +int dpni_set_qos_table(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_qos_tbl_cfg *cfg); + +/** * enum dpni_dest - DPNI destination types * @DPNI_DEST_NONE: Unassigned destination; The queue is set in parked mode and * does not generate FQDAN notifications; user is expected to @@ -858,6 +883,62 @@ enum dpni_congestion_point { }; /** + * struct dpni_dest_cfg - Structure representing DPNI destination parameters + * @dest_type: Destination type + * @dest_id: Either DPIO ID or DPCON ID, depending on the destination type + * @priority: Priority selection within the DPIO or DPCON channel; valid + * values are 0-1 or 0-7, depending on the number of priorities + * in that channel; not relevant for 'DPNI_DEST_NONE' option + */ +struct dpni_dest_cfg { + enum dpni_dest dest_type; + int dest_id; + u8 priority; +}; + +/* DPNI congestion options */ + +/** + * This congestion will trigger flow control or priority flow control. + * This will have effect only if flow control is enabled with + * dpni_set_link_cfg(). + */ +#define DPNI_CONG_OPT_FLOW_CONTROL 0x00000040 + +/** + * struct dpni_congestion_notification_cfg - congestion notification + * configuration + * @units: Units type + * @threshold_entry: Above this threshold we enter a congestion state. + * set it to '0' to disable it + * @threshold_exit: Below this threshold we exit the congestion state. 
+ * @message_ctx: The context that will be part of the CSCN message + * @message_iova: I/O virtual address (must be in DMA-able memory), + * must be 16B aligned; valid only if 'DPNI_CONG_OPT_WRITE_MEM_<X>' + * is contained in 'options' + * @dest_cfg: CSCN can be send to either DPIO or DPCON WQ channel + * @notification_mode: Mask of available options; use 'DPNI_CONG_OPT_<X>' values + */ + +struct dpni_congestion_notification_cfg { + enum dpni_congestion_unit units; + u32 threshold_entry; + u32 threshold_exit; + u64 message_ctx; + u64 message_iova; + struct dpni_dest_cfg dest_cfg; + u16 notification_mode; +}; + +int dpni_set_congestion_notification( + struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + enum dpni_queue_type qtype, + u8 tc_id, + const struct dpni_congestion_notification_cfg *cfg); + +/** * struct dpni_taildrop - Structure representing the taildrop * @enable: Indicates whether the taildrop is active or not. * @units: Indicates the unit of THRESHOLD. Queue taildrop only supports @@ -961,6 +1042,22 @@ int dpni_remove_fs_entry(struct fsl_mc_io *mc_io, u8 tc_id, const struct dpni_rule_cfg *cfg); +int dpni_add_qos_entry(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_rule_cfg *cfg, + u8 tc_id, + u16 index); + +int dpni_remove_qos_entry(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token, + const struct dpni_rule_cfg *cfg); + +int dpni_clear_qos_table(struct fsl_mc_io *mc_io, + u32 cmd_flags, + u16 token); + int dpni_get_api_version(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 *major_ver, diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c index ccf2611f4a20..298c55786fd9 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc.c +++ b/drivers/net/ethernet/freescale/enetc/enetc.c @@ -756,6 +756,9 @@ void enetc_get_si_caps(struct enetc_si *si) if (val & ENETC_SIPCAPR0_QBV) si->hw_features |= ENETC_SI_F_QBV; + + if (val & ENETC_SIPCAPR0_PSFP) + si->hw_features |= ENETC_SI_F_PSFP; } static int enetc_dma_alloc_bdr(struct enetc_bdr *r, size_t bd_size) @@ -1518,6 +1521,8 @@ int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type, return enetc_setup_tc_cbs(ndev, type_data); case TC_SETUP_QDISC_ETF: return enetc_setup_tc_txtime(ndev, type_data); + case TC_SETUP_BLOCK: + return enetc_setup_tc_psfp(ndev, type_data); default: return -EOPNOTSUPP; } @@ -1567,15 +1572,42 @@ static int enetc_set_rss(struct net_device *ndev, int en) return 0; } +static int enetc_set_psfp(struct net_device *ndev, int en) +{ + struct enetc_ndev_priv *priv = netdev_priv(ndev); + int err; + + if (en) { + err = enetc_psfp_enable(priv); + if (err) + return err; + + priv->active_offloads |= ENETC_F_QCI; + return 0; + } + + err = enetc_psfp_disable(priv); + if (err) + return err; + + priv->active_offloads &= ~ENETC_F_QCI; + + return 0; +} + int enetc_set_features(struct net_device *ndev, netdev_features_t features) { netdev_features_t changed = ndev->features ^ features; + int err = 0; if (changed & NETIF_F_RXHASH) enetc_set_rss(ndev, !!(features & NETIF_F_RXHASH)); - return 0; + if (changed & NETIF_F_HW_TC) + err = enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC)); + + return err; } #ifdef CONFIG_FSL_ENETC_PTP_CLOCK diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h index 56c43f35b633..b705464f6882 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc.h +++ b/drivers/net/ethernet/freescale/enetc/enetc.h @@ -151,6 +151,7 @@ enum enetc_errata { }; #define ENETC_SI_F_QBV 
BIT(0) +#define ENETC_SI_F_PSFP BIT(1) /* PCI IEP device data */ struct enetc_si { @@ -203,12 +204,20 @@ struct enetc_cls_rule { }; #define ENETC_MAX_BDR_INT 2 /* fixed to max # of available cpus */ +struct psfp_cap { + u32 max_streamid; + u32 max_psfp_filter; + u32 max_psfp_gate; + u32 max_psfp_gatelist; + u32 max_psfp_meter; +}; /* TODO: more hardware offloads */ enum enetc_active_offloads { ENETC_F_RX_TSTAMP = BIT(0), ENETC_F_TX_TSTAMP = BIT(1), ENETC_F_QBV = BIT(2), + ENETC_F_QCI = BIT(3), }; struct enetc_ndev_priv { @@ -231,6 +240,8 @@ struct enetc_ndev_priv { struct enetc_cls_rule *cls_rules; + struct psfp_cap psfp_cap; + struct device_node *phy_node; phy_interface_t if_mode; }; @@ -289,9 +300,84 @@ int enetc_setup_tc_taprio(struct net_device *ndev, void *type_data); void enetc_sched_speed_set(struct net_device *ndev); int enetc_setup_tc_cbs(struct net_device *ndev, void *type_data); int enetc_setup_tc_txtime(struct net_device *ndev, void *type_data); +int enetc_setup_tc_block_cb(enum tc_setup_type type, void *type_data, + void *cb_priv); +int enetc_setup_tc_psfp(struct net_device *ndev, void *type_data); +int enetc_psfp_init(struct enetc_ndev_priv *priv); +int enetc_psfp_clean(struct enetc_ndev_priv *priv); + +static inline void enetc_get_max_cap(struct enetc_ndev_priv *priv) +{ + u32 reg; + + reg = enetc_port_rd(&priv->si->hw, ENETC_PSIDCAPR); + priv->psfp_cap.max_streamid = reg & ENETC_PSIDCAPR_MSK; + /* Port stream filter capability */ + reg = enetc_port_rd(&priv->si->hw, ENETC_PSFCAPR); + priv->psfp_cap.max_psfp_filter = reg & ENETC_PSFCAPR_MSK; + /* Port stream gate capability */ + reg = enetc_port_rd(&priv->si->hw, ENETC_PSGCAPR); + priv->psfp_cap.max_psfp_gate = (reg & ENETC_PSGCAPR_SGIT_MSK); + priv->psfp_cap.max_psfp_gatelist = (reg & ENETC_PSGCAPR_GCL_MSK) >> 16; + /* Port flow meter capability */ + reg = enetc_port_rd(&priv->si->hw, ENETC_PFMCAPR); + priv->psfp_cap.max_psfp_meter = reg & ENETC_PFMCAPR_MSK; +} + +static inline int enetc_psfp_enable(struct enetc_ndev_priv *priv) +{ + struct enetc_hw *hw = &priv->si->hw; + int err; + + enetc_get_max_cap(priv); + + err = enetc_psfp_init(priv); + if (err) + return err; + + enetc_wr(hw, ENETC_PPSFPMR, enetc_rd(hw, ENETC_PPSFPMR) | + ENETC_PPSFPMR_PSFPEN | ENETC_PPSFPMR_VS | + ENETC_PPSFPMR_PVC | ENETC_PPSFPMR_PVZC); + + return 0; +} + +static inline int enetc_psfp_disable(struct enetc_ndev_priv *priv) +{ + struct enetc_hw *hw = &priv->si->hw; + int err; + + err = enetc_psfp_clean(priv); + if (err) + return err; + + enetc_wr(hw, ENETC_PPSFPMR, enetc_rd(hw, ENETC_PPSFPMR) & + ~ENETC_PPSFPMR_PSFPEN & ~ENETC_PPSFPMR_VS & + ~ENETC_PPSFPMR_PVC & ~ENETC_PPSFPMR_PVZC); + + memset(&priv->psfp_cap, 0, sizeof(struct psfp_cap)); + + return 0; +} + #else #define enetc_setup_tc_taprio(ndev, type_data) -EOPNOTSUPP #define enetc_sched_speed_set(ndev) (void)0 #define enetc_setup_tc_cbs(ndev, type_data) -EOPNOTSUPP #define enetc_setup_tc_txtime(ndev, type_data) -EOPNOTSUPP +#define enetc_setup_tc_psfp(ndev, type_data) -EOPNOTSUPP +#define enetc_setup_tc_block_cb NULL + +#define enetc_get_max_cap(p) \ + memset(&((p)->psfp_cap), 0, sizeof(struct psfp_cap)) + +static inline int enetc_psfp_enable(struct enetc_ndev_priv *priv) +{ + return 0; +} + +static inline int enetc_psfp_disable(struct enetc_ndev_priv *priv) +{ + return 0; +} #endif diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h index 2a6523136947..6314051bc6c1 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h +++ 
b/drivers/net/ethernet/freescale/enetc/enetc_hw.h @@ -19,6 +19,7 @@ #define ENETC_SICTR1 0x1c #define ENETC_SIPCAPR0 0x20 #define ENETC_SIPCAPR0_QBV BIT(4) +#define ENETC_SIPCAPR0_PSFP BIT(9) #define ENETC_SIPCAPR0_RSS BIT(8) #define ENETC_SIPCAPR1 0x24 #define ENETC_SITGTGR 0x30 @@ -228,6 +229,15 @@ enum enetc_bdr_type {TX, RX}; #define ENETC_PM0_IFM_RLP (BIT(5) | BIT(11)) #define ENETC_PM0_IFM_RGAUTO (BIT(15) | ENETC_PMO_IFM_RG | BIT(1)) #define ENETC_PM0_IFM_XGMII BIT(12) +#define ENETC_PSIDCAPR 0x1b08 +#define ENETC_PSIDCAPR_MSK GENMASK(15, 0) +#define ENETC_PSFCAPR 0x1b18 +#define ENETC_PSFCAPR_MSK GENMASK(15, 0) +#define ENETC_PSGCAPR 0x1b28 +#define ENETC_PSGCAPR_GCL_MSK GENMASK(18, 16) +#define ENETC_PSGCAPR_SGIT_MSK GENMASK(15, 0) +#define ENETC_PFMCAPR 0x1b38 +#define ENETC_PFMCAPR_MSK GENMASK(15, 0) /* MAC counters */ #define ENETC_PM0_REOCT 0x8100 @@ -557,6 +567,9 @@ enum bdcr_cmd_class { BDCR_CMD_RFS, BDCR_CMD_PORT_GCL, BDCR_CMD_RECV_CLASSIFIER, + BDCR_CMD_STREAM_IDENTIFY, + BDCR_CMD_STREAM_FILTER, + BDCR_CMD_STREAM_GCL, __BDCR_CMD_MAX_LEN, BDCR_CMD_MAX_LEN = __BDCR_CMD_MAX_LEN - 1, }; @@ -588,13 +601,152 @@ struct tgs_gcl_data { struct gce entry[]; }; +/* class 7, command 0, Stream Identity Entry Configuration */ +struct streamid_conf { + __le32 stream_handle; /* init gate value */ + __le32 iports; + u8 id_type; + u8 oui[3]; + u8 res[3]; + u8 en; +}; + +#define ENETC_CBDR_SID_VID_MASK 0xfff +#define ENETC_CBDR_SID_VIDM BIT(12) +#define ENETC_CBDR_SID_TG_MASK 0xc000 +/* the streamid_conf address points to this data space */ +struct streamid_data { + union { + u8 dmac[6]; + u8 smac[6]; + }; + u16 vid_vidm_tg; +};
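The vid_vidm_tg word of struct streamid_data packs three fields: the VID in bits 11:0, the VID-match enable (VIDM) in bit 12 and the tagged mode (TG) in bits 15:14. An illustrative helper, not part of the patch, mirroring the open-coded packing done by enetc_streamid_hw_set() in enetc_qos.c further below:

	static inline u16 example_sid_vid_field(u16 vid, bool vid_match, u8 tg)
	{
		u16 v = vid & ENETC_CBDR_SID_VID_MASK;	/* bits 11:0 */

		if (vid_match)
			v |= ENETC_CBDR_SID_VIDM;	/* bit 12 */

		return v | ((u16)(tg & 0x3) << 14);	/* TG, bits 15:14 */
	}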
+ +#define ENETC_CBDR_SFI_PRI_MASK 0x7 +#define ENETC_CBDR_SFI_PRIM BIT(3) +#define ENETC_CBDR_SFI_BLOV BIT(4) +#define ENETC_CBDR_SFI_BLEN BIT(5) +#define ENETC_CBDR_SFI_MSDUEN BIT(6) +#define ENETC_CBDR_SFI_FMITEN BIT(7) +#define ENETC_CBDR_SFI_ENABLE BIT(7) +/* class 8, command 0, Stream Filter Instance, Short Format */ +struct sfi_conf { + __le32 stream_handle; + u8 multi; + u8 res[2]; + u8 sthm; + /* Max Service Data Unit or Flow Meter Instance Table index. + * Depending on the value of FLT this represents either the Max + * Service Data Unit (max frame size) allowed by the filter + * entry or an index into the Flow Meter Instance table, + * identifying the policer which will be used to police it. + */ + __le16 fm_inst_table_index; + __le16 msdu; + __le16 sg_inst_table_index; + u8 res1[2]; + __le32 input_ports; + u8 res2[3]; + u8 en; +}; + +/* class 8, command 2, Stream Filter Instance status query, short format; + * no structure definition is needed for the command. + * Stream Filter Instance Query Statistics Response data: + */ +struct sfi_counter_data { + u32 matchl; + u32 matchh; + u32 msdu_dropl; + u32 msdu_droph; + u32 stream_gate_dropl; + u32 stream_gate_droph; + u32 flow_meter_dropl; + u32 flow_meter_droph; +}; + +#define ENETC_CBDR_SGI_OIPV_MASK 0x7 +#define ENETC_CBDR_SGI_OIPV_EN BIT(3) +#define ENETC_CBDR_SGI_CGTST BIT(6) +#define ENETC_CBDR_SGI_OGTST BIT(7) +#define ENETC_CBDR_SGI_CFG_CHG BIT(1) +#define ENETC_CBDR_SGI_CFG_PND BIT(2) +#define ENETC_CBDR_SGI_OEX BIT(4) +#define ENETC_CBDR_SGI_OEXEN BIT(5) +#define ENETC_CBDR_SGI_IRX BIT(6) +#define ENETC_CBDR_SGI_IRXEN BIT(7) +#define ENETC_CBDR_SGI_ACLLEN_MASK 0x3 +#define ENETC_CBDR_SGI_OCLLEN_MASK 0xc +#define ENETC_CBDR_SGI_EN BIT(7) +/* class 9, command 0, Stream Gate Instance Table, Short Format + * class 9, command 2, Stream Gate Instance Table entry query write back + * Short Format + */ +struct sgi_table { + u8 res[8]; + u8 oipv; + u8 res0[2]; + u8 ocgtst; + u8 res1[7]; + u8 gset; + u8 oacl_len; + u8 res2[2]; + u8 en; +}; + +#define ENETC_CBDR_SGI_AIPV_MASK 0x7 +#define ENETC_CBDR_SGI_AIPV_EN BIT(3) +#define ENETC_CBDR_SGI_AGTST BIT(7) + +/* class 9, command 1, Stream Gate Control List, Long Format */ +struct sgcl_conf { + u8 aipv; + u8 res[2]; + u8 agtst; + u8 res1[4]; + union { + struct { + u8 res2[4]; + u8 acl_len; + u8 res3[3]; + }; + u8 cct[8]; /* Config change time */ + }; +}; + +#define ENETC_CBDR_SGL_IOMEN BIT(0) +#define ENETC_CBDR_SGL_IPVEN BIT(3) +#define ENETC_CBDR_SGL_GTST BIT(4) +#define ENETC_CBDR_SGL_IPV_MASK 0xe +/* Stream Gate Control List Entry */ +struct sgce { + u32 interval; + u8 msdu[3]; + u8 multi; +}; + +/* stream gate control list, class 9, cmd 1 data buffer */ +struct sgcl_data { + u32 btl; + u32 bth; + u32 ct; + u32 cte; + struct sgce sgcl[]; +}; + struct enetc_cbd { union{ + struct sfi_conf sfi_conf; + struct sgi_table sgi_table; struct { __le32 addr[2]; union { __le32 opt[4]; struct tgs_gcl_conf gcl_conf; + struct streamid_conf sid_set; + struct sgcl_conf sgcl_conf; }; }; /* Long format */ __le32 data[6]; @@ -621,3 +773,10 @@ struct enetc_cbd { /* Port time specific departure */ #define ENETC_PTCTSDR(n) (0x1210 + 4 * (n)) #define ENETC_TSDE BIT(31) + +/* PSFP setting */ +#define ENETC_PPSFPMR 0x11b00 +#define ENETC_PPSFPMR_PSFPEN BIT(0) +#define ENETC_PPSFPMR_VS BIT(1) +#define ENETC_PPSFPMR_PVC BIT(2) +#define ENETC_PPSFPMR_PVZC BIT(3) diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c index 85e2b741df41..824d211ec00f 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c @@ -50,21 +50,6 @@ static void enetc_set_vlan_promisc(struct enetc_hw *hw, char si_map) enetc_port_wr(hw, ENETC_PSIPVMR, ENETC_PSIPVMR_SET_VP(si_map) | val); } -static bool enetc_si_vlan_promisc_is_on(struct enetc_pf *pf, int si_idx) -{ - return pf->vlan_promisc_simap & BIT(si_idx); -} - -static bool enetc_vlan_filter_is_on(struct enetc_pf *pf) -{ - int i; - - for_each_set_bit(i, pf->active_vlans, VLAN_N_VID) - return true; - - return false; -} - static void enetc_enable_si_vlan_promisc(struct enetc_pf *pf, int si_idx) { pf->vlan_promisc_simap |= BIT(si_idx); @@ -204,6 +189,7 @@ static void enetc_pf_set_rx_mode(struct 
net_device *ndev) { struct enetc_ndev_priv *priv = netdev_priv(ndev); struct enetc_pf *pf = enetc_si_priv(priv->si); + char vlan_promisc_simap = pf->vlan_promisc_simap; struct enetc_hw *hw = &priv->si->hw; bool uprom = false, mprom = false; struct enetc_mac_filter *filter; @@ -216,16 +202,16 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev) psipmr = ENETC_PSIPMR_SET_UP(0) | ENETC_PSIPMR_SET_MP(0); uprom = true; mprom = true; - /* enable VLAN promisc mode for SI0 */ - if (!enetc_si_vlan_promisc_is_on(pf, 0)) - enetc_enable_si_vlan_promisc(pf, 0); - + /* Enable VLAN promiscuous mode for SI0 (PF) */ + vlan_promisc_simap |= BIT(0); } else if (ndev->flags & IFF_ALLMULTI) { /* enable multi cast promisc mode for SI0 (PF) */ psipmr = ENETC_PSIPMR_SET_MP(0); mprom = true; } + enetc_set_vlan_promisc(&pf->si->hw, vlan_promisc_simap); + /* first 2 filter entries belong to PF */ if (!uprom) { /* Update unicast filters */ @@ -306,9 +292,6 @@ static int enetc_vlan_rx_add_vid(struct net_device *ndev, __be16 prot, u16 vid) struct enetc_pf *pf = enetc_si_priv(priv->si); int idx; - if (enetc_si_vlan_promisc_is_on(pf, 0)) - enetc_disable_si_vlan_promisc(pf, 0); - __set_bit(vid, pf->active_vlans); idx = enetc_vid_hash_idx(vid); @@ -326,9 +309,6 @@ static int enetc_vlan_rx_del_vid(struct net_device *ndev, __be16 prot, u16 vid) __clear_bit(vid, pf->active_vlans); enetc_sync_vlan_ht_filter(pf, true); - if (!enetc_vlan_filter_is_on(pf)) - enetc_enable_si_vlan_promisc(pf, 0); - return 0; } @@ -677,6 +657,15 @@ static int enetc_pf_set_features(struct net_device *ndev, enetc_enable_txvlan(&priv->si->hw, 0, !!(features & NETIF_F_HW_VLAN_CTAG_TX)); + if (changed & NETIF_F_HW_VLAN_CTAG_FILTER) { + struct enetc_pf *pf = enetc_si_priv(priv->si); + + if (!!(features & NETIF_F_HW_VLAN_CTAG_FILTER)) + enetc_disable_si_vlan_promisc(pf, 0); + else + enetc_enable_si_vlan_promisc(pf, 0); + } + if (changed & NETIF_F_LOOPBACK) enetc_set_loopback(ndev, !!(features & NETIF_F_LOOPBACK)); @@ -719,12 +708,11 @@ static void enetc_pf_netdev_setup(struct enetc_si *si, struct net_device *ndev, ndev->hw_features = NETIF_F_SG | NETIF_F_RXCSUM | NETIF_F_HW_CSUM | NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX | - NETIF_F_LOOPBACK; + NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_LOOPBACK; ndev->features = NETIF_F_HIGHDMA | NETIF_F_SG | NETIF_F_RXCSUM | NETIF_F_HW_CSUM | NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_RX | - NETIF_F_HW_VLAN_CTAG_FILTER; + NETIF_F_HW_VLAN_CTAG_RX; if (si->num_rss) ndev->hw_features |= NETIF_F_RXHASH; @@ -739,6 +727,12 @@ static void enetc_pf_netdev_setup(struct enetc_si *si, struct net_device *ndev, if (si->hw_features & ENETC_SI_F_QBV) priv->active_offloads |= ENETC_F_QBV; + if (si->hw_features & ENETC_SI_F_PSFP && !enetc_psfp_enable(priv)) { + priv->active_offloads |= ENETC_F_QCI; + ndev->features |= NETIF_F_HW_TC; + ndev->hw_features |= NETIF_F_HW_TC; + } + /* pick up primary MAC address from SI */ enetc_get_primary_mac_addr(&si->hw, ndev->dev_addr); } diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c index 0c6bf3a55a9a..fd3df19eaa32 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c @@ -5,6 +5,9 @@ #include <net/pkt_sched.h> #include <linux/math64.h> +#include <linux/refcount.h> +#include <net/pkt_cls.h> +#include <net/tc_act/tc_gate.h> static u16 enetc_get_max_gcl_len(struct enetc_hw *hw) { @@ -331,3 +334,1103 @@ int enetc_setup_tc_txtime(struct net_device *ndev, void 
*type_data) return 0; +} + +enum streamid_type { + STREAMID_TYPE_RESERVED = 0, + STREAMID_TYPE_NULL, + STREAMID_TYPE_SMAC, +}; + +enum streamid_vlan_tagged { + STREAMID_VLAN_RESERVED = 0, + STREAMID_VLAN_TAGGED, + STREAMID_VLAN_UNTAGGED, + STREAMID_VLAN_ALL, +}; + +#define ENETC_PSFP_WILDCARD -1 +#define HANDLE_OFFSET 100 + +enum forward_type { + FILTER_ACTION_TYPE_PSFP = BIT(0), + FILTER_ACTION_TYPE_ACL = BIT(1), + FILTER_ACTION_TYPE_BOTH = GENMASK(1, 0), +}; + +/* Limits the allowed output type for a given set of input actions */ +struct actions_fwd { + u64 actions; + u64 keys; /* the keys that must be included */ + enum forward_type output; +}; + +struct psfp_streamfilter_counters { + u64 matching_frames_count; + u64 passing_frames_count; + u64 not_passing_frames_count; + u64 passing_sdu_count; + u64 not_passing_sdu_count; + u64 red_frames_count; +}; + +struct enetc_streamid { + u32 index; + union { + u8 src_mac[6]; + u8 dst_mac[6]; + }; + u8 filtertype; + u16 vid; + u8 tagged; + s32 handle; +}; + +struct enetc_psfp_filter { + u32 index; + s32 handle; + s8 prio; + u32 gate_id; + s32 meter_id; + refcount_t refcount; + struct hlist_node node; +}; + +struct enetc_psfp_gate { + u32 index; + s8 init_ipv; + u64 basetime; + u64 cycletime; + u64 cycletimext; + u32 num_entries; + refcount_t refcount; + struct hlist_node node; + struct action_gate_entry entries[]; +}; + +struct enetc_stream_filter { + struct enetc_streamid sid; + u32 sfi_index; + u32 sgi_index; + struct flow_stats stats; + struct hlist_node node; +}; + +struct enetc_psfp { + unsigned long dev_bitmap; + unsigned long *psfp_sfi_bitmap; + struct hlist_head stream_list; + struct hlist_head psfp_filter_list; + struct hlist_head psfp_gate_list; + spinlock_t psfp_lock; /* spinlock for the struct enetc_psfp r/w */ +}; + +static struct actions_fwd enetc_act_fwd[] = { + { + BIT(FLOW_ACTION_GATE), + BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS), + FILTER_ACTION_TYPE_PSFP + }, + /* example for ACL actions */ + { + BIT(FLOW_ACTION_DROP), + 0, + FILTER_ACTION_TYPE_ACL + } +}; + +static struct enetc_psfp epsfp = { + .psfp_sfi_bitmap = NULL, +}; + +static LIST_HEAD(enetc_block_cb_list); +
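+/* The ENETC port number is derived from the PF's PCIe function number; + * it selects the bit set in the 'iports' masks of the control buffer + * descriptors built below. + */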
+static inline int enetc_get_port(struct enetc_ndev_priv *priv) +{ + return priv->si->pdev->devfn & 0x7; +} + +/* Stream Identity Entry Set Descriptor */ +static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv, + struct enetc_streamid *sid, + u8 enable) +{ + struct enetc_cbd cbd = {.cmd = 0}; + struct streamid_data *si_data; + struct streamid_conf *si_conf; + u16 data_size; + dma_addr_t dma; + int err; + + if (sid->index >= priv->psfp_cap.max_streamid) + return -EINVAL; + + if (sid->filtertype != STREAMID_TYPE_NULL && + sid->filtertype != STREAMID_TYPE_SMAC) + return -EOPNOTSUPP; + + /* Disable operation before enable */ + cbd.index = cpu_to_le16((u16)sid->index); + cbd.cls = BDCR_CMD_STREAM_IDENTIFY; + cbd.status_flags = 0; + + data_size = sizeof(struct streamid_data); + si_data = kzalloc(data_size, __GFP_DMA | GFP_KERNEL); + if (!si_data) + return -ENOMEM; + + cbd.length = cpu_to_le16(data_size); + + dma = dma_map_single(&priv->si->pdev->dev, si_data, + data_size, DMA_FROM_DEVICE); + if (dma_mapping_error(&priv->si->pdev->dev, dma)) { + netdev_err(priv->si->ndev, "DMA mapping failed!\n"); + kfree(si_data); + return -ENOMEM; + } + + cbd.addr[0] = lower_32_bits(dma); + cbd.addr[1] = upper_32_bits(dma); + memset(si_data->dmac, 0xff, ETH_ALEN); + si_data->vid_vidm_tg = + cpu_to_le16(ENETC_CBDR_SID_VID_MASK + + ((0x3 << 14) | ENETC_CBDR_SID_VIDM)); + + si_conf = &cbd.sid_set; + /* Only one port supported for one entry, set itself */ + si_conf->iports = 1 << enetc_get_port(priv); + si_conf->id_type = 1; + si_conf->oui[2] = 0x0; + si_conf->oui[1] = 0x80; + si_conf->oui[0] = 0xC2; + + err = enetc_send_cmd(priv->si, &cbd); + if (err) { + kfree(si_data); + return -EINVAL; + } + + if (!enable) { + kfree(si_data); + return 0; + } + + /* Enable the entry overwrite again in case space flushed by hardware */ + memset(&cbd, 0, sizeof(cbd)); + + cbd.index = cpu_to_le16((u16)sid->index); + cbd.cmd = 0; + cbd.cls = BDCR_CMD_STREAM_IDENTIFY; + cbd.status_flags = 0; + + si_conf->en = 0x80; + si_conf->stream_handle = cpu_to_le32(sid->handle); + si_conf->iports = 1 << enetc_get_port(priv); + si_conf->id_type = sid->filtertype; + si_conf->oui[2] = 0x0; + si_conf->oui[1] = 0x80; + si_conf->oui[0] = 0xC2; + + memset(si_data, 0, data_size); + + cbd.length = cpu_to_le16(data_size); + + cbd.addr[0] = lower_32_bits(dma); + cbd.addr[1] = upper_32_bits(dma); + + /* VIDM defaults to 1. + * VID Match. If set (b1) then the VID must match, otherwise + * any VID is considered a match. VIDM setting is only used + * when TG is set to b01. + */ + if (si_conf->id_type == STREAMID_TYPE_NULL) { + ether_addr_copy(si_data->dmac, sid->dst_mac); + si_data->vid_vidm_tg = + cpu_to_le16((sid->vid & ENETC_CBDR_SID_VID_MASK) + + ((((u16)(sid->tagged) & 0x3) << 14) + | ENETC_CBDR_SID_VIDM)); + } else if (si_conf->id_type == STREAMID_TYPE_SMAC) { + ether_addr_copy(si_data->smac, sid->src_mac); + si_data->vid_vidm_tg = + cpu_to_le16((sid->vid & ENETC_CBDR_SID_VID_MASK) + + ((((u16)(sid->tagged) & 0x3) << 14) + | ENETC_CBDR_SID_VIDM)); + } + + err = enetc_send_cmd(priv->si, &cbd); + kfree(si_data); + + return err; +} + +/* Stream Filter Instance Set Descriptor */ +static int enetc_streamfilter_hw_set(struct enetc_ndev_priv *priv, + struct enetc_psfp_filter *sfi, + u8 enable) +{ + struct enetc_cbd cbd = {.cmd = 0}; + struct sfi_conf *sfi_config; + + cbd.index = cpu_to_le16(sfi->index); + cbd.cls = BDCR_CMD_STREAM_FILTER; + cbd.status_flags = 0x80; + cbd.length = cpu_to_le16(1); + + sfi_config = &cbd.sfi_conf; + if (!enable) + goto exit; + + sfi_config->en = 0x80; + + if (sfi->handle >= 0) { + sfi_config->stream_handle = + cpu_to_le32(sfi->handle); + sfi_config->sthm |= 0x80; + } + + sfi_config->sg_inst_table_index = cpu_to_le16(sfi->gate_id); + sfi_config->input_ports = 1 << enetc_get_port(priv); + + /* The priority value which may be matched against the + * frame’s priority value to determine a match for this entry. + */ + if (sfi->prio >= 0) + sfi_config->multi |= (sfi->prio & 0x7) | 0x8; + + /* Filter Type. Identifies the contents of the MSDU/FM_INST_INDEX + * field as being either an MSDU value or an index into the Flow + * Meter Instance table. + * TODO: no limit max sdu + */ + + if (sfi->meter_id >= 0) { + sfi_config->fm_inst_table_index = cpu_to_le16(sfi->meter_id); + sfi_config->multi |= 0x80; + } + +exit: + return enetc_send_cmd(priv->si, &cbd); +} + +static int enetc_streamcounter_hw_get(struct enetc_ndev_priv *priv, + u32 index, + struct psfp_streamfilter_counters *cnt) +{ + struct enetc_cbd cbd = { .cmd = 2 }; + struct sfi_counter_data *data_buf; + dma_addr_t dma; + u16 data_size; + int err; + + cbd.index = cpu_to_le16((u16)index); + cbd.cmd = 2; + cbd.cls = BDCR_CMD_STREAM_FILTER; + cbd.status_flags = 0; + + data_size = sizeof(struct sfi_counter_data); + data_buf = kzalloc(data_size, __GFP_DMA | GFP_KERNEL); + if (!data_buf) + return -ENOMEM; + + dma = dma_map_single(&priv->si->pdev->dev, data_buf, + data_size, DMA_FROM_DEVICE); + if (dma_mapping_error(&priv->si->pdev->dev, dma)) { + netdev_err(priv->si->ndev, "DMA mapping failed!\n"); + err = -ENOMEM; + goto exit; + } + cbd.addr[0] = lower_32_bits(dma); + cbd.addr[1] = upper_32_bits(dma); + + cbd.length = cpu_to_le16(data_size); + + err = enetc_send_cmd(priv->si, &cbd); + if (err) + goto exit; + + cnt->matching_frames_count = + ((u64)le32_to_cpu(data_buf->matchh) << 32) + + le32_to_cpu(data_buf->matchl); + + cnt->not_passing_sdu_count = + ((u64)le32_to_cpu(data_buf->msdu_droph) << 32) + + le32_to_cpu(data_buf->msdu_dropl); + + cnt->passing_sdu_count = cnt->matching_frames_count + - cnt->not_passing_sdu_count; + + cnt->not_passing_frames_count = + ((u64)le32_to_cpu(data_buf->stream_gate_droph) << 32) + + le32_to_cpu(data_buf->stream_gate_dropl); + + cnt->passing_frames_count = cnt->matching_frames_count + - cnt->not_passing_sdu_count + - cnt->not_passing_frames_count; + + cnt->red_frames_count = + ((u64)le32_to_cpu(data_buf->flow_meter_droph) << 32) + + le32_to_cpu(data_buf->flow_meter_dropl); + +exit: + kfree(data_buf); + return err; +} + +static u64 get_ptp_now(struct enetc_hw *hw) +{ + u64 now_lo, now_hi, now; + + now_lo = enetc_rd(hw, ENETC_SICTR0); + now_hi = enetc_rd(hw, ENETC_SICTR1); + now = now_lo | now_hi << 32; + + return now; +} + +static int get_start_ns(u64 now, u64 cycle, u64 *start) +{ + u64 n; + + if (!cycle) + return -EFAULT; + + n = div64_u64(now, cycle); +
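+ /* e.g. now = 1250 and cycle = 500 gives n = 2, so the gate list is + * started at 1500, the first full cycle boundary after 'now' + */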
+ *start = (n + 1) * cycle; + + return 0; +} + +/* Stream Gate Instance Set Descriptor */ +static int enetc_streamgate_hw_set(struct enetc_ndev_priv *priv, + struct enetc_psfp_gate *sgi, + u8 enable) +{ + struct enetc_cbd cbd = { .cmd = 0 }; + struct sgi_table *sgi_config; + struct sgcl_conf *sgcl_config; + struct sgcl_data *sgcl_data; + struct sgce *sgce; + dma_addr_t dma; + u16 data_size; + int err, i; + u64 now; + + cbd.index = cpu_to_le16(sgi->index); + cbd.cmd = 0; + cbd.cls = BDCR_CMD_STREAM_GCL; + cbd.status_flags = 0x80; + + /* disable */ + if (!enable) + return enetc_send_cmd(priv->si, &cbd); + + if (!sgi->num_entries) + return 0; + + if (sgi->num_entries > priv->psfp_cap.max_psfp_gatelist || + !sgi->cycletime) + return -EINVAL; + + /* enable */ + sgi_config = &cbd.sgi_table; + + /* Keep open before gate list start */ + sgi_config->ocgtst = 0x80; + + sgi_config->oipv = (sgi->init_ipv < 0) ? 
+ 0x0 : ((sgi->init_ipv & 0x7) | 0x8); + + sgi_config->en = 0x80; + + /* Basic config */ + err = enetc_send_cmd(priv->si, &cbd); + if (err) + return -EINVAL; + + memset(&cbd, 0, sizeof(cbd)); + + cbd.index = cpu_to_le16(sgi->index); + cbd.cmd = 1; + cbd.cls = BDCR_CMD_STREAM_GCL; + cbd.status_flags = 0; + + sgcl_config = &cbd.sgcl_conf; + + sgcl_config->acl_len = (sgi->num_entries - 1) & 0x3; + + data_size = struct_size(sgcl_data, sgcl, sgi->num_entries); + + sgcl_data = kzalloc(data_size, __GFP_DMA | GFP_KERNEL); + if (!sgcl_data) + return -ENOMEM; + + cbd.length = cpu_to_le16(data_size); + + dma = dma_map_single(&priv->si->pdev->dev, + sgcl_data, data_size, + DMA_FROM_DEVICE); + if (dma_mapping_error(&priv->si->pdev->dev, dma)) { + netdev_err(priv->si->ndev, "DMA mapping failed!\n"); + kfree(sgcl_data); + return -ENOMEM; + } + + cbd.addr[0] = lower_32_bits(dma); + cbd.addr[1] = upper_32_bits(dma); + + sgce = &sgcl_data->sgcl[0]; + + sgcl_config->agtst = 0x80; + + sgcl_data->ct = cpu_to_le32(sgi->cycletime); + sgcl_data->cte = cpu_to_le32(sgi->cycletimext); + + if (sgi->init_ipv >= 0) + sgcl_config->aipv = (sgi->init_ipv & 0x7) | 0x8; + + for (i = 0; i < sgi->num_entries; i++) { + struct action_gate_entry *from = &sgi->entries[i]; + struct sgce *to = &sgce[i]; + + if (from->gate_state) + to->multi |= 0x10; + + if (from->ipv >= 0) + to->multi |= ((from->ipv & 0x7) << 5) | 0x08; + + if (from->maxoctets >= 0) { + to->multi |= 0x01; + to->msdu[0] = from->maxoctets & 0xFF; + to->msdu[1] = (from->maxoctets >> 8) & 0xFF; + to->msdu[2] = (from->maxoctets >> 16) & 0xFF; + } + + to->interval = cpu_to_le32(from->interval); + } + + /* If basetime is less than now, calculate start time */ + now = get_ptp_now(&priv->si->hw); + + if (sgi->basetime < now) { + u64 start; + + err = get_start_ns(now, sgi->cycletime, &start); + if (err) + goto exit; + sgcl_data->btl = cpu_to_le32(lower_32_bits(start)); + sgcl_data->bth = cpu_to_le32(upper_32_bits(start)); + } else { + u32 hi, lo; + + hi = upper_32_bits(sgi->basetime); + lo = lower_32_bits(sgi->basetime); + sgcl_data->bth = cpu_to_le32(hi); + sgcl_data->btl = cpu_to_le32(lo); + } + + err = enetc_send_cmd(priv->si, &cbd); + +exit: + kfree(sgcl_data); + + return err; +} + +static struct enetc_stream_filter *enetc_get_stream_by_index(u32 index) +{ + struct enetc_stream_filter *f; + + hlist_for_each_entry(f, &epsfp.stream_list, node) + if (f->sid.index == index) + return f; + + return NULL; +} + +static struct enetc_psfp_gate *enetc_get_gate_by_index(u32 index) +{ + struct enetc_psfp_gate *g; + + hlist_for_each_entry(g, &epsfp.psfp_gate_list, node) + if (g->index == index) + return g; + + return NULL; +} + +static struct enetc_psfp_filter *enetc_get_filter_by_index(u32 index) +{ + struct enetc_psfp_filter *s; + + hlist_for_each_entry(s, &epsfp.psfp_filter_list, node) + if (s->index == index) + return s; + + return NULL; +} + +static struct enetc_psfp_filter + *enetc_psfp_check_sfi(struct enetc_psfp_filter *sfi) +{ + struct enetc_psfp_filter *s; + + hlist_for_each_entry(s, &epsfp.psfp_filter_list, node) + if (s->gate_id == sfi->gate_id && + s->prio == sfi->prio && + s->meter_id == sfi->meter_id) + return s; + + return NULL; +} + +static int enetc_get_free_index(struct enetc_ndev_priv *priv) +{ + u32 max_size = priv->psfp_cap.max_psfp_filter; + unsigned long index; + + index = find_first_zero_bit(epsfp.psfp_sfi_bitmap, max_size); + if (index == max_size) + return -1; + + return index; +} + +static void stream_filter_unref(struct enetc_ndev_priv *priv, u32 index) +{ 
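+ /* Drop one reference on the filter at @index; when the last + * reference goes away, the hardware entry is disabled and the + * software state (list node and bitmap slot) is released. + */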
+ struct enetc_psfp_filter *sfi; + u8 z; + + sfi = enetc_get_filter_by_index(index); + if (WARN_ON(!sfi)) + return; + + z = refcount_dec_and_test(&sfi->refcount); + + if (z) { + enetc_streamfilter_hw_set(priv, sfi, false); + hlist_del(&sfi->node); + kfree(sfi); + clear_bit(index, epsfp.psfp_sfi_bitmap); + } +} + +static void stream_gate_unref(struct enetc_ndev_priv *priv, u32 index) +{ + struct enetc_psfp_gate *sgi; + u8 z; + + sgi = enetc_get_gate_by_index(index); + if (WARN_ON(!sgi)) + return; + + z = refcount_dec_and_test(&sgi->refcount); + if (z) { + enetc_streamgate_hw_set(priv, sgi, false); + hlist_del(&sgi->node); + kfree(sgi); + } +} + +static void remove_one_chain(struct enetc_ndev_priv *priv, + struct enetc_stream_filter *filter) +{ + stream_gate_unref(priv, filter->sgi_index); + stream_filter_unref(priv, filter->sfi_index); + + hlist_del(&filter->node); + kfree(filter); +} +
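+/* Program the stream identity, stream filter and stream gate in order; + * on failure the entries already written are rolled back so that no + * half-configured chain is left behind in hardware. + */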
+static int enetc_psfp_hw_set(struct enetc_ndev_priv *priv, + struct enetc_streamid *sid, + struct enetc_psfp_filter *sfi, + struct enetc_psfp_gate *sgi) +{ + int err; + + err = enetc_streamid_hw_set(priv, sid, true); + if (err) + return err; + + if (sfi) { + err = enetc_streamfilter_hw_set(priv, sfi, true); + if (err) + goto revert_sid; + } + + err = enetc_streamgate_hw_set(priv, sgi, true); + if (err) + goto revert_sfi; + + return 0; + +revert_sfi: + if (sfi) + enetc_streamfilter_hw_set(priv, sfi, false); +revert_sid: + enetc_streamid_hw_set(priv, sid, false); + return err; +} + +static struct actions_fwd *enetc_check_flow_actions(u64 acts, + unsigned int inputkeys) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(enetc_act_fwd); i++) + if (acts == enetc_act_fwd[i].actions && + inputkeys & enetc_act_fwd[i].keys) + return &enetc_act_fwd[i]; + + return NULL; +} + +static int enetc_psfp_parse_clsflower(struct enetc_ndev_priv *priv, + struct flow_cls_offload *f) +{ + struct flow_rule *rule = flow_cls_offload_flow_rule(f); + struct netlink_ext_ack *extack = f->common.extack; + struct enetc_stream_filter *filter, *old_filter; + struct enetc_psfp_filter *sfi, *old_sfi; + struct enetc_psfp_gate *sgi, *old_sgi; + struct flow_action_entry *entry; + struct action_gate_entry *e; + u8 sfi_overwrite = 0; + int entries_size; + int i, err; + + if (f->common.chain_index >= priv->psfp_cap.max_streamid) { + NL_SET_ERR_MSG_MOD(extack, "No Stream identify resource!"); + return -ENOSPC; + } + + flow_action_for_each(i, entry, &rule->action) + if (entry->id == FLOW_ACTION_GATE) + break; + + if (entry->id != FLOW_ACTION_GATE) + return -EINVAL; + + filter = kzalloc(sizeof(*filter), GFP_KERNEL); + if (!filter) + return -ENOMEM; + + filter->sid.index = f->common.chain_index; + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) { + struct flow_match_eth_addrs match; + + flow_rule_match_eth_addrs(rule, &match); + + if (!is_zero_ether_addr(match.mask->dst) && + !is_zero_ether_addr(match.mask->src)) { + NL_SET_ERR_MSG_MOD(extack, + "Cannot match on both source and destination MAC"); + err = -EINVAL; + goto free_filter; + } + + if (!is_zero_ether_addr(match.mask->dst)) { + if (!is_broadcast_ether_addr(match.mask->dst)) { + NL_SET_ERR_MSG_MOD(extack, + "Masked matching on destination MAC not supported"); + err = -EINVAL; + goto free_filter; + } + ether_addr_copy(filter->sid.dst_mac, match.key->dst); + filter->sid.filtertype = STREAMID_TYPE_NULL; + } + + if (!is_zero_ether_addr(match.mask->src)) { + if (!is_broadcast_ether_addr(match.mask->src)) { + NL_SET_ERR_MSG_MOD(extack, + "Masked matching on source MAC not supported"); + err = -EINVAL; + goto free_filter; + } + ether_addr_copy(filter->sid.src_mac, match.key->src); + filter->sid.filtertype = STREAMID_TYPE_SMAC; + } + } else { + NL_SET_ERR_MSG_MOD(extack, "Unsupported, must include ETH_ADDRS"); + err = -EINVAL; + goto free_filter; + } + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) { + struct flow_match_vlan match; + + flow_rule_match_vlan(rule, &match); + if (match.mask->vlan_priority) { + if (match.mask->vlan_priority != + (VLAN_PRIO_MASK >> VLAN_PRIO_SHIFT)) { + NL_SET_ERR_MSG_MOD(extack, "Only full mask is supported for VLAN priority"); + err = -EINVAL; + goto free_filter; + } + } + + if (match.mask->vlan_id) { + if (match.mask->vlan_id != VLAN_VID_MASK) { + NL_SET_ERR_MSG_MOD(extack, "Only full mask is supported for VLAN id"); + err = -EINVAL; + goto free_filter; + } + + filter->sid.vid = match.key->vlan_id; + if (!filter->sid.vid) + filter->sid.tagged = STREAMID_VLAN_UNTAGGED; + else + filter->sid.tagged = STREAMID_VLAN_TAGGED; + } + } else { + filter->sid.tagged = STREAMID_VLAN_ALL; + } + + /* parsing gate action */ + if (entry->gate.index >= priv->psfp_cap.max_psfp_gate) { + NL_SET_ERR_MSG_MOD(extack, "No Stream Gate resource!"); + err = -ENOSPC; + goto free_filter; + } + + if (entry->gate.num_entries >= priv->psfp_cap.max_psfp_gatelist) { + NL_SET_ERR_MSG_MOD(extack, "No Stream Gate resource!"); + err = -ENOSPC; + goto free_filter; + } + + entries_size = struct_size(sgi, entries, entry->gate.num_entries); + sgi = kzalloc(entries_size, GFP_KERNEL); + if (!sgi) { + err = -ENOMEM; + goto free_filter; + } + + refcount_set(&sgi->refcount, 1); + sgi->index = entry->gate.index; + sgi->init_ipv = entry->gate.prio; + sgi->basetime = entry->gate.basetime; + sgi->cycletime = entry->gate.cycletime; + sgi->num_entries = entry->gate.num_entries; + + e = sgi->entries; + for (i = 0; i < entry->gate.num_entries; i++) { + e[i].gate_state = entry->gate.entries[i].gate_state; + e[i].interval = entry->gate.entries[i].interval; + e[i].ipv = entry->gate.entries[i].ipv; + e[i].maxoctets = entry->gate.entries[i].maxoctets; + } + + filter->sgi_index = sgi->index; + + sfi = kzalloc(sizeof(*sfi), GFP_KERNEL); + if (!sfi) { + err = -ENOMEM; + goto free_gate; + } + + refcount_set(&sfi->refcount, 1); + sfi->gate_id = sgi->index; + + /* flow meter not supported yet */ + sfi->meter_id = ENETC_PSFP_WILDCARD; + + /* prio refers to the tc filter's priority */ + if (f->common.prio && f->common.prio <= BIT(3)) + sfi->prio = f->common.prio - 1; + else + sfi->prio = ENETC_PSFP_WILDCARD; +
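+ /* Reuse an existing filter instance with the same gate, priority and + * meter rather than allocating a new table entry; the refcount keeps + * it alive until the last stream referencing it is removed. + */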
+ old_sfi = enetc_psfp_check_sfi(sfi); + if (!old_sfi) { + int index; + + index = enetc_get_free_index(priv); + if (index < 0) { + NL_SET_ERR_MSG_MOD(extack, "No Stream Filter resource!"); + err = -ENOSPC; + goto free_sfi; + } + + sfi->index = index; + sfi->handle = index + HANDLE_OFFSET; + /* Update the stream filter handle also */ + filter->sid.handle = sfi->handle; + filter->sfi_index = sfi->index; + sfi_overwrite = 0; + } else { + filter->sfi_index = old_sfi->index; + filter->sid.handle = old_sfi->handle; + sfi_overwrite = 1; + } + + err = enetc_psfp_hw_set(priv, &filter->sid, + sfi_overwrite ? NULL : sfi, sgi); + if (err) + goto free_sfi; + + spin_lock(&epsfp.psfp_lock); + /* Remove the old node if it exists and replace it with the new one */ + old_sgi = enetc_get_gate_by_index(filter->sgi_index); + if (old_sgi) { + refcount_set(&sgi->refcount, + refcount_read(&old_sgi->refcount) + 1); + hlist_del(&old_sgi->node); + kfree(old_sgi); + } + + hlist_add_head(&sgi->node, &epsfp.psfp_gate_list); + + if (!old_sfi) { + hlist_add_head(&sfi->node, &epsfp.psfp_filter_list); + set_bit(sfi->index, epsfp.psfp_sfi_bitmap); + } else { + kfree(sfi); + refcount_inc(&old_sfi->refcount); + } + + old_filter = enetc_get_stream_by_index(filter->sid.index); + if (old_filter) + remove_one_chain(priv, old_filter); + + filter->stats.lastused = jiffies; + hlist_add_head(&filter->node, &epsfp.stream_list); + + spin_unlock(&epsfp.psfp_lock); + + return 0; + +free_sfi: + kfree(sfi); +free_gate: + kfree(sgi); +free_filter: + kfree(filter); + + return err; +} + +static int enetc_config_clsflower(struct enetc_ndev_priv *priv, + struct flow_cls_offload *cls_flower) +{ + struct flow_rule *rule = flow_cls_offload_flow_rule(cls_flower); + struct netlink_ext_ack *extack = cls_flower->common.extack; + struct flow_dissector *dissector = rule->match.dissector; + struct flow_action *action = &rule->action; + struct flow_action_entry *entry; + struct actions_fwd *fwd; + u64 actions = 0; + int i, err; + + if (!flow_action_has_entries(action)) { + NL_SET_ERR_MSG_MOD(extack, "At least one action is needed"); + return -EINVAL; + } + + flow_action_for_each(i, entry, action) + actions |= BIT(entry->id); + + fwd = enetc_check_flow_actions(actions, dissector->used_keys); + if (!fwd) { + NL_SET_ERR_MSG_MOD(extack, "Unsupported filter type!"); + return -EOPNOTSUPP; + } + + if (fwd->output & FILTER_ACTION_TYPE_PSFP) { + err = enetc_psfp_parse_clsflower(priv, cls_flower); + if (err) { + NL_SET_ERR_MSG_MOD(extack, "Invalid PSFP inputs"); + return err; + } + } else { + NL_SET_ERR_MSG_MOD(extack, "Unsupported actions"); + return -EOPNOTSUPP; + } + + return 0; +} + +static int enetc_psfp_destroy_clsflower(struct enetc_ndev_priv *priv, + struct flow_cls_offload *f) +{ + struct enetc_stream_filter *filter; + struct netlink_ext_ack *extack = f->common.extack; + int err; + + if (f->common.chain_index >= priv->psfp_cap.max_streamid) { + NL_SET_ERR_MSG_MOD(extack, "No Stream identify resource!"); + return -ENOSPC; + } + + filter = enetc_get_stream_by_index(f->common.chain_index); + if (!filter) + return -EINVAL; + + err = enetc_streamid_hw_set(priv, &filter->sid, false); + if (err) + return err; + + remove_one_chain(priv, filter); + + return 0; +} + +static int enetc_destroy_clsflower(struct enetc_ndev_priv *priv, + struct flow_cls_offload *f) +{ + return enetc_psfp_destroy_clsflower(priv, f); +} + +static int enetc_psfp_get_stats(struct enetc_ndev_priv *priv, + struct flow_cls_offload *f) +{ + struct psfp_streamfilter_counters counters = {}; + struct enetc_stream_filter *filter; + struct flow_stats stats = {}; + int err; + + filter = enetc_get_stream_by_index(f->common.chain_index); + if (!filter) + return -EINVAL; + + err = enetc_streamcounter_hw_get(priv, filter->sfi_index, &counters); + if (err) + return -EINVAL; + + spin_lock(&epsfp.psfp_lock); + stats.pkts = counters.matching_frames_count - filter->stats.pkts; + stats.lastused = filter->stats.lastused; + filter->stats.pkts += stats.pkts; + spin_unlock(&epsfp.psfp_lock); + + flow_stats_update(&f->stats, 0x0, stats.pkts, stats.lastused, + FLOW_ACTION_HW_STATS_DELAYED); + + return 0; +} + +static 
int enetc_setup_tc_cls_flower(struct enetc_ndev_priv *priv, + struct flow_cls_offload *cls_flower) +{ + switch (cls_flower->command) { + case FLOW_CLS_REPLACE: + return enetc_config_clsflower(priv, cls_flower); + case FLOW_CLS_DESTROY: + return enetc_destroy_clsflower(priv, cls_flower); + case FLOW_CLS_STATS: + return enetc_psfp_get_stats(priv, cls_flower); + default: + return -EOPNOTSUPP; + } +} + +static inline void clean_psfp_sfi_bitmap(void) +{ + bitmap_free(epsfp.psfp_sfi_bitmap); + epsfp.psfp_sfi_bitmap = NULL; +} + +static void clean_stream_list(void) +{ + struct enetc_stream_filter *s; + struct hlist_node *tmp; + + hlist_for_each_entry_safe(s, tmp, &epsfp.stream_list, node) { + hlist_del(&s->node); + kfree(s); + } +} + +static void clean_sfi_list(void) +{ + struct enetc_psfp_filter *sfi; + struct hlist_node *tmp; + + hlist_for_each_entry_safe(sfi, tmp, &epsfp.psfp_filter_list, node) { + hlist_del(&sfi->node); + kfree(sfi); + } +} + +static void clean_sgi_list(void) +{ + struct enetc_psfp_gate *sgi; + struct hlist_node *tmp; + + hlist_for_each_entry_safe(sgi, tmp, &epsfp.psfp_gate_list, node) { + hlist_del(&sgi->node); + kfree(sgi); + } +} + +static void clean_psfp_all(void) +{ + /* Disable all list nodes and free all memory */ + clean_sfi_list(); + clean_sgi_list(); + clean_stream_list(); + epsfp.dev_bitmap = 0; + clean_psfp_sfi_bitmap(); +} + +int enetc_setup_tc_block_cb(enum tc_setup_type type, void *type_data, + void *cb_priv) +{ + struct net_device *ndev = cb_priv; + + if (!tc_can_offload(ndev)) + return -EOPNOTSUPP; + + switch (type) { + case TC_SETUP_CLSFLOWER: + return enetc_setup_tc_cls_flower(netdev_priv(ndev), type_data); + default: + return -EOPNOTSUPP; + } +} + +int enetc_psfp_init(struct enetc_ndev_priv *priv) +{ + if (epsfp.psfp_sfi_bitmap) + return 0; + + epsfp.psfp_sfi_bitmap = bitmap_zalloc(priv->psfp_cap.max_psfp_filter, + GFP_KERNEL); + if (!epsfp.psfp_sfi_bitmap) + return -ENOMEM; + + spin_lock_init(&epsfp.psfp_lock); + + if (list_empty(&enetc_block_cb_list)) + epsfp.dev_bitmap = 0; + + return 0; +} + +int enetc_psfp_clean(struct enetc_ndev_priv *priv) +{ + if (!list_empty(&enetc_block_cb_list)) + return -EBUSY; + + clean_psfp_all(); + + return 0; +} + +int enetc_setup_tc_psfp(struct net_device *ndev, void *type_data) +{ + struct enetc_ndev_priv *priv = netdev_priv(ndev); + struct flow_block_offload *f = type_data; + int err; + + err = flow_block_cb_setup_simple(f, &enetc_block_cb_list, + enetc_setup_tc_block_cb, + ndev, ndev, true); + if (err) + return err; + + switch (f->command) { + case FLOW_BLOCK_BIND: + set_bit(enetc_get_port(priv), &epsfp.dev_bitmap); + break; + case FLOW_BLOCK_UNBIND: + clear_bit(enetc_get_port(priv), &epsfp.dev_bitmap); + if (!epsfp.dev_bitmap) + clean_psfp_all(); + break; + } + + return 0; +} diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h index e74dd1f86bba..a6cdd5b61921 100644 --- a/drivers/net/ethernet/freescale/fec.h +++ b/drivers/net/ethernet/freescale/fec.h @@ -376,8 +376,7 @@ struct bufdesc_ex { #define FEC_ENET_TS_AVAIL ((uint)0x00010000) #define FEC_ENET_TS_TIMER ((uint)0x00008000) -#define FEC_DEFAULT_IMASK (FEC_ENET_TXF | FEC_ENET_RXF | FEC_ENET_MII) -#define FEC_NAPI_IMASK FEC_ENET_MII +#define FEC_DEFAULT_IMASK (FEC_ENET_TXF | FEC_ENET_RXF) #define FEC_RX_DISABLED_IMASK (FEC_DEFAULT_IMASK & (~FEC_ENET_RXF)) /* ENET interrupt coalescing macro define */ @@ -543,7 +542,6 @@ struct fec_enet_private { int link; int full_duplex; int speed; - struct completion mdio_done; int 
irq[FEC_IRQ_NUM]; bool bufdesc_ex; int pause_flag; diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c index dc6f8763a5d4..2d0d313ee7c5 100644 --- a/drivers/net/ethernet/freescale/fec_main.c +++ b/drivers/net/ethernet/freescale/fec_main.c @@ -88,8 +88,6 @@ static void fec_enet_itr_coal_init(struct net_device *ndev); struct fec_devinfo { u32 quirks; - u8 stop_gpr_reg; - u8 stop_gpr_bit; }; static const struct fec_devinfo fec_imx25_info = { @@ -112,8 +110,6 @@ static const struct fec_devinfo fec_imx6q_info = { FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM | FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR006358 | FEC_QUIRK_HAS_RACC, - .stop_gpr_reg = 0x34, - .stop_gpr_bit = 27, }; static const struct fec_devinfo fec_mvf600_info = { @@ -976,8 +972,8 @@ fec_restart(struct net_device *ndev) writel((__force u32)cpu_to_be32(temp_mac[1]), fep->hwp + FEC_ADDR_HIGH); - /* Clear any outstanding interrupt. */ - writel(0xffffffff, fep->hwp + FEC_IEVENT); + /* Clear any outstanding interrupt, except MDIO. */ + writel((0xffffffff & ~FEC_ENET_MII), fep->hwp + FEC_IEVENT); fec_enet_bd_init(ndev); @@ -1123,7 +1119,7 @@ fec_restart(struct net_device *ndev) if (fep->link) writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK); else - writel(FEC_ENET_MII, fep->hwp + FEC_IMASK); + writel(0, fep->hwp + FEC_IMASK); /* Init the interrupt coalescing */ fec_enet_itr_coal_init(ndev); @@ -1652,6 +1648,10 @@ fec_enet_interrupt(int irq, void *dev_id) irqreturn_t ret = IRQ_NONE; int_events = readl(fep->hwp + FEC_IEVENT); + + /* Don't clear MDIO events, we poll for those */ + int_events &= ~FEC_ENET_MII; + writel(int_events, fep->hwp + FEC_IEVENT); fec_enet_collect_events(fep, int_events); @@ -1659,16 +1659,12 @@ fec_enet_interrupt(int irq, void *dev_id) ret = IRQ_HANDLED; if (napi_schedule_prep(&fep->napi)) { - /* Disable the NAPI interrupts */ - writel(FEC_NAPI_IMASK, fep->hwp + FEC_IMASK); + /* Disable interrupts */ + writel(0, fep->hwp + FEC_IMASK); __napi_schedule(&fep->napi); } } - if (int_events & FEC_ENET_MII) { - ret = IRQ_HANDLED; - complete(&fep->mdio_done); - } return ret; } @@ -1818,11 +1814,24 @@ static void fec_enet_adjust_link(struct net_device *ndev) phy_print_status(phy_dev); } +static int fec_enet_mdio_wait(struct fec_enet_private *fep) +{ + uint ievent; + int ret; + + ret = readl_poll_timeout_atomic(fep->hwp + FEC_IEVENT, ievent, + ievent & FEC_ENET_MII, 2, 30000); + + if (!ret) + writel(FEC_ENET_MII, fep->hwp + FEC_IEVENT); + + return ret; +} + static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum) { struct fec_enet_private *fep = bus->priv; struct device *dev = &fep->pdev->dev; - unsigned long time_left; int ret = 0, frame_start, frame_addr, frame_op; bool is_c45 = !!(regnum & MII_ADDR_C45); @@ -1830,8 +1839,6 @@ static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum) if (ret < 0) return ret; - reinit_completion(&fep->mdio_done); - if (is_c45) { frame_start = FEC_MMFR_ST_C45; @@ -1843,11 +1850,9 @@ static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum) fep->hwp + FEC_MII_DATA); /* wait for end of transfer */ - time_left = wait_for_completion_timeout(&fep->mdio_done, - usecs_to_jiffies(FEC_MII_TIMEOUT)); - if (time_left == 0) { + ret = fec_enet_mdio_wait(fep); + if (ret) { netdev_err(fep->netdev, "MDIO address write timeout\n"); - ret = -ETIMEDOUT; goto out; } @@ -1866,11 +1871,9 @@ static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum) FEC_MMFR_TA, fep->hwp + FEC_MII_DATA); /* wait for end of 
transfer */ - time_left = wait_for_completion_timeout(&fep->mdio_done, - usecs_to_jiffies(FEC_MII_TIMEOUT)); - if (time_left == 0) { + ret = fec_enet_mdio_wait(fep); + if (ret) { netdev_err(fep->netdev, "MDIO read timeout\n"); - ret = -ETIMEDOUT; goto out; } @@ -1888,7 +1891,6 @@ static int fec_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum, { struct fec_enet_private *fep = bus->priv; struct device *dev = &fep->pdev->dev; - unsigned long time_left; int ret, frame_start, frame_addr; bool is_c45 = !!(regnum & MII_ADDR_C45); @@ -1898,8 +1900,6 @@ static int fec_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum, else ret = 0; - reinit_completion(&fep->mdio_done); - if (is_c45) { frame_start = FEC_MMFR_ST_C45; @@ -1911,11 +1911,9 @@ static int fec_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum, fep->hwp + FEC_MII_DATA); /* wait for end of transfer */ - time_left = wait_for_completion_timeout(&fep->mdio_done, - usecs_to_jiffies(FEC_MII_TIMEOUT)); - if (time_left == 0) { + ret = fec_enet_mdio_wait(fep); + if (ret) { netdev_err(fep->netdev, "MDIO address write timeout\n"); - ret = -ETIMEDOUT; goto out; } } else { @@ -1931,12 +1929,9 @@ static int fec_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum, fep->hwp + FEC_MII_DATA); /* wait for end of transfer */ - time_left = wait_for_completion_timeout(&fep->mdio_done, - usecs_to_jiffies(FEC_MII_TIMEOUT)); - if (time_left == 0) { + ret = fec_enet_mdio_wait(fep); + if (ret) netdev_err(fep->netdev, "MDIO write timeout\n"); - ret = -ETIMEDOUT; - } out: pm_runtime_mark_last_busy(dev); @@ -1986,8 +1981,12 @@ static int fec_enet_clk_enable(struct net_device *ndev, bool enable) return 0; failed_clk_ref: - if (fep->clk_ref) - clk_disable_unprepare(fep->clk_ref); + if (fep->clk_ptp) { + mutex_lock(&fep->ptp_clk_mutex); + clk_disable_unprepare(fep->clk_ptp); + fep->ptp_clk_on = false; + mutex_unlock(&fep->ptp_clk_mutex); + } failed_clk_ptp: if (fep->clk_enet_out) clk_disable_unprepare(fep->clk_enet_out); @@ -2065,9 +2064,11 @@ static int fec_enet_mii_init(struct platform_device *pdev) static struct mii_bus *fec0_mii_bus; struct net_device *ndev = platform_get_drvdata(pdev); struct fec_enet_private *fep = netdev_priv(ndev); + bool suppress_preamble = false; struct device_node *node; int err = -ENXIO; u32 mii_speed, holdtime; + u32 bus_freq; /* * The i.MX28 dual fec interfaces are not equal. @@ -2095,15 +2096,23 @@ static int fec_enet_mii_init(struct platform_device *pdev) return -ENOENT; } + bus_freq = 2500000; /* 2.5MHz by default */ + node = of_get_child_by_name(pdev->dev.of_node, "mdio"); + if (node) { + of_property_read_u32(node, "clock-frequency", &bus_freq); + suppress_preamble = of_property_read_bool(node, + "suppress-preamble"); + } + /* - * Set MII speed to 2.5 MHz (= clk_get_rate() / 2 * phy_speed) + * Set MII speed (= clk_get_rate() / 2 * phy_speed) * * The formula for FEC MDC is 'ref_freq / (MII_SPEED x 2)' while * for ENET-MAC is 'ref_freq / ((MII_SPEED + 1) x 2)'. The i.MX28 * Reference Manual has an error on this, and gets fixed on i.MX6Q * document. */
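+ /* e.g. a 66 MHz ipg clock with the default 2.5 MHz MDC gives + * DIV_ROUND_UP(66000000, 5000000) = 14; on ENET-MAC, where the + * divisor is (MII_SPEED + 1) * 2, this is then reduced to 13. + */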
- mii_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 5000000); + mii_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), bus_freq * 2); if (fep->quirks & FEC_QUIRK_ENET_MAC) mii_speed--; if (mii_speed > 63) { @@ -2130,8 +2139,24 @@ static int fec_enet_mii_init(struct platform_device *pdev) fep->phy_speed = mii_speed << 1 | holdtime << 8; + if (suppress_preamble) + fep->phy_speed |= BIT(7); + + /* Clear MMFR to avoid generating an MII event when writing MSCR. + * MII event generation condition: + * - writing MSCR: + * - mmfr[31:0]_not_zero & mscr[7:0]_is_zero & + * mscr_reg_data_in[7:0] != 0 + * - writing MMFR: + * - mscr[7:0]_not_zero + */ + writel(0, fep->hwp + FEC_MII_DATA); + writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED); + /* Clear any pending transaction complete indication */ + writel(FEC_ENET_MII, fep->hwp + FEC_IEVENT); + fep->mii_bus = mdiobus_alloc(); if (fep->mii_bus == NULL) { err = -ENOMEM; @@ -2146,7 +2171,6 @@ static int fec_enet_mii_init(struct platform_device *pdev) fep->mii_bus->priv = fep; fep->mii_bus->parent = &pdev->dev; - node = of_get_child_by_name(pdev->dev.of_node, "mdio"); err = of_mdiobus_register(fep->mii_bus, node); of_node_put(node); if (err) @@ -3452,19 +3476,23 @@ static int fec_enet_get_irq_cnt(struct platform_device *pdev) } static int fec_enet_init_stop_mode(struct fec_enet_private *fep, - struct fec_devinfo *dev_info, struct device_node *np) { struct device_node *gpr_np; + u32 out_val[3]; int ret = 0; - if (!dev_info) - return 0; - - gpr_np = of_parse_phandle(np, "gpr", 0); + gpr_np = of_parse_phandle(np, "fsl,stop-mode", 0); if (!gpr_np) return 0; + ret = of_property_read_u32_array(np, "fsl,stop-mode", out_val, + ARRAY_SIZE(out_val)); + if (ret) { + dev_dbg(&fep->pdev->dev, "no stop mode property\n"); + goto out; + } + fep->stop_gpr.gpr = syscon_node_to_regmap(gpr_np); if (IS_ERR(fep->stop_gpr.gpr)) { dev_err(&fep->pdev->dev, "could not find gpr regmap\n"); @@ -3473,8 +3501,8 @@ static int fec_enet_init_stop_mode(struct fec_enet_private *fep, goto out; } - fep->stop_gpr.reg = dev_info->stop_gpr_reg; - fep->stop_gpr.bit = dev_info->stop_gpr_bit; + fep->stop_gpr.reg = out_val[1]; + fep->stop_gpr.bit = out_val[2]; out: of_node_put(gpr_np); @@ -3551,7 +3579,7 @@ fec_probe(struct platform_device *pdev) if (of_get_property(np, "fsl,magic-packet", NULL)) fep->wol_flag |= FEC_WOL_HAS_MAGIC_PACKET; - ret = fec_enet_init_stop_mode(fep, dev_info, np); + ret = fec_enet_init_stop_mode(fep, np); if (ret) goto failed_stop_mode; @@ -3674,7 +3702,6 @@ fec_probe(struct platform_device *pdev) fep->irq[i] = irq; } - init_completion(&fep->mdio_done); ret = fec_enet_mii_init(pdev); if (ret) goto failed_mii_init;
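The two device-tree hooks added above can be exercised together from a board file. An illustrative fragment only, not part of the patch: the 0x34/27 GPR values mirror the i.MX6Q defaults that this series removes from fec_imx6q_info, and clock-frequency/suppress-preamble are the generic MDIO bus properties the new probe code reads.

	&fec {
		/* <phandle of the GPR syscon, register offset, bit> */
		fsl,stop-mode = <&gpr 0x34 27>;

		mdio {
			#address-cells = <1>;
			#size-cells = <0>;
			/* raise MDC above the 2.5 MHz default */
			clock-frequency = <5000000>;
			suppress-preamble;

			ethphy0: ethernet-phy@0 {
				reg = <0>;
			};
		};
	};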