author    | Linus Torvalds <torvalds@linux-foundation.org> | 2021-07-01 00:51:09 +0200
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2021-07-01 00:51:09 +0200
commit    | dbe69e43372212527abf48609aba7fc39a6daa27 (patch)
tree      | 96cfafdf70f5325ceeac1054daf7deca339c9730 /drivers/net/ethernet/qualcomm
parent    | Merge tag 'sched-urgent-2021-06-30' of git://git.kernel.org/pub/scm/linux/ker... (diff)
parent    | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (diff)
download  | linux-dbe69e43372212527abf48609aba7fc39a6daa27.tar.xz | linux-dbe69e43372212527abf48609aba7fc39a6daa27.zip
Merge tag 'net-next-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- BPF:
- add syscall program type and libbpf support for generating
instructions and bindings for in-kernel BPF loaders (BPF loaders
for BPF), this is a stepping stone for signed BPF programs
- infrastructure to migrate TCP child sockets from one listener to
another in the same reuseport group/map to improve flexibility
of service hand-off/restart
- add broadcast support to XDP redirect (program sketch follows the quoted message)
- allow bypass of the lockless qdisc to improve performance (for
pktgen: +23% with one thread, +44% with 2 threads)
- add a simpler version of "DO_ONCE()" which does not require jump
labels, intended for slow-path usage
- virtio/vsock: introduce SOCK_SEQPACKET support
- add getsockopt to retrieve netns cookie (usage example follows the quoted message)
- ip: treat lowest address of an IPv4 subnet as ordinary unicast
address allowing reclaiming of precious IPv4 addresses
- ipv6: use prandom_u32() for ID generation
- ip: add support for more flexible field selection for hashing
across multi-path routes (w/ offload to mlxsw)
- icmp: add support for extended RFC 8335 PROBE (ping)
- seg6: add support for SRv6 End.DT46 behavior
- mptcp:
- DSS checksum support (RFC 8684) to detect middlebox meddling
- support Connection-time 'C' flag
- time stamping support
- sctp: Packetization Layer Path MTU Discovery (RFC 8899)
- xfrm: speed up state addition with seq set
- WiFi:
- hidden AP discovery on 6 GHz and other HE 6 GHz improvements
- aggregation handling improvements for some drivers
- minstrel improvements for no-ack frames
- deferred rate control for TXQs to improve reaction times
- switch from round robin to virtual time-based airtime scheduler
- add trace points:
- tcp checksum errors
- openvswitch - action execution, upcalls
- socket errors via sk_error_report
Device APIs:
- devlink: add rate API for hierarchical control of max egress rate
of virtual devices (VFs, SFs etc.)
- don't require RCU read lock to be held around BPF hooks in NAPI
context
- page_pool: generic buffer recycling
New hardware/drivers:
- mobile:
- iosm: PCIe Driver for Intel M.2 Modem
- support for Qualcomm MSM8998 (ipa)
- WiFi: Qualcomm QCN9074 and WCN6855 PCI devices
- sparx5: Microchip SparX-5 family of Enterprise Ethernet switches
- Mellanox BlueField Gigabit Ethernet (control NIC of the DPU)
- NXP SJA1110 Automotive Ethernet 10-port switch
- Qualcomm QCA8327 switch support (qca8k)
- Mikrotik 10/25G NIC (atl1c)
Driver changes:
- ACPI support for some MDIO, MAC and PHY devices from Marvell and
NXP (our first foray into MAC/PHY description via ACPI)
- HW timestamping (PTP) support: bnxt_en, ice, sja1105, hns3, tja11xx
- Mellanox/Nvidia NIC (mlx5)
- NIC VF offload of L2 bridging
- support IRQ distribution to Sub-functions
- Marvell (prestera):
- add flower and match all
- devlink trap
- link aggregation
- Netronome (nfp): connection tracking offload
- Intel 1GE (igc): add AF_XDP support
- Marvell DPU (octeontx2): ingress ratelimit offload
- Google vNIC (gve): new ring/descriptor format support
- Qualcomm mobile (rmnet & ipa): inline checksum offload support
- MediaTek WiFi (mt76)
- mt7915 MSI support
- mt7915 Tx status reporting
- mt7915 thermal sensors support
- mt7921 decapsulation offload
- mt7921 enable runtime pm and deep sleep
- Realtek WiFi (rtw88)
- beacon filter support
- Tx antenna path diversity support
- firmware crash information via devcoredump
- Qualcomm WiFi (wcn36xx)
- Wake-on-WLAN support with magic packets and GTK rekeying
- Micrel PHY (ksz886x/ksz8081): add cable test support"
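The XDP redirect broadcast item above refers to the BPF_F_BROADCAST and BPF_F_EXCLUDE_INGRESS flags accepted by bpf_redirect_map() for DEVMAP maps in this cycle. A minimal sketch under that assumption — the map name, its size, and the section name are illustrative, not taken from this merge:

// SPDX-License-Identifier: GPL-2.0
/* Sketch: clone an incoming frame to every interface in a DEVMAP,
 * excluding the ingress interface, using the broadcast redirect flags.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
	__uint(max_entries, 32);		/* populated from userspace */
} forward_map SEC(".maps");

SEC("xdp")
int xdp_broadcast(struct xdp_md *ctx)
{
	/* The key argument is ignored when BPF_F_BROADCAST is set. */
	return bpf_redirect_map(&forward_map, 0,
				BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS);
}

char _license[] SEC("license") = "GPL";

Attach the object to an interface with a loader of your choice (for example ip link set dev ethX xdpgeneric obj prog.o sec xdp) after filling forward_map with the target ifindexes.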
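The netns cookie item above refers to reading SO_NETNS_COOKIE with getsockopt() at the SOL_SOCKET level. A minimal userspace sketch, assuming a kernel with this option (the fallback value 71 is taken from asm-generic/socket.h and should be verified against your tree):

#include <stdio.h>
#include <stdint.h>
#include <sys/socket.h>

#ifndef SO_NETNS_COOKIE
#define SO_NETNS_COOKIE 71	/* assumption: value from asm-generic/socket.h */
#endif

int main(void)
{
	uint64_t cookie = 0;
	socklen_t len = sizeof(cookie);
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	/* The cookie identifies the network namespace the socket lives in. */
	if (fd < 0 || getsockopt(fd, SOL_SOCKET, SO_NETNS_COOKIE, &cookie, &len))
		return 1;

	printf("netns cookie: %llu\n", (unsigned long long)cookie);
	return 0;
}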
* tag 'net-next-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2168 commits)
tcp: change ICSK_CA_PRIV_SIZE definition
tcp_yeah: check struct yeah size at compile time
gve: DQO: Fix off by one in gve_rx_dqo()
stmmac: intel: set PCI_D3hot in suspend
stmmac: intel: Enable PHY WOL option in EHL
net: stmmac: option to enable PHY WOL with PMT enabled
net: say "local" instead of "static" addresses in ndo_dflt_fdb_{add,del}
net: use netdev_info in ndo_dflt_fdb_{add,del}
ptp: Set lookup cookie when creating a PTP PPS source.
net: sock: add trace for socket errors
net: sock: introduce sk_error_report
net: dsa: replay the local bridge FDB entries pointing to the bridge dev too
net: dsa: ensure during dsa_fdb_offload_notify that dev_hold and dev_put are on the same dev
net: dsa: include fdb entries pointing to bridge in the host fdb list
net: dsa: include bridge addresses which are local in the host fdb list
net: dsa: sync static FDB entries on foreign interfaces to hardware
net: dsa: install the host MDB and FDB entries in the master's RX filter
net: dsa: reference count the FDB addresses at the cross-chip notifier level
net: dsa: introduce a separate cross-chip notifier type for host FDBs
net: dsa: reference count the MDB entries at the cross-chip notifier level
...
Diffstat (limited to 'drivers/net/ethernet/qualcomm')
-rw-r--r-- | drivers/net/ethernet/qualcomm/qca_debug.c            |   1
-rw-r--r-- | drivers/net/ethernet/qualcomm/qca_spi.c              |  10
-rw-r--r-- | drivers/net/ethernet/qualcomm/qca_spi.h              |   1
-rw-r--r-- | drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c   |   6
-rw-r--r-- | drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h   |   5
-rw-r--r-- | drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c |  43
-rw-r--r-- | drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h      |  11
-rw-r--r-- | drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c | 434
-rw-r--r-- | drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c      |   2
9 files changed, 326 insertions, 187 deletions
diff --git a/drivers/net/ethernet/qualcomm/qca_debug.c b/drivers/net/ethernet/qualcomm/qca_debug.c
index 702aa217a27a..d59fff2fbcc6 100644
--- a/drivers/net/ethernet/qualcomm/qca_debug.c
+++ b/drivers/net/ethernet/qualcomm/qca_debug.c
@@ -62,6 +62,7 @@ static const char qcaspi_gstrings_stats[][ETH_GSTRING_LEN] = {
 	"SPI errors",
 	"Write verify errors",
 	"Buffer available errors",
+	"Bad signature",
 };
 
 #ifdef CONFIG_DEBUG_FS
diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
index 0a6b8112b535..b64c254e00ba 100644
--- a/drivers/net/ethernet/qualcomm/qca_spi.c
+++ b/drivers/net/ethernet/qualcomm/qca_spi.c
@@ -504,8 +504,12 @@ qcaspi_qca7k_sync(struct qcaspi *qca, int event)
 		qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
 		qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
 		if (signature != QCASPI_GOOD_SIGNATURE) {
+			if (qca->sync == QCASPI_SYNC_READY)
+				qca->stats.bad_signature++;
+
 			qca->sync = QCASPI_SYNC_UNKNOWN;
 			netdev_dbg(qca->net_dev, "sync: got CPU on, but signature was invalid, restart\n");
+			return;
 		} else {
 			/* ensure that the WRBUF is empty */
 			qcaspi_read_register(qca, SPI_REG_WRBUF_SPC_AVA,
@@ -523,10 +527,14 @@ qcaspi_qca7k_sync(struct qcaspi *qca, int event)
 
 	switch (qca->sync) {
 	case QCASPI_SYNC_READY:
-		/* Read signature, if not valid go to unknown state. */
+		/* Check signature twice, if not valid go to unknown state. */
 		qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+		if (signature != QCASPI_GOOD_SIGNATURE)
+			qcaspi_read_register(qca, SPI_REG_SIGNATURE, &signature);
+
 		if (signature != QCASPI_GOOD_SIGNATURE) {
 			qca->sync = QCASPI_SYNC_UNKNOWN;
+			qca->stats.bad_signature++;
 			netdev_dbg(qca->net_dev, "sync: bad signature, restart\n");
 			/* don't reset right away */
 			return;
diff --git a/drivers/net/ethernet/qualcomm/qca_spi.h b/drivers/net/ethernet/qualcomm/qca_spi.h
index d13a67e20d65..3067356106f0 100644
--- a/drivers/net/ethernet/qualcomm/qca_spi.h
+++ b/drivers/net/ethernet/qualcomm/qca_spi.h
@@ -75,6 +75,7 @@ struct qcaspi_stats {
 	u64 spi_err;
 	u64 write_verify_failed;
 	u64 buf_avail_err;
+	u64 bad_signature;
 };
 
 struct qcaspi {
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
index 8d51b0cb545c..27b1663c476e 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
@@ -163,7 +163,8 @@ static int rmnet_newlink(struct net *src_net, struct net_device *dev,
 		struct ifla_rmnet_flags *flags;
 
 		flags = nla_data(data[IFLA_RMNET_FLAGS]);
-		data_format = flags->flags & flags->mask;
+		data_format &= ~flags->mask;
+		data_format |= flags->flags & flags->mask;
 	}
 
 	netdev_dbg(dev, "data format [0x%08X]\n", data_format);
@@ -336,7 +337,8 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[],
 
 		old_data_format = port->data_format;
 		flags = nla_data(data[IFLA_RMNET_FLAGS]);
-		port->data_format = flags->flags & flags->mask;
+		port->data_format &= ~flags->mask;
+		port->data_format |= flags->flags & flags->mask;
 
 		if (rmnet_vnd_update_dev_mtu(port, real_dev)) {
 			port->data_format = old_data_format;
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
index 8d8d4690a074..3d3cba56c516 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
-/* Copyright (c) 2013-2014, 2016-2018 The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013-2014, 2016-2018, 2021 The Linux Foundation.
+ * All rights reserved.
  *
  * RMNET Data configuration engine
  */
@@ -48,6 +49,7 @@ struct rmnet_pcpu_stats {
 
 struct rmnet_priv_stats {
 	u64 csum_ok;
+	u64 csum_ip4_header_bad;
 	u64 csum_valid_unset;
 	u64 csum_validation_failed;
 	u64 csum_err_bad_buffer;
@@ -56,6 +58,7 @@ struct rmnet_priv_stats {
 	u64 csum_fragmented_pkt;
 	u64 csum_skipped;
 	u64 csum_sw;
+	u64 csum_hw;
 };
 
 struct rmnet_priv {
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index 0be5ac7ab261..bfbd7847f946 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0-only
-/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013-2018, 2021, The Linux Foundation. All rights reserved.
  *
  * RMNET Data ingress/egress handler
  */
@@ -82,12 +82,18 @@ __rmnet_map_ingress_handler(struct sk_buff *skb,
 
 	skb->dev = ep->egress_dev;
 
-	/* Subtract MAP header */
-	skb_pull(skb, sizeof(struct rmnet_map_header));
-	rmnet_set_skb_proto(skb);
-
-	if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV4) {
-		if (!rmnet_map_checksum_downlink_packet(skb, len + pad))
+	if ((port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV5) &&
+	    (map_header->flags & MAP_NEXT_HEADER_FLAG)) {
+		if (rmnet_map_process_next_hdr_packet(skb, len))
+			goto free_skb;
+		skb_pull(skb, sizeof(*map_header));
+		rmnet_set_skb_proto(skb);
+	} else {
+		/* Subtract MAP header */
+		skb_pull(skb, sizeof(*map_header));
+		rmnet_set_skb_proto(skb);
+		if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV4 &&
+		    !rmnet_map_checksum_downlink_packet(skb, len + pad))
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 	}
 
@@ -128,7 +134,7 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
 				    struct rmnet_port *port, u8 mux_id,
 				    struct net_device *orig_dev)
 {
-	int required_headroom, additional_header_len;
+	int required_headroom, additional_header_len, csum_type = 0;
 	struct rmnet_map_header *map_header;
 
 	additional_header_len = 0;
@@ -136,18 +142,23 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
 
 	if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV4) {
 		additional_header_len = sizeof(struct rmnet_map_ul_csum_header);
-		required_headroom += additional_header_len;
+		csum_type = RMNET_FLAGS_EGRESS_MAP_CKSUMV4;
+	} else if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV5) {
+		additional_header_len = sizeof(struct rmnet_map_v5_csum_header);
+		csum_type = RMNET_FLAGS_EGRESS_MAP_CKSUMV5;
 	}
 
-	if (skb_headroom(skb) < required_headroom) {
-		if (pskb_expand_head(skb, required_headroom, 0, GFP_ATOMIC))
-			return -ENOMEM;
-	}
+	required_headroom += additional_header_len;
+
+	if (skb_cow_head(skb, required_headroom) < 0)
+		return -ENOMEM;
 
-	if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV4)
-		rmnet_map_checksum_uplink_packet(skb, orig_dev);
+	if (csum_type)
+		rmnet_map_checksum_uplink_packet(skb, port, orig_dev,
+						 csum_type);
 
-	map_header = rmnet_map_add_map_header(skb, additional_header_len, 0);
+	map_header = rmnet_map_add_map_header(skb, additional_header_len,
+					      port, 0);
 	if (!map_header)
 		return -ENOMEM;
 
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index 2aea153f4247..e5a0b38f7dbe 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
-/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013-2018, 2021, The Linux Foundation. All rights reserved.
  */
 
 #ifndef _RMNET_MAP_H_
@@ -43,10 +43,15 @@ enum rmnet_map_commands {
 struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
 				      struct rmnet_port *port);
 struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
-						  int hdrlen, int pad);
+						  int hdrlen,
+						  struct rmnet_port *port,
+						  int pad);
 void rmnet_map_command(struct sk_buff *skb, struct rmnet_port *port);
 int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len);
 void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
-				      struct net_device *orig_dev);
+				      struct rmnet_port *port,
+				      struct net_device *orig_dev,
+				      int csum_type);
+int rmnet_map_process_next_hdr_packet(struct sk_buff *skb, u16 len);
 
 #endif /* _RMNET_MAP_H_ */
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
index 0ac2ff828320..3676976c875b 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0-only
-/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013-2018, 2021, The Linux Foundation. All rights reserved.
  *
  * RMNET Data MAP protocol
  */
@@ -8,6 +8,7 @@
 #include <linux/ip.h>
 #include <linux/ipv6.h>
 #include <net/ip6_checksum.h>
+#include <linux/bitfield.h>
 #include "rmnet_config.h"
 #include "rmnet_map.h"
 #include "rmnet_private.h"
@@ -18,23 +19,13 @@
 static __sum16 *rmnet_map_get_csum_field(unsigned char protocol,
 					 const void *txporthdr)
 {
-	__sum16 *check = NULL;
+	if (protocol == IPPROTO_TCP)
+		return &((struct tcphdr *)txporthdr)->check;
 
-	switch (protocol) {
-	case IPPROTO_TCP:
-		check = &(((struct tcphdr *)txporthdr)->check);
-		break;
-
-	case IPPROTO_UDP:
-		check = &(((struct udphdr *)txporthdr)->check);
-		break;
+	if (protocol == IPPROTO_UDP)
+		return &((struct udphdr *)txporthdr)->check;
 
-	default:
-		check = NULL;
-		break;
-	}
-
-	return check;
+	return NULL;
 }
 
 static int
@@ -42,71 +33,74 @@ rmnet_map_ipv4_dl_csum_trailer(struct sk_buff *skb,
 			       struct rmnet_map_dl_csum_trailer *csum_trailer,
 			       struct rmnet_priv *priv)
 {
-	__sum16 *csum_field, csum_temp, pseudo_csum, hdr_csum, ip_payload_csum;
-	u16 csum_value, csum_value_final;
-	struct iphdr *ip4h;
-	void *txporthdr;
-	__be16 addend;
-
-	ip4h = (struct iphdr *)(skb->data);
-	if ((ntohs(ip4h->frag_off) & IP_MF) ||
-	    ((ntohs(ip4h->frag_off) & IP_OFFSET) > 0)) {
+	struct iphdr *ip4h = (struct iphdr *)skb->data;
+	void *txporthdr = skb->data + ip4h->ihl * 4;
+	__sum16 *csum_field, pseudo_csum;
+	__sum16 ip_payload_csum;
+
+	/* Computing the checksum over just the IPv4 header--including its
+	 * checksum field--should yield 0.  If it doesn't, the IP header
+	 * is bad, so return an error and let the IP layer drop it.
+	 */
+	if (ip_fast_csum(ip4h, ip4h->ihl)) {
+		priv->stats.csum_ip4_header_bad++;
+		return -EINVAL;
+	}
+
+	/* We don't support checksum offload on IPv4 fragments */
+	if (ip_is_fragment(ip4h)) {
 		priv->stats.csum_fragmented_pkt++;
 		return -EOPNOTSUPP;
 	}
 
-	txporthdr = skb->data + ip4h->ihl * 4;
-
+	/* Checksum offload is only supported for UDP and TCP protocols */
 	csum_field = rmnet_map_get_csum_field(ip4h->protocol, txporthdr);
-
 	if (!csum_field) {
 		priv->stats.csum_err_invalid_transport++;
 		return -EPROTONOSUPPORT;
 	}
 
-	/* RFC 768 - Skip IPv4 UDP packets where sender checksum field is 0 */
-	if (*csum_field == 0 && ip4h->protocol == IPPROTO_UDP) {
+	/* RFC 768: UDP checksum is optional for IPv4, and is 0 if unused */
+	if (!*csum_field && ip4h->protocol == IPPROTO_UDP) {
 		priv->stats.csum_skipped++;
 		return 0;
 	}
 
-	csum_value = ~ntohs(csum_trailer->csum_value);
-	hdr_csum = ~ip_fast_csum(ip4h, (int)ip4h->ihl);
-	ip_payload_csum = csum16_sub((__force __sum16)csum_value,
-				     (__force __be16)hdr_csum);
-
-	pseudo_csum = ~csum_tcpudp_magic(ip4h->saddr, ip4h->daddr,
-					 ntohs(ip4h->tot_len) - ip4h->ihl * 4,
-					 ip4h->protocol, 0);
-	addend = (__force __be16)ntohs((__force __be16)pseudo_csum);
-	pseudo_csum = csum16_add(ip_payload_csum, addend);
-
-	addend = (__force __be16)ntohs((__force __be16)*csum_field);
-	csum_temp = ~csum16_sub(pseudo_csum, addend);
-	csum_value_final = (__force u16)csum_temp;
-
-	if (unlikely(csum_value_final == 0)) {
-		switch (ip4h->protocol) {
-		case IPPROTO_UDP:
-			/* RFC 768 - DL4 1's complement rule for UDP csum 0 */
-			csum_value_final = ~csum_value_final;
-			break;
-
-		case IPPROTO_TCP:
-			/* DL4 Non-RFC compliant TCP checksum found */
-			if (*csum_field == (__force __sum16)0xFFFF)
-				csum_value_final = ~csum_value_final;
-			break;
-		}
-	}
-
-	if (csum_value_final == ntohs((__force __be16)*csum_field)) {
-		priv->stats.csum_ok++;
-		return 0;
-	} else {
+	/* The checksum value in the trailer is computed over the entire
+	 * IP packet, including the IP header and payload.  To derive the
+	 * transport checksum from this, we first subract the contribution
+	 * of the IP header from the trailer checksum.  We then add the
+	 * checksum computed over the pseudo header.
+	 *
+	 * We verified above that the IP header contributes zero to the
+	 * trailer checksum.  Therefore the checksum in the trailer is
+	 * just the checksum computed over the IP payload.
+
+	 * If the IP payload arrives intact, adding the pseudo header
+	 * checksum to the IP payload checksum will yield 0xffff (negative
+	 * zero).  This means the trailer checksum and the pseudo checksum
+	 * are additive inverses of each other.  Put another way, the
+	 * message passes the checksum test if the trailer checksum value
+	 * is the negated pseudo header checksum.
+	 *
+	 * Knowing this, we don't even need to examine the transport
+	 * header checksum value; it is already accounted for in the
+	 * checksum value found in the trailer.
+	 */
+	ip_payload_csum = csum_trailer->csum_value;
+
+	pseudo_csum = csum_tcpudp_magic(ip4h->saddr, ip4h->daddr,
+					ntohs(ip4h->tot_len) - ip4h->ihl * 4,
+					ip4h->protocol, 0);
+
+	/* The cast is required to ensure only the low 16 bits are examined */
+	if (ip_payload_csum != (__sum16)~pseudo_csum) {
 		priv->stats.csum_validation_failed++;
 		return -EINVAL;
 	}
+
+	priv->stats.csum_ok++;
+	return 0;
 }
 
 #if IS_ENABLED(CONFIG_IPV6)
@@ -115,76 +109,66 @@ rmnet_map_ipv6_dl_csum_trailer(struct sk_buff *skb,
 			       struct rmnet_map_dl_csum_trailer *csum_trailer,
 			       struct rmnet_priv *priv)
 {
-	__sum16 *csum_field, ip6_payload_csum, pseudo_csum, csum_temp;
-	u16 csum_value, csum_value_final;
-	__be16 ip6_hdr_csum, addend;
-	struct ipv6hdr *ip6h;
-	void *txporthdr;
-	u32 length;
-
-	ip6h = (struct ipv6hdr *)(skb->data);
-
-	txporthdr = skb->data + sizeof(struct ipv6hdr);
+	struct ipv6hdr *ip6h = (struct ipv6hdr *)skb->data;
+	void *txporthdr = skb->data + sizeof(*ip6h);
+	__sum16 *csum_field, pseudo_csum;
+	__sum16 ip6_payload_csum;
+	__be16 ip_header_csum;
+
+	/* Checksum offload is only supported for UDP and TCP protocols;
+	 * the packet cannot include any IPv6 extension headers
+	 */
 	csum_field = rmnet_map_get_csum_field(ip6h->nexthdr, txporthdr);
-
 	if (!csum_field) {
 		priv->stats.csum_err_invalid_transport++;
 		return -EPROTONOSUPPORT;
 	}
 
-	csum_value = ~ntohs(csum_trailer->csum_value);
-	ip6_hdr_csum = (__force __be16)
-			~ntohs((__force __be16)ip_compute_csum(ip6h,
-			       (int)(txporthdr - (void *)(skb->data))));
-	ip6_payload_csum = csum16_sub((__force __sum16)csum_value,
-				      ip6_hdr_csum);
-
-	length = (ip6h->nexthdr == IPPROTO_UDP) ?
-		 ntohs(((struct udphdr *)txporthdr)->len) :
-		 ntohs(ip6h->payload_len);
-	pseudo_csum = ~(csum_ipv6_magic(&ip6h->saddr, &ip6h->daddr,
-			     length, ip6h->nexthdr, 0));
-	addend = (__force __be16)ntohs((__force __be16)pseudo_csum);
-	pseudo_csum = csum16_add(ip6_payload_csum, addend);
-
-	addend = (__force __be16)ntohs((__force __be16)*csum_field);
-	csum_temp = ~csum16_sub(pseudo_csum, addend);
-	csum_value_final = (__force u16)csum_temp;
-
-	if (unlikely(csum_value_final == 0)) {
-		switch (ip6h->nexthdr) {
-		case IPPROTO_UDP:
-			/* RFC 2460 section 8.1
-			 * DL6 One's complement rule for UDP checksum 0
-			 */
-			csum_value_final = ~csum_value_final;
-			break;
-
-		case IPPROTO_TCP:
-			/* DL6 Non-RFC compliant TCP checksum found */
-			if (*csum_field == (__force __sum16)0xFFFF)
-				csum_value_final = ~csum_value_final;
-			break;
-		}
-	}
-
-	if (csum_value_final == ntohs((__force __be16)*csum_field)) {
-		priv->stats.csum_ok++;
-		return 0;
-	} else {
+	/* The checksum value in the trailer is computed over the entire
+	 * IP packet, including the IP header and payload.  To derive the
+	 * transport checksum from this, we first subract the contribution
+	 * of the IP header from the trailer checksum.  We then add the
+	 * checksum computed over the pseudo header.
+	 */
+	ip_header_csum = (__force __be16)ip_fast_csum(ip6h, sizeof(*ip6h) / 4);
+	ip6_payload_csum = csum16_sub(csum_trailer->csum_value, ip_header_csum);
+
+	pseudo_csum = csum_ipv6_magic(&ip6h->saddr, &ip6h->daddr,
+				      ntohs(ip6h->payload_len),
+				      ip6h->nexthdr, 0);
+
+	/* It's sufficient to compare the IP payload checksum with the
+	 * negated pseudo checksum to determine whether the packet
+	 * checksum was good.  (See further explanation in comments
+	 * in rmnet_map_ipv4_dl_csum_trailer()).
+	 *
+	 * The cast is required to ensure only the low 16 bits are
+	 * examined.
+	 */
+	if (ip6_payload_csum != (__sum16)~pseudo_csum) {
 		priv->stats.csum_validation_failed++;
 		return -EINVAL;
 	}
+
+	priv->stats.csum_ok++;
+	return 0;
+}
+#else
+static int
+rmnet_map_ipv6_dl_csum_trailer(struct sk_buff *skb,
+			       struct rmnet_map_dl_csum_trailer *csum_trailer,
+			       struct rmnet_priv *priv)
+{
+	return 0;
 }
 #endif
 
-static void rmnet_map_complement_ipv4_txporthdr_csum_field(void *iphdr)
+static void rmnet_map_complement_ipv4_txporthdr_csum_field(struct iphdr *ip4h)
 {
-	struct iphdr *ip4h = (struct iphdr *)iphdr;
 	void *txphdr;
 	u16 *csum;
 
-	txphdr = iphdr + ip4h->ihl * 4;
+	txphdr = (void *)ip4h + ip4h->ihl * 4;
 
 	if (ip4h->protocol == IPPROTO_TCP || ip4h->protocol == IPPROTO_UDP) {
 		csum = (u16 *)rmnet_map_get_csum_field(ip4h->protocol, txphdr);
@@ -193,15 +177,14 @@ static void rmnet_map_complement_ipv4_txporthdr_csum_field(void *iphdr)
 }
 
 static void
-rmnet_map_ipv4_ul_csum_header(void *iphdr,
+rmnet_map_ipv4_ul_csum_header(struct iphdr *iphdr,
 			      struct rmnet_map_ul_csum_header *ul_header,
 			      struct sk_buff *skb)
 {
-	struct iphdr *ip4h = iphdr;
 	u16 val;
 
 	val = MAP_CSUM_UL_ENABLED_FLAG;
-	if (ip4h->protocol == IPPROTO_UDP)
+	if (iphdr->protocol == IPPROTO_UDP)
 		val |= MAP_CSUM_UL_UDP_FLAG;
 	val |= skb->csum_offset & MAP_CSUM_UL_OFFSET_MASK;
 
@@ -214,13 +197,13 @@ rmnet_map_ipv4_ul_csum_header(void *iphdr,
 }
 
 #if IS_ENABLED(CONFIG_IPV6)
-static void rmnet_map_complement_ipv6_txporthdr_csum_field(void *ip6hdr)
+static void
+rmnet_map_complement_ipv6_txporthdr_csum_field(struct ipv6hdr *ip6h)
 {
-	struct ipv6hdr *ip6h = (struct ipv6hdr *)ip6hdr;
 	void *txphdr;
 	u16 *csum;
 
-	txphdr = ip6hdr + sizeof(struct ipv6hdr);
+	txphdr = ip6h + 1;
 
 	if (ip6h->nexthdr == IPPROTO_TCP || ip6h->nexthdr == IPPROTO_UDP) {
 		csum = (u16 *)rmnet_map_get_csum_field(ip6h->nexthdr, txphdr);
@@ -229,15 +212,14 @@ static void rmnet_map_complement_ipv6_txporthdr_csum_field(void *ip6hdr)
 }
 
 static void
-rmnet_map_ipv6_ul_csum_header(void *ip6hdr,
+rmnet_map_ipv6_ul_csum_header(struct ipv6hdr *ipv6hdr,
 			      struct rmnet_map_ul_csum_header *ul_header,
 			      struct sk_buff *skb)
 {
-	struct ipv6hdr *ip6h = ip6hdr;
 	u16 val;
 
 	val = MAP_CSUM_UL_ENABLED_FLAG;
-	if (ip6h->nexthdr == IPPROTO_UDP)
+	if (ipv6hdr->nexthdr == IPPROTO_UDP)
 		val |= MAP_CSUM_UL_UDP_FLAG;
 	val |= skb->csum_offset & MAP_CSUM_UL_OFFSET_MASK;
 
@@ -246,16 +228,73 @@ rmnet_map_ipv6_ul_csum_header(void *ip6hdr,
 
 	skb->ip_summed = CHECKSUM_NONE;
 
-	rmnet_map_complement_ipv6_txporthdr_csum_field(ip6hdr);
+	rmnet_map_complement_ipv6_txporthdr_csum_field(ipv6hdr);
+}
+#else
+static void
+rmnet_map_ipv6_ul_csum_header(void *ip6hdr,
+			      struct rmnet_map_ul_csum_header *ul_header,
+			      struct sk_buff *skb)
+{
+}
 #endif
 
+static void rmnet_map_v5_checksum_uplink_packet(struct sk_buff *skb,
+						struct rmnet_port *port,
+						struct net_device *orig_dev)
+{
+	struct rmnet_priv *priv = netdev_priv(orig_dev);
+	struct rmnet_map_v5_csum_header *ul_header;
+
+	ul_header = skb_push(skb, sizeof(*ul_header));
+	memset(ul_header, 0, sizeof(*ul_header));
+	ul_header->header_info = u8_encode_bits(RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD,
+						MAPV5_HDRINFO_HDR_TYPE_FMASK);
+
+	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+		void *iph = ip_hdr(skb);
+		__sum16 *check;
+		void *trans;
+		u8 proto;
+
+		if (skb->protocol == htons(ETH_P_IP)) {
+			u16 ip_len = ((struct iphdr *)iph)->ihl * 4;
+
+			proto = ((struct iphdr *)iph)->protocol;
+			trans = iph + ip_len;
+		} else if (IS_ENABLED(CONFIG_IPV6) &&
+			   skb->protocol == htons(ETH_P_IPV6)) {
+			u16 ip_len = sizeof(struct ipv6hdr);
+
+			proto = ((struct ipv6hdr *)iph)->nexthdr;
+			trans = iph + ip_len;
+		} else {
+			priv->stats.csum_err_invalid_ip_version++;
+			goto sw_csum;
+		}
+
+		check = rmnet_map_get_csum_field(proto, trans);
+		if (check) {
+			skb->ip_summed = CHECKSUM_NONE;
+			/* Ask for checksum offloading */
+			ul_header->csum_info |= MAPV5_CSUMINFO_VALID_FLAG;
+			priv->stats.csum_hw++;
+			return;
+		}
+	}
+
+sw_csum:
+	priv->stats.csum_sw++;
+}
+
 /* Adds MAP header to front of skb->data
  * Padding is calculated and set appropriately in MAP header. Mux ID is
  * initialized to 0.
  */
 struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
-						  int hdrlen, int pad)
+						  int hdrlen,
+						  struct rmnet_port *port,
+						  int pad)
 {
 	struct rmnet_map_header *map_header;
 	u32 padding, map_datalen;
@@ -266,6 +305,10 @@ struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
 			skb_push(skb, sizeof(struct rmnet_map_header));
 	memset(map_header, 0, sizeof(struct rmnet_map_header));
 
+	/* Set next_hdr bit for csum offload packets */
+	if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV5)
+		map_header->flags |= MAP_NEXT_HEADER_FLAG;
+
 	if (pad == RMNET_MAP_NO_PAD_BYTES) {
 		map_header->pkt_len = htons(map_datalen);
 		return map_header;
@@ -300,8 +343,11 @@ done:
 struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
 				      struct rmnet_port *port)
 {
+	struct rmnet_map_v5_csum_header *next_hdr = NULL;
 	struct rmnet_map_header *maph;
+	void *data = skb->data;
 	struct sk_buff *skbn;
+	u8 nexthdr_type;
 	u32 packet_len;
 
 	if (skb->len == 0)
@@ -310,8 +356,18 @@ struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
 	maph = (struct rmnet_map_header *)skb->data;
 	packet_len = ntohs(maph->pkt_len) + sizeof(*maph);
 
-	if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV4)
+	if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV4) {
 		packet_len += sizeof(struct rmnet_map_dl_csum_trailer);
+	} else if (port->data_format & RMNET_FLAGS_INGRESS_MAP_CKSUMV5) {
+		if (!(maph->flags & MAP_CMD_FLAG)) {
+			packet_len += sizeof(*next_hdr);
+			if (maph->flags & MAP_NEXT_HEADER_FLAG)
+				next_hdr = data + sizeof(*maph);
+			else
+				/* Mapv5 data pkt without csum hdr is invalid */
+				return NULL;
+		}
+	}
 
 	if (((int)skb->len - (int)packet_len) < 0)
 		return NULL;
@@ -320,6 +376,13 @@ struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
 	if (!maph->pkt_len)
 		return NULL;
 
+	if (next_hdr) {
+		nexthdr_type = u8_get_bits(next_hdr->header_info,
+					   MAPV5_HDRINFO_HDR_TYPE_FMASK);
+		if (nexthdr_type != RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD)
+			return NULL;
+	}
+
 	skbn = alloc_skb(packet_len + RMNET_MAP_DEAGGR_SPACING, GFP_ATOMIC);
 	if (!skbn)
 		return NULL;
@@ -355,28 +418,19 @@ int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len)
 		return -EINVAL;
 	}
 
-	if (skb->protocol == htons(ETH_P_IP)) {
+	if (skb->protocol == htons(ETH_P_IP))
 		return rmnet_map_ipv4_dl_csum_trailer(skb, csum_trailer, priv);
-	} else if (skb->protocol == htons(ETH_P_IPV6)) {
-#if IS_ENABLED(CONFIG_IPV6)
+
+	if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6))
 		return rmnet_map_ipv6_dl_csum_trailer(skb, csum_trailer, priv);
-#else
-		priv->stats.csum_err_invalid_ip_version++;
-		return -EPROTONOSUPPORT;
-#endif
-	} else {
-		priv->stats.csum_err_invalid_ip_version++;
-		return -EPROTONOSUPPORT;
-	}
 
-	return 0;
+	priv->stats.csum_err_invalid_ip_version++;
+
+	return -EPROTONOSUPPORT;
 }
 
-/* Generates UL checksum meta info header for IPv4 and IPv6 over TCP and UDP
- * packets that are supported for UL checksum offload.
- */
-void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
-				      struct net_device *orig_dev)
+static void rmnet_map_v4_checksum_uplink_packet(struct sk_buff *skb,
						struct net_device *orig_dev)
 {
 	struct rmnet_priv *priv = netdev_priv(orig_dev);
 	struct rmnet_map_ul_csum_header *ul_header;
@@ -389,28 +443,80 @@ void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
 						      (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM))))
 		goto sw_csum;
 
-	if (skb->ip_summed == CHECKSUM_PARTIAL) {
-		iphdr = (char *)ul_header +
-			sizeof(struct rmnet_map_ul_csum_header);
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		goto sw_csum;
 
-		if (skb->protocol == htons(ETH_P_IP)) {
-			rmnet_map_ipv4_ul_csum_header(iphdr, ul_header, skb);
-			return;
-		} else if (skb->protocol == htons(ETH_P_IPV6)) {
-#if IS_ENABLED(CONFIG_IPV6)
-			rmnet_map_ipv6_ul_csum_header(iphdr, ul_header, skb);
-			return;
-#else
-			priv->stats.csum_err_invalid_ip_version++;
-			goto sw_csum;
-#endif
-		} else {
-			priv->stats.csum_err_invalid_ip_version++;
-		}
+	iphdr = (char *)ul_header +
+		sizeof(struct rmnet_map_ul_csum_header);
+
+	if (skb->protocol == htons(ETH_P_IP)) {
+		rmnet_map_ipv4_ul_csum_header(iphdr, ul_header, skb);
+		priv->stats.csum_hw++;
+		return;
+	}
+
+	if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6)) {
+		rmnet_map_ipv6_ul_csum_header(iphdr, ul_header, skb);
+		priv->stats.csum_hw++;
+		return;
 	}
 
+	priv->stats.csum_err_invalid_ip_version++;
+
 sw_csum:
 	memset(ul_header, 0, sizeof(*ul_header));
 	priv->stats.csum_sw++;
 }
+
+/* Generates UL checksum meta info header for IPv4 and IPv6 over TCP and UDP
+ * packets that are supported for UL checksum offload.
+ */
+void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
+				      struct rmnet_port *port,
+				      struct net_device *orig_dev,
+				      int csum_type)
+{
+	switch (csum_type) {
+	case RMNET_FLAGS_EGRESS_MAP_CKSUMV4:
+		rmnet_map_v4_checksum_uplink_packet(skb, orig_dev);
+		break;
+	case RMNET_FLAGS_EGRESS_MAP_CKSUMV5:
+		rmnet_map_v5_checksum_uplink_packet(skb, port, orig_dev);
+		break;
+	default:
+		break;
+	}
+}
+
+/* Process a MAPv5 packet header */
+int rmnet_map_process_next_hdr_packet(struct sk_buff *skb,
+				      u16 len)
+{
+	struct rmnet_priv *priv = netdev_priv(skb->dev);
+	struct rmnet_map_v5_csum_header *next_hdr;
+	u8 nexthdr_type;
+
+	next_hdr = (struct rmnet_map_v5_csum_header *)(skb->data +
+			sizeof(struct rmnet_map_header));
+
+	nexthdr_type = u8_get_bits(next_hdr->header_info,
+				   MAPV5_HDRINFO_HDR_TYPE_FMASK);
+
+	if (nexthdr_type != RMNET_MAP_HEADER_TYPE_CSUM_OFFLOAD)
+		return -EINVAL;
+
+	if (unlikely(!(skb->dev->features & NETIF_F_RXCSUM))) {
+		priv->stats.csum_sw++;
+	} else if (next_hdr->csum_info & MAPV5_CSUMINFO_VALID_FLAG) {
+		priv->stats.csum_ok++;
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+	} else {
+		priv->stats.csum_valid_unset++;
+	}
+
+	/* Pull csum v5 header */
+	skb_pull(skb, sizeof(*next_hdr));
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
index ab1e0fcccabb..13d8eb43a485 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
@@ -166,6 +166,7 @@ static const struct net_device_ops rmnet_vnd_ops = {
 static const char rmnet_gstrings_stats[][ETH_GSTRING_LEN] = {
 	"Checksum ok",
+	"Bad IPv4 header checksum",
 	"Checksum valid bit not set",
 	"Checksum validation failed",
 	"Checksum error bad buffer",
@@ -174,6 +175,7 @@ static const char rmnet_gstrings_stats[][ETH_GSTRING_LEN] = {
 	"Checksum skipped on ip fragment",
 	"Checksum skipped",
 	"Checksum computed in software",
+	"Checksum computed in hardware",
 };
 
 static void rmnet_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
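The comments added to rmnet_map_ipv4_dl_csum_trailer() above boil the downlink validation down to one comparison: once the IPv4 header is known to sum to zero, the hardware trailer checksum is just the one's-complement sum of the IP payload, and a good packet satisfies trailer == ~pseudo_header_checksum in the low 16 bits. The following standalone sketch only illustrates that identity; the helper name and the numbers are hypothetical, not kernel code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Acceptance test mirrored from the driver's comment: the packet is good
 * when the trailer (payload) checksum is the bitwise inverse of the
 * pseudo-header checksum, i.e. their one's-complement sum is 0xffff
 * ("negative zero").
 */
static bool map_trailer_csum_ok(uint16_t payload_csum, uint16_t pseudo_csum)
{
	return payload_csum == (uint16_t)~pseudo_csum;
}

int main(void)
{
	/* Example values only: 0x1c46 is the inverse of 0xe3b9, so the
	 * first check passes and the second fails.
	 */
	printf("%d\n", map_trailer_csum_ok(0x1c46, 0xe3b9));	/* prints 1 */
	printf("%d\n", map_trailer_csum_ok(0x1c46, 0xe3b8));	/* prints 0 */
	return 0;
}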