author     Eric Dumazet <edumazet@google.com>     2016-12-07 04:32:50 +0100
committer  David S. Miller <davem@davemloft.net>  2016-12-07 16:47:35 +0100
commit     5b8e2f61b9df529ca4af057daf7bfb1de348bdf1 (patch)
tree       ccd2569b8eb1ab8267667f690370ee79ccf5916f /include/net
parent     tun: Use netif_receive_skb instead of netif_rx (diff)
net: sock_rps_record_flow() is for connected sockets
Paolo noticed a cache line miss in UDP recvmsg() when accessing sk_rxhash, which shares a cache line with sk_drops. sk_drops might be heavily incremented by CPUs handling a flood targeting this socket.

We might place sk_drops on a separate cache line, but let's try to avoid wasting 64 bytes per socket just for this, since we have other bottlenecks to take care of.

sock_rps_record_flow() should only access sk_rxhash for connected flows. Testing sk_state for TCP_ESTABLISHED covers most of the cases for connected sockets, at zero cost, since system calls using sock_rps_record_flow() also access sk->sk_prot, which is on the same cache line.

A follow-up patch will provide a static_key (jump label), since most hosts do not even use RFS.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/net')
-rw-r--r--  include/net/sock.h  12
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index 6dfe3aa22b97..1749e38d0301 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -913,7 +913,17 @@ static inline void sock_rps_record_flow_hash(__u32 hash)
static inline void sock_rps_record_flow(const struct sock *sk)
{
#ifdef CONFIG_RPS
- sock_rps_record_flow_hash(sk->sk_rxhash);
+ /* Reading sk->sk_rxhash might incur an expensive cache line miss.
+ *
+ * TCP_ESTABLISHED does cover almost all states where RFS
+ * might be useful, and is cheaper [1] than testing :
+ * IPv4: inet_sk(sk)->inet_daddr
+ * IPv6: ipv6_addr_any(&sk->sk_v6_daddr)
+ * OR an additional socket flag
+ * [1] : sk_state and sk_prot are in the same cache line.
+ */
+ if (sk->sk_state == TCP_ESTABLISHED)
+ sock_rps_record_flow_hash(sk->sk_rxhash);
#endif
}
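
For context, the commit message mentions a follow-up that will gate this path behind a static_key (jump label), so hosts that never configure RFS skip the check entirely. The snippet below is a minimal sketch of that idea using the kernel's standard static-key helpers; it is not the follow-up patch itself, and the key name rfs_enabled_sketch and the _sketch suffixes are illustrative assumptions.

#include <linux/jump_label.h>

/* Illustrative key; it would be flipped with static_branch_enable()
 * when RFS is actually configured on the host.
 */
DEFINE_STATIC_KEY_FALSE(rfs_enabled_sketch);

static inline void sock_rps_record_flow_sketch(const struct sock *sk)
{
#ifdef CONFIG_RPS
	/* Compiles to a patched-out NOP while the key is disabled. */
	if (static_branch_unlikely(&rfs_enabled_sketch)) {
		/* Same connected-socket test as the hunk above. */
		if (sk->sk_state == TCP_ESTABLISHED)
			sock_rps_record_flow_hash(sk->sk_rxhash);
	}
#endif
}

With the key disabled, neither sk_state nor sk_rxhash is read on this path, so workloads that never use RFS pay only the cost of a single NOP.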