author    Shawn Bohrer <sbohrer@cloudflare.com>  2022-12-20 19:59:03 +0100
committer Paolo Abeni <pabeni@redhat.com>        2022-12-22 15:06:10 +0100
commit    fa349e396e4886d742fd6501c599ec627ef1353b (patch)
tree      5f85c1b9cd47a8ce23c3007a6c2cb7b9c79ccd6b /drivers/net/veth.c
parent    net: lan966x: Fix configuration of the PCS (diff)
veth: Fix race with AF_XDP exposing old or uninitialized descriptors
When AF_XDP is used on a veth interface the RX ring is updated in two steps. veth_xdp_rcv() removes packet descriptors from the FILL ring, fills them, and places them in the RX ring, updating the cached_prod pointer. Later, xdp_do_flush() syncs the RX ring prod pointer with the cached_prod pointer, allowing user-space to see the recently filled descriptors.

The rings are intended to be SPSC, however the existing order in veth_poll allows xdp_do_flush() to run concurrently with another CPU, creating a race condition that lets user-space see old or uninitialized descriptors in the RX ring. This bug has been observed in production systems.

To summarize, we are expecting this ordering:

CPU 0 __xsk_rcv_zc()
CPU 0 __xsk_map_flush()
CPU 2 __xsk_rcv_zc()
CPU 2 __xsk_map_flush()

But we are seeing this order:

CPU 0 __xsk_rcv_zc()
CPU 2 __xsk_rcv_zc()
CPU 0 __xsk_map_flush()
CPU 2 __xsk_map_flush()

This occurs because we rely on NAPI to ensure that only one napi_poll handler is running at a time for the given veth receive queue. napi_schedule_prep() will prevent multiple instances from getting scheduled. However, calling napi_complete_done() signals that this napi_poll is complete and allows subsequent calls to napi_schedule_prep() and __napi_schedule() to succeed in scheduling a concurrent napi_poll before xdp_do_flush() has been called. For the veth driver, a concurrent call to napi_schedule_prep() and __napi_schedule() can occur on a different CPU because the veth xmit path can additionally schedule a napi_poll, creating the race.

The fix, as suggested by Magnus Karlsson, is to simply move the xdp_do_flush() call before napi_complete_done(). This syncs the producer ring pointers before another instance of napi_poll can be scheduled on another CPU. It also slightly improves performance by moving the flush closer to when the descriptors were placed in the RX ring.

Fixes: d1396004dd86 ("veth: Add XDP TX and REDIRECT")
Suggested-by: Magnus Karlsson <magnus.karlsson@gmail.com>
Signed-off-by: Shawn Bohrer <sbohrer@cloudflare.com>
Link: https://lore.kernel.org/r/20221220185903.1105011-1-sbohrer@cloudflare.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
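For illustration, the two-step producer update described above can be sketched as follows. This is a simplified stand-in, not the kernel's actual AF_XDP queue code: the names rx_ring, ring_fill_one, and ring_flush are invented for the example, and it assumes kernel-style types and barriers (__u32, struct xdp_desc, smp_store_release()).

/* Minimal sketch of an SPSC producer updated in two steps.
 * Assumes kernel headers for __u32, struct xdp_desc and smp_store_release().
 */
struct rx_ring {
	struct xdp_desc *descs;	/* descriptor array shared with user-space */
	__u32 *producer;	/* shared producer index, read by user-space */
	__u32 cached_prod;	/* driver-private producer index */
	__u32 mask;		/* ring size - 1 */
};

/* Step 1: fill a descriptor and advance only the cached producer.
 * Nothing is visible to user-space yet.
 */
static void ring_fill_one(struct rx_ring *q, const struct xdp_desc *d)
{
	q->descs[q->cached_prod & q->mask] = *d;
	q->cached_prod++;
}

/* Step 2: publish. The release barrier makes the descriptor writes
 * visible before the new producer index. If a second poll instance
 * begins step 1 before this runs, user-space can observe a producer
 * index covering descriptors that are stale or still being written.
 */
static void ring_flush(struct rx_ring *q)
{
	smp_store_release(q->producer, q->cached_prod);
}

In veth terms, step 1 happens per packet inside veth_xdp_rcv() and step 2 is the xdp_do_flush() call; moving the flush before napi_complete_done() guarantees that step 2 completes before another napi_poll instance can start step 1 on a different CPU.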
Diffstat (limited to 'drivers/net/veth.c')
-rw-r--r--  drivers/net/veth.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index ac7c0653695f..dfc7d87fad59 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -974,6 +974,9 @@ static int veth_poll(struct napi_struct *napi, int budget)
 	xdp_set_return_frame_no_direct();
 	done = veth_xdp_rcv(rq, budget, &bq, &stats);
 
+	if (stats.xdp_redirect > 0)
+		xdp_do_flush();
+
 	if (done < budget && napi_complete_done(napi, done)) {
 		/* Write rx_notify_masked before reading ptr_ring */
 		smp_store_mb(rq->rx_notify_masked, false);
@@ -987,8 +990,6 @@
 
 	if (stats.xdp_tx > 0)
 		veth_xdp_flush(rq, &bq);
-	if (stats.xdp_redirect > 0)
-		xdp_do_flush();
 	xdp_clear_return_frame_no_direct();
 
 	return done;