author		Eric Dumazet <edumazet@google.com>	2022-05-16 06:24:53 +0200
committer	David S. Miller <davem@davemloft.net>	2022-05-16 12:33:59 +0200
commit		97e719a82b43c6c2bb5eebdb3c5d479a332ac2ac
tree		7e38c6a88703169d84365cb91c9aaab5aabe0adb	/net/core/skbuff.c
parent		net: tulip: convert to devres
net: fix possible race in skb_attempt_defer_free()
A CPU can observe sd->defer_count reaching 128
and call smp_call_function_single_async().
The problem is that the remote CPU can clear sd->defer_count
before the IPI is run/acknowledged.
Other CPUs can then queue more packets and also decide
to call smp_call_function_single_async() while the pending
IPI has not yet been delivered.
This is a common issue with smp_call_function_single_async().
Callers must ensure correct synchronization and serialization.
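
To make the required discipline concrete, here is a minimal, hypothetical
sketch of the usual remedy: a caller-owned atomic flag gates the csd, and
the IPI handler re-arms it only after it has run, so at most one request
is ever in flight. The names my_csd, my_ipi_scheduled, my_ipi_func and
my_kick are illustrative only and are not part of this commit.

	#include <linux/smp.h>
	#include <linux/atomic.h>

	static call_single_data_t my_csd;	/* INIT_CSD(&my_csd, my_ipi_func, NULL) at init time */
	static int my_ipi_scheduled;		/* 0: idle, 1: an IPI is in flight */

	static void my_ipi_func(void *info)
	{
		/* ... perform the remote work ... */

		/* Re-arm only once the handler has actually run; the
		 * release pairs with the cmpxchg() in my_kick().
		 */
		smp_store_release(&my_ipi_scheduled, 0);
	}

	static void my_kick(int cpu)
	{
		/* Only the winner of the 0 -> 1 transition submits the
		 * csd; concurrent callers see the flag already set and
		 * back off, so the csd is never submitted while locked.
		 */
		if (!cmpxchg(&my_ipi_scheduled, 0, 1))
			smp_call_function_single_async(cpu, &my_csd);
	}

This is exactly the shape of the fix below, with sd->defer_ipi_scheduled
playing the role of the flag.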
I triggered this issue while experimenting with a smaller threshold.
Performing the call to smp_call_function_single_async()
under sd->defer_lock protection did not solve the problem.
Commit 5a18ceca6350 ("smp: Allow smp_call_function_single_async()
to insert locked csd") replaced an informative WARN_ON_ONCE()
with a return of -EBUSY, which is often ignored.
Testing for CSD_FLAG_LOCK presence is racy anyway.
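
Put differently, even a caller that checks that return value cannot act
on it safely: the CSD can be unlocked and relocked between the test and
whatever the caller does next. A hypothetical sketch of the fragile
pattern this commit avoids:

	/* Racy: CSD_FLAG_LOCK is sampled inside the callee, so the
	 * -EBUSY result is already stale when it is returned; many
	 * callers simply ignore it.
	 */
	if (smp_call_function_single_async(cpu, &sd->defer_csd) == -EBUSY) {
		/* an IPI was in flight at sampling time; retrying
		 * here would race just the same
		 */
	}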
Fixes: 68822bdf76f1 ("net: generalize skb freeing deferral to per-cpu lists")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/core/skbuff.c')
 net/core/skbuff.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 2fea964f09d8..b40c8cdf4785 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6516,8 +6516,7 @@ void skb_attempt_defer_free(struct sk_buff *skb)
 	sd->defer_count++;
 
 	/* kick every time queue length reaches 128.
-	 * This should avoid blocking in smp_call_function_single_async().
-	 * This condition should hardly be bit under normal conditions,
+	 * This condition should hardly be hit under normal conditions,
 	 * unless cpu suddenly stopped to receive NIC interrupts.
 	 */
 	kick = sd->defer_count == 128;
@@ -6527,6 +6526,6 @@ void skb_attempt_defer_free(struct sk_buff *skb)
 	/* Make sure to trigger NET_RX_SOFTIRQ on the remote CPU
 	 * if we are unlucky enough (this seems very unlikely).
 	 */
-	if (unlikely(kick))
+	if (unlikely(kick) && !cmpxchg(&sd->defer_ipi_scheduled, 0, 1))
 		smp_call_function_single_async(cpu, &sd->defer_csd);
 }
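
For context, sd->defer_csd is wired to the NET_RX softirq trigger, and
the cmpxchg() above only helps if the handler side eventually drops the
flag. A sketch of that counterpart, assuming (as elsewhere in this
commit, outside this diffstat) that defer_ipi_scheduled is cleared once
the csd callback has run:

	/* Runs on the remote CPU as the defer_csd callback. */
	static void trigger_rx_softirq(void *data)
	{
		struct softnet_data *sd = data;

		__raise_softirq_irqoff(NET_RX_SOFTIRQ);

		/* The release pairs with the cmpxchg() in
		 * skb_attempt_defer_free(); from this point on a new
		 * IPI may be scheduled.
		 */
		smp_store_release(&sd->defer_ipi_scheduled, 0);
	}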