author     Eric Dumazet <eric.dumazet@gmail.com>     2011-12-21 08:11:44 +0100
committer  David S. Miller <davem@davemloft.net>     2011-12-23 08:15:14 +0100
commit     0fd7bac6b6157eed6cf0cb86a1e88ba29e57c033 (patch)
tree       bcc24e9c63587bc1e8e15ad60654de9c6f72883e /include
parent     rps: fix insufficient bounds checking in store_rps_dev_flow_table_cnt() (diff)
net: relax rcvbuf limits
skb->truesize might be big even for a small packet.
It's even bigger after commit 87fb4b7b533 (net: more accurate skb
truesize) and with a big MTU.
We should allow queueing at least one packet per receiver, even with a
low RCVBUF setting.
Reported-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include')
-rw-r--r--  include/net/sock.h | 4
1 file changed, 3 insertions(+), 1 deletion(-)
```diff
diff --git a/include/net/sock.h b/include/net/sock.h
index abb6e0f0c3c3..32e39371fba6 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -637,12 +637,14 @@ static inline void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)
 
 /*
  * Take into account size of receive queue and backlog queue
+ * Do not take into account this skb truesize,
+ * to allow even a single big packet to come.
  */
 static inline bool sk_rcvqueues_full(const struct sock *sk, const struct sk_buff *skb)
 {
 	unsigned int qsize = sk->sk_backlog.len + atomic_read(&sk->sk_rmem_alloc);
 
-	return qsize + skb->truesize > sk->sk_rcvbuf;
+	return qsize > sk->sk_rcvbuf;
 }
 
 /* The per-socket spinlock must be held here. */
```
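The behavioral difference can be sketched in plain user-space C. The `sock_model` struct and both predicate functions below are hypothetical stand-ins for the kernel's `struct sock` and `sk_rcvqueues_full()`, written only to contrast the old check (which counts the incoming skb's truesize and can therefore reject every packet when truesize exceeds a small SO_RCVBUF) against the new one (which lets at least one packet into an empty queue):

```c
#include <stdbool.h>

/* Hypothetical user-space model of the fields the check reads. */
struct sock_model {
	unsigned int backlog_len; /* models sk->sk_backlog.len */
	unsigned int rmem_alloc;  /* models atomic_read(&sk->sk_rmem_alloc) */
	int rcvbuf;               /* models sk->sk_rcvbuf */
};

/* Old check: adds the incoming skb's truesize, so one large packet
 * can be rejected forever when truesize > rcvbuf, even on an empty queue. */
static bool rcvqueues_full_old(const struct sock_model *sk, unsigned int truesize)
{
	unsigned int qsize = sk->backlog_len + sk->rmem_alloc;

	return qsize + truesize > (unsigned int)sk->rcvbuf;
}

/* New check (this commit): ignores the incoming skb's truesize, so an
 * empty queue always accepts at least one packet; later packets are
 * still rejected once the queued memory itself exceeds rcvbuf. */
static bool rcvqueues_full_new(const struct sock_model *sk, unsigned int truesize)
{
	unsigned int qsize = sk->backlog_len + sk->rmem_alloc;

	(void)truesize;
	return qsize > (unsigned int)sk->rcvbuf;
}
```

With an empty queue, `rcvbuf = 2048`, and an skb whose truesize is 4096, the old check drops the packet while the new one accepts it; once 4096 bytes are actually queued, the new check rejects further packets as before.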