path: root/net/ipv4/gre_offload.c
author    Sabrina Dubroca <sd@queasysnail.net>  2016-10-20 15:58:02 +0200
committer David S. Miller <davem@davemloft.net>  2016-10-20 20:32:22 +0200
commit    fcd91dd449867c6bfe56a81cabba76b829fd05cd (patch)
tree      5bf8afb23894d48dda7ba09bdf4ab12935d81159 /net/ipv4/gre_offload.c
parent    ipv6: properly prevent temp_prefered_lft sysctl race (diff)
net: add recursion limit to GRO
Currently, GRO can do unlimited recursion through the gro_receive
handlers. This was fixed for tunneling protocols by limiting tunnel GRO
to one level with encap_mark, but both VLAN and TEB still have this
problem. Thus, the kernel is vulnerable to a stack overflow, if we
receive a packet composed entirely of VLAN headers.

This patch adds a recursion counter to the GRO layer to prevent stack
overflow. When a gro_receive function hits the recursion limit, GRO is
aborted for this skb and it is processed normally. This recursion
counter is put in the GRO CB, but could be turned into a percpu counter
if we run out of space in the CB.

Thanks to Vladimír Beneš <vbenes@redhat.com> for the initial bug report.

Fixes: CVE-2016-7039
Fixes: 9b174d88c257 ("net: Add Transparent Ethernet Bridging GRO support.")
Fixes: 66e5133f19e9 ("vlan: Add GRO support for non hardware accelerated vlan")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Reviewed-by: Jiri Benc <jbenc@redhat.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
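To illustrate the mechanism the commit message describes, the following is a minimal, self-contained sketch in userspace C, not the kernel source. The names GRO_RECURSION_LIMIT, gro_recursion_inc_test() and call_gro_receive() mirror the helpers this commit adds in include/linux/netdevice.h, but sk_buff_stub, vlan_like_gro_receive() and main() are invented stand-ins for illustration only.

    /*
     * Sketch of GRO recursion limiting: a per-packet counter in a
     * stand-in for the GRO control block, and a dispatch wrapper that
     * gives up on GRO once handlers have nested too deep.
     */
    #include <stdio.h>

    #define GRO_RECURSION_LIMIT 15

    struct sk_buff_stub {
            unsigned int recursion_counter; /* reset when GRO starts on a packet */
            unsigned int flush;             /* non-zero: abort GRO, process normally */
    };

    typedef struct sk_buff_stub **(*gro_receive_t)(struct sk_buff_stub **head,
                                                   struct sk_buff_stub *skb);

    /* Bump the per-packet counter; report when the nesting limit is hit. */
    static int gro_recursion_inc_test(struct sk_buff_stub *skb)
    {
            return ++skb->recursion_counter == GRO_RECURSION_LIMIT;
    }

    /*
     * Used instead of calling a gro_receive handler directly: if handlers
     * have already nested GRO_RECURSION_LIMIT deep (e.g. a packet built
     * entirely out of stacked VLAN or TEB headers), mark the packet for
     * flushing and stop descending.
     */
    static struct sk_buff_stub **call_gro_receive(gro_receive_t cb,
                                                  struct sk_buff_stub **head,
                                                  struct sk_buff_stub *skb)
    {
            if (gro_recursion_inc_test(skb)) {
                    skb->flush |= 1;
                    return NULL;
            }
            return cb(head, skb);
    }

    /* Toy handler that keeps recursing, as an unbounded VLAN stack would. */
    static struct sk_buff_stub **vlan_like_gro_receive(struct sk_buff_stub **head,
                                                       struct sk_buff_stub *skb)
    {
            return call_gro_receive(vlan_like_gro_receive, head, skb);
    }

    int main(void)
    {
            struct sk_buff_stub skb = { 0 };
            struct sk_buff_stub *head = &skb;

            vlan_like_gro_receive(&head, &skb);
            printf("stopped after %u nested calls, flush=%u\n",
                   skb.recursion_counter, skb.flush);
            return 0;
    }

In the sketch, the toy handler stops after 15 nested calls with flush set, which corresponds to the commit's behaviour of aborting GRO for that skb and letting it take the normal receive path.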
Diffstat (limited to 'net/ipv4/gre_offload.c')
-rw-r--r--  net/ipv4/gre_offload.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
index 96e0efecefa6..d5cac99170b1 100644
--- a/net/ipv4/gre_offload.c
+++ b/net/ipv4/gre_offload.c
@@ -229,7 +229,7 @@ static struct sk_buff **gre_gro_receive(struct sk_buff **head,
 	/* Adjusted NAPI_GRO_CB(skb)->csum after skb_gro_pull()*/
 	skb_gro_postpull_rcsum(skb, greh, grehlen);
-	pp = ptype->callbacks.gro_receive(head, skb);
+	pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);
 	flush = 0;
 out_unlock: