| author | Herbert Xu <herbert@gondor.apana.org.au> | 2017-12-15 06:40:44 +0100 |
|---|---|---|
| committer | Steffen Klassert <steffen.klassert@secunet.com> | 2017-12-19 08:23:21 +0100 |
| commit | acf568ee859f098279eadf551612f103afdacb4e (patch) | |
| tree | 2ca6509d139079ad95e37bdfb94bf570fc094a6d /net/ipv4 | |
| parent | xfrm: put policies when reusing pcpu xdst entry (diff) | |
xfrm: Reinject transport-mode packets through tasklet
This is an old bugbear of mine:
https://www.mail-archive.com/netdev@vger.kernel.org/msg03894.html
By crafting special packets, it is possible to cause recursion
in our kernel when processing transport-mode packets, to a depth
limited only by the packet size.
The easiest one is with DNAT, but an even worse one uses
UDP encapsulation, in which case you just have to insert
a UDP encapsulation header between each level of recursion.
This patch avoids this problem by reinjecting transport-mode packets
through a tasklet.
Fixes: b05e106698d9 ("[IPV4/6]: Netfilter IPsec input hooks")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Diffstat (limited to 'net/ipv4')
-rw-r--r-- | net/ipv4/xfrm4_input.c | 12 |
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
index e50b7fea57ee..bcfc00e88756 100644
--- a/net/ipv4/xfrm4_input.c
+++ b/net/ipv4/xfrm4_input.c
@@ -23,6 +23,12 @@ int xfrm4_extract_input(struct xfrm_state *x, struct sk_buff *skb)
 	return xfrm4_extract_header(skb);
 }
 
+static int xfrm4_rcv_encap_finish2(struct net *net, struct sock *sk,
+				   struct sk_buff *skb)
+{
+	return dst_input(skb);
+}
+
 static inline int xfrm4_rcv_encap_finish(struct net *net, struct sock *sk,
 					 struct sk_buff *skb)
 {
@@ -33,7 +39,11 @@ static inline int xfrm4_rcv_encap_finish(struct net *net, struct sock *sk,
 					 iph->tos, skb->dev))
 			goto drop;
 	}
-	return dst_input(skb);
+
+	if (xfrm_trans_queue(skb, xfrm4_rcv_encap_finish2))
+		goto drop;
+
+	return 0;
 drop:
 	kfree_skb(skb);
 	return NET_RX_DROP;
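The hunks above only change the caller side: instead of invoking dst_input() directly, and thereby possibly re-entering the whole IPsec input path on the same call stack, xfrm4_rcv_encap_finish() now hands the skb to xfrm_trans_queue() and lets xfrm4_rcv_encap_finish2() run dst_input() later. The queueing helper itself is added outside net/ipv4 and is therefore not shown in this diffstat. The sketch below is only an illustration of the per-CPU tasklet pattern such a helper can use; the names (trans_queue, trans_reinject, trans_cb), the control-block layout, and the backlog check are assumptions for illustration, not the actual implementation.

```c
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/percpu.h>
#include <linux/skbuff.h>

/* Per-CPU queue of deferred skbs plus the tasklet that drains it. */
struct trans_tasklet {
	struct tasklet_struct	tasklet;
	struct sk_buff_head	queue;
};

/* Per-skb state stashed in skb->cb: which "finish" callback to resume. */
struct trans_cb {
	int (*finish)(struct net *net, struct sock *sk, struct sk_buff *skb);
};

#define TRANS_SKB_CB(skb) ((struct trans_cb *)&((skb)->cb[0]))

static DEFINE_PER_CPU(struct trans_tasklet, trans_tasklet);

/* Queue the skb for deferred processing instead of calling dst_input()
 * directly; refuse if the per-CPU backlog is already full. */
static int trans_queue(struct sk_buff *skb,
		       int (*finish)(struct net *, struct sock *,
				     struct sk_buff *))
{
	struct trans_tasklet *t = this_cpu_ptr(&trans_tasklet);

	if (skb_queue_len(&t->queue) >= netdev_max_backlog)
		return -ENOBUFS;

	TRANS_SKB_CB(skb)->finish = finish;
	__skb_queue_tail(&t->queue, skb);
	tasklet_schedule(&t->tasklet);
	return 0;
}

/* Tasklet handler: splice the queue onto a local list and resume each
 * packet's processing from a fresh softirq stack frame. */
static void trans_reinject(unsigned long data)
{
	struct trans_tasklet *t = (struct trans_tasklet *)data;
	struct sk_buff_head queue;
	struct sk_buff *skb;

	__skb_queue_head_init(&queue);
	skb_queue_splice_init(&t->queue, &queue);

	while ((skb = __skb_dequeue(&queue)))
		TRANS_SKB_CB(skb)->finish(dev_net(skb->dev), NULL, skb);
}

static int __init trans_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct trans_tasklet *t = &per_cpu(trans_tasklet, cpu);

		__skb_queue_head_init(&t->queue);
		tasklet_init(&t->tasklet, trans_reinject, (unsigned long)t);
	}
	return 0;
}
```

Because the tasklet handler runs each queued packet from a fresh softirq stack frame, every nested transport-mode layer restarts with an empty call stack, so the processing depth no longer grows with the number of encapsulation levels in the crafted packet.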