author | Marcelo Leitner <mleitner@redhat.com> | 2014-12-11 13:02:22 +0100
---|---|---
committer | David S. Miller <davem@davemloft.net> | 2014-12-11 20:57:08 +0100
commit | 00c83b01d58068dfeb2e1351cca6fccf2a83fa8f |
tree | 2b00c1d5a6ba84dc4fb9f8c6f9aee3de7aaa6799 /net/ipv4/tcp.c |
parent | net/macb: fix compilation warning for print_hex_dump() called with skb->mac_h... |
Fix race condition between vxlan_sock_add and vxlan_sock_release
Currently, when trying to reuse a socket, vxlan_sock_add() will grab
vn->sock_lock, locate a reusable socket, increment its refcount, and
release vn->sock_lock.
But vxlan_sock_release() will first decrement the refcount, and only then
grab that lock. The refcount operations are atomic, but because we
currently have deferred works that each hold a reference on vs->refcnt,
the following interleaving can occur, leading to a use-after-free
(especially after vxlan_igmp_leave):
  CPU 1                              CPU 2

  deferred work                      vxlan_sock_add
  ...                                ...
                                     spin_lock(&vn->sock_lock)
                                     vs = vxlan_find_sock();
  vxlan_sock_release
  dec vs->refcnt, reaches 0
  spin_lock(&vn->sock_lock)
                                     vxlan_sock_hold(vs), refcnt=1
                                     spin_unlock(&vn->sock_lock)
  hlist_del_rcu(&vs->hlist);
  vxlan_notify_del_rx_port(vs)
  spin_unlock(&vn->sock_lock)
So, when we look for a reusable socket, we now check that it has not
already reached a refcount of zero (i.e. is being freed) before reusing it.
Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Fixes: 7c47cedf43a8b3 ("vxlan: move IGMP join/leave to work queue")
Signed-off-by: David S. Miller <davem@davemloft.net>