author     Florian Westphal <fw@strlen.de>          2018-12-28 01:24:45 +0100
committer  Pablo Neira Ayuso <pablo@netfilter.org>  2018-12-29 02:45:20 +0100
commit     e8cfb372b38a1b8979aa7f7631fb5e7b11c3793c (patch)
tree       f909dc3dafddbba1af23dcfdf81302171ba1e9d9 /net
parent     netfilter: nf_conncount: split gc in two phases (diff)
netfilter: nf_conncount: restart search when nodes have been erased
Shawn Bohrer reported the following crash:

 |RIP: 0010:rb_erase+0xae/0x360
 [..]
 Call Trace:
  nf_conncount_destroy+0x59/0xc0 [nf_conncount]
  cleanup_match+0x45/0x70 [ip_tables]
  ...

Shawn tracked this down to a bogus 'parent' pointer: when we insert a
new node, there is a chance that the 'parent' we found during the tree
walk was also passed to tree_nodes_free() for erase+free, because that
node had become empty.

Instead of trying to be clever and detect when this happens, restart
the search if we have evicted one or more nodes.  To prevent frequent
restarts, do not perform gc on the second round.

Also, unconditionally schedule the gc worker.  The old guard,
gc_count >= ARRAY_SIZE(gc_nodes), cannot be true unless the tree grows
very large, as the height of the tree stays low even with hundreds of
nodes present.

Fixes: 5c789e131cbb9 ("netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search")
Reported-by: Shawn Bohrer <sbohrer@cloudflare.com>
Reviewed-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
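To make the failure mode concrete, here is a minimal userspace sketch of
the pattern the patch adopts.  This is not the kernel code: struct node,
evict_leaf() and insert() are simplified, hypothetical stand-ins for the
rbtree machinery in nf_conncount.c, which uses rb_link_node()/rb_erase()
and evicts empty nodes anywhere in the tree, not only leaves.  The point
it illustrates is the one the commit message makes: 'parent', recorded
during the descent, may be among the nodes just freed, so the only
'parent' that is safe to link under is one found on a walk that freed
nothing.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	int key;
	int count;                     /* 0 means "empty", eligible for gc */
	struct node *left, *right;
};

/* Unlink and free an empty leaf.  Hypothetical helper, restricted to
 * leaves so the sketch stays short. */
static void evict_leaf(struct node **root, struct node *victim)
{
	struct node **slot = root;

	while (*slot && *slot != victim)
		slot = victim->key < (*slot)->key ?
		       &(*slot)->left : &(*slot)->right;

	if (*slot == victim && !victim->left && !victim->right) {
		*slot = NULL;
		free(victim);
	}
}

static struct node *insert(struct node **root, int key)
{
	struct node *gc_nodes[4], *parent, **slot, *new;
	unsigned int gc_count;
	bool do_gc = true;             /* mirror the patch: gc on round one only */

restart:
	gc_count = 0;
	parent = NULL;
	slot = root;
	while (*slot) {
		parent = *slot;
		if (key == parent->key) {
			parent->count++;
			return parent;
		}
		/* collect empty nodes seen on the way down */
		if (do_gc && parent->count == 0 &&
		    gc_count < sizeof(gc_nodes) / sizeof(gc_nodes[0]))
			gc_nodes[gc_count++] = parent;
		slot = key < parent->key ? &parent->left : &parent->right;
	}

	if (gc_count) {
		/* Any collected node may be 'parent'; after freeing, both
		 * 'parent' and the slot derived from it are unusable, so
		 * restart the walk instead of detecting that case.  do_gc
		 * is cleared so the second pass collects nothing. */
		while (gc_count)
			evict_leaf(root, gc_nodes[--gc_count]);
		do_gc = false;
		goto restart;
	}

	/* safe: this slot came from a walk that freed nothing */
	new = calloc(1, sizeof(*new));
	new->key = key;
	new->count = 1;
	*slot = new;
	return new;
}

int main(void)
{
	struct node *root = NULL;

	insert(&root, 5);
	insert(&root, 3)->count = 0;   /* mark node 3 as empty */
	insert(&root, 2);              /* descent records 3, frees it, restarts */
	printf("root: %d, left: %d\n", root->key, root->left->key);
	return 0;
}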
Diffstat (limited to 'net')
 net/netfilter/nf_conncount.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
index 753132e4afa8..0a83c694a8f1 100644
--- a/net/netfilter/nf_conncount.c
+++ b/net/netfilter/nf_conncount.c
@@ -346,9 +346,10 @@ insert_tree(struct net *net,
 	struct nf_conncount_tuple *conn;
 	unsigned int count = 0, gc_count = 0;
 	bool node_found = false;
+	bool do_gc = true;
 
 	spin_lock_bh(&nf_conncount_locks[hash]);
-
+restart:
 	parent = NULL;
 	rbnode = &(root->rb_node);
 	while (*rbnode) {
@@ -381,21 +382,16 @@ insert_tree(struct net *net,
 		if (gc_count >= ARRAY_SIZE(gc_nodes))
 			continue;
 
-		if (nf_conncount_gc_list(net, &rbconn->list))
+		if (do_gc && nf_conncount_gc_list(net, &rbconn->list))
 			gc_nodes[gc_count++] = rbconn;
 	}
 
 	if (gc_count) {
 		tree_nodes_free(root, gc_nodes, gc_count);
-		/* tree_node_free before new allocation permits
-		 * allocator to re-use newly free'd object.
-		 *
-		 * This is a rare event; in most cases we will find
-		 * existing node to re-use. (or gc_count is 0).
-		 */
-
-		if (gc_count >= ARRAY_SIZE(gc_nodes))
-			schedule_gc_worker(data, hash);
+		schedule_gc_worker(data, hash);
+		gc_count = 0;
+		do_gc = false;
+		goto restart;
 	}
 
 	if (node_found)
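A note on why the restart cannot loop: on the first pass do_gc is true,
so empty nodes seen during the descent are collected and freed; before
jumping back to restart, do_gc is cleared, so the second walk collects
nothing, gc_count stays 0, and the gc branch with its goto cannot be
taken again.  The function therefore walks the tree at most twice per
insert.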