author    | Florian Westphal <fw@strlen.de>         | 2018-12-28 01:24:44 +0100
committer | Pablo Neira Ayuso <pablo@netfilter.org> | 2018-12-29 02:45:19 +0100
commit    | f7fcc98dfc2d136722007fec0debbed761679b94 (patch)
tree      | abac7ea27b0b8414e427da2ed4995d1c14a85b80 /net
parent    | netfilter: nf_conncount: don't skip eviction when age is negative (diff)
netfilter: nf_conncount: split gc in two phases
The lockless workqueue garbage collector can race with the packet-path
garbage collector when deleting list nodes: it calls tree_nodes_free()
with the addresses of nodes that might already have been freed from
another CPU.
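
For illustration, here is a condensed sketch of the racy pre-patch
pattern, reconstructed from the removed lines in the diff below
(declarations, rescheduling, and the exact unlock placement are
simplified):

```c
/* Condensed pre-patch pattern, for illustration only. */
rcu_read_lock();
for (node = rb_first(root); node != NULL; node = rb_next(node)) {
	rbconn = rb_entry(node, struct nf_conncount_rb, node);
	if (nf_conncount_gc_list(data->net, &rbconn->list))
		gc_nodes[gc_count++] = rbconn;	/* record node address */
}
rcu_read_unlock();

/* Window: the packet path may erase and free these nodes here. */

spin_lock_bh(&nf_conncount_locks[tree]);
if (gc_count)
	tree_nodes_free(root, gc_nodes, gc_count); /* may use stale addresses */
spin_unlock_bh(&nf_conncount_locks[tree]);
```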
To fix this, split gc into two phases.
One phase performs gc on the connections: from a locking perspective,
this is the same as count_tree(): we hold the rcu read lock, but we do
not change the tree, we only change the nodes' contents.
The second phase acquires the tree lock and reaps empty nodes.
This avoids the race between the garbage collector and the packet path:
if a node has already been freed, the second phase simply won't find it
anymore.
From a locking perspective, this second phase is the same as
insert_tree(): the first phase only modifies nodes (list contents,
counts), while the second modifies the tree itself (rb_erase() or
rb_insert()).
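
Condensed, the post-patch worker body reads roughly as follows (taken
from the added lines in the diff below, with declarations and the
rescheduling tail omitted):

```c
/* Phase one: rcu read lock only; lists are aged, the tree is untouched. */
rcu_read_lock();
for (node = rb_first(root); node != NULL; node = rb_next(node)) {
	rbconn = rb_entry(node, struct nf_conncount_rb, node);
	if (nf_conncount_gc_list(data->net, &rbconn->list))
		gc_count++;		/* count only, record no address */
}
rcu_read_unlock();

/* Phase two: tree lock held; re-walk and reap nodes that are now empty. */
spin_lock_bh(&nf_conncount_locks[tree]);
if (gc_count < ARRAY_SIZE(gc_nodes))
	goto next;			/* too few empty lists, do not bother */

gc_count = 0;
node = rb_first(root);
while (node != NULL) {
	rbconn = rb_entry(node, struct nf_conncount_rb, node);
	node = rb_next(node);		/* advance before a possible erase */

	if (rbconn->list.count > 0)	/* still holds connections, keep */
		continue;

	gc_nodes[gc_count++] = rbconn;
	if (gc_count >= ARRAY_SIZE(gc_nodes)) {
		tree_nodes_free(root, gc_nodes, gc_count);
		gc_count = 0;		/* batch flushed, start a new one */
	}
}
tree_nodes_free(root, gc_nodes, gc_count);
next:
	clear_bit(tree, data->pending_trees);
```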
Fixes: 5c789e131cbb9 ("netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search")
Reviewed-by: Shawn Bohrer <sbohrer@cloudflare.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Diffstat (limited to 'net')
-rw-r--r-- | net/netfilter/nf_conncount.c | 22
1 file changed, 19 insertions, 3 deletions
```diff
diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
index 8bb4ed85c262..753132e4afa8 100644
--- a/net/netfilter/nf_conncount.c
+++ b/net/netfilter/nf_conncount.c
@@ -500,16 +500,32 @@ static void tree_gc_worker(struct work_struct *work)
 	for (node = rb_first(root); node != NULL; node = rb_next(node)) {
 		rbconn = rb_entry(node, struct nf_conncount_rb, node);
 		if (nf_conncount_gc_list(data->net, &rbconn->list))
-			gc_nodes[gc_count++] = rbconn;
+			gc_count++;
 	}
 	rcu_read_unlock();
 
 	spin_lock_bh(&nf_conncount_locks[tree]);
+	if (gc_count < ARRAY_SIZE(gc_nodes))
+		goto next; /* do not bother */
 
-	if (gc_count) {
-		tree_nodes_free(root, gc_nodes, gc_count);
+	gc_count = 0;
+	node = rb_first(root);
+	while (node != NULL) {
+		rbconn = rb_entry(node, struct nf_conncount_rb, node);
+		node = rb_next(node);
+
+		if (rbconn->list.count > 0)
+			continue;
+
+		gc_nodes[gc_count++] = rbconn;
+		if (gc_count >= ARRAY_SIZE(gc_nodes)) {
+			tree_nodes_free(root, gc_nodes, gc_count);
+			gc_count = 0;
+		}
 	}
 
+	tree_nodes_free(root, gc_nodes, gc_count);
+next:
 	clear_bit(tree, data->pending_trees);
 
 	next_tree = (tree + 1) % CONNCOUNT_SLOTS;
```
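Two details of the second phase are worth noting. First, phase one no
longer records node addresses at all, only a count; the candidates are
re-discovered under the tree lock by checking rbconn->list.count, so a
node freed in the meantime is simply never seen. Second, reaping is
batched: tree_nodes_free() is called whenever the gc_nodes scratch
array fills, and once more for the remainder, while the initial "do not
bother" check skips the locked walk unless phase one found at least a
full batch of newly emptied lists, keeping lock hold times short when
there is little to reap.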