author:    Daniel Borkmann <daniel@iogearbox.net>	2016-11-04 00:01:19 +0100
committer: David S. Miller <davem@davemloft.net>	2016-11-07 19:20:52 +0100
commit:    483bed2b0ddd12ec33fc9407e0c6e1088e77a97c
tree:      aa01c5eb2cc793ea5e3629ccf59c5977c28c0264 /kernel
parent:    sctp: assign assoc_id earlier in __sctp_connect
bpf: fix htab map destruction when extra reserve is in use
Commit a6ed3ea65d98 ("bpf: restore behavior of bpf_map_update_elem")
added an extra per-cpu reserve to the hash table map to restore the old
behaviour from pre-prealloc times. When non-prealloc is in use for a
map, the problem is that once an element from this extra reserve has
been linked into the hash table and the table is then destroyed because
its refcount drops to zero, htab_map_free() -> delete_all_elements()
walks the whole hash table and drops all elements via htab_elem_free().
The element from the extra reserve is thus first fed to the wrong
backend allocator and eventually freed twice.
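
To illustrate the bug class outside the kernel, here is a minimal
user-space sketch (all names such as ELEM_EXTRA_USED and extra_reserve
are invented for illustration, this is not kernel code): elements from
two different allocators sit on one list, and a teardown that blindly
calls free() on every node hands the reserve-backed element to the
wrong allocator, just as delete_all_elements() did before this fix.

#include <stdio.h>
#include <stdlib.h>

enum elem_state { ELEM_HEAP, ELEM_EXTRA_USED };

struct elem {
	struct elem *next;
	enum elem_state state;
};

/* Stand-in for the per-cpu extra reserve: not individually heap-allocated,
 * so it must never be passed to free(). */
static struct elem extra_reserve;

static void destroy_all(struct elem *head)
{
	while (head) {
		struct elem *n = head->next;

		/* Mirror of the fix: only heap-backed elements go back to
		 * free(); handing the reserve element to free() would be
		 * the "wrong backend allocator" from the commit message. */
		if (head->state != ELEM_EXTRA_USED)
			free(head);
		head = n;
	}
}

int main(void)
{
	struct elem *heap_el = calloc(1, sizeof(*heap_el));

	if (!heap_el)
		return 1;
	heap_el->state = ELEM_HEAP;

	/* Link both kinds of element into one list, as the htab did. */
	extra_reserve.state = ELEM_EXTRA_USED;
	extra_reserve.next = heap_el;

	destroy_all(&extra_reserve);	/* frees heap_el only */
	puts("teardown done, no double free");
	return 0;
}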
Fixes: a6ed3ea65d98 ("bpf: restore behavior of bpf_map_update_elem")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'kernel')

 kernel/bpf/hashtab.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 570eeca7bdfa..ad1bc67aff1b 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -687,7 +687,8 @@ static void delete_all_elements(struct bpf_htab *htab)
 
 		hlist_for_each_entry_safe(l, n, head, hash_node) {
 			hlist_del_rcu(&l->hash_node);
-			htab_elem_free(htab, l);
+			if (l->state != HTAB_EXTRA_ELEM_USED)
+				htab_elem_free(htab, l);
 		}
 	}
 }
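
Read in context, the fixed walk looks roughly like this (reconstructed
from the hunk above; the outer bucket loop and the select_bucket() call
are not part of the hunk and are an assumption about the surrounding
kernel/bpf/hashtab.c of that time):

static void delete_all_elements(struct bpf_htab *htab)
{
	int i;

	/* Assumed surrounding context: iterate over all hash buckets. */
	for (i = 0; i < htab->n_buckets; i++) {
		struct hlist_head *head = select_bucket(htab, i);
		struct hlist_node *n;
		struct htab_elem *l;

		hlist_for_each_entry_safe(l, n, head, hash_node) {
			hlist_del_rcu(&l->hash_node);
			/* Elements owned by the extra reserve are released
			 * together with the reserve itself, not here. */
			if (l->state != HTAB_EXTRA_ELEM_USED)
				htab_elem_free(htab, l);
		}
	}
}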