author | David S. Miller <davem@davemloft.net> | 2016-03-08 21:28:33 +0100
committer | David S. Miller <davem@davemloft.net> | 2016-03-08 21:28:33 +0100
commit | f14b488d50b7dc234ddaed53ce4293c9eac47457 (patch)
tree | ff802293cd7a0020211818c8039cd9f117f30a35 /include/net
parent | Merge branch 'ipv6-per-netns-gc' (diff)
parent | samples/bpf: test both pre-alloc and normal maps (diff)
Merge branch 'bpf-map-prealloc'
Alexei Starovoitov says:
====================
bpf: map pre-alloc
v1->v2:
. fix a few issues spotted by Daniel
. converted stackmap to pre-allocation as well
. added a workaround for a lockdep false positive
. added pcpu_freelist_populate to be used by hashmap and stackmap
This patch set switches the bpf hash map to pre-allocation by default
and introduces the BPF_F_NO_PREALLOC flag to keep the old behavior for
cases where full map pre-allocation is too expensive in memory.
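As a rough illustration of how the new flag is used from user space, the
hedged sketch below creates one hash map with the new pre-allocated
default and one with BPF_F_NO_PREALLOC, via the raw bpf(2) syscall. The
key/value sizes and max_entries are made-up example values, and it
assumes uapi headers new enough to define BPF_F_NO_PREALLOC.

```c
/* Hedged sketch: create a BPF hash map with and without pre-allocation.
 * Error handling is minimal; map dimensions are example values. */
#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

static int create_hash_map(__u32 map_flags)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.map_type    = BPF_MAP_TYPE_HASH;
	attr.key_size    = sizeof(__u32);
	attr.value_size  = sizeof(__u64);
	attr.max_entries = 10000;
	attr.map_flags   = map_flags;	/* 0 = pre-allocated (new default) */

	return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
}

int main(void)
{
	int prealloc_fd    = create_hash_map(0);
	int no_prealloc_fd = create_hash_map(BPF_F_NO_PREALLOC);

	printf("prealloc fd=%d, no-prealloc fd=%d\n",
	       prealloc_fd, no_prealloc_fd);
	return 0;
}
```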
Some time back Daniel Wagner reported crashes when a bpf hash map is
used to compute time intervals between preempt_disable->preempt_enable,
and recently Tom Zanussi reported a deadlock in the iovisor/bcc/funccount
tool when it is used to count the number of invocations of kernel
'*spin*' functions. Both problems are due to recursive use of the slub
allocator and can only be solved by pre-allocating all map elements.
Many different solutions were considered, and many were implemented,
but in the end pre-allocation turned out to be the only feasible answer.
Pre-allocation itself was also implemented in 4 different ways:
- simple free-list with single lock
- percpu_ida with optimizations
- blk-mq-tag variant customized for bpf use case
- percpu_freelist
For bpf-style alloc/free patterns, percpu_freelist works best and is the
variant implemented in this patch set.
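For illustration only, here is a conceptual user-space sketch of the
per-cpu freelist idea; it is not the kernel's kernel/bpf/percpu_freelist.c,
and the fixed NR_CPUS, the pthread spinlocks, and the explicit cpu
argument are assumptions of the sketch. Each "cpu" owns its own LIFO list
and lock, so the common case touches only local state, and pop falls back
to stealing from the other lists when the local one is empty.

```c
/* Conceptual per-cpu freelist sketch (user space, pthread-based). */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

#define NR_CPUS 4	/* assumed fixed cpu count for the sketch */

struct freelist_node {
	struct freelist_node *next;
};

struct freelist_head {
	pthread_spinlock_t lock;
	struct freelist_node *first;
};

struct pcpu_freelist {
	struct freelist_head heads[NR_CPUS];
};

void pcpu_freelist_init(struct pcpu_freelist *fl)
{
	for (int i = 0; i < NR_CPUS; i++) {
		pthread_spin_init(&fl->heads[i].lock, PTHREAD_PROCESS_PRIVATE);
		fl->heads[i].first = NULL;
	}
}

/* Push onto the list of the (assumed) current cpu. */
void pcpu_freelist_push(struct pcpu_freelist *fl, int cpu,
			struct freelist_node *node)
{
	struct freelist_head *head = &fl->heads[cpu];

	pthread_spin_lock(&head->lock);
	node->next = head->first;
	head->first = node;
	pthread_spin_unlock(&head->lock);
}

/* Pop from the local list first, then steal from the other cpus' lists. */
struct freelist_node *pcpu_freelist_pop(struct pcpu_freelist *fl, int cpu)
{
	for (int i = 0; i < NR_CPUS; i++) {
		struct freelist_head *head = &fl->heads[(cpu + i) % NR_CPUS];
		struct freelist_node *node;

		pthread_spin_lock(&head->lock);
		node = head->first;
		if (node)
			head->first = node->next;
		pthread_spin_unlock(&head->lock);
		if (node)
			return node;
	}
	return NULL;	/* every list is empty */
}

int main(void)
{
	struct pcpu_freelist fl;
	struct freelist_node nodes[8];
	int popped = 0;

	pcpu_freelist_init(&fl);
	/* Spread the free elements across the per-cpu lists up front,
	 * roughly what populating the freelist with map elements does. */
	for (int i = 0; i < 8; i++)
		pcpu_freelist_push(&fl, i % NR_CPUS, &nodes[i]);

	/* "cpu 0" drains its own list first, then steals from the rest. */
	while (pcpu_freelist_pop(&fl, 0))
		popped++;
	printf("popped %d elements\n", popped);
	return 0;
}
```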
Detailed performance numbers are in patch 3.
Patch 2 introduces percpu_freelist.
Patch 1 fixes simple deadlocks due to missing recursion checks.
Patch 5 converts stackmap to pre-allocation.
Patches 6-9 prepare the test infrastructure.
Patch 10 adds a stress test for the hash map infrastructure: it attaches
to spin_lock functions and calls bpf_map_update/delete from different
contexts (see the sketch after this list).
Patch 11 adds a stress test for bpf_get_stackid.
Patch 12 adds a map performance test.
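To make the patch 10 idea concrete, the hedged sketch below shows a
kprobe program in the old samples/bpf style that attaches to a
spin_lock-family function and performs a map update plus delete on every
hit, so map operations run in awkward contexts. The attach point, map
dimensions, and the "bpf_helpers.h" include are assumptions; this is not
the actual samples/bpf test source.

```c
/* Hedged sketch of a hash map stress test in the classic samples/bpf
 * style (restricted C compiled with clang -target bpf). */
#include <uapi/linux/bpf.h>
#include <uapi/linux/ptrace.h>
#include <linux/version.h>
#include "bpf_helpers.h"	/* assumed: samples/bpf helper macros */

struct bpf_map_def SEC("maps") stress_map = {
	.type = BPF_MAP_TYPE_HASH,
	.key_size = sizeof(long),
	.value_size = sizeof(long),
	.max_entries = 1024,
};

SEC("kprobe/_raw_spin_lock")	/* example attach point */
int stress_spin_lock(struct pt_regs *ctx)
{
	long key = bpf_get_smp_processor_id();
	long val = 1;

	/* update + delete on every spin_lock hit exercises the map paths
	 * from contexts where recursing into the allocator would be fatal */
	bpf_map_update_elem(&stress_map, &key, &val, BPF_ANY);
	bpf_map_delete_elem(&stress_map, &key);
	return 0;
}

char _license[] SEC("license") = "GPL";
__u32 _version SEC("version") = LINUX_VERSION_CODE;
```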
Reported-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Reported-by: Tom Zanussi <tom.zanussi@linux.intel.com>
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/net')
0 files changed, 0 insertions, 0 deletions