author     Jesper Dangaard Brouer <brouer@redhat.com>  2019-04-12 17:07:43 +0200
committer  Alexei Starovoitov <ast@kernel.org>  2019-04-18 04:09:25 +0200
commit     8f0504a97e1ba6b70e1c8b5a88255c280f263287
tree       762bb6478b65c42ba2e0e79f52d8421cc0dcb707 /kernel/bpf/xskmap.c
parent     net: core: introduce build_skb_around
bpf: cpumap do bulk allocation of SKBs
As cpumap now batch-consumes xdp_frame objects from the ptr_ring, it knows how many SKBs it needs to allocate. Thus, let's bulk-allocate these SKBs via the kmem_cache_alloc_bulk() API, and use the previously introduced function build_skb_around().

Notice that the flag __GFP_ZERO asks the slab/slub allocator to clear the memory for us. This clears a larger area than needed, but my micro-benchmarks on Intel CPUs show that this is slightly faster, because a cache-line-aligned area is cleared for the SKBs.

(For the SLUB allocator, there is future optimization potential, because SKBs will with high probability originate from the same page. If we can find/identify contiguous memory areas, then the Intel CPU's "rep stos" memset will show a real performance gain.)

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
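For reference, below is a minimal sketch of the pattern this commit describes, not the exact cpumap code: BATCH_SZ and process_frames() are illustrative names invented here, and the frame_size arithmetic is an assumption modeled on how cpumap sizes frames. kmem_cache_alloc_bulk(), build_skb_around(), and skbuff_head_cache are the real kernel APIs/symbols the commit relies on.

    /* Illustrative sketch, not the exact cpumap code. BATCH_SZ and
     * process_frames() are made-up names; kmem_cache_alloc_bulk(),
     * build_skb_around() and skbuff_head_cache are real kernel symbols.
     */
    #include <linux/skbuff.h>
    #include <linux/slab.h>
    #include <net/xdp.h>

    #define BATCH_SZ 8

    static void process_frames(struct xdp_frame **frames, int n)
    {
    	/* __GFP_ZERO: the slab allocator clears each skb for us */
    	gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
    	void *skbs[BATCH_SZ];
    	int i, m;

    	/* One bulk call instead of n separate kmem_cache_alloc() calls */
    	m = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, n, skbs);
    	if (unlikely(m == 0)) {
    		/* Total failure: NULL skbs make build_skb_around()
    		 * fail below, so those frames are returned/dropped.
    		 */
    		for (i = 0; i < n; i++)
    			skbs[i] = NULL;
    	}

    	for (i = 0; i < n; i++) {
    		struct xdp_frame *xdpf = frames[i];
    		unsigned int headroom = sizeof(*xdpf) + xdpf->headroom;
    		unsigned int frame_size;
    		struct sk_buff *skb;

    		frame_size = SKB_DATA_ALIGN(xdpf->len + headroom) +
    			     SKB_DATA_ALIGN(sizeof(struct skb_shared_info));

    		/* Wrap the pre-allocated, pre-zeroed skb around the
    		 * memory that already holds the xdp_frame's packet data.
    		 */
    		skb = build_skb_around(skbs[i], xdpf->data - headroom,
    				       frame_size);
    		if (unlikely(!skb)) {
    			xdp_return_frame(xdpf);
    			continue;
    		}
    		skb_reserve(skb, headroom);
    		skb_put(skb, xdpf->len);
    		/* ... hand skb to the stack, e.g. netif_receive_skb() ... */
    	}
    }

The design trade-off is the one the message states: __GFP_ZERO clears more bytes than strictly necessary, but one cache-line-aligned bulk clear in the allocator comes out slightly ahead of piecemeal initialization in the benchmarks cited above.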
Diffstat (limited to 'kernel/bpf/xskmap.c')
0 files changed, 0 insertions, 0 deletions