| author | Lorenz Bauer <lmb@cloudflare.com> | 2021-09-28 11:30:59 +0200 |
|---|---|---|
| committer | Alexei Starovoitov <ast@kernel.org> | 2021-09-29 23:10:05 +0200 |
| commit | de21d8bf777240c6d6dfefa39b4925729e32c0fd | |
| tree | a62f83fdc7bfe7b37db6455234c9992a31136834 /net/bpf | |
| parent | libbpf: Make gen_loader data aligned. | |
bpf: Do not invoke the XDP dispatcher for PROG_RUN with single repeat
We have a unit test that invokes an XDP program with one million different
inputs, i.e. one million BPF_PROG_RUN syscalls. We run this test concurrently,
with slight variations in how we generate the input.
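
For context, a single-shot test run of this kind is typically driven from user space via libbpf roughly as in the sketch below. This is an illustration only: prog_fd, pkt and pkt_len are placeholder names, not part of the test described here, and it assumes a libbpf that provides bpf_prog_test_run_opts().

/* Hedged sketch: one BPF_PROG_RUN invocation of an XDP program via libbpf.
 * prog_fd, pkt and pkt_len are placeholders; the real test harness mentioned
 * above is not reproduced here.
 */
#include <bpf/bpf.h>

static int run_xdp_once(int prog_fd, void *pkt, __u32 pkt_len)
{
	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
		.data_in = pkt,
		.data_size_in = pkt_len,
		.repeat = 1	/* a single run: the case this patch special-cases */
	);
	int err;

	err = bpf_prog_test_run_opts(prog_fd, &opts);
	if (err)
		return err;

	return opts.retval;	/* the XDP verdict, e.g. XDP_PASS */
}
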
Since commit f23c4b3924d2 ("bpf: Start using the BPF dispatcher in BPF_TEST_RUN")
the unit test has slowed down significantly. Digging deeper reveals that
the concurrent tests are serialised in the kernel on the XDP dispatcher,
a global resource protected by a mutex on which we contend.
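
To make the serialisation point concrete, the contended path looks roughly like the simplified sketch below. This is an illustration of the mechanism described above, not the verbatim kernel source; the identifiers are stand-ins for the dispatcher's internals.

/* Simplified illustration, not the actual kernel code: updating the XDP
 * dispatcher means taking one global mutex, so concurrent BPF_PROG_RUN
 * callers that go through this path queue up behind each other.
 */
static DEFINE_MUTEX(xdp_dispatcher_mutex);	/* stand-in for the dispatcher's mutex */

static void xdp_dispatcher_change_prog(struct bpf_prog *from, struct bpf_prog *to)
{
	mutex_lock(&xdp_dispatcher_mutex);	/* every concurrent test run contends here */
	/* ... rewrite the dispatcher trampoline to target 'to' ... */
	mutex_unlock(&xdp_dispatcher_mutex);
}
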
Fix this by not calling into the XDP dispatcher if we only want to perform
a single run of the BPF program.
See: https://lore.kernel.org/bpf/CACAyw9_y4QumOW35qpgTbLsJ532uGq-kVW-VESJzGyiZkypnvw@mail.gmail.com/
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210928093100.27124-1-lmb@cloudflare.com
Diffstat (limited to 'net/bpf')
-rw-r--r-- | net/bpf/test_run.c | 6 |
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index fcb2f493f710..6593a71dba5f 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -803,7 +803,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 	if (ret)
 		goto free_data;
 
-	bpf_prog_change_xdp(NULL, prog);
+	if (repeat > 1)
+		bpf_prog_change_xdp(NULL, prog);
 	ret = bpf_test_run(prog, &xdp, repeat, &retval, &duration, true);
 	/* We convert the xdp_buff back to an xdp_md before checking the return
 	 * code so the reference count of any held netdevice will be decremented
@@ -824,7 +825,8 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 			     sizeof(struct xdp_md));
 
 out:
-	bpf_prog_change_xdp(prog, NULL);
+	if (repeat > 1)
+		bpf_prog_change_xdp(prog, NULL);
 free_data:
 	kfree(data);
 free_ctx: