author    | David S. Miller <davem@davemloft.net> | 2021-07-10 00:22:45 +0200
committer | David S. Miller <davem@davemloft.net> | 2021-07-10 00:22:45 +0200
commit    | 5d52c906f059b9ee11747557aaaf1fd85a3b6c3d (patch)
tree      | 6d8a1f863940a1cc6dbb6e7e9968cfa956c9abbb /arch
parent    | net: validate lwtstate->data before returning from skb_tunnel_info() (diff)
parent    | bpf: Selftest to verify mixing bpf2bpf calls and tailcalls with insn patch (diff)
download  | linux-5d52c906f059b9ee11747557aaaf1fd85a3b6c3d.tar.xz, linux-5d52c906f059b9ee11747557aaaf1fd85a3b6c3d.zip
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
Daniel Borkmann says:
====================
pull-request: bpf 2021-07-09
The following pull-request contains BPF updates for your *net* tree.
We've added 9 non-merge commits during the last 9 day(s) which contain
a total of 13 files changed, 118 insertions(+), 62 deletions(-).
The main changes are:
1) Fix runqslower task->state access from BPF, from SanjayKumar Jeyakumar.
2) Fix subprog poke descriptor tracking use-after-free, from John Fastabend.
3) Fix sparse complaint from prior devmap RCU conversion, from Toke Høiland-Jørgensen.
4) Fix missing va_end in bpftool JIT json dump's error path, from Gu Shengxian.
5) Fix tools/bpf install target from missing runqslower install, from Wei Li.
6) Fix xdpsock BPF sample to unload program on shared umem option, from Wang Hai.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'arch')
-rw-r--r-- | arch/x86/net/bpf_jit_comp.c | 3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e835164189f1..4b951458c9fc 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -570,6 +570,9 @@ static void bpf_tail_call_direct_fixup(struct bpf_prog *prog)
 	for (i = 0; i < prog->aux->size_poke_tab; i++) {
 		poke = &prog->aux->poke_tab[i];
+		if (poke->aux && poke->aux != prog->aux)
+			continue;
+
 		WARN_ON_ONCE(READ_ONCE(poke->tailcall_target_stable));
 
 		if (poke->reason != BPF_POKE_REASON_TAIL_CALL)
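For context on the hunk above, which is the arch-side piece of fix 2) (subprog poke descriptor tracking): the added guard makes the JIT's tail-call fixup pass patch only poke descriptors that the current program actually owns, leaving entries tracked on behalf of a subprog to that subprog's own pass. What follows is a rough, standalone sketch of that ownership-check pattern, not kernel code; every demo_* name is invented for illustration and only the guard condition mirrors the patch.

/* Standalone illustration: each descriptor records which program aux it
 * belongs to, and a program's fixup pass skips descriptors owned by a
 * different (sub)program, mirroring the poke->aux != prog->aux guard above.
 */
#include <stdio.h>
#include <stddef.h>

struct demo_aux {
	const char *name;
};

struct demo_poke_desc {
	struct demo_aux *aux;   /* owning program's aux, or NULL */
	int target;             /* pretend jump target to patch in */
};

struct demo_prog {
	struct demo_aux *aux;
	struct demo_poke_desc *poke_tab;
	size_t size_poke_tab;
};

/* Patch only the descriptors this program actually owns. */
static void demo_tail_call_fixup(struct demo_prog *prog)
{
	for (size_t i = 0; i < prog->size_poke_tab; i++) {
		struct demo_poke_desc *poke = &prog->poke_tab[i];

		/* Descriptor tracked on behalf of another (sub)program:
		 * leave it for that program's own fixup pass.
		 */
		if (poke->aux && poke->aux != prog->aux)
			continue;

		printf("%s: patching poke[%zu] -> %d\n",
		       prog->aux->name, i, poke->target);
	}
}

int main(void)
{
	struct demo_aux main_aux = { .name = "main" };
	struct demo_aux sub_aux  = { .name = "subprog" };

	struct demo_poke_desc tab[] = {
		{ .aux = &main_aux, .target = 10 },
		{ .aux = &sub_aux,  .target = 20 }, /* owned by the subprog */
		{ .aux = &main_aux, .target = 30 },
	};

	struct demo_prog main_prog = {
		.aux = &main_aux,
		.poke_tab = tab,
		.size_poke_tab = sizeof(tab) / sizeof(tab[0]),
	};

	demo_tail_call_fixup(&main_prog); /* skips the subprog-owned entry */
	return 0;
}

Run as written, the sketch patches entries 0 and 2 and skips entry 1, which is the behavior the added guard gives the JIT's fixup loop.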