author	Andrii Nakryiko <andriin@fb.com>	2020-05-14 07:51:37 +0200
committer	Alexei Starovoitov <ast@kernel.org>	2020-05-15 03:37:32 +0200
commit	c70f34a8ac66c2cb05593ef5760142e5f862a9b4 (patch)
tree	8c0045ceb528c231cc94e10534e305f9d196cc3c /kernel/bpf
parent	selftests/bpf: Test narrow loads for bpf_sock_addr.user_port (diff)
bpf: Fix bpf_iter's task iterator logic
task_seq_get_next() might stop prematurely if get_pid_task() fails to get a
task_struct. Such a failure doesn't mean that there are no more tasks with
higher pids; procfs's iteration algorithm (see next_tgid() in fs/proc/base.c)
retries in that case. After this fix, instead of stopping prematurely after
about 300 tasks on my server, the bpf_iter program now returns >4000, which
sounds much closer to reality.

Fixes: eaaacd23910f ("bpf: Add task and task/file iterator targets")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200514055137.1564581-1-andriin@fb.com
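For comparison, this is the retry pattern the commit message points to. A
lightly abridged sketch of next_tgid() from fs/proc/base.c of roughly this
era (the tgid_iter struct definition and release of the previously returned
task are omitted, and minor details may differ from the exact tree):

	/* Sketch of the procfs retry loop (next_tgid() in fs/proc/base.c).
	 * A pid number can still be allocated while its task has already
	 * exited; in that case, bump the search key and keep scanning
	 * rather than ending the iteration.
	 */
	rcu_read_lock();
retry:
	iter.task = NULL;
	pid = find_ge_pid(iter.tgid, ns);	/* next pid >= iter.tgid */
	if (pid) {
		iter.tgid = pid_nr_ns(pid, ns);
		iter.task = pid_task(pid, PIDTYPE_TGID);
		if (!iter.task) {		/* stale pid, task reaped */
			iter.tgid += 1;		/* skip past it ... */
			goto retry;		/* ... and keep searching */
		}
		get_task_struct(iter.task);
	}
	rcu_read_unlock();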
Diffstat (limited to 'kernel/bpf')
-rw-r--r--	kernel/bpf/task_iter.c	8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
index a9b7264dda08..4dbf2b6035f8 100644
--- a/kernel/bpf/task_iter.c
+++ b/kernel/bpf/task_iter.c
@@ -27,9 +27,15 @@ static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
 	struct pid *pid;
 
 	rcu_read_lock();
+retry:
 	pid = idr_get_next(&ns->idr, tid);
-	if (pid)
+	if (pid) {
 		task = get_pid_task(pid, PIDTYPE_PID);
+		if (!task) {
+			++*tid;
+			goto retry;
+		}
+	}
 	rcu_read_unlock();
 
 	return task;
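
For readability, here is the lookup as it reads after the patch, reconstructed
from the hunk above. The second parameter (u32 *tid) and the task = NULL
initializer are not visible in the hunk and are assumed:

static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
					     u32 *tid)	/* assumed signature */
{
	struct task_struct *task = NULL;	/* assumed initializer */
	struct pid *pid;

	rcu_read_lock();
retry:
	/* Find the next allocated pid number >= *tid in this namespace. */
	pid = idr_get_next(&ns->idr, tid);
	if (pid) {
		task = get_pid_task(pid, PIDTYPE_PID);
		if (!task) {
			/* The pid is still allocated but its task is gone:
			 * advance past it and retry instead of ending the
			 * iteration prematurely.
			 */
			++*tid;
			goto retry;
		}
	}
	rcu_read_unlock();

	return task;
}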