author | Jens Axboe <axboe@kernel.dk> | 2023-01-27 17:50:31 +0100
committer | Jens Axboe <axboe@kernel.dk> | 2023-01-29 23:17:41 +0100
commit | f58680085478dd292435727210122960d38e8014 (patch)
tree | 0df3ac9140ae15f514a0d775484243c26b1ae098 /io_uring
parent | io_uring: add a conditional reschedule to the IOPOLL cancelation loop (diff)
io_uring: add reschedule point to handle_tw_list()
If CONFIG_PREEMPT_NONE is set and the task_work chains are long, we
could be running into issues blocking others for too long. Add a
reschedule check in handle_tw_list(), and flush the ctx if we need to
reschedule.
Cc: stable@vger.kernel.org # 5.10+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'io_uring')
-rw-r--r-- | io_uring/io_uring.c | 8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index fab581a31dc1..acf6d9680d76 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1179,10 +1179,16 @@ static unsigned int handle_tw_list(struct llist_node *node,
 			/* if not contended, grab and improve batching */
 			*locked = mutex_trylock(&(*ctx)->uring_lock);
 			percpu_ref_get(&(*ctx)->refs);
-		}
+		} else if (!*locked)
+			*locked = mutex_trylock(&(*ctx)->uring_lock);
 		req->io_task_work.func(req, locked);
 		node = next;
 		count++;
+		if (unlikely(need_resched())) {
+			ctx_flush_and_put(*ctx, locked);
+			*ctx = NULL;
+			cond_resched();
+		}
 	}
 
 	return count;
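For readers who only have the hunk in front of them, below is a sketch of how the per-request loop in handle_tw_list() reads once this patch is applied. It is reconstructed from the hunk context, not the verbatim kernel function: the signature beyond the @@ line, the llist traversal, and the req->ctx bookkeeping are approximations, and the snippet is not standalone-compilable since it relies on kernel-internal types and helpers.

```c
/*
 * Sketch only: an approximation of handle_tw_list() after this patch,
 * reconstructed from the hunk above. Lines outside the hunk are assumed.
 */
static unsigned int handle_tw_list(struct llist_node *node,
				   struct io_ring_ctx **ctx, bool *locked,
				   struct llist_node *last)
{
	unsigned int count = 0;

	while (node != last) {
		struct llist_node *next = node->next;
		struct io_kiocb *req = container_of(node, struct io_kiocb,
						    io_task_work.node);

		if (req->ctx != *ctx) {
			/* switching rings: flush and drop the previous ctx */
			ctx_flush_and_put(*ctx, locked);
			*ctx = req->ctx;
			/* if not contended, grab and improve batching */
			*locked = mutex_trylock(&(*ctx)->uring_lock);
			percpu_ref_get(&(*ctx)->refs);
		} else if (!*locked) {
			/* same ctx but the lock isn't held yet: try again */
			*locked = mutex_trylock(&(*ctx)->uring_lock);
		}

		req->io_task_work.func(req, locked);
		node = next;
		count++;

		/*
		 * New with this patch: with CONFIG_PREEMPT_NONE a long
		 * task_work chain would otherwise run without a preemption
		 * point. Flush and drop the ctx before yielding so nothing
		 * is held across the reschedule.
		 */
		if (unlikely(need_resched())) {
			ctx_flush_and_put(*ctx, locked);
			*ctx = NULL;
			cond_resched();
		}
	}

	return count;
}
```

The notable design point visible in the hunk is that ctx_flush_and_put() runs before cond_resched() and *ctx is cleared, so the loop does not sit on the ctx reference or the uring_lock while yielding; the next iteration re-acquires them through the req->ctx != *ctx path.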