| author | Pavel Begunkov <asml.silence@gmail.com> | 2021-07-08 14:37:06 +0200 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2021-07-08 22:07:43 +0200 |
| commit | 8f487ef2cbb2d4f6ca8c113d70da63baaf68c91a | |
| tree | d5bbc637a1d26e833810b7c52bed9d353bf99fd6 /fs/io_uring.c | |
| parent | io_uring: fix drain alloc fail return code | |
io_uring: mitigate unlikely iopoll lag
We have requests like IORING_OP_FILES_UPDATE that don't go through
->iopoll_list but get completed in place under ->uring_lock, so after
dropping the lock io_iopoll_check() should expect that some CQEs may
have been completed in the meantime.
Currently such events won't be accounted in @nr_events, and the loop
will continue to poll even if there are already enough CQEs. It
shouldn't be a real problem as it's unlikely to happen, but it's not
nice either. Just return earlier in this case; that should be enough.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/66ef932cc66a34e3771bbae04b2953a8058e9d05.1625747741.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
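The pattern the patch relies on can be sketched in isolation: snapshot the CQ tail counter before dropping the lock, and after reacquiring it, stop polling if the tail moved (meaning completions were posted off the iopoll path while the lock was released). The sketch below is a toy model, not the real kernel code; names like `toy_ctx`, `run_task_work`, and `should_break` are illustrative stand-ins for `io_ring_ctx`, `io_run_task_work()`, and the loop-break condition in `io_iopoll_check()`.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Toy stand-in for io_ring_ctx: just the fields the pattern needs. */
struct toy_ctx {
	pthread_mutex_t lock;        /* stands in for ->uring_lock */
	unsigned int cached_cq_tail; /* bumped for every posted CQE */
	bool iopoll_list_empty;      /* stands in for list_empty(&ctx->iopoll_list) */
};

/* Simulates task work that completes a request in place (an
 * IORING_OP_FILES_UPDATE-style request) while the lock is dropped. */
static void run_task_work(struct toy_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	ctx->cached_cq_tail++;       /* a CQE was posted off the iopoll path */
	pthread_mutex_unlock(&ctx->lock);
}

/* Returns true when the poll loop should stop: either the iopoll list
 * is still empty, or CQEs appeared while the lock was dropped. */
static bool should_break(struct toy_ctx *ctx)
{
	bool stop;
	unsigned int tail;

	pthread_mutex_lock(&ctx->lock);
	/* snapshot the tail before giving up the lock */
	tail = ctx->cached_cq_tail;

	pthread_mutex_unlock(&ctx->lock);
	run_task_work(ctx);
	pthread_mutex_lock(&ctx->lock);

	/* the fix: also stop if completions happened in the meantime */
	stop = (tail != ctx->cached_cq_tail) || ctx->iopoll_list_empty;
	pthread_mutex_unlock(&ctx->lock);
	return stop;
}
```

Comparing the cached tail is cheap and needs no extra synchronization here because the counter is only read and written under the lock; the wraparound of the unsigned counter is harmless since only inequality is tested.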
Diffstat (limited to 'fs/io_uring.c')
| -rw-r--r-- | fs/io_uring.c | 6 |
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 8f2a66903f5a..7167c61c6d1b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2356,11 +2356,15 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
 		 * very same mutex.
 		 */
 		if (list_empty(&ctx->iopoll_list)) {
+			u32 tail = ctx->cached_cq_tail;
+
 			mutex_unlock(&ctx->uring_lock);
 			io_run_task_work();
 			mutex_lock(&ctx->uring_lock);
-			if (list_empty(&ctx->iopoll_list))
+			/* some requests don't go through iopoll_list */
+			if (tail != ctx->cached_cq_tail ||
+			    list_empty(&ctx->iopoll_list))
 				break;
 		}
 		ret = io_do_iopoll(ctx, &nr_events, min);