author     Pavel Begunkov <asml.silence@gmail.com>    2020-07-23 19:25:20 +0200
committer  Jens Axboe <axboe@kernel.dk>               2020-07-24 21:00:46 +0200
commit     ae34817bd93e373a03203a4c6892735c430a14e1
tree       36e31c3d6eb575289ab4ad10a293c661fa3b8396
parent     io_uring: clear IORING_SQ_NEED_WAKEUP after executing task works
io_uring: don't do opcode prep twice
Calling into opcode prep handlers may be dangerous, as they re-read
the SQE but might not re-initialise the request completely. If
io_req_defer() has passed its fast checks and is already done with
preparation, punt the request to async work instead of returning 0.
As all other paths cover this by nulling @sqe, this guarantees that
io_[opcode]_prep() handlers are visited only once per request.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
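For context, the submission path consumes io_req_defer()'s return value
roughly as below. This is a simplified sketch assumed from the 5.8-era
io_uring code, not the verbatim kernel source; the point is that
-EIOCBQUEUED now stops the caller from falling through to the inline
issue path, which would re-run opcode prep against the SQE.

/*
 * Simplified sketch (not verbatim kernel code) of how the caller
 * handles io_req_defer().  With this patch, -EIOCBQUEUED means the
 * request was already punted to async work, so we must not fall
 * through to __io_queue_sqe(), which would call io_[opcode]_prep()
 * against @sqe a second time.
 */
static void io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
	int ret;

	ret = io_req_defer(req, sqe);
	if (ret) {
		if (ret != -EIOCBQUEUED) {
			/* real error: fail and complete the request */
			req_set_fail_links(req);
			io_req_complete(req, ret);
		}
		/* -EIOCBQUEUED: already queued for async execution */
		return;
	}
	__io_queue_sqe(req, sqe);	/* issue inline, prepping from @sqe */
}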
fs/io_uring.c | 3 +-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6f3f18a99f4f..38e4c3902963 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5447,7 +5447,8 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
 		spin_unlock_irq(&ctx->completion_lock);
 		kfree(de);
-		return 0;
+		io_queue_async_work(req);
+		return -EIOCBQUEUED;
 	}
 
 	trace_io_uring_defer(ctx, req, req->user_data);
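The "only once" guarantee rests on the convention that issue-time prep
runs only when @sqe is non-NULL. A condensed sketch of that pattern,
assumed from the io_uring issue path of the same era and trimmed to a
single opcode for illustration:

/*
 * Condensed sketch of the issue path's prep convention (one opcode
 * shown; the real switch covers all of them).  Replayed requests,
 * e.g. async punts and drained requests, are issued with sqe == NULL,
 * so the prep handler parses the SQE at most once per request.
 */
static int io_issue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
			bool force_nonblock)
{
	int ret;

	switch (req->opcode) {
	case IORING_OP_READV:
		if (sqe) {
			/* first visit: read the SQE exactly once */
			ret = io_read_prep(req, sqe, !force_nonblock);
			if (ret < 0)
				break;
		}
		ret = io_read(req, force_nonblock);
		break;
	/* ... other opcodes follow the same if (sqe) pattern ... */
	default:
		ret = -EINVAL;
		break;
	}
	return ret;
}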