author		Pavel Begunkov <asml.silence@gmail.com>	2021-03-04 14:59:25 +0100
committer	Jens Axboe <axboe@kernel.dk>	2021-03-04 23:45:01 +0100
commit		dd59a3d595cc10230ded4c8b727b096e16bceeb5 (patch)
tree		137825498fc28aa2199bcf7350561fc7506f8eed
parent		io_uring: cancel-match based on flags (diff)
download	linux-dd59a3d595cc10230ded4c8b727b096e16bceeb5.tar.xz
		linux-dd59a3d595cc10230ded4c8b727b096e16bceeb5.zip
io_uring: reliably cancel linked timeouts
Linked timeouts are fired asynchronously (i.e. from soft-irq context) and use the generic cancellation paths to do their work, including poking into io-wq. The problem is that accessing tctx->io_wq is racy, as io_uring_task_cancel() and others may be running at this exact moment.

Mark linked timeouts with REQ_F_INFLIGHT for now, making sure there are no such timeouts left before io-wq destruction.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-rw-r--r--	fs/io_uring.c	1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index fb4abea1e5d6..e55369555e5c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5500,6 +5500,7 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	data->mode = io_translate_timeout_mode(flags);
 	hrtimer_init(&data->timer, CLOCK_MONOTONIC, data->mode);
+	io_req_track_inflight(req);
 	return 0;
 }
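
For context, a rough sketch of what "tracking a request as inflight" can look like in fs/io_uring.c of this vintage. The exact body of io_req_track_inflight() at this commit may differ; the inflight_lock/inflight_list/inflight_entry names below are assumptions used for illustration only.

/*
 * Illustrative sketch only: mark a request as "inflight" so that task
 * cancellation paths can find it and drain it before io-wq is torn down.
 * Field names (inflight_lock, inflight_list, inflight_entry) are
 * assumptions and may not match the kernel source at this commit.
 */
static void io_req_track_inflight(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;

	if (!(req->flags & REQ_F_INFLIGHT)) {
		req->flags |= REQ_F_INFLIGHT;

		spin_lock_irq(&ctx->inflight_lock);
		list_add(&req->inflight_entry, &ctx->inflight_list);
		spin_unlock_irq(&ctx->inflight_lock);
	}
}

By calling this from io_timeout_prep(), every linked timeout is accounted for before it is armed, so io_uring_task_cancel() and friends can wait for or cancel it instead of racing against tctx->io_wq teardown.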