author     Keith Busch <kbusch@kernel.org>            2021-05-18 00:36:43 +0200
committer  Christoph Hellwig <hch@lst.de>             2021-05-19 08:33:42 +0200
commit     a0fdd1418007f83565d3f2e04b47923ba93a9b8c (patch)
tree       f605ad8eb9589ecf11e75d8c1b0853b99ba74749 /drivers/nvme/host/tcp.c
parent     nvme-tcp: fix possible use-after-completion (diff)
nvme-tcp: rerun io_work if req_list is not empty
A possible race condition exists where a request to send data that is
enqueued from nvme_tcp_handle_r2t() will not be observed by
nvme_tcp_send_all() if that function happens to be running concurrently.
The driver relies on io_work to send the enqueued request when it runs
again, but the concurrently running nvme_tcp_send_all() may not have
released the send_mutex at that time. If no future commands are enqueued
to re-kick io_work, the request will time out in the SEND_H2C state,
resulting in an error like:
nvme nvme0: queue 1: timeout request 0x3 type 6
Ensure the io_work continues to run as long as the req_list is not empty.
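For context, the sketch below is a simplified, abbreviated rendering of the
enqueue path (nvme_tcp_queue_request() in drivers/nvme/host/tcp.c); it omits
detail not relevant to this race and is not a verbatim copy of the kernel
source:

/*
 * Simplified sketch of the enqueue path, abbreviated from
 * nvme_tcp_queue_request(); not verbatim kernel code.
 */
static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
		bool sync, bool last)
{
	struct nvme_tcp_queue *queue = req->queue;
	bool empty;

	/* The R2T handler adds the data request to the lockless req_list. */
	empty = llist_add(&req->lentry, &queue->req_list) &&
		list_empty(&queue->send_list) && !queue->request;

	if (queue->io_cpu == raw_smp_processor_id() && sync && empty &&
	    mutex_trylock(&queue->send_mutex)) {
		/* Fast path: send inline while holding send_mutex. */
		nvme_tcp_send_all(queue);
		mutex_unlock(&queue->send_mutex);
	} else if (last) {
		/*
		 * Otherwise rely on io_work. If io_work is already running
		 * and loses the send_mutex trylock to a concurrent sender,
		 * it must reschedule itself or this request is never sent.
		 */
		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
	}
}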
Fixes: db5ad6b7f8cdd ("nvme-tcp: try to send request in queue_rq context")
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'drivers/nvme/host/tcp.c')
 drivers/nvme/host/tcp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index b97d2732a80f..34f4b3402f7c 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1140,7 +1140,8 @@ static void nvme_tcp_io_work(struct work_struct *w)
 				pending = true;
 			else if (unlikely(result < 0))
 				break;
-		}
+		} else
+			pending = !llist_empty(&queue->req_list);
 
 		result = nvme_tcp_try_recv(queue);
 		if (result > 0)
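For readers looking at the hunk out of context, the following is a simplified
sketch of the nvme_tcp_io_work() loop with this change applied; it is
abbreviated from drivers/nvme/host/tcp.c of that era and is not a
line-for-line copy:

/* Simplified sketch of nvme_tcp_io_work() with the fix applied. */
static void nvme_tcp_io_work(struct work_struct *w)
{
	struct nvme_tcp_queue *queue =
		container_of(w, struct nvme_tcp_queue, io_work);
	unsigned long deadline = jiffies + msecs_to_jiffies(1);

	do {
		bool pending = false;
		int result;

		if (mutex_trylock(&queue->send_mutex)) {
			result = nvme_tcp_try_send(queue);
			mutex_unlock(&queue->send_mutex);
			if (result > 0)
				pending = true;
			else if (unlikely(result < 0))
				break;
		} else
			/*
			 * A concurrent sender holds send_mutex; if requests
			 * are still queued, stay pending so the loop below
			 * reschedules io_work instead of silently returning.
			 */
			pending = !llist_empty(&queue->req_list);

		result = nvme_tcp_try_recv(queue);
		if (result > 0)
			pending = true;
		else if (unlikely(result < 0))
			return;

		if (!pending)
			return;
	} while (!time_after(jiffies, deadline)); /* per-invocation quota */

	queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
}

With the new else branch, a request parked on req_list while another context
holds send_mutex keeps io_work pending, so it reschedules itself and sends the
request once the mutex becomes available, instead of waiting for a future
command to kick it.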