author | Jens Axboe <axboe@kernel.dk> | 2022-09-22 19:41:51 +0200
committer | Jens Axboe <axboe@kernel.dk> | 2022-09-30 15:49:11 +0200
commit | 851eb780decb7180bcf09fad0035cba9aae669df (patch)
tree | d089afb600e24f9fdca9c0e2bbe2d2871b832b12 /drivers/nvme
parent | nvme: split out metadata vs non metadata end_io uring_cmd completions (diff)
nvme: enable batched completions of passthrough IO
Now that the normal passthrough end_io path doesn't need the request
anymore, we can kill the explicit blk_mq_free_request() and just pass
back RQ_END_IO_FREE instead. This enables the batched completion path to
free batches of requests at that time.
This brings passthrough IO performance at least on par with bdev-based
O_DIRECT with io_uring. With this and batched allocations, peak performance
goes from 110M IOPS to 122M IOPS. For IRQ-based completions, passthrough is
now also about 10% faster than before, going from ~61M to ~67M IOPS.
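For reference, a minimal sketch of the handler-side pattern this change relies
on, simplified from nvme_uring_cmd_end_io() in the diff below. The function
name sketch_uring_cmd_end_io is hypothetical; the metadata variant, error
propagation, and the inline-completion branch are elided:

static enum rq_end_io_ret sketch_uring_cmd_end_io(struct request *req,
						  blk_status_t err)
{
	struct io_uring_cmd *ioucmd = req->end_io_data;

	/* Punt the io_uring completion to task context. */
	io_uring_cmd_complete_in_task(ioucmd, nvme_uring_task_cb);

	/*
	 * The handler used to free the request itself:
	 *	blk_mq_free_request(req);
	 *	return RQ_END_IO_NONE;
	 * Returning RQ_END_IO_FREE instead hands ownership back to the
	 * block layer, which may free the request as part of a batch.
	 */
	return RQ_END_IO_FREE;
}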
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Co-developed-by: Stefan Roesch <shr@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'drivers/nvme')
-rw-r--r-- | drivers/nvme/host/ioctl.c | 3 |
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index f9d1f7e4d6d1..914b142b6f2b 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -430,8 +430,7 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
 	else
 		io_uring_cmd_complete_in_task(ioucmd, nvme_uring_task_cb);
 
-	blk_mq_free_request(req);
-	return RQ_END_IO_NONE;
+	return RQ_END_IO_FREE;
 }
 
 static enum rq_end_io_ret nvme_uring_cmd_end_io_meta(struct request *req,
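On the caller side, a simplified sketch of the ownership contract implied by
enum rq_end_io_ret; this is illustrative only (sketch_complete_request is a
hypothetical name, not the real blk-mq completion code). The batched
completion path can likewise collect requests whose end_io returned
RQ_END_IO_FREE and free them together instead of one at a time:

/*
 * Illustrative only: shows the ownership contract implied by
 * enum rq_end_io_ret, not the actual blk-mq implementation.
 */
static void sketch_complete_request(struct request *rq, blk_status_t error)
{
	if (rq->end_io) {
		/* The driver callback decides who frees the request. */
		if (rq->end_io(rq, error) == RQ_END_IO_FREE)
			blk_mq_free_request(rq);
	} else {
		blk_mq_free_request(rq);
	}
}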