author		Max Gurtovoy <mgurtovoy@nvidia.com>	2021-09-22 23:55:35 +0200
committer	Christoph Hellwig <hch@lst.de>		2021-10-20 19:16:01 +0200
commit		44c3c6257e99c6284f312206de73783575fc8906
tree		dad6ff990003101ecc36fbe1dcaa87492073c5d3 /drivers/nvme/host
parent		nvmet-tcp: fix use-after-free when a port is removed
nvme-rdma: limit the maximal queue size for RDMA controllers
The current limit of 1024 isn't valid for some of the RDMA-based controllers.
If the target exposes a CAP with a larger number of entries (e.g. 1024), the
initiator may fail to create a QP of this size. Thus, limit the queue size to
a value that works for all RDMA adapters.
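For context, the host derives sqsize from the Maximum Queue Entries Supported
(MQES) field of the controller's CAP register, so a target advertising a deep
queue drives the host toward an equally deep QP. A minimal sketch of that
derivation, using the NVME_CAP_MQES() helper from the NVMe core headers (the
surrounding setup code is paraphrased for illustration, not quoted from this
patch):

	/* MQES is a 0's-based value: a CAP reporting 1023 means 1024 entries. */
	u16 mqes = NVME_CAP_MQES(ctrl->cap);

	/* sqsize is likewise 0's based; never exceed what CAP advertises. */
	ctrl->sqsize = min_t(u16, ctrl->sqsize, mqes);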
A future general solution should use the RDMA/core API to calculate this size
according to device capabilities and the number of WRs needed per NVMe I/O
request.
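A rough sketch of what that future calculation might look like; max_qp_wr is
a real field of the RDMA/core's struct ib_device_attr, but the helper name and
the per-request WR count below are assumptions made for illustration only:

/* Hypothetical helper: bound the queue depth by what the device can post. */
static u32 nvme_rdma_dev_max_queue_size(struct ib_device *ibdev)
{
	/* Assumed: each NVMe I/O may post a few WRs (send + MR + invalidate). */
	const u32 wrs_per_request = 3;

	return ibdev->attrs.max_qp_wr / wrs_per_request;
}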
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'drivers/nvme/host')
-rw-r--r--	drivers/nvme/host/rdma.c	7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 1624da3702d4..027ee57cbdb0 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1112,6 +1112,13 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 			ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
 	}
 
+	if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
+		dev_warn(ctrl->ctrl.device,
+			"ctrl sqsize %u > max queue size %u, clamping down\n",
+			ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
+		ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+	}
+
 	if (ctrl->ctrl.sqsize + 1 > ctrl->ctrl.maxcmd) {
 		dev_warn(ctrl->ctrl.device,
 			"sqsize %u > ctrl maxcmd %u, clamping down\n",
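With this change, a target advertising a deeper queue than the host-side cap
triggers the new warning instead of a QP creation failure. Assuming
NVME_RDMA_MAX_QUEUE_SIZE is 128 (the value the companion header change in this
series is understood to define; it lies outside the drivers/nvme/host diffstat
shown here), a target advertising 1024 entries would log "ctrl sqsize 1024 >
max queue size 128, clamping down" and the host would continue with sqsize 127,
sqsize being a 0's-based value.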