author | Chuck Lever <chuck.lever@oracle.com> | 2017-08-28 21:06:14 +0200
---|---|---
committer | J. Bruce Fields <bfields@redhat.com> | 2017-09-05 21:15:30 +0200
commit | 0062818298662d0d05061949d12880146b5ebd65 (patch) |
tree | 50f9a9d6b223eba747b65857157a87bdb1356c2c /include/rdma |
parent | svcrdma: Limit RQ depth (diff) |
rdma core: Add rdma_rw_mr_payload()
The amount of payload per MR depends on device capabilities and
the memory registration mode in use. The new rdma_rw API hides both,
making it difficult for ULPs to determine how large their transport
send queues need to be.
Expose the MR payload information via a new API.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Acked-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Diffstat (limited to 'include/rdma')
-rw-r--r-- | include/rdma/rw.h | 2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/include/rdma/rw.h b/include/rdma/rw.h
index 377d865e506d..a3cbbc7b6417 100644
--- a/include/rdma/rw.h
+++ b/include/rdma/rw.h
@@ -81,6 +81,8 @@ struct ib_send_wr *rdma_rw_ctx_wrs(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 int rdma_rw_ctx_post(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 		struct ib_cqe *cqe, struct ib_send_wr *chain_wr);
 
+unsigned int rdma_rw_mr_factor(struct ib_device *device, u8 port_num,
+		unsigned int maxpages);
 void rdma_rw_init_qp(struct ib_device *dev, struct ib_qp_init_attr *attr);
 int rdma_rw_init_mrs(struct ib_qp *qp, struct ib_qp_init_attr *attr);
 void rdma_rw_cleanup_mrs(struct ib_qp *qp);