author	Chuck Lever <chuck.lever@oracle.com>	2015-10-24 23:27:02 +0200
committer	Anna Schumaker <Anna.Schumaker@Netapp.com>	2015-11-02 19:45:15 +0100
commit	1e465fd4ff475cc29c866ee75496c941b3908e69 (patch)
tree	71f282609d76e42b8426c1c78f576f5dd9805d4c
parent	xprtrdma: Refactor reply handler error handling (diff)
xprtrdma: Replace send and receive arrays
The rb_send_bufs and rb_recv_bufs arrays are used to implement a pair of
stacks for keeping track of free rpcrdma_req and rpcrdma_rep structs.
Replace those arrays with free lists.

To allow more than 512 RPCs in-flight at once, each of these arrays
would be larger than a page (assuming 8-byte addresses and 4KB pages).
Allowing up to 64K in-flight RPCs (as TCP now does), each buffer array
would have to be 128 pages. That's an order-7 allocation. (Not that
we're going there.)

A list is easier to expand dynamically. Instead of allocating a larger
array of pointers and copying the existing pointers to the new array,
simply append more buffers to each list.

This also makes it simpler to manage receive buffers that might catch
backwards-direction calls, or to post receive buffers in bulk to
amortize the overhead of ib_post_recv.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Devesh Sharma <devesh.sharma@avagotech.com>
Tested-By: Devesh Sharma <devesh.sharma@avagotech.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
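For illustration, the free-list scheme described above can be sketched with the
kernel's generic list primitives. This is a minimal, hypothetical example: the
struct and function names (demo_req, demo_buffer, demo_buffer_get/put) are
simplifications for this sketch, not the actual xprtrdma definitions touched by
the patch.

	#include <linux/list.h>
	#include <linux/spinlock.h>

	struct demo_req {
		struct list_head rq_list;	/* links this req onto the free list */
		/* ... per-request state ... */
	};

	struct demo_buffer {
		spinlock_t	 rb_lock;
		struct list_head rb_send_reqs;	/* free-list head, replaces the pointer array */
	};

	static void demo_buffer_init(struct demo_buffer *buf)
	{
		spin_lock_init(&buf->rb_lock);
		INIT_LIST_HEAD(&buf->rb_send_reqs);
	}

	/* Take one free request off the list, or return NULL if none are left. */
	static struct demo_req *demo_buffer_get(struct demo_buffer *buf)
	{
		struct demo_req *req;

		spin_lock(&buf->rb_lock);
		req = list_first_entry_or_null(&buf->rb_send_reqs,
					       struct demo_req, rq_list);
		if (req)
			list_del(&req->rq_list);
		spin_unlock(&buf->rb_lock);
		return req;
	}

	/*
	 * Return a request to the free list.  Growing the pool is the same
	 * operation: allocate another demo_req and append it here, with no
	 * array reallocation or pointer copying.
	 */
	static void demo_buffer_put(struct demo_buffer *buf, struct demo_req *req)
	{
		spin_lock(&buf->rb_lock);
		list_add(&req->rq_list, &buf->rb_send_reqs);
		spin_unlock(&buf->rb_lock);
	}

Compared with a fixed array used as a stack, nothing here depends on a
pre-sized allocation, which is what makes dynamic growth (and bulk posting of
receive buffers) straightforward.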