author | Shiraz Saleem <shiraz.saleem@intel.com> | 2022-04-25 20:17:02 +0200
---|---|---
committer | Jason Gunthorpe <jgg@nvidia.com> | 2022-05-02 16:10:33 +0200
commit | 2df6d895907b2f5dfbc558cbff7801bba82cb3cc (patch) |
tree | 85c7791af81a3ef750e882b65558a40dd008f3aa /drivers |
parent | RDMA/irdma: Flush iWARP QP if modified to ERR from RTR state (diff) |
RDMA/irdma: Reduce iWARP QP destroy time
QP destroy is synchronous and, for iWARP, waits for the QP refcnt to be
decremented in irdma_cm_node_free_cb, which fires only after the RCU grace
period elapses.
Applications running a large number of connections are exposed to high
wait times on destroy QP for events like SIGABRT.
The long pole for this wait time is the firing of the call_rcu callback
during CM node destroy, which can be slow. The callback holds a QP
reference and blocks the destroy QP from completing.
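To make the blocking concrete, here is a minimal sketch of that situation with invented names (demo_qp, demo_cm_node and friends), not irdma's actual code: the destroy path sleeps until the QP's last reference is dropped, and that last put happens only inside the RCU callback, so destroy cannot finish before a grace period elapses.

```c
#include <linux/completion.h>
#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct demo_qp {
	refcount_t refcnt;
	struct completion free_done;
};

struct demo_cm_node {
	struct demo_qp *qp;
	struct rcu_head rcu_head;
};

static void demo_qp_put(struct demo_qp *qp)
{
	/* The last put wakes whoever is waiting in demo_destroy_qp(). */
	if (refcount_dec_and_test(&qp->refcnt))
		complete(&qp->free_done);
}

/* Fires only after an RCU grace period; until then it pins the QP. */
static void demo_cm_node_free_cb(struct rcu_head *rcu_head)
{
	struct demo_cm_node *cm_node =
		container_of(rcu_head, struct demo_cm_node, rcu_head);

	/* ... connection teardown ... */
	demo_qp_put(cm_node->qp);	/* drop the cm_node's QP reference */
	kfree(cm_node);
}

static void demo_rem_ref_cm_node(struct demo_cm_node *cm_node)
{
	/* Everything above is deferred until the grace period elapses. */
	call_rcu(&cm_node->rcu_head, demo_cm_node_free_cb);
}

static void demo_destroy_qp(struct demo_qp *qp)
{
	demo_qp_put(qp);	/* drop the caller's reference */
	/* Sleeps until demo_cm_node_free_cb() has run, i.e. a grace period. */
	wait_for_completion(&qp->free_done);
	kfree(qp);
}
```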
call_rcu only needs to make sure that list walkers still have a valid
cm_node object, and thus only the free of the cm_node needs to wait for
the grace period to elapse. The rest of the connection teardown in
irdma_cm_node_free_cb is moved out of the grace-period wait and into
irdma_destroy_connection. Also, replace call_rcu with a simple kfree_rcu,
since all that remains to defer is the kfree of the cm_node.
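The shape of the change, again with hypothetical demo_* names rather than the real irdma functions: teardown runs synchronously at the call site, and only the kfree of the node is deferred behind the grace period via kfree_rcu.

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_cm_node {
	struct rcu_head rcu_head;
	/* ... connection state ... */
};

static void demo_destroy_connection(struct demo_cm_node *cm_node)
{
	/* ... release timers, AHs and other connection resources ... */
}

/* Before: all of the teardown waited behind the RCU grace period. */
static void demo_cm_node_free_cb(struct rcu_head *rcu_head)
{
	struct demo_cm_node *cm_node =
		container_of(rcu_head, struct demo_cm_node, rcu_head);

	demo_destroy_connection(cm_node);
	kfree(cm_node);
}

static void demo_rem_ref_old(struct demo_cm_node *cm_node)
{
	call_rcu(&cm_node->rcu_head, demo_cm_node_free_cb);
}

/* After: teardown runs immediately; only the kfree() is deferred. */
static void demo_rem_ref_new(struct demo_cm_node *cm_node)
{
	demo_destroy_connection(cm_node);
	kfree_rcu(cm_node, rcu_head);
}
```

A side benefit of kfree_rcu is that no custom callback has to be queued at all, which is why the rcu_head-based free callback can be dropped entirely.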
Fixes: 146b9756f14c ("RDMA/irdma: Add connection manager")
Link: https://lore.kernel.org/r/20220425181703.1634-3-shiraz.saleem@intel.com
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Diffstat (limited to 'drivers')
-rw-r--r-- | drivers/infiniband/hw/irdma/cm.c | 10 |
1 file changed, 4 insertions, 6 deletions
diff --git a/drivers/infiniband/hw/irdma/cm.c b/drivers/infiniband/hw/irdma/cm.c
index 90b4113e7071..638bf4a1ed94 100644
--- a/drivers/infiniband/hw/irdma/cm.c
+++ b/drivers/infiniband/hw/irdma/cm.c
@@ -2308,10 +2308,8 @@ err:
 	return NULL;
 }
 
-static void irdma_cm_node_free_cb(struct rcu_head *rcu_head)
+static void irdma_destroy_connection(struct irdma_cm_node *cm_node)
 {
-	struct irdma_cm_node *cm_node =
-		    container_of(rcu_head, struct irdma_cm_node, rcu_head);
 	struct irdma_cm_core *cm_core = cm_node->cm_core;
 	struct irdma_qp *iwqp;
 	struct irdma_cm_info nfo;
@@ -2359,7 +2357,6 @@ static void irdma_cm_node_free_cb(struct rcu_head *rcu_head)
 	}
 
 	cm_core->cm_free_ah(cm_node);
-	kfree(cm_node);
 }
 
 /**
@@ -2387,8 +2384,9 @@ void irdma_rem_ref_cm_node(struct irdma_cm_node *cm_node)
 
 	spin_unlock_irqrestore(&cm_core->ht_lock, flags);
 
-	/* wait for all list walkers to exit their grace period */
-	call_rcu(&cm_node->rcu_head, irdma_cm_node_free_cb);
+	irdma_destroy_connection(cm_node);
+
+	kfree_rcu(cm_node, rcu_head);
 }
 
 /**
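For completeness, a sketch of the reader side that the deferred free still has to protect, using hypothetical demo_* names rather than irdma's real lookup helpers: list walkers traverse the connection list under rcu_read_lock(), so the cm_node's memory must remain valid until every such reader has left its read-side critical section, which is exactly the guarantee kfree_rcu provides without delaying the rest of the teardown.

```c
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/types.h>

struct demo_cm_node {
	struct list_head list;
	u16 rem_port;
	struct rcu_head rcu_head;
};

/* A lookup that can run concurrently with connection teardown. */
static bool demo_port_in_use(struct list_head *conn_list, u16 port)
{
	struct demo_cm_node *cm_node;
	bool in_use = false;

	rcu_read_lock();
	list_for_each_entry_rcu(cm_node, conn_list, list) {
		if (cm_node->rem_port == port) {
			in_use = true;
			break;
		}
	}
	rcu_read_unlock();

	return in_use;
}
```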