author    Niels Dossche <dossche.niels@gmail.com>    2022-02-28 17:53:30 +0100
committer Jason Gunthorpe <jgg@nvidia.com>    2022-04-04 19:45:02 +0200
commit    4d809f69695d4e7d1378b3a072fa9aef23123018 (patch)
tree      a3292f1cb804d46dbd39a9303a75eee194c28f12 /drivers/infiniband/sw/rdmavt/qp.c
parent    IB/cm: Cancel mad on the DREQ event when the state is MRA_REP_RCVD (diff)
IB/rdmavt: add lock to call to rvt_error_qp to prevent a race condition
The documentation of the function rvt_error_qp says both r_lock and s_lock need to be held when calling that function. It also asserts using lockdep that both of those locks are held. However, the commit referenced in Fixes accidentally made the call to rvt_error_qp in rvt_ruc_loopback no longer covered by r_lock. This results in the lockdep assertion failing and also possibly in a race condition.

Fixes: d757c60eca9b ("IB/rdmavt: Fix concurrency panics in QP post_send and modify to error")
Link: https://lore.kernel.org/r/20220228165330.41546-1-dossche.niels@gmail.com
Signed-off-by: Niels Dossche <dossche.niels@gmail.com>
Acked-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
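For context, the locking contract the message describes can be expressed with lockdep assertions. The following is a minimal illustrative sketch, not the actual rvt_error_qp implementation: it assumes the rdmavt QP layout (qp->r_lock and qp->s_lock in struct rvt_qp from <rdma/rdmavt_qp.h>), and the function name example_error_qp is hypothetical.

#include <linux/lockdep.h>
#include <rdma/rdmavt_qp.h>	/* struct rvt_qp with r_lock and s_lock */

/*
 * Illustrative sketch only: a function that documents and asserts its
 * locking contract with lockdep, as the commit message describes for
 * rvt_error_qp. The caller must hold both the QP's r_lock and s_lock.
 */
static int example_error_qp(struct rvt_qp *qp, enum ib_wc_status err)
{
	lockdep_assert_held(&qp->r_lock);	/* receive-side lock */
	lockdep_assert_held(&qp->s_lock);	/* send-side lock */

	/* ... move the QP to the error state and flush pending work ... */
	return 0;
}

In the patch below, sqp->s_lock is already held with interrupts disabled via spin_lock_irqsave(), so a plain spin_lock()/spin_unlock() pair on r_lock around the call is sufficient to satisfy the contract.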
Diffstat (limited to 'drivers/infiniband/sw/rdmavt/qp.c')
 drivers/infiniband/sw/rdmavt/qp.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
index ae50b56e8913..8ef112f883a7 100644
--- a/drivers/infiniband/sw/rdmavt/qp.c
+++ b/drivers/infiniband/sw/rdmavt/qp.c
@@ -3190,7 +3190,11 @@ serr_no_r_lock:
 	spin_lock_irqsave(&sqp->s_lock, flags);
 	rvt_send_complete(sqp, wqe, send_status);
 	if (sqp->ibqp.qp_type == IB_QPT_RC) {
-		int lastwqe = rvt_error_qp(sqp, IB_WC_WR_FLUSH_ERR);
+		int lastwqe;
+
+		spin_lock(&sqp->r_lock);
+		lastwqe = rvt_error_qp(sqp, IB_WC_WR_FLUSH_ERR);
+		spin_unlock(&sqp->r_lock);
 		sqp->s_flags &= ~RVT_S_BUSY;
 		spin_unlock_irqrestore(&sqp->s_lock, flags);