author     Chuck Lever <chuck.lever@oracle.com>              2021-05-01 21:38:02 +0200
committer  Trond Myklebust <trond.myklebust@hammerspace.com> 2021-05-02 01:42:22 +0200
commit     9e895cd9649abe4392c59d14e31b0f5667d082d2
tree       65806dfb3557a48362d1548ab031ff1498345d32 /net/sunrpc
parent     sunrpc: Fix misplaced barrier in call_decode
xprtrdma: Fix a NULL dereference in frwr_unmap_sync()
The normal mechanism that invalidates and unmaps MRs is
frwr_unmap_async(). frwr_unmap_sync() is used only when an RPC
Reply bearing Write or Reply chunks has been lost (i.e., almost
never).
Coverity found that after commit 9a301cafc861 ("xprtrdma: Move
fr_linv_done field to struct rpcrdma_mr"), the while() loop in
frwr_unmap_sync() exits only once @mr is NULL, unconditionally
causing subsequent dereferences of @mr to Oops.
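A minimal userspace sketch of the flagged pattern, using simplified
stand-in types rather than the real rpcrdma structures: the only exit
from the pop loop is @mr going NULL, so any dereference of @mr after
the loop is a guaranteed NULL-pointer Oops.

/* Simplified stand-ins: not the kernel's rpcrdma_mr or its list API. */
#include <assert.h>
#include <stddef.h>

struct mr {
	struct mr *next;
	int handle;
};

/* Pop one MR off a singly linked list; returns NULL when empty. */
static struct mr *mr_pop(struct mr **list)
{
	struct mr *mr = *list;

	if (mr)
		*list = mr->next;
	return mr;
}

int main(void)
{
	struct mr a = { .next = NULL, .handle = 42 };
	struct mr *registered = &a, *mr;

	while ((mr = mr_pop(&registered))) {
		/* chain mr's LOCAL_INV WR onto the send list ... */
	}
	assert(mr == NULL);	/* the loop can only exit with mr == NULL */
	/* Any mr->handle access here is the Oops that Coverity flagged. */
	return 0;
}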
I've tested this fix by creating a client that skips invoking
frwr_unmap_async() when RPC Replies complete. That forces all
invalidation tasks to fall upon frwr_unmap_sync(). Simple workloads
with this fix applied to the adulterated client work as designed.
Reported-by: coverity-bot <keescook+coverity-bot@chromium.org>
Addresses-Coverity-ID: 1504556 ("Null pointer dereferences")
Fixes: 9a301cafc861 ("xprtrdma: Move fr_linv_done field to struct rpcrdma_mr")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Diffstat (limited to 'net/sunrpc')
 net/sunrpc/xprtrdma/frwr_ops.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 285d73246fc2..229fcc9a9064 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -530,6 +530,7 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
 		*prev = last;
 		prev = &last->next;
 	}
+	mr = container_of(last, struct rpcrdma_mr, mr_invwr);
 
 	/* Strong send queue ordering guarantees that when the
 	 * last WR in the chain completes, all WRs in the chain
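The one-line fix works because each LOCAL_INV Work Request is embedded
inside its owning rpcrdma_mr, so the chain tail @last still identifies
the final MR even after @mr has gone NULL. A hedged userspace sketch of
that container_of() recovery, again using simplified stand-ins for the
rpcrdma types:

/* Simplified stand-ins; the classic container_of() definition is used
 * here rather than the kernel's type-checked variant.
 */
#include <assert.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct send_wr {
	struct send_wr *next;
};

struct mr {
	int handle;
	struct send_wr invwr;	/* stand-in for mr_invwr */
};

int main(void)
{
	struct mr m = { .handle = 7 };
	struct send_wr *last = &m.invwr;	/* tail of the WR chain */

	/* Subtract the member offset to recover the enclosing MR. */
	struct mr *mr = container_of(last, struct mr, invwr);

	assert(mr == &m && mr->handle == 7);
	return 0;
}

Because the recovery is pure pointer arithmetic on the chain tail, no
extra MR reference needs to be carried through the loop.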