author     Paolo Abeni <pabeni@redhat.com>        2023-03-09 15:49:57 +0100
committer  Jakub Kicinski <kuba@kernel.org>       2023-03-11 06:42:56 +0100
commit     b7a679ba7c652587b85294f4953f33ac0b756d40
tree       726eb073c3ce555b940826bf4e9c28dd177fdd6d /net
parent     Merge branch 'update-xdp_features-flag-according-to-nic-re-configuration'
mptcp: fix possible deadlock in subflow_error_report
Christoph reported a possible deadlock while the TCP stack destroys an
unaccepted subflow due to an incoming reset: the MPTCP socket error
path tries to acquire the msk-level socket lock while TCP still owns
the listener socket accept queue spinlock, and the reverse dependency
already exists in the TCP stack.

Note that the above is actually a lockdep false positive, as the chain
involves two separate sockets. A different per-socket lockdep key
would address the issue, but such a change would be quite invasive.

Instead, we can simply stop the socket error handling earlier for
orphaned or unaccepted subflows, breaking the critical lockdep chain.
Error handling in such a scenario is a no-op.

Reported-and-tested-by: Christoph Paasch <cpaasch@apple.com>
Fixes: 15cc10453398 ("mptcp: deliver ssk errors to msk")
Cc: stable@vger.kernel.org
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/355
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
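To make the reported chain concrete: lockdep flags an AB-BA inversion
because one path acquires the accept-queue lock and then the msk-level
lock, while an existing TCP path acquires them in the opposite order.
The standalone userspace C sketch below models only that ordering with
pthread mutexes; the names (accept_queue_lock, msk_lock, reset_path,
accept_path) are hypothetical illustrations, not kernel code, and, per
the message above, the real chain spans two separate sockets, so this
model is stricter than what can actually happen.

/* Model of the AB-BA lock-order inversion described above.
 * Build with: cc -pthread deadlock_model.c
 * All names are hypothetical; only the acquisition order matters.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t accept_queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t msk_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models the reset path: TCP holds the listener accept-queue lock,
 * then the subflow error report tries to take the msk-level lock. */
static void *reset_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&accept_queue_lock);
	pthread_mutex_lock(&msk_lock);          /* order: A then B */
	pthread_mutex_unlock(&msk_lock);
	pthread_mutex_unlock(&accept_queue_lock);
	return NULL;
}

/* Models the pre-existing reverse dependency in the TCP stack:
 * msk-level lock first, accept-queue lock second. */
static void *accept_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&msk_lock);
	pthread_mutex_lock(&accept_queue_lock); /* order: B then A */
	pthread_mutex_unlock(&accept_queue_lock);
	pthread_mutex_unlock(&msk_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, reset_path, NULL);
	pthread_create(&t2, NULL, accept_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	/* If both threads interleave their first acquisition, they
	 * deadlock; reaching this line only means this run won the
	 * race. The ordering is unsafe either way. */
	puts("no deadlock this run, but the ordering is still unsafe");
	return 0;
}

The patch below removes one edge of this cycle: by returning before
mptcp_data_lock() when the subflow is orphaned or unaccepted, the
error path never takes the msk-level lock while the accept-queue lock
is held.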
Diffstat (limited to 'net')
-rw-r--r--  net/mptcp/subflow.c  7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 4ae1a7304cf0..5070dc33675d 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1432,6 +1432,13 @@ static void subflow_error_report(struct sock *ssk)
 {
 	struct sock *sk = mptcp_subflow_ctx(ssk)->conn;
 
+	/* bail early if this is a no-op, so that we avoid introducing a
+	 * problematic lockdep dependency between TCP accept queue lock
+	 * and msk socket spinlock
+	 */
+	if (!sk->sk_socket)
+		return;
+
 	mptcp_data_lock(sk);
 	if (!sock_owned_by_user(sk))
 		__mptcp_error_report(sk);
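For context, the function after this patch plausibly reads as follows;
the trailing mptcp_data_unlock() call and closing brace fall outside
the hunk above and are inferred from the mptcp_data_lock() pairing:

static void subflow_error_report(struct sock *ssk)
{
	struct sock *sk = mptcp_subflow_ctx(ssk)->conn;

	/* bail early if this is a no-op, so that we avoid introducing a
	 * problematic lockdep dependency between TCP accept queue lock
	 * and msk socket spinlock
	 */
	if (!sk->sk_socket)
		return;

	mptcp_data_lock(sk);
	if (!sock_owned_by_user(sk))
		__mptcp_error_report(sk);
	mptcp_data_unlock(sk);
}

An orphaned or unaccepted subflow has no sk->sk_socket attached, so,
as the commit message notes, the skipped error propagation would have
been a no-op in that scenario anyway.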