| author | Jens Axboe <axboe@kernel.dk> | 2020-02-02 16:23:03 +0100 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2020-02-04 01:27:38 +0100 |
| commit | b5e683d5cab8cd433b06ae178621f083cabd4f63 (patch) | |
| tree | ff80c1fcdd40441ae015b35d67897c1f63129a9c | /fs/eventfd.c |
| parent | io_uring: add BUILD_BUG_ON() to assert the layout of struct io_uring_sqe (diff) | |
eventfd: track eventfd_signal() recursion depth
eventfd use cases from aio and io_uring can deadlock due to circular or
recursive calling when eventfd_signal() tries to grab the waitqueue
lock. On top of that, it's also possible to construct notification
chains that are deep enough to blow the stack.
Add a percpu counter that tracks the recursion depth of eventfd_signal(),
and warn if we detect recursion. The counter is also exposed so that users
of eventfd_signal() can do the right thing if it's non-zero in the context
where it is called.
Cc: stable@vger.kernel.org # 4.19+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
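For context, the exposed counter is meant to be checked by callers that might already be running inside a wakeup path. The sketch below is hypothetical caller code, not part of this patch: it assumes the eventfd_signal_count() helper that accompanies this change in <linux/eventfd.h>, and the my_ctx structure, my_notify() and the work item are made-up names used only for illustration.

```c
/*
 * Hypothetical caller sketch: if we are already inside eventfd_signal()'s
 * wakeup path on this CPU, defer the signal to process context instead of
 * recursing into the waitqueue lock.
 */
#include <linux/eventfd.h>
#include <linux/workqueue.h>

struct my_ctx {
	struct eventfd_ctx	*evfd;
	struct work_struct	signal_work;	/* INIT_WORK()ed at setup time */
};

static void my_signal_work(struct work_struct *work)
{
	struct my_ctx *ctx = container_of(work, struct my_ctx, signal_work);

	/* Process context: no nested wakeup handler on the stack. */
	eventfd_signal(ctx->evfd, 1);
}

static void my_notify(struct my_ctx *ctx)
{
	if (eventfd_signal_count())
		/* Recursing would deadlock or overflow the stack: punt. */
		schedule_work(&ctx->signal_work);
	else
		eventfd_signal(ctx->evfd, 1);
}
```

The point is only that the counter lets a caller detect "we are already inside eventfd_signal() on this CPU" and move the signal to a context that cannot recurse into the waitqueue lock.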
Diffstat (limited to 'fs/eventfd.c')
-rw-r--r-- | fs/eventfd.c | 15
1 file changed, 15 insertions, 0 deletions
```diff
diff --git a/fs/eventfd.c b/fs/eventfd.c
index 8aa0ea8c55e8..78e41c7c3d05 100644
--- a/fs/eventfd.c
+++ b/fs/eventfd.c
@@ -24,6 +24,8 @@
 #include <linux/seq_file.h>
 #include <linux/idr.h>
 
+DEFINE_PER_CPU(int, eventfd_wake_count);
+
 static DEFINE_IDA(eventfd_ida);
 
 struct eventfd_ctx {
@@ -60,12 +62,25 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
 {
 	unsigned long flags;
 
+	/*
+	 * Deadlock or stack overflow issues can happen if we recurse here
+	 * through waitqueue wakeup handlers. If the caller uses potentially
+	 * nested waitqueues with custom wakeup handlers, then it should
+	 * check eventfd_signal_count() before calling this function. If
+	 * it returns true, the eventfd_signal() call should be deferred to a
+	 * safe context.
+	 */
+	if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
+		return 0;
+
 	spin_lock_irqsave(&ctx->wqh.lock, flags);
+	this_cpu_inc(eventfd_wake_count);
 	if (ULLONG_MAX - ctx->count < n)
 		n = ULLONG_MAX - ctx->count;
 	ctx->count += n;
 	if (waitqueue_active(&ctx->wqh))
 		wake_up_locked_poll(&ctx->wqh, EPOLLIN);
+	this_cpu_dec(eventfd_wake_count);
 	spin_unlock_irqrestore(&ctx->wqh.lock, flags);
 
 	return n;
```
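The diffstat above is limited to fs/eventfd.c, so the header side of the change is not shown. For completeness, the counter is exposed to callers through include/linux/eventfd.h in roughly the following form; this snippet is a sketch reproduced for context, not taken from the diff above.

```c
/* include/linux/eventfd.h (same change, not covered by the diffstat above) */
DECLARE_PER_CPU(int, eventfd_wake_count);

static inline bool eventfd_signal_count(void)
{
	return this_cpu_read(eventfd_wake_count);
}
```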