author    Tejun Heo <tj@kernel.org>    2010-08-24 14:22:47 +0200
committer Tejun Heo <tj@kernel.org>    2010-08-24 18:01:32 +0200
commit e41e704bc4f49057fc68b643108366e6e6781aa3 (patch)
tree   8cc85208970ba0c9adf533903243e28c506f23ae /include
parent workqueue: mark lock acquisition on worker_maybe_bind_and_lock() (diff)
workqueue: improve destroy_workqueue() debuggability
Now that the worklist is global, having works pending after wq destruction can easily lead to an oops, and destroy_workqueue() has several BUG_ON()s to catch these cases.

Unfortunately, BUG_ON() doesn't tell much about how the work became pending after the final flush_workqueue(). This patch adds WQ_DYING, which is set before the final flush begins. If a work is requested to be queued on a dying workqueue, WARN_ON_ONCE() is triggered and the request is ignored. This clearly indicates which caller is trying to queue a work on a dying workqueue and keeps the system working in most cases.

The locking rule comment is updated such that the 'I' rule includes modifying the field from the destruction path.

Signed-off-by: Tejun Heo <tj@kernel.org>
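The workqueue.c side of the change is not shown in this 'include'-limited diffstat. As a rough sketch of the mechanism described above (not the verbatim patch; function shapes are assumed from the mainline destroy_workqueue()/__queue_work() structure of that era), the new flag is used roughly like this:

	/* Sketch: mark the workqueue as dying before the final flush so that
	 * any later queueing attempt can be caught and reported. */
	void destroy_workqueue(struct workqueue_struct *wq)
	{
		wq->flags |= WQ_DYING;	/* no new work may be queued past this point */
		flush_workqueue(wq);
		/* ... existing sanity checks and teardown ... */
	}

	/* Sketch: refuse and warn (once) when a work is queued on a dying wq,
	 * instead of letting it trip a BUG_ON() later. */
	static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
				 struct work_struct *work)
	{
		if (WARN_ON_ONCE(wq->flags & WQ_DYING))
			return;		/* ignore the request, keep the system running */
		/* ... normal queueing path ... */
	}

The WARN_ON_ONCE() backtrace identifies the offending caller, which is the debuggability improvement the subject line refers to.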
Diffstat (limited to 'include')
-rw-r--r--  include/linux/workqueue.h  |  2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 4f9d277bcd9a..c959666eafca 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -241,6 +241,8 @@ enum {
 	WQ_HIGHPRI		= 1 << 4, /* high priority */
 	WQ_CPU_INTENSIVE	= 1 << 5, /* cpu instensive workqueue */
 
+	WQ_DYING		= 1 << 6, /* internal: workqueue is dying */
+
 	WQ_MAX_ACTIVE		= 512,	  /* I like 512, better ideas? */
 	WQ_MAX_UNBOUND_PER_CPU	= 4,	  /* 4 * #cpus for unbound wq */
 	WQ_DFL_ACTIVE		= WQ_MAX_ACTIVE / 2,