author     Lai Jiangshan <jiangshan.ljs@antgroup.com>    2024-07-03 11:27:41 +0200
committer  Tejun Heo <tj@kernel.org>                     2024-07-15 06:20:19 +0200
commit     58629d4871e8eb2c385b16a73a8451669db59f39 (patch)
tree       b8df40049d0db437705e08cf9e2bab2a45c2ab99 /kernel
parent     workqueue: Rename wq_update_pod() to unbound_wq_update_pwq() (diff)
workqueue: Always queue work items to the newest PWQ for ordered workqueues
To ensure non-reentrancy, __queue_work() attempts to enqueue a work item to the pool of the currently executing worker. This is not only unnecessary for an ordered workqueue, where order inherently implies non-reentrancy, but it could also disrupt the execution sequence if the item is not enqueued on the newest PWQ.

Just queue it to the newest PWQ and let order management guarantee non-reentrancy.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Fixes: 4c065dbce1e8 ("workqueue: Enable unbound cpumask update on ordered workqueues")
Cc: stable@vger.kernel.org # v6.9+
Signed-off-by: Tejun Heo <tj@kernel.org>
(cherry picked from commit 74347be3edfd11277799242766edf844c43dd5d3)
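For context, a minimal sketch of a consumer that depends on the ordering guarantee this fix preserves; it is not part of the patch, and the names demo_wq, demo_step(), step1 and step2 are purely illustrative:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *demo_wq;
static struct work_struct step1, step2;

static void demo_step(struct work_struct *work)
{
	/* Each step assumes the previous one has fully finished. */
}

static int __init demo_init(void)
{
	/* An ordered workqueue runs at most one work item at a time,
	 * in queueing order, even across unbound cpumask updates. */
	demo_wq = alloc_ordered_workqueue("demo_wq", 0);
	if (!demo_wq)
		return -ENOMEM;

	INIT_WORK(&step1, demo_step);
	INIT_WORK(&step2, demo_step);

	/* With this fix, step1 is guaranteed to execute before step2
	 * even if the workqueue's pwqs were replaced in between. */
	queue_work(demo_wq, &step1);
	queue_work(demo_wq, &step2);
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(demo_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");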
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/workqueue.c  6
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 79a178500a77..e3ab09e70ba9 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2274,9 +2274,13 @@ retry:
 	 * If @work was previously on a different pool, it might still be
 	 * running there, in which case the work needs to be queued on that
 	 * pool to guarantee non-reentrancy.
+	 *
+	 * For ordered workqueue, work items must be queued on the newest pwq
+	 * for accurate order management. Guaranteed order also guarantees
+	 * non-reentrancy. See the comments above unplug_oldest_pwq().
 	 */
 	last_pool = get_work_pool(work);
-	if (last_pool && last_pool != pool) {
+	if (last_pool && last_pool != pool && !(wq->flags & __WQ_ORDERED)) {
 		struct worker *worker;
 
 		raw_spin_lock(&last_pool->lock);
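A minimal sketch of the resulting queueing decision, factored into a hypothetical helper (reuse_last_pool() does not exist in workqueue.c); the real logic is inline in __queue_work() and additionally handles locking, pwq draining and CPU selection:

static bool reuse_last_pool(struct workqueue_struct *wq,
			    struct worker_pool *last_pool,
			    struct worker_pool *pool)
{
	/* Ordered workqueues always take the newest pwq: ordering itself
	 * guarantees non-reentrancy, so the last pool is never reused. */
	if (wq->flags & __WQ_ORDERED)
		return false;

	/* Otherwise, prefer the pool that may still be running this work
	 * item so that it cannot run concurrently on two pools. */
	return last_pool && last_pool != pool;
}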