| field | value | date |
|---|---|---|
| author | Andrea Parri <parri.andrea@gmail.com> | 2020-01-22 19:39:52 +0100 |
| committer | Tejun Heo <tj@kernel.org> | 2020-02-12 21:59:40 +0100 |
| commit | dbb92f88648d6206bf22fcb764fb9fe2939d401a (patch) | |
| tree | 6bf6e7a14ec2810a47c1fb841691feceacae3c85 /include | |
| parent | linux/pipe_fs_i.h: fix kernel-doc warnings after @wait was split (diff) | |
workqueue: Document (some) memory-ordering properties of {queue,schedule}_work()
It's desirable to be able to rely on the following property: All stores
preceding (in program order) a call to a successful queue_work() will be
visible from the CPU which will execute the queued work by the time such
work executes, e.g.,
    { x is initially 0 }

    CPU0                            CPU1

    WRITE_ONCE(x, 1);               [ "work" is being executed ]
    r0 = queue_work(wq, work);      r1 = READ_ONCE(x);

    Forbids: r0 == true && r1 == 0
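Concretely (a minimal, hypothetical module sketch, not part of this patch; the workqueue name and all identifiers below are made up for illustration), a producer can publish data with a plain store and then call queue_work(); whenever that call returns %true, the handler may read the data without any additional barriers:

    #include <linux/module.h>
    #include <linux/printk.h>
    #include <linux/workqueue.h>

    static int pending_value;                    /* written before queue_work() */
    static struct workqueue_struct *example_wq;  /* hypothetical example queue  */

    static void consume_fn(struct work_struct *work)
    {
            /*
             * If the queue_work() below returned true, this read is
             * guaranteed to observe 42; no extra smp_rmb()/smp_wmb() needed.
             */
            pr_info("consumer saw %d\n", READ_ONCE(pending_value));
    }
    static DECLARE_WORK(consume_work, consume_fn);

    static void producer(void)
    {
            WRITE_ONCE(pending_value, 42);       /* store precedes the queueing call */
            if (!queue_work(example_wq, &consume_work))
                    pr_debug("already pending; no ordering claim for this call\n");
    }

    static int __init ordering_example_init(void)
    {
            example_wq = alloc_workqueue("ordering_example", 0, 0);
            if (!example_wq)
                    return -ENOMEM;
            producer();
            return 0;
    }

    static void __exit ordering_example_exit(void)
    {
            destroy_workqueue(example_wq);       /* flushes any pending work */
    }

    module_init(ordering_example_init);
    module_exit(ordering_example_exit);
    MODULE_LICENSE("GPL");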
The current implementation of queue_work() provides this memory-ordering
property:
- In __queue_work(), the ->lock spinlock is acquired.
- On the other side, in worker_thread(), this same ->lock is held
when dequeueing the work.
So the lock ordering makes things work out.
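In other words, the unlock on the queueing side acts as a RELEASE that orders the caller's earlier stores, and the lock acquisition on the worker side is the matching ACQUIRE. The sketch below illustrates that pairing with a generic lock-protected list; it is a simplified stand-in, not the actual __queue_work()/worker_thread() code:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_pool_lock);   /* stands in for the pool ->lock */
    static LIST_HEAD(example_pending);

    /* CPU0: roughly the enqueue side, running after the caller's stores. */
    static void example_enqueue(struct list_head *entry)
    {
            spin_lock(&example_pool_lock);
            list_add_tail(entry, &example_pending);  /* publish under the lock */
            spin_unlock(&example_pool_lock);         /* RELEASE: orders prior stores */
    }

    /* CPU1: roughly the worker side, running before the work executes. */
    static struct list_head *example_dequeue(void)
    {
            struct list_head *entry = NULL;

            spin_lock(&example_pool_lock);           /* ACQUIRE: pairs with the unlock */
            if (!list_empty(&example_pending)) {
                    entry = example_pending.next;
                    list_del(entry);
            }
            spin_unlock(&example_pool_lock);

            /* Stores made before example_enqueue() are visible from here on. */
            return entry;
    }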
Add this property to the DocBook headers of {queue,schedule}_work().
Suggested-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'include')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | include/linux/workqueue.h | 16 |

1 file changed, 16 insertions, 0 deletions
    diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
    index 4261d1c6e87b..e48554e6526c 100644
    --- a/include/linux/workqueue.h
    +++ b/include/linux/workqueue.h
    @@ -487,6 +487,19 @@ extern void wq_worker_comm(char *buf, size_t size, struct task_struct *task);
      *
      * We queue the work to the CPU on which it was submitted, but if the CPU dies
      * it can be processed by another CPU.
    + *
    + * Memory-ordering properties:  If it returns %true, guarantees that all stores
    + * preceding the call to queue_work() in the program order will be visible from
    + * the CPU which will execute @work by the time such work executes, e.g.,
    + *
    + * { x is initially 0 }
    + *
    + *   CPU0                           CPU1
    + *
    + *   WRITE_ONCE(x, 1);              [ @work is being executed ]
    + *   r0 = queue_work(wq, work);     r1 = READ_ONCE(x);
    + *
    + * Forbids: r0 == true && r1 == 0
      */
     static inline bool queue_work(struct workqueue_struct *wq,
     			      struct work_struct *work)
    @@ -546,6 +559,9 @@ static inline bool schedule_work_on(int cpu, struct work_struct *work)
      * This puts a job in the kernel-global workqueue if it was not already
      * queued and leaves it in the same position on the kernel-global
      * workqueue otherwise.
    + *
    + * Shares the same memory-ordering properties of queue_work(), cf. the
    + * DocBook header of queue_work().
      */
     static inline bool schedule_work(struct work_struct *work)
     {
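For the schedule_work() side of the patch, a similarly hedged sketch (all identifiers are illustrative, not from the kernel tree): fields of the structure embedding the work item that are written before a schedule_work() call returning %true are guaranteed to be visible in the handler, which runs off the system workqueue:

    #include <linux/module.h>
    #include <linux/printk.h>
    #include <linux/workqueue.h>

    struct event {
            int payload;                 /* filled in before scheduling */
            struct work_struct work;
    };

    static struct event ev;

    static void event_fn(struct work_struct *work)
    {
            struct event *e = container_of(work, struct event, work);

            /* Visible here if the schedule_work() below returned true. */
            pr_info("payload=%d\n", e->payload);
    }

    static void report(int value)
    {
            ev.payload = value;          /* plain store, program-order before queueing */
            if (!schedule_work(&ev.work))
                    pr_debug("event already pending\n");
    }

    static int __init event_example_init(void)
    {
            INIT_WORK(&ev.work, event_fn);
            report(42);
            return 0;
    }

    static void __exit event_example_exit(void)
    {
            flush_work(&ev.work);        /* wait for any pending execution */
    }

    module_init(event_example_init);
    module_exit(event_example_exit);
    MODULE_LICENSE("GPL");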