author		Valentin Schneider <vschneid@redhat.com>	2023-03-07 15:35:57 +0100
committer	Peter Zijlstra <peterz@infradead.org>		2023-03-24 11:01:28 +0100
commit		253a0fb4c62827cdcaf43afcea5d675507eaf7a3
parent		treewide: Trace IPIs sent via smp_send_reschedule()
smp: reword smp call IPI comment
Accessing the call_single_queue hasn't involved a spinlock since 2014:
6897fc22ea01 ("kernel: use lockless list for smp_call_function_single")
The llist operations (namely cmpxchg() and xchg()) provide similar ordering
guarantees; update the comment to lessen confusion. (A userspace sketch of
the lockless pattern in question appears after the tags below.)
Signed-off-by: Valentin Schneider <vschneid@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20230307143558.294354-7-vschneid@redhat.com
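To make the ordering claim concrete, here is a minimal, hedged userspace
sketch of the same lockless pattern using C11 atomics. All names here
(push(), pop_all(), struct node, queue_head) are illustrative stand-ins,
not kernel API; the kernel's llist_add() and llist_del_all() play the
roles of push() and pop_all():

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct node {
	struct node *next;
	int payload;
};

/* Head of a Treiber-style lockless list, like struct llist_head. */
static _Atomic(struct node *) queue_head = NULL;

/*
 * Producer side, like llist_add(): publish a node with a
 * compare-exchange. The (seq_cst) CAS orders the payload store before
 * the node becomes reachable, mirroring the full barrier implied by
 * the kernel's cmpxchg().
 */
static bool push(struct node *n)
{
	struct node *first = atomic_load(&queue_head);

	do {
		n->next = first;
	} while (!atomic_compare_exchange_weak(&queue_head, &first, n));

	/* True when the list was empty: the caller must send the IPI. */
	return first == NULL;
}

/*
 * Consumer side, like llist_del_all() in the IPI handler: detach the
 * whole list with a single exchange. It synchronizes with the
 * producer's CAS, so every payload written before push() is visible.
 */
static struct node *pop_all(void)
{
	return atomic_exchange(&queue_head, NULL);
}

int main(void)
{
	struct node a = { .payload = 42 };

	if (push(&a))
		printf("queue was empty: here an IPI would be sent\n");

	for (struct node *n = pop_all(); n; n = n->next)
		printf("handler popped payload %d\n", n->payload);

	return 0;
}

Note that neither side takes a lock, which is why the old comment's
reference to the handler "locking the list" was misleading.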
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/smp.c	7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/smp.c b/kernel/smp.c
index 03e6d576295d..6bbfabbe62fc 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -312,9 +312,10 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);
 void __smp_call_single_queue(int cpu, struct llist_node *node)
 {
 	/*
-	 * The list addition should be visible before sending the IPI
-	 * handler locks the list to pull the entry off it because of
-	 * normal cache coherency rules implied by spinlocks.
+	 * The list addition should be visible to the target CPU when it pops
+	 * the head of the list to pull the entry off it in the IPI handler
+	 * because of normal cache coherency rules implied by the underlying
+	 * llist ops.
 	 *
 	 * If IPIs can go out of order to the cache coherency protocol
 	 * in an architecture, sufficient synchronisation should be added
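For context, the function this hunk touches reduces to exactly the pattern
sketched earlier: one lockless enqueue plus a conditional IPI. Below is a
simplified rendering of __smp_call_single_queue() with the debug-only
branches elided, so treat it as a sketch rather than the verbatim source:

/*
 * Simplified sketch of __smp_call_single_queue() from kernel/smp.c.
 */
void __smp_call_single_queue(int cpu, struct llist_node *node)
{
	/*
	 * llist_add() returns true only if the list was previously
	 * empty, i.e. no IPI can already be in flight for this queue,
	 * so that is the only case in which one needs to be sent. The
	 * cmpxchg() inside llist_add() orders the caller's writes to
	 * the csd before the node becomes visible on the list, which
	 * is the guarantee the reworded comment describes.
	 */
	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
		send_call_function_single_ipi(cpu);
}

On the receiving side, the IPI handler drains the queue in one go with
llist_del_all(); its xchg() pairs with the producer-side cmpxchg(), the
two llist ops the commit message names.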