author     Andrea Parri (Microsoft) <parri.andrea@gmail.com>  2022-03-28 17:44:57 +0200
committer  Wei Liu <wei.liu@kernel.org>                       2022-04-06 15:31:58 +0200
commit     eaa03d34535872d29004cb5cf77dc9dec1ba9a25 (patch)
tree       0fb901ee87920f9fa23ae2308e80e5df5d353a9b /drivers/hv
parent     Drivers: hv: balloon: Disable balloon and hot-add accordingly (diff)
Drivers: hv: vmbus: Replace smp_store_mb() with virt_store_mb()
Following the recommendation in Documentation/memory-barriers.txt for
virtual machine guests.
Fixes: 8b6a877c060ed ("Drivers: hv: vmbus: Replace the per-CPU channel lists with a global array of channels")
Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Link: https://lore.kernel.org/r/20220328154457.100872-1-parri.andrea@gmail.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
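
For context, the definitions involved look roughly like the sketch below (paraphrased from include/asm-generic/barrier.h; the exact expansions vary by architecture and kernel version, so treat this as an illustration rather than the verbatim source). On !CONFIG_SMP builds, smp_store_mb() falls back to a plain store plus a compiler barrier, while the virt_*() variants recommended for virtual machine guests always map to the real SMP barrier, because the "other CPU" a guest synchronizes with is the hypervisor and does not go away on a uniprocessor guest.

/* Simplified/paraphrased -- not the verbatim kernel definitions. */

#ifndef __smp_store_mb
#define __smp_store_mb(var, value)					\
	do { WRITE_ONCE(var, value); __smp_mb(); } while (0)
#endif

#ifdef CONFIG_SMP
#define smp_store_mb(var, value)	__smp_store_mb(var, value)
#else
/* On UP kernels the "SMP" barrier degrades to a compiler barrier only. */
#define smp_store_mb(var, value)	\
	do { WRITE_ONCE(var, value); barrier(); } while (0)
#endif

/*
 * Mandatory barriers for virtual machine guests: always use the SMP
 * implementation, even on !CONFIG_SMP, because the guest must still
 * synchronize with the hypervisor.
 */
#define virt_store_mb(var, value)	__smp_store_mb(var, value)

In vmbus_channel_map_relid() the store must become visible to other CPUs before the events described in the comment below, which is why the unconditional virt_store_mb() is the appropriate primitive even on a single-vCPU guest.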
Diffstat (limited to 'drivers/hv')
-rw-r--r--  drivers/hv/channel_mgmt.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 60375879612f..67be81208a2d 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -380,7 +380,7 @@ void vmbus_channel_map_relid(struct vmbus_channel *channel)
 	 * execute:
 	 *
 	 * (a) In the "normal (i.e., not resuming from hibernation)" path,
-	 *     the full barrier in smp_store_mb() guarantees that the store
+	 *     the full barrier in virt_store_mb() guarantees that the store
 	 *     is propagated to all CPUs before the add_channel_work work
 	 *     is queued. In turn, add_channel_work is queued before the
 	 *     channel's ring buffer is allocated/initialized and the
@@ -392,14 +392,14 @@ void vmbus_channel_map_relid(struct vmbus_channel *channel)
 	 *     recv_int_page before retrieving the channel pointer from the
 	 *     array of channels.
 	 *
-	 * (b) In the "resuming from hibernation" path, the smp_store_mb()
+	 * (b) In the "resuming from hibernation" path, the virt_store_mb()
 	 *     guarantees that the store is propagated to all CPUs before
 	 *     the VMBus connection is marked as ready for the resume event
 	 *     (cf. check_ready_for_resume_event()). The interrupt handler
 	 *     of the VMBus driver and vmbus_chan_sched() can not run before
 	 *     vmbus_bus_resume() has completed execution (cf. resume_noirq).
 	 */
-	smp_store_mb(
+	virt_store_mb(
 		vmbus_connection.channels[channel->offermsg.child_relid],
 		channel);
 }
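
To make the comment's argument concrete, here is a minimal, hypothetical publish/consume sketch; it is not the driver's actual code, and publish_chan(), consume_chan(), chan_slot and chan_pending are made-up names used only for illustration. The writer publishes the pointer with virt_store_mb() before raising its flag, and the reader orders its read of the flag before its read of the pointer, so it can never observe the flag set while still seeing a stale (NULL) pointer.

/* Illustrative only -- not the VMBus driver's actual code. */
#include <linux/compiler.h>
#include <asm/barrier.h>

struct chan;

static struct chan *chan_slot;		/* hypothetical channel array slot */
static unsigned long chan_pending;	/* hypothetical "work queued" flag */

static void publish_chan(struct chan *c)	/* hypothetical helper */
{
	/* Store the pointer and order it before the flag update below. */
	virt_store_mb(chan_slot, c);
	WRITE_ONCE(chan_pending, 1);
}

static struct chan *consume_chan(void)		/* hypothetical helper */
{
	if (!READ_ONCE(chan_pending))
		return NULL;
	/* Order the flag read before the pointer read. */
	virt_rmb();
	return READ_ONCE(chan_slot);
}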