author:    Will Deacon <will.deacon@arm.com>    2019-02-22 18:14:59 +0100
committer: Will Deacon <will.deacon@arm.com>    2019-04-08 13:01:02 +0200
commit:    fb24ea52f78e0d595852e09e3a55697c8f442189
tree:      00ca29c7b0b8df6258a1ad1faf34f6e838ada26c  /drivers/net/ethernet/intel/igb
parent:    drivers: Remove useless trailing comments from mmiowb() invocations
drivers: Remove explicit invocations of mmiowb()
mmiowb() is now implied by spin_unlock() on architectures that require
it, so there is no reason to call it from driver code. This patch was
generated using coccinelle:
@mmiowb@
@@
- mmiowb();
and invoked as:
$ for d in drivers include/linux/qed sound; do \
spatch --include-headers --sp-file mmiowb.cocci --dir $d --in-place; done
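
To make the effect of the rule concrete, here is a minimal, hypothetical driver fragment (the foo_* names and register offset are illustrative, not taken from this patch). The only change the semantic patch makes is to delete the trailing mmiowb(); the ordering it provided is now supplied by spin_unlock_irqrestore() on architectures that need it.

#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical driver state; names are illustrative only. */
struct foo_priv {
	spinlock_t	lock;
	void __iomem	*ioaddr;
};

#define FOO_DOORBELL	0x10

static void foo_ring_doorbell(struct foo_priv *priv, u32 val)
{
	unsigned long flags;

	spin_lock_irqsave(&priv->lock, flags);
	writel(val, priv->ioaddr + FOO_DOORBELL);
	mmiowb();	/* <- the only line the coccinelle rule removes */
	spin_unlock_irqrestore(&priv->lock, flags);
}

After the transformation the function is identical except that the mmiowb() line is gone.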
NOTE: mmiowb() has only ever guaranteed ordering in conjunction with
spin_unlock(). However, pairing each mmiowb() removal in this patch with
the corresponding call to spin_unlock() is not at all trivial, so there
is a small chance that this change may regress any drivers incorrectly
relying on mmiowb() to order MMIO writes between CPUs using lock-free
synchronisation. If you've ended up bisecting to this commit, you can
reintroduce the mmiowb() calls using wmb() instead, which should restore
the old behaviour on all architectures other than some esoteric ia64
systems.
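
As a rough illustration of that fallback, a driver that really did rely on mmiowb() without a following spin_unlock() could restore the ordering as sketched below; the function and variable names are hypothetical, and wmb() is a heavier barrier than mmiowb() was.

#include <asm/barrier.h>
#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical lock-free producer path (illustrative names only). */
static void foo_kick_tail_lockless(void __iomem *tail_reg, u32 tail)
{
	writel(tail, tail_reg);
	wmb();	/* placed where mmiowb() used to sit, per the NOTE above */
}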
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Diffstat (limited to 'drivers/net/ethernet/intel/igb')
-rw-r--r--  drivers/net/ethernet/intel/igb/igb_main.c | 5
1 file changed, 0 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 69b230c53fed..09ba94496742 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -6028,11 +6028,6 @@ static int igb_tx_map(struct igb_ring *tx_ring,
 
 	if (netif_xmit_stopped(txring_txq(tx_ring)) || !skb->xmit_more) {
 		writel(i, tx_ring->tail);
-
-		/* we need this if more than one processor can write to our tail
-		 * at a time, it synchronizes IO on IA64/Altix systems
-		 */
-		mmiowb();
 	}
 
 	return 0;
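
For background on the "implied by spin_unlock()" claim in the commit message above: the generic mechanism added elsewhere in this series (include/asm-generic/mmiowb.h) works roughly by having MMIO writes set a per-CPU pending flag that the lock-release path checks. The sketch below is a deliberately simplified illustration of that idea, not the kernel's actual implementation; sketch_writel(), sketch_spin_unlock() and the per-CPU flag name are hypothetical.

#include <linux/io.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Simplified, illustrative sketch only -- NOT the kernel's real code. */
static DEFINE_PER_CPU(bool, sketch_mmiowb_pending);

static inline void sketch_writel(u32 val, void __iomem *addr)
{
	writel(val, addr);
	this_cpu_write(sketch_mmiowb_pending, true);	/* note that an MMIO write happened */
}

static inline void sketch_spin_unlock(spinlock_t *lock)
{
	/* Issue the MMIO barrier on the driver's behalf, but only if an
	 * MMIO write actually happened while the lock was held.
	 */
	if (this_cpu_read(sketch_mmiowb_pending)) {
		mmiowb();
		this_cpu_write(sketch_mmiowb_pending, false);
	}
	spin_unlock(lock);
}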