author		Håkon Bugge <Haakon.Bugge@oracle.com>	2017-07-20 12:28:55 +0200
committer	David S. Miller <davem@davemloft.net>	2017-07-21 00:33:01 +0200
commit		e623a48ee433985f6ca0fb238f0002cc2eccdf53
tree		e3dc5acbe886b14e0dde4296b6b78748ac7175cd /net
parent		net: ethernet: ti: cpsw: Push the request_irq function to the end of probe
rds: Make sure updates to cp_send_gen can be observed
cp->cp_send_gen is accessed as a plain variable, even though it may be
used concurrently by different threads.

Fix this by using READ_ONCE()/WRITE_ONCE() where it is incremented and
READ_ONCE() where it is read outside the acquire_in_xmit()/release_in_xmit()
protection.
Normative reference from the Linux-Kernel Memory Model:

    Loads from and stores to shared (but non-atomic) variables should
    be protected with the READ_ONCE(), WRITE_ONCE(), and
    ACCESS_ONCE().
Clause 5.1.2.4/25 in the C standard is also relevant.
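
For illustration only (not part of the patch), here is a minimal user-space
sketch of the marked-access pattern described above. The function names, the
counter name, and the simplified READ_ONCE()/WRITE_ONCE() stand-ins (plain
volatile casts, roughly what the kernel macros reduce to for word-sized
types) are assumptions made for this example:

    #include <stdio.h>

    /* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(): a
     * volatile access forces the compiler to emit exactly one load or one
     * store, so it cannot tear, re-fetch, or fuse the access. */
    #define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
    #define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

    static unsigned int send_gen;   /* shared, non-atomic generation counter */

    /* Bump the counter the way the patch does: one marked load plus one
     * marked store, instead of a plain send_gen++ that the compiler is
     * free to split, reorder, or cache in a register. */
    static unsigned int bump_send_gen(void)
    {
            unsigned int gen = READ_ONCE(send_gen) + 1;

            WRITE_ONCE(send_gen, gen);
            return gen;
    }

    int main(void)
    {
            printf("send_gen is now %u\n", bump_send_gen());
            return 0;
    }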
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Knut Omang <knut.omang@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net')
 net/rds/send.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/net/rds/send.c b/net/rds/send.c
index e81aa176f4e2..41b9f0f5bb9c 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -170,8 +170,8 @@ restart:
 	 * The acquire_in_xmit() check above ensures that only one
 	 * caller can increment c_send_gen at any time.
 	 */
-	cp->cp_send_gen++;
-	send_gen = cp->cp_send_gen;
+	send_gen = READ_ONCE(cp->cp_send_gen) + 1;
+	WRITE_ONCE(cp->cp_send_gen, send_gen);
 
 	/*
 	 * rds_conn_shutdown() sets the conn state and then tests RDS_IN_XMIT,
@@ -431,7 +431,7 @@ over_batch:
 		smp_mb();
 		if ((test_bit(0, &conn->c_map_queued) ||
 		    !list_empty(&cp->cp_send_queue)) &&
-		    send_gen == cp->cp_send_gen) {
+		    send_gen == READ_ONCE(cp->cp_send_gen)) {
 			rds_stats_inc(s_send_lock_queue_raced);
 			if (batch_count < send_batch_count)
 				goto restart;
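
As a rough illustration of the check in the second hunk: the sender records
the generation it published while it held the transmit flag and, after
dropping it, compares that snapshot against a fresh READ_ONCE() of the
counter; a mismatch means another thread entered the send path in the
meantime, so the queue is re-scanned rather than the work being lost. The
sketch below models only that comparison, reusing the send_gen counter and
READ_ONCE() stand-in from the earlier example; the function name is
hypothetical and this is not the RDS implementation:

    /* Did another sender bump the generation counter after we took our
     * snapshot?  If so, our view of the send queue may be stale and the
     * caller should restart its scan. */
    static int raced_with_other_sender(unsigned int my_gen)
    {
            return READ_ONCE(send_gen) != my_gen;
    }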