author      Mauro Carvalho Chehab <mchehab@s-opensource.com>	2017-05-14 19:45:35 +0200
committer   Jonathan Corbet <corbet@lwn.net>	2017-07-14 21:51:39 +0200
commit      0e95c85341b7b5be34f999b6023e3df4d03f4977 (patch)
tree        0744cb3c8af677ad7e347a520c06e74c7661fbec
parent      io-mapping.txt: standardize document format (diff)
io_ordering.txt: standardize document format
Each text file under Documentation follows a different format. Some
don't even have titles!

Change its representation to follow the adopted standard, using ReST
markups for it to be parseable by Sphinx:

- Add a title;
- mark literal-blocks as such.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-rw-r--r--   Documentation/io_ordering.txt   62
1 file changed, 33 insertions(+), 29 deletions(-)
diff --git a/Documentation/io_ordering.txt b/Documentation/io_ordering.txt
index 9faae6f26d32..2ab303ce9a0d 100644
--- a/Documentation/io_ordering.txt
+++ b/Documentation/io_ordering.txt
@@ -1,3 +1,7 @@
+==============================================
+Ordering I/O writes to memory-mapped addresses
+==============================================
+
On some platforms, so-called memory-mapped I/O is weakly ordered. On such
platforms, driver writers are responsible for ensuring that I/O writes to
memory-mapped addresses on their device arrive in the order intended. This is
@@ -8,39 +12,39 @@ critical section of code protected by spinlocks. This would ensure that
subsequent writes to I/O space arrived only after all prior writes (much like a
memory barrier op, mb(), only with respect to I/O).
-A more concrete example from a hypothetical device driver:
+A more concrete example from a hypothetical device driver::
- ...
-CPU A: spin_lock_irqsave(&dev_lock, flags)
-CPU A: val = readl(my_status);
-CPU A: ...
-CPU A: writel(newval, ring_ptr);
-CPU A: spin_unlock_irqrestore(&dev_lock, flags)
- ...
-CPU B: spin_lock_irqsave(&dev_lock, flags)
-CPU B: val = readl(my_status);
-CPU B: ...
-CPU B: writel(newval2, ring_ptr);
-CPU B: spin_unlock_irqrestore(&dev_lock, flags)
- ...
+ ...
+ CPU A: spin_lock_irqsave(&dev_lock, flags)
+ CPU A: val = readl(my_status);
+ CPU A: ...
+ CPU A: writel(newval, ring_ptr);
+ CPU A: spin_unlock_irqrestore(&dev_lock, flags)
+ ...
+ CPU B: spin_lock_irqsave(&dev_lock, flags)
+ CPU B: val = readl(my_status);
+ CPU B: ...
+ CPU B: writel(newval2, ring_ptr);
+ CPU B: spin_unlock_irqrestore(&dev_lock, flags)
+ ...
In the case above, the device may receive newval2 before it receives newval,
-which could cause problems. Fixing it is easy enough though:
+which could cause problems. Fixing it is easy enough though::
- ...
-CPU A: spin_lock_irqsave(&dev_lock, flags)
-CPU A: val = readl(my_status);
-CPU A: ...
-CPU A: writel(newval, ring_ptr);
-CPU A: (void)readl(safe_register); /* maybe a config register? */
-CPU A: spin_unlock_irqrestore(&dev_lock, flags)
- ...
-CPU B: spin_lock_irqsave(&dev_lock, flags)
-CPU B: val = readl(my_status);
-CPU B: ...
-CPU B: writel(newval2, ring_ptr);
-CPU B: (void)readl(safe_register); /* maybe a config register? */
-CPU B: spin_unlock_irqrestore(&dev_lock, flags)
+ ...
+ CPU A: spin_lock_irqsave(&dev_lock, flags)
+ CPU A: val = readl(my_status);
+ CPU A: ...
+ CPU A: writel(newval, ring_ptr);
+ CPU A: (void)readl(safe_register); /* maybe a config register? */
+ CPU A: spin_unlock_irqrestore(&dev_lock, flags)
+ ...
+ CPU B: spin_lock_irqsave(&dev_lock, flags)
+ CPU B: val = readl(my_status);
+ CPU B: ...
+ CPU B: writel(newval2, ring_ptr);
+ CPU B: (void)readl(safe_register); /* maybe a config register? */
+ CPU B: spin_unlock_irqrestore(&dev_lock, flags)
Here, the reads from safe_register will cause the I/O chipset to flush any
pending writes before actually posting the read to the chipset, preventing
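
As an illustration of the pattern this document describes (not part of the
patch itself), here is a minimal sketch of how the flushing read might look
in a driver's write path. The register names ring_ptr and safe_register are
the document's hypothetical device registers; the my_dev structure and the
my_dev_post_entry() function are invented for this sketch only.

	#include <linux/io.h>
	#include <linux/spinlock.h>
	#include <linux/types.h>

	struct my_dev {
		spinlock_t	dev_lock;
		void __iomem	*ring_ptr;	/* posted-write ring doorbell   */
		void __iomem	*safe_register;	/* side-effect-free register    */
	};

	static void my_dev_post_entry(struct my_dev *dev, u32 newval)
	{
		unsigned long flags;

		spin_lock_irqsave(&dev->dev_lock, flags);
		writel(newval, dev->ring_ptr);
		/*
		 * Read back a harmless register so the chipset flushes the
		 * posted write before the lock is released; otherwise a
		 * write issued by another CPU inside this same critical
		 * section could reach the device first.
		 */
		(void)readl(dev->safe_register);
		spin_unlock_irqrestore(&dev->dev_lock, flags);
	}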