author | Peter Zijlstra <peterz@infradead.org> | 2015-05-27 03:39:36 +0200
committer | Rusty Russell <rusty@rustcorp.com.au> | 2015-05-28 04:02:04 +0200
commit | 6695b92a60bc7160c92d6dc5b17cc79673017c2f (patch)
tree | daf0fd1de6cc7201be163084654a6edc0dbcc3e5 /kernel/time
parent | rbtree: Make lockless searches non-fatal (diff)
download | linux-6695b92a60bc7160c92d6dc5b17cc79673017c2f.tar.xz, linux-6695b92a60bc7160c92d6dc5b17cc79673017c2f.zip
seqlock: Better document raw_write_seqcount_latch()
Improve the documentation of the latch technique as used in the
current timekeeping code, such that it can be readily employed
elsewhere.
Borrow from the comments in timekeeping and replace those with a
reference to this more generic comment.
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Diffstat (limited to 'kernel/time')
-rw-r--r-- | kernel/time/timekeeping.c | 27
1 file changed, 1 insertion(+), 26 deletions(-)
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 946acb72179f..cbfedddbf0cb 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -330,32 +330,7 @@ static inline s64 timekeeping_get_ns(struct tk_read_base *tkr)
  * We want to use this from any context including NMI and tracing /
  * instrumenting the timekeeping code itself.
  *
- * So we handle this differently than the other timekeeping accessor
- * functions which retry when the sequence count has changed. The
- * update side does:
- *
- * smp_wmb();	<- Ensure that the last base[1] update is visible
- * tkf->seq++;
- * smp_wmb();	<- Ensure that the seqcount update is visible
- * update(tkf->base[0], tkr);
- * smp_wmb();	<- Ensure that the base[0] update is visible
- * tkf->seq++;
- * smp_wmb();	<- Ensure that the seqcount update is visible
- * update(tkf->base[1], tkr);
- *
- * The reader side does:
- *
- * do {
- *	seq = tkf->seq;
- *	smp_rmb();
- *	idx = seq & 0x01;
- *	now = now(tkf->base[idx]);
- *	smp_rmb();
- * } while (seq != tkf->seq)
- *
- * As long as we update base[0] readers are forced off to
- * base[1]. Once base[0] is updated readers are redirected to base[0]
- * and the base[1] update takes place.
+ * Employ the latch technique; see @raw_write_seqcount_latch.
  *
  * So if a NMI hits the update of base[0] then it will use base[1]
  * which is still consistent. In the worst case this can result is a
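The comment removed above is a compact description of the latch technique itself. As a standalone illustration, the sketch below transcribes that writer/reader pattern into plain user-space C. All names here (struct latch, struct sample, latch_write(), latch_read()) are invented for the example, and C11 seq_cst fences are used as conservative stand-ins for the kernel's smp_wmb()/smp_rmb(); this is not the kernel implementation, which instead uses raw_write_seqcount_latch() and its own accessors.

```c
/*
 * Minimal user-space sketch of the seqcount latch pattern described in
 * the removed timekeeping comment.  Illustrative names only; not the
 * kernel API.  seq_cst fences conservatively stand in for smp_wmb()
 * and smp_rmb().
 */
#include <stdatomic.h>
#include <stdio.h>

#define smp_wmb()	atomic_thread_fence(memory_order_seq_cst)
#define smp_rmb()	atomic_thread_fence(memory_order_seq_cst)

struct sample {
	unsigned long long cycles;
	unsigned long long ns;
};

struct latch {
	unsigned int seq;	/* even: both copies consistent */
	struct sample base[2];	/* two copies of the data       */
};

/*
 * Update side: while base[0] is being rewritten, seq is odd and readers
 * are steered to base[1]; once seq is even again they return to base[0]
 * and base[1] can be rewritten in turn.
 */
static void latch_write(struct latch *l, struct sample val)
{
	smp_wmb();		/* last base[1] update visible before seq++ */
	l->seq++;
	smp_wmb();		/* seq update visible before base[0] update */
	l->base[0] = val;

	smp_wmb();		/* base[0] update visible before seq++ */
	l->seq++;
	smp_wmb();		/* seq update visible before base[1] update */
	l->base[1] = val;
}

/*
 * Read side: pick the copy selected by the low bit of seq and retry
 * only if the writer moved on while we were reading.
 */
static struct sample latch_read(struct latch *l)
{
	struct sample s;
	unsigned int seq;

	do {
		seq = l->seq;
		smp_rmb();
		s = l->base[seq & 0x01];
		smp_rmb();
	} while (seq != l->seq);

	return s;
}

int main(void)
{
	struct latch l = { 0 };

	latch_write(&l, (struct sample){ .cycles = 12345, .ns = 6789 });
	struct sample s = latch_read(&l);
	printf("cycles=%llu ns=%llu\n", s.cycles, s.ns);
	return 0;
}
```

The point of keeping two copies is that a reader, even an NMI that interrupts the writer mid-update, always finds at least one copy that is internally consistent; the worst case is a slightly stale value rather than a torn one.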