commit 5e67b51e3fb22ad43faf9589e9019ad9c6a00413
tree 31e55312ca60e9efa447abdeaeb6ca1546a14673
parent ring-buffer: Add stats field for amount read from trace ring buffer
author Namhyung Kim <namhyung.kim@lge.com> 2012-12-27 03:49:45 +0100
committer Steven Rostedt <rostedt@goodmis.org> 2013-01-30 17:02:05 +0100
tracing: Use sched_clock_cpu for trace_clock_global
On systems with an unstable sched_clock, all cpu_clock() does is disable and
re-enable local interrupts around the call to sched_clock_cpu(). On stable
systems the two are the same.
trace_clock_global() already disables interrupts, so it can call
sched_clock_cpu() directly.
Link: http://lkml.kernel.org/r/1356576585-28782-2-git-send-email-namhyung@kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
 kernel/trace/trace_clock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/trace_clock.c b/kernel/trace/trace_clock.c
index 22b638b28e48..24bf48eabfcc 100644
--- a/kernel/trace/trace_clock.c
+++ b/kernel/trace/trace_clock.c
@@ -84,7 +84,7 @@ u64 notrace trace_clock_global(void)
 	local_irq_save(flags);
 
 	this_cpu = raw_smp_processor_id();
-	now = cpu_clock(this_cpu);
+	now = sched_clock_cpu(this_cpu);
 	/*
 	 * If in an NMI context then dont risk lockups and return the
 	 * cpu_clock() time: