author		Andrew Vagin <avagin@openvz.org>	2011-11-07 13:54:12 +0100
committer	Ingo Molnar <mingo@elte.hu>		2011-11-14 13:31:28 +0100
commit		5d81e5cfb37a174e8ddc0413e2e70cdf05807ace (patch)
tree		3190ed611a1b88092d4a0aee584b505999a26f17 /kernel/events
parent		perf: Carve out callchain functionality (diff)
download	linux-5d81e5cfb37a174e8ddc0413e2e70cdf05807ace.tar.xz
		linux-5d81e5cfb37a174e8ddc0413e2e70cdf05807ace.zip
events: Don't divide events if it has field period
This patch solves the following problem:
Currently some samples may be lost due to throttling. The number of
samples is restricted by sysctl_perf_event_sample_rate/HZ. A trace
event is divided into several samples according to the event's period.
I am not sure that we should generate more than one sample for each
trace event; I think the better way is to use SAMPLE_PERIOD.
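For illustration, here is a minimal sketch of the pre-patch fan-out,
with assumed names rather than the actual perf_swevent_*() code: the
count a tracepoint passes in (for sched_stat_sleep, the sleep delay in
nanoseconds) is charged against the event's remaining period, and one
sample is emitted per elapsed sample_period until throttling cuts the
stream off.

#include <linux/types.h>

/*
 * Sketch only: a single event "hit" carrying a large count nr fans out
 * into many samples when one sample is emitted per sample_period.
 */
static unsigned int samples_for_count(u64 nr, u64 sample_period, s64 *period_left)
{
	unsigned int samples = 0;

	*period_left -= nr;		/* charge the whole count at once */
	while (*period_left <= 0) {	/* one sample per elapsed period */
		*period_left += sample_period;
		samples++;		/* real code stops early once throttled */
	}
	return samples;
}

With sample_period == 1, a single sched_stat_sleep hit with
delay=4499585 would try to fan out into roughly 4.5 million samples,
which is why throttling drops most of them.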
E.g.: I want to trace when a process sleeps. I created a process which
sleeps for 1ms and for 4ms. perf recorded 100 events in both cases.
swapper 0 [000] 1141.371830: sched_stat_sleep: comm=foo pid=1801 delay=1386750 [ns]
swapper 0 [000] 1141.369444: sched_stat_sleep: comm=foo pid=1801 delay=4499585 [ns]
In the first case the kernel wants to send 4499585 events and in the
second case it wants to send 1386750 events. perf report shows that the
process sleeps for an equal amount of time in both places. That is a
bug.
With this patch the kernel generates one event for each "sleep" and the
time slice is saved in the "period" field. Perf knows how to handle it.
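As a complement, the sketch below shows how userspace might request
this behaviour via perf_event_open(2): PERF_SAMPLE_PERIOD set in
sample_type together with attr.freq == 0 is exactly the condition the
new branch in perf_swevent_event() tests. The helper name and the
hard-coded tracepoint id are assumptions made for this example, not
part of the patch; tools normally read the id from
/sys/kernel/debug/tracing/events/sched/sched_stat_sleep/id.

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* Illustrative helper: open sched:sched_stat_sleep so that each hit
 * yields one sample whose "period" field carries the sleep delay. */
static int open_sleep_event(pid_t pid, int cpu, unsigned long long tp_id)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_TRACEPOINT;
	attr.config = tp_id;		/* id of sched:sched_stat_sleep */
	attr.sample_period = 1;		/* counting mode, not frequency mode */
	attr.freq = 0;			/* keeps the !attr.freq test true */
	attr.sample_type = PERF_SAMPLE_TID | PERF_SAMPLE_TIME |
			   PERF_SAMPLE_PERIOD;	/* period carries the delay */

	return syscall(__NR_perf_event_open, &attr, pid, cpu, -1, 0);
}

A consumer reading the mmap'ed ring buffer can then sum the per-sample
period values to get the total sleep time instead of counting samples,
which is presumably what "Perf knows how to handle it" refers to.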
Signed-off-by: Andrew Vagin <avagin@openvz.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1320670457-2633428-3-git-send-email-avagin@openvz.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/events')
-rw-r--r--	kernel/events/core.c	7
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index eadac69265fc..8d9dea56c262 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4528,7 +4528,6 @@ static void perf_swevent_overflow(struct perf_event *event, u64 overflow,
 	struct hw_perf_event *hwc = &event->hw;
 	int throttle = 0;
 
-	data->period = event->hw.last_period;
 	if (!overflow)
 		overflow = perf_swevent_set_period(event);
 
@@ -4562,6 +4561,12 @@ static void perf_swevent_event(struct perf_event *event, u64 nr,
 	if (!is_sampling_event(event))
 		return;
 
+	if ((event->attr.sample_type & PERF_SAMPLE_PERIOD) && !event->attr.freq) {
+		data->period = nr;
+		return perf_swevent_overflow(event, 1, data, regs);
+	} else
+		data->period = event->hw.last_period;
+
 	if (nr == 1 && hwc->sample_period == 1 && !event->attr.freq)
 		return perf_swevent_overflow(event, 1, data, regs);