author    Peter Zijlstra <a.p.zijlstra@chello.nl>  2010-03-06 13:20:40 +0100
committer Ingo Molnar <mingo@elte.hu>              2010-03-10 13:22:33 +0100
commit    19925ce778f9fc371b9607625de3bff04c60121e
tree      727f1c39252fd99e7f58d9355b19d49578f5cad6 /arch/x86
parent    perf, x86: Properly account n_added
perf, x86: Fix double disable calls
hw_perf_enable() would disable events that were not yet enabled.

This causes problems with code that assumes that ->enable/->disable
calls are balanced (like the LBR code does).

What happens is that we disable newly added counters that match their
previous assignment, even though they are not yet programmed on the
hardware.

Avoid this by only doing the first pass over the existing events.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: paulus@samba.org
Cc: eranian@google.com
Cc: robert.richter@amd.com
Cc: fweisbec@gmail.com
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
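The balanced-call assumption is easiest to see with a refcount. Below is a
minimal, standalone sketch (illustrative names only, not the actual kernel
LBR code) of why an extra ->disable for an event that was never ->enable'd
is harmful:

	/* balanced_disable.c: standalone illustration, not kernel code. */
	#include <stdio.h>

	/* Stand-in for a per-cpu LBR user refcount. */
	static int lbr_users;

	/* Called from an ->enable hook once an event is programmed. */
	static void lbr_enable(void)
	{
		lbr_users++;
	}

	/* Called from a ->disable hook; assumes a prior matching ->enable. */
	static void lbr_disable(void)
	{
		lbr_users--;
		if (lbr_users < 0)
			fprintf(stderr, "WARNING: unbalanced ->disable, lbr_users = %d\n",
				lbr_users);
	}

	int main(void)
	{
		lbr_enable();   /* event already running: balanced pair, fine */
		lbr_disable();

		/*
		 * The bug: hw_perf_enable() looped over all n_events, so a
		 * newly added event that kept its previous counter assignment
		 * was "disabled" before it had ever been enabled:
		 */
		lbr_disable();  /* drives the refcount to -1 */

		printf("final lbr_users = %d\n", lbr_users);
		return 0;
	}

The fix in the hunk below avoids this by bounding the first pass at
n_running = n_events - n_added, so only events already programmed on the
hardware can be disabled.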
Diffstat (limited to 'arch/x86')
-rw-r--r--  arch/x86/kernel/cpu/perf_event.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 071c8405debd..045cc0bb4c17 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -802,6 +802,7 @@ void hw_perf_enable(void)
 		return;
 
 	if (cpuc->n_added) {
+		int n_running = cpuc->n_events - cpuc->n_added;
 		/*
 		 * apply assignment obtained either from
 		 * hw_perf_group_sched_in() or x86_pmu_enable()
@@ -809,7 +810,7 @@ void hw_perf_enable(void)
 		 * step1: save events moving to new counters
 		 * step2: reprogram moved events into new counters
 		 */
-		for (i = 0; i < cpuc->n_events; i++) {
+		for (i = 0; i < n_running; i++) {
 			event = cpuc->event_list[i];
 			hwc = &event->hw;