author     Peter Zijlstra <peterz@infradead.org>    2013-09-13 13:14:47 +0200
committer  Ingo Molnar <mingo@kernel.org>           2013-11-19 16:57:42 +0100
commit     06db0b21712f878b808480ef31097637013bbf0f
tree       572c8d97ef964d3e0023626e246fc6ee775b4aa2 /kernel
parent     ftrace, perf: Avoid infinite event generation loop
perf: Remove fragile swevent hlist optimization
Currently we only allocate a single-CPU hashtable for per-CPU
swevents; do away with this optimization, as it is fragile in the face
of things like perf_pmu_migrate_context().

The easiest thing is to make sure all CPUs are consistent with respect
to this state.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20130913111447.GN31370@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel')
 kernel/events/core.c | 8 --------
 1 file changed, 0 insertions(+), 8 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index d724e7757cd1..72348dc192c1 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5680,11 +5680,6 @@ static void swevent_hlist_put(struct perf_event *event)
 {
 	int cpu;
 
-	if (event->cpu != -1) {
-		swevent_hlist_put_cpu(event, event->cpu);
-		return;
-	}
-
 	for_each_possible_cpu(cpu)
 		swevent_hlist_put_cpu(event, cpu);
 }
@@ -5718,9 +5713,6 @@ static int swevent_hlist_get(struct perf_event *event)
 	int err;
 	int cpu, failed_cpu;
 
-	if (event->cpu != -1)
-		return swevent_hlist_get_cpu(event, event->cpu);
-
 	get_online_cpus();
 	for_each_possible_cpu(cpu) {
 		err = swevent_hlist_get_cpu(event, cpu);
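
To make the resulting behaviour concrete, here is a minimal user-space sketch (not the kernel code) of the refcounting pattern the patch leaves behind: get/put now take and drop a reference on every possible CPU's hlist, even for events bound to a single CPU, so a later perf_pmu_migrate_context() cannot move an event onto a CPU whose hlist was never allocated. NR_CPUS, struct cpu_hlist and the helper names below are illustrative stand-ins, not the kernel's own symbols.

/*
 * Minimal user-space model of the post-patch behaviour: every get/put
 * touches the hlist refcount on *all* possible CPUs, keeping per-CPU
 * state consistent across event migration.  Illustrative only.
 */
#include <stdlib.h>

#define NR_CPUS 4

struct cpu_hlist {
	void *table;		/* stands in for the real hash table */
	int   refcount;
};

static struct cpu_hlist cpu_hlists[NR_CPUS];

/* Take a reference on one CPU's hlist, allocating it on first use. */
static int hlist_get_cpu(int cpu)
{
	struct cpu_hlist *h = &cpu_hlists[cpu];

	if (!h->refcount) {
		h->table = calloc(64, sizeof(void *));
		if (!h->table)
			return -1;
	}
	h->refcount++;
	return 0;
}

/* Drop a reference; free the hlist when the last user goes away. */
static void hlist_put_cpu(int cpu)
{
	struct cpu_hlist *h = &cpu_hlists[cpu];

	if (--h->refcount == 0) {
		free(h->table);
		h->table = NULL;
	}
}

/*
 * After the patch: always reference every possible CPU, even for an
 * event bound to a single CPU, so nothing breaks if the event's
 * context is later migrated to another CPU.
 */
static int hlist_get(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (hlist_get_cpu(cpu)) {
			/* Roll back the CPUs we already referenced. */
			while (cpu--)
				hlist_put_cpu(cpu);
			return -1;
		}
	}
	return 0;
}

static void hlist_put(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		hlist_put_cpu(cpu);
}

int main(void)
{
	if (hlist_get())
		return 1;
	/* A migration between CPUs is now harmless: every CPU already
	 * holds an allocated hlist with a matching refcount. */
	hlist_put();
	return 0;
}

The cost is allocating hlists on CPUs that may never see the event, which is exactly the single-CPU optimization the patch removes in exchange for consistent per-CPU state.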