author     Hugh Dickins <hugh@veritas.com>                        2008-02-23 20:40:17 +0100
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>   2008-02-23 21:09:28 +0100
commit     1e8352784abaedb424e63fa700e93e6c1307785f (patch)
tree       eb7ba51fa40e209ed2a1636b3404cca58141bed1 /include/asm-powerpc
parent     PM: Introduce PM_EVENT_HIBERNATE callback state (diff)
percpu: fix DEBUG_PREEMPT per_cpu checking
The 2.6.25-rc1 percpu changes broke CONFIG_DEBUG_PREEMPT's per_cpu checking
on several architectures. On s390, sparc64 and x86 it has been weakened to
not checking at all; whereas on powerpc64 it has become too strict, issuing
warnings from __raw_get_cpu_var in io_schedule and init_timer, for example.

Fix this by weakening powerpc's __my_cpu_offset to use the non-checking
local_paca instead of get_paca() (which itself contains such a check), and
by strengthening the generic my_cpu_offset to go the old slow way via
smp_processor_id() when CONFIG_DEBUG_PREEMPT is set (debug_smp_processor_id()
is where all the knowledge of what is correct when lives).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Reviewed-by: Mike Travis <travis@sgi.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
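For context, a minimal sketch of the generic side of the fix described above.
The diff shown on this page is limited to include/asm-powerpc, so the
asm-generic/percpu.h hunk is not reproduced here; the exact macro names in
that header (in particular the raw_smp_processor_id() fallback) are an
assumption based on the commit message, not a quote of the applied patch:

/* include/asm-generic/percpu.h -- illustrative sketch, not the exact hunk */

/* Fast path: an arch may supply its own __my_cpu_offset. */
#ifndef __my_cpu_offset
#define __my_cpu_offset per_cpu_offset(raw_smp_processor_id())
#endif

/*
 * With CONFIG_DEBUG_PREEMPT, take the old slow path through
 * smp_processor_id(), so that debug_smp_processor_id() can warn when a
 * per-cpu variable is touched from a preemptible context.
 */
#ifdef CONFIG_DEBUG_PREEMPT
#define my_cpu_offset per_cpu_offset(smp_processor_id())
#else
#define my_cpu_offset __my_cpu_offset
#endif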
Diffstat (limited to 'include/asm-powerpc')
-rw-r--r--  include/asm-powerpc/percpu.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-powerpc/percpu.h b/include/asm-powerpc/percpu.h
index ccb0523eb3b4..f879252b7ea6 100644
--- a/include/asm-powerpc/percpu.h
+++ b/include/asm-powerpc/percpu.h
@@ -13,7 +13,7 @@
 #include <asm/paca.h>
 
 #define __per_cpu_offset(cpu) (paca[cpu].data_offset)
-#define __my_cpu_offset get_paca()->data_offset
+#define __my_cpu_offset local_paca->data_offset
 #define per_cpu_offset(x) (__per_cpu_offset(x))
 
 #endif /* CONFIG_SMP */
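Why this one-line change silences the spurious warnings: on 64-bit powerpc the
paca is reached through a fixed register, and the commit message notes that
get_paca() itself contains a preemption check under CONFIG_DEBUG_PREEMPT, so
every __raw_get_cpu_var access was being checked even where that is not
required. A rough sketch of the distinction between the two accessors; the
exact asm-powerpc/paca.h definitions may differ:

/* Illustrative sketch of the asm-powerpc/paca.h accessors, not a quote. */
register struct paca_struct *local_paca asm("r13");	/* raw access, no checking */

#ifdef CONFIG_DEBUG_PREEMPT
/* get_paca() additionally runs the preemption-safety check. */
#define get_paca()	((void) debug_smp_processor_id(), local_paca)
#else
#define get_paca()	local_paca
#endif

Switching __my_cpu_offset to local_paca leaves the checking to the generic
my_cpu_offset path, which is where debug_smp_processor_id() already knows
which contexts are legitimate.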