author	john stultz <johnstul@us.ibm.com>	2007-10-17 08:27:18 +0200
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>	2007-10-17 17:42:53 +0200
commit	b2d9323d139f5c384fa1ef1d74773b4db1c09b3d (patch)
tree	ad2705044b5b781aeb1a119d9c8548a044c7e21f /kernel/time
parent	Use ERESTART_RESTARTBLOCK if poll() is interrupted by a signal (diff)
download	linux-b2d9323d139f5c384fa1ef1d74773b4db1c09b3d.tar.xz
	linux-b2d9323d139f5c384fa1ef1d74773b4db1c09b3d.zip
Use num_possible_cpus() instead of NR_CPUS for timer distribution
To avoid lock contention, we distribute the sched_timer calls across the cpus so they do not all trigger at the same instant. However, I used NR_CPUS, which can cause needless grouping on small SMP systems depending on your kernel config. This patch converts the code to use num_possible_cpus(), so the timers are spread as evenly as possible on every machine.

Briefly tested with NR_CPUS=255 and verified reduced contention.

Signed-off-by: John Stultz <johnstul@us.ibm.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
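For illustration only (not part of the patch): a minimal, stand-alone user-space sketch of the skew calculation the diff below touches, assuming HZ=1000 (tick_period = 1,000,000 ns), NR_CPUS=255 (as in the test above) and a machine with 4 possible CPUs; num_possible_cpus() is faked here so the program runs outside the kernel.

/*
 * Sketch (not kernel code) of the per-cpu sched_timer skew, comparing
 * division by the compile-time NR_CPUS with division by the number of
 * CPUs that can actually exist on this machine.
 */
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_TICK	1000000ULL	/* tick_period for HZ=1000, assumed */
#define NR_CPUS		255		/* compile-time maximum, as in the old code */

static unsigned int num_possible_cpus(void)
{
	return 4;			/* pretend this is a 4-CPU box */
}

int main(void)
{
	for (unsigned int cpu = 0; cpu < num_possible_cpus(); cpu++) {
		uint64_t half = NSEC_PER_TICK >> 1;

		/* old: divide half a tick by the config maximum */
		uint64_t old_off = half / NR_CPUS * cpu;
		/* new: divide by the CPUs that can actually exist */
		uint64_t new_off = half / num_possible_cpus() * cpu;

		printf("cpu %u: old offset %6llu ns, new offset %6llu ns\n",
		       cpu, (unsigned long long)old_off,
		       (unsigned long long)new_off);
	}
	return 0;
}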
Diffstat (limited to 'kernel/time')
-rw-r--r--	kernel/time/tick-sched.c	|	2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 8c3fef1db09c..ce89ffb474d0 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -570,7 +570,7 @@ void tick_setup_sched_timer(void)
 	/* Get the next period (per cpu) */
 	ts->sched_timer.expires = tick_init_jiffy_update();
 	offset = ktime_to_ns(tick_period) >> 1;
-	do_div(offset, NR_CPUS);
+	do_div(offset, num_possible_cpus());
 	offset *= smp_processor_id();
 	ts->sched_timer.expires = ktime_add_ns(ts->sched_timer.expires, offset);
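With the assumed numbers from the sketch above (1 ms tick, 4 possible CPUs), the new divisor skews the per-cpu timers by 0, 125, 250 and 375 µs, whereas dividing by NR_CPUS=255 would have left all four expirations within roughly 6 µs of each other, which is the bunching and lock contention the patch removes.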