author:    Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>  2008-12-18 18:56:16 +0100
committer: Ingo Molnar <mingo@elte.hu>                          2008-12-19 09:21:48 +0100
commit:    d5679bd11916eba5c8ee9033003e1a5ce56ece9a (patch)
tree:      bb7c1bef4446e606ea23e4e6711a48013fed3666 /kernel
parent:    sched: framework for sched_mc/smt_power_savings=N (diff)
download:  linux-d5679bd11916eba5c8ee9033003e1a5ce56ece9a.tar.xz
           linux-d5679bd11916eba5c8ee9033003e1a5ce56ece9a.zip
sched: favour lower logical cpu number for sched_mc balance
Impact: change load-balancing direction to match that of irqbalanced

When two groups carry identical load, prefer to move load to the lower
logical cpu number rather than, as in the present logic, to the higher
logical number.

find_busiest_group() looks for a group_leader that has spare capacity to
take more tasks and free up an appropriate least loaded group. If there
is a tie and the load is equal, the group with the higher logical number
is currently favoured.

This conflicts with the user space irqbalance daemon, which moves
interrupts to lower logical numbers when system utilisation is very low.

Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
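For illustration, below is a minimal user-space C sketch of the tie-break
rule this patch introduces when picking group_leader. The struct group,
its first_cpu field and the pick_leader() helper are hypothetical
stand-ins for the kernel's sched_group / cpumask_first(sched_group_cpus())
machinery; this is not kernel code.

/*
 * Sketch of the new tie-break in find_busiest_group(): a candidate
 * group replaces the current leader if it runs more tasks, or if it
 * runs the same number of tasks but starts at a lower logical CPU.
 */
#include <stdio.h>

struct group {                          /* stand-in for struct sched_group */
        int first_cpu;                  /* stand-in for cpumask_first(...)  */
        unsigned long sum_nr_running;   /* tasks running in this group      */
};

static const struct group *pick_leader(const struct group *leader,
                                       const struct group *cand)
{
        if (cand->sum_nr_running > leader->sum_nr_running ||
            (cand->sum_nr_running == leader->sum_nr_running &&
             cand->first_cpu < leader->first_cpu))
                return cand;
        return leader;
}

int main(void)
{
        struct group a = { .first_cpu = 4, .sum_nr_running = 2 };
        struct group b = { .first_cpu = 0, .sum_nr_running = 2 };

        /* Equal load: the group starting at CPU 0 is now preferred. */
        printf("leader starts at cpu %d\n", pick_leader(&a, &b)->first_cpu);
        return 0;
}

This mirrors the second hunk below (group_leader: '>' becomes '<'); the
first hunk applies the symmetric rule to group_min, so the group to be
emptied is the one starting at the higher logical CPU.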
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched.c  4
1 file changed, 2 insertions, 2 deletions
diff --git a/kernel/sched.c b/kernel/sched.c
index 56b285cd5350..94b9d11e3312 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3241,7 +3241,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 		 */
 		if ((sum_nr_running < min_nr_running) ||
 		    (sum_nr_running == min_nr_running &&
-		     cpumask_first(sched_group_cpus(group)) <
+		     cpumask_first(sched_group_cpus(group)) >
 		     cpumask_first(sched_group_cpus(group_min)))) {
 			group_min = group;
 			min_nr_running = sum_nr_running;
@@ -3257,7 +3257,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 		if (sum_nr_running <= group_capacity - 1) {
 			if (sum_nr_running > leader_nr_running ||
 			    (sum_nr_running == leader_nr_running &&
-			     cpumask_first(sched_group_cpus(group)) >
+			     cpumask_first(sched_group_cpus(group)) <
 			     cpumask_first(sched_group_cpus(group_leader)))) {
 				group_leader = group;
 				leader_nr_running = sum_nr_running;