author     Suresh Siddha <suresh.b.siddha@intel.com>  2010-09-18 00:02:32 +0200
committer  Ingo Molnar <mingo@elte.hu>                2010-11-10 23:13:56 +0100
commit     aae6d3ddd8b90f5b2c8d79a2b914d1706d124193 (patch)
tree       b993f929f4b1cc38ef01094ff4504eaf358adb31 /include
parent     Merge git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6 (diff)
download   linux-aae6d3ddd8b90f5b2c8d79a2b914d1706d124193.tar.xz
           linux-aae6d3ddd8b90f5b2c8d79a2b914d1706d124193.zip
sched: Use group weight, idle cpu metrics to fix imbalances during idle
Currently we consider a sched domain to be well balanced when the imbalance
is less than the domain's imbalance_pct. As the number of cores and threads
increases, the current values of imbalance_pct (for example, 25% for a
NUMA domain) are not enough to detect imbalances like:
a) On a WSM-EP system (two sockets, each with 6 cores and 12 logical threads),
24 cpu-hogging tasks get scheduled as 13 on one socket and 11 on the other,
leaving one HT cpu idle.
b) On a hypothetical 2-socket NHM-EX system (each socket with 8 cores and
16 logical threads), 16 cpu-hogging tasks can get scheduled as 9 on one
socket and 7 on the other, leaving a core idle in one socket while the
other socket has a core with both its HT siblings busy.
While this issue could be fixed by decreasing the domain's imbalance_pct
(by making it a function of the number of logical cpus in the domain), that
can potentially cause more task migrations across sched groups in the
overloaded case.
Fix this by using imbalance_pct only during newly-idle and busy
load balancing. During idle load balancing, instead check whether the
number of idle cpus differs between the busiest group and this
sched_group, or whether the busiest group has more tasks than its weight,
tasks that the idle cpu in this_group can pull.
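As a rough, standalone sketch of the decision described above (all names
here, such as struct group_stats and balanced(), are illustrative
assumptions; the real logic lives in find_busiest_group() in
kernel/sched_fair.c, which this include/-limited diffstat does not show):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the per-group statistics the balancer keeps. */
struct group_stats {
	unsigned int nr_running;	/* tasks currently running in the group */
	unsigned int idle_cpus;		/* idle cpus in the group */
	unsigned int group_weight;	/* cpus in the group (the new field) */
};

/*
 * Return true when the domain should be treated as balanced.  When this
 * cpu is idle, skip the imbalance_pct test: instead compare idle-cpu
 * counts and check whether the busiest group is overloaded (more tasks
 * than cpus).  Newly-idle and busy balancing keep the imbalance_pct test.
 */
static bool balanced(bool cpu_idle, unsigned int imbalance_pct,
		     unsigned long this_load, unsigned long max_load,
		     const struct group_stats *this_grp,
		     const struct group_stats *busiest)
{
	if (cpu_idle)
		return this_grp->idle_cpus <= busiest->idle_cpus + 1 &&
		       busiest->nr_running <= busiest->group_weight;

	return 100 * max_load <= imbalance_pct * this_load;
}

int main(void)
{
	/* Example a): 13 vs 11 tasks on two 12-cpu sockets. */
	struct group_stats this_grp = { .nr_running = 11, .idle_cpus = 1,
					.group_weight = 12 };
	struct group_stats busiest  = { .nr_running = 13, .idle_cpus = 0,
					.group_weight = 12 };

	/* Prints 0: the busiest group is overloaded, so pull a task. */
	printf("balanced: %d\n", balanced(true, 125, 11 * 1024UL, 13 * 1024UL,
					  &this_grp, &busiest));
	return 0;
}

Note how the 13-vs-11 split from example a) is within a 25% imbalance_pct
(13/11 is about an 18% imbalance), yet the check above still reports an
imbalance because the busiest group runs more tasks than its weight.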
Reported-by: Nikhil Rao <ncrao@google.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1284760952.2676.11.camel@sbsiddha-MOBL3.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'include')
-rw-r--r--   include/linux/sched.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index d0036e52a24a..2c79e921a68b 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -862,6 +862,7 @@ struct sched_group {
 	 * single CPU.
 	 */
 	unsigned int cpu_power, cpu_power_orig;
+	unsigned int group_weight;
 
 	/*
 	 * The CPUs this group covers.
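The hunk above only adds the field; presumably the rest of the patch fills
it in the scheduler core with the cpu count of the group's cpumask,
something along the lines of cpumask_weight(sched_group_cpus(sg)) (both
real kernel helpers of this era, though the assignment site is a guess
since it lies outside this include/-limited diff). A standalone
illustration, with a plain bitmask standing in for struct cpumask:

#include <stdio.h>

/* Stand-in for the kernel's struct cpumask: one bit per cpu. */
typedef unsigned long cpumask_t;

/* Minimal popcount, mirroring what cpumask_weight() computes. */
static unsigned int mask_weight(cpumask_t mask)
{
	unsigned int w = 0;

	for (; mask; mask &= mask - 1)	/* clear the lowest set bit */
		w++;
	return w;
}

int main(void)
{
	cpumask_t socket0 = 0x0fffUL;	/* cpus 0-11: one 12-thread socket */

	/* group_weight would presumably be set per group like this. */
	printf("group_weight = %u\n", mask_weight(socket0));	/* 12 */
	return 0;
}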