path: root/kernel/sched/debug.c
author    Vincent Guittot <vincent.guittot@linaro.org>  2021-01-07 11:33:25 +0100
committer Peter Zijlstra <peterz@infradead.org>  2021-01-14 11:20:11 +0100
commit    e9b9734b74656abb585a7f6fabf1d30ce00e51ea (patch)
tree      b92e6450f0f8b5f6b29145a8012dc937428aa98f /kernel/sched/debug.c
parent    sched/fair: Don't set LBF_ALL_PINNED unnecessarily (diff)
download  linux-e9b9734b74656abb585a7f6fabf1d30ce00e51ea.tar.xz
          linux-e9b9734b74656abb585a7f6fabf1d30ce00e51ea.zip
sched/fair: Reduce cases for active balance
Active balance is triggered for a number of voluntary cases, such as misfit or pinned task cases, but also after a number of load balance attempts have failed to migrate a task. There is no need to use active load balance when the group is overloaded, because an overloaded state means that there is at least one waiting task. Nevertheless, that waiting task is not selected and detached until the threshold becomes higher than its load. The threshold increases with the number of failed load balance attempts (see the condition "if ((load >> env->sd->nr_balance_failed) > env->imbalance)" in detach_tasks()), so the waiting task ends up being selected after a number of attempts.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Link: https://lkml.kernel.org/r/20210107103325.30851-4-vincent.guittot@linaro.org
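
For illustration, below is a minimal user-space sketch (not the kernel implementation) of the threshold behaviour the changelog refers to: detach_tasks() skips a task whose load, right-shifted by the number of failed balance attempts, still exceeds the remaining imbalance. The names load, nr_balance_failed and imbalance mirror the kernel fields, and the concrete numbers are assumed purely for the example.

    #include <stdio.h>

    /* Same shape as the kernel check:
     * if ((load >> env->sd->nr_balance_failed) > env->imbalance) goto next;
     */
    static int can_detach(unsigned long load, unsigned int nr_balance_failed,
                          unsigned long imbalance)
    {
            return !((load >> nr_balance_failed) > imbalance);
    }

    int main(void)
    {
            unsigned long load = 1024, imbalance = 300;

            for (unsigned int failed = 0; failed < 4; failed++)
                    printf("nr_balance_failed=%u -> detach? %s\n",
                           failed,
                           can_detach(load, failed, imbalance) ? "yes" : "no");
            /*
             * With load=1024 and imbalance=300, the shifted load goes
             * 1024, 512, 256, ... so the waiting task is only detached
             * after two failed attempts - the effect described above.
             */
            return 0;
    }
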
Diffstat (limited to 'kernel/sched/debug.c')
0 files changed, 0 insertions, 0 deletions