author     Dietmar Eggemann <dietmar.eggemann@arm.com>   2021-06-01 10:36:16 +0200
committer  Peter Zijlstra <peterz@infradead.org>         2021-06-17 14:11:42 +0200
commit     83c5e9d573e1f0757f324d01adb6ee77b49c3f0e (patch)
tree       8626b42b1790e2a912c6986cfa3832e4b7da0f7f /kernel/sched
parent     sched/pelt: Check that *_avg are null when *_sum are (diff)
sched/fair: Return early from update_tg_cfs_load() if delta == 0
If the _avg delta is 0, there is no need to update se's _avg
(level n) or cfs_rq's _avg (level n-1); these values stay the same.
Since cfs_rq's _avg isn't changed, i.e. no load is propagated down,
cfs_rq's _sum should stay the same as well.
So bail out after se's _sum has been updated.
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20210601083616.804229-1-dietmar.eggemann@arm.com
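
For context on why the early return is safe: the cfs_rq-side updates below the new return scale the _avg delta back into a _sum contribution via the PELT divider, so a zero delta leaves both cfs_rq->avg.load_avg and cfs_rq->avg.load_sum untouched. A minimal standalone sketch of that arithmetic follows (the struct, the add_positive() helper and the sample values are simplified stand-ins for the kernel code, not its actual definitions):

/* Simplified model of the propagation step in update_tg_cfs_load(). */
#include <stdio.h>

typedef long long s64;

struct pelt_avg {
	unsigned long load_avg;
	s64 load_sum;
};

/* Stand-in for the kernel's add_positive(): add a signed delta, clamp at 0. */
static void add_positive(unsigned long *ptr, long delta)
{
	long res = (long)*ptr + delta;

	*ptr = res > 0 ? (unsigned long)res : 0;
}

int main(void)
{
	struct pelt_avg cfs_rq = { .load_avg = 2048, .load_sum = 94371840 };
	unsigned long divider = 46080;	/* sample PELT divider */
	long delta = 0;			/* new se load_avg minus the old one */

	/*
	 * With delta == 0 both updates are no-ops: load_avg gains nothing
	 * and the load_sum contribution delta * divider is 0 as well.
	 * This is why the patch can return right after refreshing
	 * se->avg.load_sum.
	 */
	add_positive(&cfs_rq.load_avg, delta);
	cfs_rq.load_sum += (s64)delta * divider;

	printf("load_avg=%lu load_sum=%lld\n", cfs_rq.load_avg, cfs_rq.load_sum);
	return 0;
}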
Diffstat (limited to 'kernel/sched')
-rw-r--r--  kernel/sched/fair.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 198514dcbe46..06c8ba7b3400 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3502,9 +3502,12 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
 	load_sum = (s64)se_weight(se) * runnable_sum;
 	load_avg = div_s64(load_sum, divider);
 
+	se->avg.load_sum = runnable_sum;
+
 	delta = load_avg - se->avg.load_avg;
+	if (!delta)
+		return;
 
-	se->avg.load_sum = runnable_sum;
 	se->avg.load_avg = load_avg;
 
 	add_positive(&cfs_rq->avg.load_avg, delta);