author     Odin Ugedal <odin@uged.al>             2021-06-24 13:18:15 +0200
committer  Peter Zijlstra <peterz@infradead.org>  2021-06-28 15:42:24 +0200
commit     1c35b07e6d3986474e5635be566e7bc79d97c64d (patch)
tree       95af2f2d1be9845eb884cb67312b5a64fc80b80a /kernel/sched
parent     sched/doc: Update the CPU capacity asymmetry bits (diff)
sched/fair: Ensure _sum and _avg values stay consistent
The _sum and _avg values are in general kept in sync through the PELT
divider. They are, however, not always perfectly in sync, resulting in
situations where _sum drops to zero while _avg stays positive. Such
situations are undesirable.
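In code terms, "in perfect sync" means the raw geometric series and its
decayed average satisfy a simple relation. A minimal sketch, in plain C
rather than kernel code, with names only mirroring the struct sched_avg
fields:

    /* Illustration only: the consistency relation between a PELT
     * running sum (_sum) and its decayed average (_avg).  In perfect
     * sync, _sum == _avg * divider holds for load, util and runnable
     * alike. */
    static int pelt_in_sync(unsigned long sum, unsigned long avg,
                            unsigned int divider)
    {
            return sum == avg * (unsigned long)divider;
    }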
This comes from the fact that PELT increases period_contrib, and with it
the PELT divider, without updating the _sum and _avg values to keep them
in perfect sync (where _sum == _avg * divider). However, PELT itself
never lowers _sum, so PELT alone can never end up in a situation where
_sum is zero and _avg is not.
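For reference, at the time of this commit the divider was derived in
kernel/sched/pelt.h roughly as below. This is a simplified sketch: the
real get_pelt_divider() takes a struct sched_avg *, and period_contrib
is the already-accrued remainder (< 1024) of the current PELT period.

    typedef unsigned int u32;

    #define LOAD_AVG_MAX 47742  /* saturation value of the PELT geometric series */

    /* Sketch of get_pelt_divider(): as period_contrib grows, the
     * divider grows too, while _sum and _avg are left untouched, so
     * (_sum == _avg * divider) drifts towards (_sum < _avg * divider). */
    static inline u32 get_pelt_divider(u32 period_contrib)
    {
            return LOAD_AVG_MAX - 1024 + period_contrib;
    }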
Therefore, when subtracting load outside PELT, we need to ensure that
_avg is zero whenever _sum is. The inconsistency occurs when
(_sum < _avg * divider): the subtracted (_avg * divider) can then be
greater than or equal to the current _sum, clamping _sum to zero, while
the subtracted _avg is still smaller than the current _avg.
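To see the failure mode with concrete, made-up numbers, here is a
standalone sketch; sub_positive() approximates the kernel macro of the
same name (subtract, clamping at zero on unsigned underflow), and all
values are illustrative:

    #include <stdio.h>

    /* Userspace approximation of the kernel's sub_positive(): subtract
     * val from *ptr, clamping at zero on unsigned underflow. */
    static void sub_positive(unsigned long *ptr, unsigned long val)
    {
            unsigned long res = *ptr - val;

            if (res > *ptr)         /* underflow happened */
                    res = 0;
            *ptr = res;
    }

    int main(void)
    {
            /* _sum was last synced when the divider was 46718; the
             * divider has since drifted up, so sum < avg * divider. */
            unsigned long divider = 47742;
            unsigned long avg = 100, sum = 100UL * 46718;
            unsigned long r = 99;   /* removed load */

            sub_positive(&avg, r);                  /* avg = 1 */

            unsigned long old_sum = sum;
            sub_positive(&old_sum, r * divider);    /* old code: clamps to 0 */
            unsigned long new_sum = avg * divider;  /* this patch: re-derive */

            printf("avg=%lu old_sum=%lu new_sum=%lu\n", avg, old_sum, new_sum);
            /* prints: avg=1 old_sum=0 new_sum=47742 */
            return 0;
    }

The old code subtracts (r * divider) from _sum directly, which the clamp
drives to zero even though _avg is still 1; re-deriving _sum from the
just-updated _avg, as the hunk below does, keeps the pair consistent by
construction.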
Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Odin Ugedal <odin@uged.al>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Link: https://lore.kernel.org/r/20210624111815.57937-1-odin@uged.al
Diffstat (limited to 'kernel/sched')
 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4a3e61a88acc..45edf61eed73 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3657,15 +3657,15 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 
 		r = removed_load;
 		sub_positive(&sa->load_avg, r);
-		sub_positive(&sa->load_sum, r * divider);
+		sa->load_sum = sa->load_avg * divider;
 
 		r = removed_util;
 		sub_positive(&sa->util_avg, r);
-		sub_positive(&sa->util_sum, r * divider);
+		sa->util_sum = sa->util_avg * divider;
 
 		r = removed_runnable;
 		sub_positive(&sa->runnable_avg, r);
-		sub_positive(&sa->runnable_sum, r * divider);
+		sa->runnable_sum = sa->runnable_avg * divider;
 
 		/*
 		 * removed_runnable is the unweighted version of removed_load so we