author | Yuyang Du <yuyang.du@intel.com> | 2015-07-15 02:04:41 +0200
committer | Ingo Molnar <mingo@kernel.org> | 2015-08-03 12:24:31 +0200
commit | 139622343ef31941effc6de6a5a9320371a00e62
tree | beace946cd02feb0cb89e784ffc73187d7a9a658 /kernel/sched/sched.h
parent | sched/fair: Remove task and group entity load when they are dead
sched/fair: Provide runnable_load_avg back to cfs_rq
The cfs_rq's load_avg is composed of runnable_load_avg and blocked_load_avg.
Before this series, sometimes runnable_load_avg was used and sometimes load_avg
was used. Completely replacing all uses of runnable_load_avg with load_avg may
be too big a leap, as there is concern that blocked_load_avg would lead to an
overestimated load. Therefore, we bring runnable_load_avg back.

The new cfs_rq runnable_load_avg is improved so that it is updated together
with all of the runnable sched_entities at the same time, which solves the
problem of one sched_entity being updated while the others are stale.
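As a rough illustration of that last point, here is a minimal user-space sketch, not the kernel's actual code: the names update_load_avg_sketch, decay_sum and the constant LOAD_AVG_MAX_SKETCH are made up for this example. The point it demonstrates is that the per-entity sum and the cfs_rq-wide runnable_load_sum/runnable_load_avg are decayed and accumulated in the same step, so one can never be fresher than the other.

```c
#include <stdint.h>
#include <stdio.h>

#define LOAD_AVG_MAX_SKETCH	47742UL	/* stand-in for the series' saturation value */

struct sched_avg_sketch {
	uint64_t load_sum;
	unsigned long load_avg;
};

struct cfs_rq_sketch {
	uint64_t runnable_load_sum;
	unsigned long runnable_load_avg;
};

/* Crude stand-in for the geometric decay: halve roughly every 32 periods. */
static uint64_t decay_sum(uint64_t sum, unsigned int periods)
{
	return sum >> (periods / 32);
}

/*
 * Key idea: the entity's sum and the cfs_rq-wide runnable sum are decayed
 * and accumulated together, so the "one sched_entity updated, the others
 * stale" problem cannot occur between the two figures.
 */
static void update_load_avg_sketch(struct sched_avg_sketch *sa,
				   struct cfs_rq_sketch *cfs_rq,
				   unsigned long weight, unsigned int periods)
{
	sa->load_sum = decay_sum(sa->load_sum, periods) + (uint64_t)weight * periods;
	sa->load_avg = (unsigned long)(sa->load_sum / LOAD_AVG_MAX_SKETCH);

	if (cfs_rq) {
		cfs_rq->runnable_load_sum =
			decay_sum(cfs_rq->runnable_load_sum, periods) +
			(uint64_t)weight * periods;
		cfs_rq->runnable_load_avg =
			(unsigned long)(cfs_rq->runnable_load_sum / LOAD_AVG_MAX_SKETCH);
	}
}

int main(void)
{
	struct cfs_rq_sketch rq = { 0, 0 };
	struct sched_avg_sketch se = { 0, 0 };

	/* One runnable entity of weight 1024, running for 64 periods. */
	update_load_avg_sketch(&se, &rq, 1024, 64);
	printf("se.load_avg=%lu rq.runnable_load_avg=%lu\n",
	       se.load_avg, rq.runnable_load_avg);
	return 0;
}
```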
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: arjan@linux.intel.com
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: fengguang.wu@intel.com
Cc: len.brown@intel.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: rafael.j.wysocki@intel.com
Cc: umgwanakikbuti@gmail.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1436918682-4971-7-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/sched.h')
-rw-r--r-- | kernel/sched/sched.h | 2
1 file changed, 2 insertions, 0 deletions
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4d139e0bc206..ab0b05cc3f37 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -368,6 +368,8 @@ struct cfs_rq {
 	 * CFS load tracking
 	 */
 	struct sched_avg avg;
+	u64 runnable_load_sum;
+	unsigned long runnable_load_avg;
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	unsigned long tg_load_avg_contrib;
 #endif
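The two new fields mirror the load_sum/load_avg pair kept in struct sched_avg, but restricted to the entities currently runnable on this cfs_rq; in this load-tracking scheme the *_avg value is derived from the saturating geometric-series *_sum, roughly runnable_load_avg ≈ runnable_load_sum / LOAD_AVG_MAX (the actual update code lives in kernel/sched/fair.c, not in this header hunk).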