path: root/kernel/sched/fair.c
author	Yuyang Du <yuyang.du@intel.com>	2014-12-19 01:29:56 +0100
committer	Ingo Molnar <mingo@kernel.org>	2015-01-09 11:18:54 +0100
commit	32a8df4e0b33fccc9715213b382160415b5c4008 (patch)
tree	53e679c64d49f4a7dd1b23644c89517a391f7d33	/kernel/sched/fair.c
parent	sched, fanotify: Deal with nested sleeps (diff)
sched: Fix odd values in effective_load() calculations
In effective_load(), we compute (long w * unsigned long tg->shares) / long W. When w is negative, it is converted to unsigned long, and hence the product is insanely large. Fix this by casting tg->shares to long.

Reported-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dave Jones <davej@redhat.com>
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20141219002956.GA25405@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/fair.c')
-rw-r--r--	kernel/sched/fair.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index df2cdf77f899..6b99659cbeec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4424,7 +4424,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		 * wl = S * s'_i; see (2)
 		 */
 		if (W > 0 && w < W)
-			wl = (w * tg->shares) / W;
+			wl = (w * (long)tg->shares) / W;
 		else
 			wl = tg->shares;
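
For context, the bug hinges on C's usual arithmetic conversions: in w * tg->shares, the signed w is converted to unsigned long because tg->shares is unsigned, so a negative w turns into a huge positive value before the division. The standalone sketch below is not kernel code; the names w, shares and W merely mirror effective_load(), and the example values are invented. It shows the effect on a typical LP64 Linux system and how the (long) cast restores signed arithmetic.

/*
 * Illustration of the conversion the patch above fixes. On LP64, a
 * negative long multiplied by an unsigned long is first converted to
 * unsigned long, so the product wraps to a huge positive number
 * instead of staying negative.
 */
#include <stdio.h>

int main(void)
{
	long w = -512;			/* weight delta, may be negative */
	unsigned long shares = 1024;	/* stands in for tg->shares      */
	long W = 2048;			/* total weight                  */

	/* Unfixed expression: w is converted to unsigned long, the
	 * product wraps to a value near ULONG_MAX, and the quotient
	 * is a large bogus positive number.                          */
	long broken = (w * shares) / W;

	/* Fixed expression: casting shares to long keeps the whole
	 * calculation signed, so the result is the expected -256.    */
	long fixed = (w * (long)shares) / W;

	printf("broken = %ld\n", broken);
	printf("fixed  = %ld\n", fixed);
	return 0;
}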