| field | value | date |
|---|---|---|
| author | Mike Galbraith <efault@gmx.de> | 2008-05-08 17:00:42 +0200 |
| committer | Ingo Molnar <mingo@elte.hu> | 2008-05-08 17:00:42 +0200 |
| commit | 46151122e0a2e80e5a6b2889f595e371fe2b600d (patch) | |
| tree | 61ab1993c1a94765327aeba3b36924b88b81e68f /kernel | |
| parent | semaphore: fix (diff) | |
| download | linux-46151122e0a2e80e5a6b2889f595e371fe2b600d.tar.xz, linux-46151122e0a2e80e5a6b2889f595e371fe2b600d.zip | |
sched: fix weight calculations
The conversion between virtual and real time is as follows:
dvt = rw/w * dt <=> dt = w/rw * dvt
Since we want the fair sleeper granularity l (the real-time threshold sysctl_sched_latency) to be expressed in real time, the amount actually subtracted from vruntime has to be the converted quantity:

  dvt = - rw/w * l
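
As a rough standalone illustration of the relation above (the helper names, weights, and latency value below are made up for this sketch; this is not the kernel's fixed-point arithmetic):

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Illustration only: convert between real time (dt) and virtual time
 * (dvt) for an entity of weight w on a runqueue of total weight rw,
 * following the changelog relation dvt = rw/w * dt. These helpers are
 * hypothetical and do not exist in kernel/sched_fair.c.
 */
static uint64_t real_to_virtual(uint64_t dt_ns, uint64_t w, uint64_t rw)
{
	return dt_ns * rw / w;		/* dvt = rw/w * dt */
}

static uint64_t virtual_to_real(uint64_t dvt_ns, uint64_t w, uint64_t rw)
{
	return dvt_ns * w / rw;		/* dt = w/rw * dvt */
}

int main(void)
{
	uint64_t latency = 20000000;	/* example sleeper threshold, ~20 ms */
	uint64_t w = 1024, rw = 3072;	/* example entity / runqueue weights */

	/* The sleeper threshold l is given in real time; convert before use. */
	uint64_t l_vt = real_to_virtual(latency, w, rw);

	printf("real %llu ns -> virtual %llu ns -> back to real %llu ns\n",
	       (unsigned long long)latency,
	       (unsigned long long)l_vt,
	       (unsigned long long)virtual_to_real(l_vt, w, rw));
	return 0;
}
```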
This bug could be related to the regression reported by Yanmin Zhang:
| Comparing with kernel 2.6.25, sysbench+mysql(oltp, readonly) has lots
| of regressions with 2.6.26-rc1:
|
| 1) 8-core stoakley: 28%;
| 2) 16-core tigerton: 20%;
| 3) Itanium Montvale: 50%.
Reported-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/sched_fair.c | 11 |
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index c863663d204d..e24ecd39c4b8 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -662,10 +662,15 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 	if (!initial) {
 		/* sleeps upto a single latency don't count. */
 		if (sched_feat(NEW_FAIR_SLEEPERS)) {
+			unsigned long thresh = sysctl_sched_latency;
+
+			/*
+			 * convert the sleeper threshold into virtual time
+			 */
 			if (sched_feat(NORMALIZED_SLEEPER))
-				vruntime -= calc_delta_weight(sysctl_sched_latency, se);
-			else
-				vruntime -= sysctl_sched_latency;
+				thresh = calc_delta_fair(thresh, se);
+
+			vruntime -= thresh;
 		}
 
 		/* ensure we never gain time by being placed backwards. */
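
To make the effect of the conversion concrete, here is a small standalone model (again using the changelog's dvt = rw/w * dt relation with made-up weights, not the kernel's calc_delta_* helpers): subtracting an unconverted real-time threshold from vruntime yields a weight-dependent real-time credit, while converting the threshold first gives every sleeper the same real-time credit.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Illustration only: compare the real-time sleeper credit that results
 * from subtracting an unconverted threshold from vruntime with the
 * credit obtained after converting the threshold to virtual time first.
 * Weights and the latency value are example numbers.
 */
int main(void)
{
	const uint64_t latency = 20000000;		/* real-time threshold, ~20 ms */
	const uint64_t weights[] = { 1024, 2048 };	/* nice-0 task, heavier task */
	const uint64_t rw = 1024 + 2048;		/* total runqueue weight */

	for (unsigned int i = 0; i < 2; i++) {
		uint64_t w = weights[i];

		/* Unconverted: dvt == latency, so the real-time credit is w/rw * latency. */
		uint64_t unconverted_rt = latency * w / rw;

		/* Converted first: dvt = rw/w * latency, so the real-time credit is latency again. */
		uint64_t thresh_vt = latency * rw / w;
		uint64_t converted_rt = thresh_vt * w / rw;

		printf("w=%4llu: unconverted credit %llu ns, converted credit %llu ns\n",
		       (unsigned long long)w,
		       (unsigned long long)unconverted_rt,
		       (unsigned long long)converted_rt);
	}
	return 0;
}
```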