author     Peter Zijlstra <peterz@infradead.org>  2017-10-02 14:50:33 +0200
committer  Ingo Molnar <mingo@kernel.org>         2017-10-10 11:45:27 +0200
commit     ed4ad1ca08a53cf1a805478678d1e7ff0d2cf251
tree       df07167f99faa528b598d522cf023e8c7a699d8e /kernel/sched/topology.c
parent     sched/deadline: Use C bitfields for the state flags
sched/topology: Restore SD_PREFER_SIBLING on MC domains
The normal x86_topology on NHM+ machines degenerates because the MC
and CPU domains are of the same size, therefore MC inherits
SD_PREFER_SIBLING from CPU (which then gets taken out). The result is
that we'll spread tasks across the first NUMA level in order to
maximize cache utilization.
However, for the x86_numa_in_package_topology we lose the CPU domain,
and we'll not have SD_PREFER_SIBLING set anywhere, giving a distinct
difference in behaviour.
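The flag transfer happens when a redundant parent domain is folded away in
cpu_attach_domain(): if the parent degenerates into the child, the parent's
SD_PREFER_SIBLING bit is moved down onto the child. The stand-alone C sketch
below is only a rough model of that step (struct dom, degenerates() and the
hard-coded spans are invented for illustration; they are not the kernel's
data structures), showing why MC ends up with the flag on the normal topology
but not with in-package NUMA.

/*
 * Toy user-space model of the degenerate-parent pass, NOT kernel code:
 * struct dom, degenerates() and the spans below are made up to keep the
 * example self-contained.
 */
#include <stdbool.h>
#include <stdio.h>

#define SD_PREFER_SIBLING	0x1

struct dom {
	const char *name;
	unsigned int flags;
	unsigned int span;		/* number of CPUs covered, simplified */
	struct dom *parent;
};

/* A parent that covers the same CPUs as its child adds nothing. */
static bool degenerates(const struct dom *child, const struct dom *parent)
{
	return child->span == parent->span;
}

static void attach(struct dom *sd)
{
	for (struct dom *tmp = sd; tmp && tmp->parent; ) {
		struct dom *parent = tmp->parent;

		if (degenerates(tmp, parent)) {
			/* Spans match, so the property transfers down. */
			if (parent->flags & SD_PREFER_SIBLING)
				tmp->flags |= SD_PREFER_SIBLING;
			tmp->parent = parent->parent;	/* drop the parent */
		} else {
			tmp = tmp->parent;
		}
	}
}

int main(void)
{
	/* Normal x86: MC and CPU (die) cover the same 8 CPUs. */
	struct dom numa = { "NUMA", 0, 16, NULL };
	struct dom cpu  = { "CPU",  SD_PREFER_SIBLING, 8, &numa };
	struct dom mc   = { "MC",   0, 8, &cpu };

	attach(&mc);
	printf("normal x86:      MC prefer_sibling=%d\n",
	       !!(mc.flags & SD_PREFER_SIBLING));	/* 1: inherited from CPU */

	/* numa_in_package: the CPU level is gone, nothing to inherit from. */
	struct dom numa2 = { "NUMA", 0, 16, NULL };
	struct dom mc2   = { "MC",   0, 8, &numa2 };

	attach(&mc2);
	printf("numa_in_package: MC prefer_sibling=%d\n",
	       !!(mc2.flags & SD_PREFER_SIBLING));	/* 0 before this patch */

	return 0;
}

With this patch MC sets SD_PREFER_SIBLING in sd_init() itself, so the second
case no longer depends on inheriting the flag from a degenerating CPU domain.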
Commit:
8e7fbcbc22c1 ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs")
failed to preserve SD_PREFER_SIBLING for the !power_saving case on both
the CPU and MC domains.
Then commit:
6956dc568f34 ("sched/numa: Add SD_PERFER_SIBLING to CPU domain")
added it back to the CPU domain, but not to MC.
Restore that now, such that we get consistent spreading behaviour wrt
L3 and NUMA.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/topology.c')
-rw-r--r--  kernel/sched/topology.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index f1cf4f306a82..86e81f06d36b 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1157,6 +1157,7 @@ sd_init(struct sched_domain_topology_level *tl,
 		sd->smt_gain = 1178; /* ~15% */
 
 	} else if (sd->flags & SD_SHARE_PKG_RESOURCES) {
+		sd->flags |= SD_PREFER_SIBLING;
 		sd->imbalance_pct = 117;
 		sd->cache_nice_tries = 1;
 		sd->busy_idx = 2;
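For completeness, SD_PREFER_SIBLING only tags the domain; the spreading itself
is decided one level up, where the load balancer looks at the child domain's
flags (update_sd_lb_stats() in kernel/sched/fair.c). The sketch below is a
loose, stand-alone approximation of that decision, not the balancer code:
struct group_stats, pull_from() and the numbers are invented for illustration.

/*
 * Rough user-space approximation of the consumer side, NOT kernel code:
 * with prefer_sibling set, a sibling group running slightly more tasks
 * than the local group is already treated as one to pull from, so load
 * spreads across siblings even while per-group capacity remains.
 */
#include <stdbool.h>
#include <stdio.h>

#define SD_PREFER_SIBLING	0x1

struct group_stats {
	unsigned int sum_nr_running;	/* tasks queued in the group */
	unsigned int group_weight;	/* CPUs in the group */
};

static bool pull_from(bool prefer_sibling,
		      const struct group_stats *local,
		      const struct group_stats *busiest)
{
	/* Spreading mode: any noticeable imbalance between siblings counts. */
	if (prefer_sibling &&
	    busiest->sum_nr_running > local->sum_nr_running + 1)
		return true;

	/* Otherwise only pull from a genuinely overloaded group. */
	return busiest->sum_nr_running > busiest->group_weight;
}

int main(void)
{
	unsigned int child_flags = SD_PREFER_SIBLING;	/* restored by this patch */
	bool prefer_sibling = child_flags & SD_PREFER_SIBLING;

	struct group_stats local   = { .sum_nr_running = 1, .group_weight = 4 };
	struct group_stats busiest = { .sum_nr_running = 3, .group_weight = 4 };

	printf("pull from sibling group: %s\n",
	       pull_from(prefer_sibling, &local, &busiest) ? "yes" : "no");

	return 0;
}

In this toy model, clearing the flag would leave the busiest group alone
(3 tasks on 4 CPUs is not overloaded), which is the kind of L3-vs-NUMA
behaviour difference the patch removes.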