author     Lauro Ramos Venancio <lvenanci@redhat.com>    2017-04-20 21:51:42 +0200
committer  Ingo Molnar <mingo@kernel.org>                2017-05-15 10:15:27 +0200
commit     c20e1ea4b61c3d99a354d912f2d74822fd2a001d
tree       e6e35f48dce3a65a641861ce0fdc9b6e92d7bebd /kernel/sched/topology.c
parent     sched/topology: Optimize build_group_mask()
sched/topology: Move comment about asymmetric node setups
Signed-off-by: Lauro Ramos Venancio <lvenanci@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: lwang@redhat.com
Cc: riel@redhat.com
Link: http://lkml.kernel.org/r/1492717903-5195-4-git-send-email-lvenanci@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/topology.c')
-rw-r--r--  kernel/sched/topology.c  19
1 file changed, 10 insertions, 9 deletions
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 5a4d9aeda258..c10f44a1ab2d 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -495,14 +495,6 @@ enum s_alloc {
 /*
  * Build an iteration mask that can exclude certain CPUs from the upwards
  * domain traversal.
- *
- * Asymmetric node setups can result in situations where the domain tree is of
- * unequal depth, make sure to skip domains that already cover the entire
- * range.
- *
- * In that case build_sched_domains() will have terminated the iteration early
- * and our sibling sd spans will be empty. Domains should always include the
- * CPU they're built on, so check that.
  */
 static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
 {
@@ -590,7 +582,16 @@ build_overlap_sched_groups(struct sched_domain *sd, int cpu)
 
 		sibling = *per_cpu_ptr(sdd->sd, i);
 
-		/* See the comment near build_group_mask(). */
+		/*
+		 * Asymmetric node setups can result in situations where the
+		 * domain tree is of unequal depth, make sure to skip domains
+		 * that already cover the entire range.
+		 *
+		 * In that case build_sched_domains() will have terminated the
+		 * iteration early and our sibling sd spans will be empty.
+		 * Domains should always include the CPU they're built on, so
+		 * check that.
+		 */
 		if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
 			continue;
 
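The guard that the relocated comment now documents can be illustrated outside the kernel. The sketch below is a standalone simplification, not the scheduler's real sched_domain/cpumask API: toy_domain, span_contains() and NR_CPUS_TOY are made-up names, and a plain bitmask stands in for a cpumask. It only shows the shape of the check: on an asymmetric topology, a per-CPU sibling domain whose span does not include the CPU it was looked up for marks a branch of the domain tree that terminated early, and the group-building loop skips it.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS_TOY 8	/* hypothetical small machine */

/* Simplified stand-in for a sched_domain: just the set of CPUs it spans. */
struct toy_domain {
	uint8_t span;	/* bit i set => CPU i is part of this domain */
};

static bool span_contains(const struct toy_domain *d, int cpu)
{
	return d->span & (1u << cpu);
}

int main(void)
{
	/*
	 * Per-CPU "sibling" domains at one topology level. On an asymmetric
	 * node setup the domain build stops early on the shallower side,
	 * leaving those entries with an empty span at deeper levels.
	 */
	struct toy_domain sibling[NR_CPUS_TOY] = {
		[0] = { .span = 0x0f }, [1] = { .span = 0x0f },
		[2] = { .span = 0x0f }, [3] = { .span = 0x0f },
		/* CPUs 4-7: build terminated early, spans left empty (zero) */
	};

	for (int i = 0; i < NR_CPUS_TOY; i++) {
		/*
		 * Mirror of the kernel guard: a domain should always include
		 * the CPU it was built on, so a span that does not contain
		 * CPU i means this branch of the tree ended early - skip it.
		 */
		if (!span_contains(&sibling[i], i)) {
			printf("cpu %d: sibling span empty, skipping\n", i);
			continue;
		}
		printf("cpu %d: sibling spans 0x%02x, building group\n",
		       i, (unsigned int)sibling[i].span);
	}
	return 0;
}

With this toy topology, CPUs 4 through 7 take the skip path, which is the situation the moved comment now describes at its new location next to the cpumask_test_cpu() check in build_overlap_sched_groups().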