author:    Alex Belits <abelits@marvell.com>  2020-06-26 00:34:41 +0200
committer: Peter Zijlstra <peterz@infradead.org>  2020-07-08 11:39:01 +0200
commit:    1abdfe706a579a702799fce465bceb9fb01d407c
tree:      20d48a585227b71d155fcaa3de3536063a8809cf /lib
parent:    sched/uclamp: Protect uclamp fast path code with static key
lib: Restrict cpumask_local_spread to housekeeping CPUs
The current implementation of cpumask_local_spread() does not respect
isolated CPUs: even if a CPU has been isolated for a real-time task, the
function will still return it to the caller, which may then pin IRQ
threads to it. Having these unwanted IRQ threads on an isolated CPU adds
latency overhead.

Restrict the CPUs returned for spreading IRQs to the available
housekeeping CPUs only.
Signed-off-by: Alex Belits <abelits@marvell.com>
Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200625223443.2684-2-nitesh@redhat.com
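For context, the problem described above shows up in a caller pattern like
the sketch below. This is a hypothetical driver, not code from this patch
(example_spread_irqs and its parameters are made up for illustration); it
uses the existing cpumask_local_spread() and irq_set_affinity_hint() kernel
APIs. Before this change, cpumask_local_spread() could hand back an isolated
CPU, and the affinity hint would steer that queue's IRQ there.

#include <linux/cpumask.h>
#include <linux/interrupt.h>

/*
 * Hypothetical caller, not part of this patch: pin one IRQ per queue
 * to a CPU near the device's NUMA node. If cpumask_local_spread()
 * returns an isolated CPU, that CPU ends up servicing the IRQ.
 */
static void example_spread_irqs(unsigned int base_irq, unsigned int nr_queues,
				int node)
{
	unsigned int q, cpu;

	for (q = 0; q < nr_queues; q++) {
		cpu = cpumask_local_spread(q, node);
		irq_set_affinity_hint(base_irq + q, cpumask_of(cpu));
	}
}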
Diffstat (limited to 'lib')
-rw-r--r--  lib/cpumask.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/lib/cpumask.c b/lib/cpumask.c
index fb22fb266f93..85da6ab4fbb5 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -6,6 +6,7 @@
 #include <linux/export.h>
 #include <linux/memblock.h>
 #include <linux/numa.h>
+#include <linux/sched/isolation.h>
 
 /**
  * cpumask_next - get the next cpu in a cpumask
@@ -205,22 +206,27 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
  */
 unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-	int cpu;
+	int cpu, hk_flags;
+	const struct cpumask *mask;
 
+	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
+	mask = housekeeping_cpumask(hk_flags);
 	/* Wrap: we always want a cpu. */
-	i %= num_online_cpus();
+	i %= cpumask_weight(mask);
 
 	if (node == NUMA_NO_NODE) {
-		for_each_cpu(cpu, cpu_online_mask)
+		for_each_cpu(cpu, mask) {
 			if (i-- == 0)
 				return cpu;
+		}
 	} else {
 		/* NUMA first. */
-		for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
+		for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
 			if (i-- == 0)
 				return cpu;
+		}
 
-		for_each_cpu(cpu, cpu_online_mask) {
+		for_each_cpu(cpu, mask) {
 			/* Skip NUMA nodes, done above. */
 			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
 				continue;
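The wrap still guarantees that some CPU is returned, but the modulo is now
taken over the housekeeping set rather than all online CPUs. Below is a
minimal standalone restatement of the new NUMA_NO_NODE selection logic,
with the housekeeping mask passed in explicitly; example_pick_cpu is a
made-up name for illustration, and a non-empty mask is assumed, as the
patch itself assumes. For instance, booting an 8-CPU machine with
isolcpus=domain,managed_irq,2-7 leaves CPUs 0-1 in
housekeeping_cpumask(HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ), so every
index i now maps to CPU 0 or 1.

#include <linux/cpumask.h>

/*
 * Illustrative restatement of the NUMA_NO_NODE path after this patch.
 * Assumes mask is non-empty, matching the patch's own assumption.
 */
static unsigned int example_pick_cpu(unsigned int i, const struct cpumask *mask)
{
	unsigned int cpu;

	i %= cpumask_weight(mask);	/* wrap: we always want a cpu */
	for_each_cpu(cpu, mask) {
		if (i-- == 0)
			return cpu;
	}
	return cpumask_first(mask);	/* not reached when mask is non-empty */
}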