| author | Suresh Siddha <suresh.b.siddha@intel.com> | 2012-06-25 22:38:27 +0200 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2012-07-06 11:00:21 +0200 |
| commit | b39f25a849d7677a7dbf183f2483fd41c201a5ce (patch) | |
| tree | 87f06060976461b91d5cb196f0ed04fbcea0bb26 /arch/x86/kernel/vsmp_64.c | |
| parent | x86/vsmp: Fix vector_allocation_domain's return value (diff) | |
| download | linux-b39f25a849d7677a7dbf183f2483fd41c201a5ce.tar.xz, linux-b39f25a849d7677a7dbf183f2483fd41c201a5ce.zip | |
x86/apic: Optimize cpu traversal in __assign_irq_vector() using domain membership
Currently __assign_irq_vector() goes through each cpu in the
specified mask until it finds a free vector in all the cpus
that are part of the same interrupt domain. All of the interrupt
domain's sibling cpus are visited to reserve the free vector. So,
when we fail to find a free vector in an interrupt domain, it is
safe to continue the search with a cpu belonging to a new
interrupt domain; there is no need to examine a cpu whose domain
has already been visited.
Use the irq_cfg's old_domain to track the visited domains and
optimize the cpu traversal while finding a free vector in the
given cpumask.
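As a rough illustration of the idea (a simplified sketch, not the literal
kernel code: tmp_mask is a placeholder scratch cpumask and the actual
per-vector bookkeeping is elided), the optimized traversal looks roughly
like this:

```c
/*
 * Simplified sketch: cfg->old_domain is cleared at the start of the
 * search and then accumulates every domain that has been examined, so
 * sibling cpus of an already-tried domain are skipped later on.
 */
cpumask_clear(cfg->old_domain);

for_each_cpu_and(cpu, mask, cpu_online_mask) {
	/* The domain containing this cpu was already tried: skip it. */
	if (cpumask_test_cpu(cpu, cfg->old_domain))
		continue;

	/* Compute this cpu's interrupt domain (its sibling cpus). */
	apic->vector_allocation_domain(cpu, tmp_mask);

	/* Mark the whole domain as visited before searching it. */
	cpumask_or(cfg->old_domain, cfg->old_domain, tmp_mask);

	/* ... look for a vector that is free on every cpu in tmp_mask;
	 *     on success, assign it and stop the search ... */
}
```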
NOTE: We could also optimize the search by using for_each_cpu() and
skipping the current cpu if it is not the first cpu in the mask
returned by vector_allocation_domain(). But re-using
cfg->old_domain to track the visited domains is slightly
faster.
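The alternative described in the NOTE would look roughly like the sketch
below (again simplified; tmp_mask is a placeholder scratch mask and the
cpu_online_mask filtering is an assumption for illustration):

```c
/*
 * Alternative sketch: only the first online cpu of each interrupt
 * domain drives the search; every other sibling is skipped, since its
 * domain is covered when that first cpu is visited.
 */
for_each_cpu_and(cpu, mask, cpu_online_mask) {
	apic->vector_allocation_domain(cpu, tmp_mask);

	/* Not the first cpu of this domain: a sibling handles it. */
	if (cpu != cpumask_first_and(tmp_mask, cpu_online_mask))
		continue;

	/* ... search tmp_mask for a free vector as before ... */
}
```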
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Acked-by: Alexander Gordeev <agordeev@redhat.com>
Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
Link: http://lkml.kernel.org/r/1340656709-11423-2-git-send-email-suresh.b.siddha@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/x86/kernel/vsmp_64.c')
-rw-r--r-- | arch/x86/kernel/vsmp_64.c | 3 |
1 file changed, 1 insertion, 2 deletions
diff --git a/arch/x86/kernel/vsmp_64.c b/arch/x86/kernel/vsmp_64.c
index fa5adb7c228c..3f0285ac00fa 100644
--- a/arch/x86/kernel/vsmp_64.c
+++ b/arch/x86/kernel/vsmp_64.c
@@ -208,10 +208,9 @@ static int apicid_phys_pkg_id(int initial_apic_id, int index_msb)
  * In vSMP, all cpus should be capable of handling interrupts, regardless of
  * the APIC used.
  */
-static bool fill_vector_allocation_domain(int cpu, struct cpumask *retmask)
+static void fill_vector_allocation_domain(int cpu, struct cpumask *retmask)
 {
 	cpumask_setall(retmask);
-	return false;
 }
 
 static void vsmp_apic_post_init(void)