author     Dongli Zhang <dongli.zhang@oracle.com>   2024-04-23 09:34:13 +0200
committer  Thomas Gleixner <tglx@linutronix.de>     2024-04-24 20:42:57 +0200
commit     88d724e2301a69c1ab805cd74fc27aa36ae529e0
tree       2b28cbeaca7d419193957264449297954f6fc18c /kernel/irq/cpuhotplug.c
parent     genirq/cpuhotplug: Skip suspended interrupts when restoring affinity
genirq/cpuhotplug: Retry with cpu_online_mask when migration fails
When a CPU goes offline, the interrupts affine to that CPU are
re-configured.
Managed interrupts are either migrated to another CPU in their affinity mask
or shut down if all CPUs listed in the affinity mask are offline. The
migration of managed interrupts is guaranteed on x86 because interrupt
vectors are reserved for them on the CPUs in their mask.
Regular interrupts are migrated to a still online CPU in the affinity mask,
or, if the mask contains no online CPU, to any online CPU.
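For context, the pre-existing affinity handling in migrate_one_irq() looks
roughly like the sketch below; this is a simplified paraphrase of
kernel/irq/cpuhotplug.c before this patch, and exact details may differ
between kernel versions:

	/* Simplified sketch: selecting the target affinity before this patch */
	if (!cpumask_intersects(affinity, cpu_online_mask)) {
		/*
		 * Managed interrupt with no online CPU left in its mask:
		 * shut it down and keep the affinity untouched so it can
		 * be restarted when one of its CPUs comes back online.
		 */
		if (irqd_affinity_is_managed(d)) {
			irqd_set_managed_shutdown(d);
			irq_shutdown_and_deactivate(desc);
			return false;
		}

		/* Regular interrupt: break affinity and use any online CPU */
		affinity = cpu_online_mask;
		brokeaff = true;
	}

	err = irq_do_set_affinity(d, affinity, false);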
This works as long as the still online CPUs in the affinity mask have
interrupt vectors available, but if none of those CPUs has a vector left,
the migration fails and the device interrupt becomes stale.
Effectively this is no different from the case where the affinity mask
contains no online CPU at all, but unlike that case there is no fallback
operation for it.
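On x86 this failure surfaces as -ENOSPC bubbling up from the vector
allocator underneath irq_do_set_affinity(). A rough, abbreviated sketch of
that path, paraphrased from arch/x86/kernel/apic/vector.c (helper names and
details may differ from the exact tree state):

	static int assign_vector_locked(struct irq_data *irqd,
					const struct cpumask *dest)
	{
		unsigned int cpu;
		int vector;

		/*
		 * Scan the online CPUs in @dest for a free vector. When
		 * every CPU in @dest has exhausted its vector space this
		 * returns -ENOSPC, which the affinity setter hands back
		 * to migrate_one_irq().
		 */
		vector = irq_matrix_alloc(vector_matrix, dest, false, &cpu);
		if (vector < 0)
			return vector;

		apic_update_vector(irqd, vector, cpu);
		apic_update_irq_cfg(irqd, vector, cpu);
		return 0;
	}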
Instead of giving up, retry the migration attempt with the online CPU mask
if the interrupt is not managed, as managed interrupts cannot be affected
by this problem.
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20240423073413.79625-1-dongli.zhang@oracle.com
Diffstat (limited to 'kernel/irq/cpuhotplug.c')
-rw-r--r--  kernel/irq/cpuhotplug.c | 16
1 file changed, 16 insertions, 0 deletions
diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 43340e0b6df0..75cadbc3c232 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -130,6 +130,22 @@ static bool migrate_one_irq(struct irq_desc *desc)
 	 * CPU.
 	 */
 	err = irq_do_set_affinity(d, affinity, false);
+
+	/*
+	 * If there are online CPUs in the affinity mask, but they have no
+	 * vectors left to make the migration work, try to break the
+	 * affinity by migrating to any online CPU.
+	 */
+	if (err == -ENOSPC && !irqd_affinity_is_managed(d) && affinity != cpu_online_mask) {
+		pr_debug("IRQ%u: set affinity failed for %*pbl, re-try with online CPUs\n",
+			 d->irq, cpumask_pr_args(affinity));
+
+		affinity = cpu_online_mask;
+		brokeaff = true;
+
+		err = irq_do_set_affinity(d, affinity, false);
+	}
+
 	if (err) {
 		pr_warn_ratelimited("IRQ%u: set affinity failed(%d).\n",
 				    d->irq, err);
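For completeness, the brokeaff flag set by the new retry path above is the
return value of migrate_one_irq(), which the hotplug code uses to report
interrupts that lost their configured affinity. A rough sketch of that
caller, paraphrased from kernel/irq/cpuhotplug.c (log level and details may
vary between versions):

	void irq_migrate_all_off_this_cpu(void)
	{
		struct irq_desc *desc;
		unsigned int irq;

		for_each_active_irq(irq) {
			bool affinity_broken;

			desc = irq_to_desc(irq);
			raw_spin_lock(&desc->lock);
			affinity_broken = migrate_one_irq(desc);
			raw_spin_unlock(&desc->lock);

			/* Report which IRQs are no longer affine to this CPU */
			if (affinity_broken)
				pr_debug_ratelimited("IRQ %u: no longer affine to CPU%u\n",
						     irq, smp_processor_id());
		}
	}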