| | | |
|---|---|---|
| author | Cédric Le Goater <clg@kaod.org> | 2019-05-28 23:13:24 +0200 |
| committer | Paul Mackerras <paulus@ozlabs.org> | 2019-05-30 05:55:41 +0200 |
| commit | bcaa3110d584f982a17e9ddbfc03e1130bca2bc9 (patch) | |
| tree | 7d9d3ba61c6120d833400b947f7847aac31b8b67 /arch/powerpc/kvm/book3s_xive_native.c | |
| parent | KVM: PPC: Book3S HV: XIVE: Take the srcu read lock when accessing memslots (diff) | |
| download | linux-bcaa3110d584f982a17e9ddbfc03e1130bca2bc9.tar.xz linux-bcaa3110d584f982a17e9ddbfc03e1130bca2bc9.zip | |
KVM: PPC: Book3S HV: XIVE: Fix page offset when clearing ESB pages
Under XIVE, the ESB pages of an interrupt are used for interrupt
management (EOI) and triggering. They are made available to guests
through a mapping of the XIVE KVM device.
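For reference, a rough user-space sketch of how such a mapping is obtained, assuming a device fd created with KVM_CREATE_DEVICE(KVM_DEV_TYPE_XIVE); map_esb_pages() and the locally defined constant (mirroring the powerpc uapi value of KVM_XIVE_ESB_PAGE_OFFSET) are illustrative, not QEMU code:

```c
/*
 * Rough user-space sketch (not QEMU code): map the two ESB pages of one
 * guest IRQ from the XIVE KVM device.  Assumes xive_fd was obtained with
 * KVM_CREATE_DEVICE(KVM_DEV_TYPE_XIVE); the constant mirrors the powerpc
 * uapi value of KVM_XIVE_ESB_PAGE_OFFSET.
 */
#include <sys/mman.h>
#include <unistd.h>

#define XIVE_ESB_PAGE_OFFSET	4	/* KVM_XIVE_ESB_PAGE_OFFSET */

/* Returns MAP_FAILED on error, otherwise a pointer to the two ESB pages. */
static void *map_esb_pages(int xive_fd, unsigned long irq)
{
	long page_size = sysconf(_SC_PAGESIZE);
	/* Each IRQ owns two ESB pages, starting 4 pages into the mapping. */
	off_t offset = (off_t)(XIVE_ESB_PAGE_OFFSET + irq * 2) * page_size;

	return mmap(NULL, 2 * page_size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, xive_fd, offset);
}
```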
When a device is passed through, the passthru_irq helpers,
kvmppc_xive_set_mapped() and kvmppc_xive_clr_mapped(), clear the ESB
pages of the guest IRQ number being mapped and let the VM fault
handler repopulate them with the correct pages.
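A simplified sketch of that repopulation path, assuming the usual layout of two ESB pages per IRQ after KVM_XIVE_ESB_PAGE_OFFSET; this is not the exact kvmppc_xive_native_mmap_fault() code, and esb_page_phys_addr() is a hypothetical stand-in for the XIVE source lookup:

```c
/*
 * Simplified sketch of the repopulation path (not the exact
 * kvmppc_xive_native_mmap_fault() code).  After the ESB pages of an IRQ
 * have been unmapped, the next guest access faults here and the correct
 * page is inserted again.  esb_page_phys_addr() is a hypothetical helper.
 */
#include <linux/mm.h>
#include <asm/kvm.h>

static vm_fault_t xive_native_esb_fault(struct vm_fault *vmf)
{
	unsigned long page_offset = vmf->pgoff - KVM_XIVE_ESB_PAGE_OFFSET;
	unsigned long irq = page_offset / 2;	/* two ESB pages per IRQ */
	bool second_page = page_offset & 1;	/* which of the IRQ's two pages */
	phys_addr_t paddr;

	/* Look up the host physical address backing this ESB page. */
	paddr = esb_page_phys_addr(irq, second_page);	/* hypothetical helper */
	if (!paddr)
		return VM_FAULT_SIGBUS;

	/* Insert the PFN; the next guest access will hit it directly. */
	return vmf_insert_pfn(vmf->vma, vmf->address, paddr >> PAGE_SHIFT);
}
```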
The ESB pages are mapped at offset 4 (KVM_XIVE_ESB_PAGE_OFFSET) in the
KVM device mapping. Unfortunately, this offset was not taken into
account when clearing the pages. This led to issues with passthrough
devices, whose interrupts were not functional under some guest
configurations (tg3 and a single CPU) or under any configuration
(e1000e adapter).
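A worked example of the resulting offsets, assuming 64K pages (PAGE_SHIFT = 16) and guest IRQ 3; the constants are local stand-ins for the kernel macros:

```c
/*
 * Worked example of the offset bug, assuming 64K pages and guest IRQ 3.
 * Stand-alone user-space arithmetic; the constants mirror the kernel's
 * KVM_XIVE_ESB_PAGE_OFFSET (4) and a 64K PAGE_SHIFT.
 */
#include <stdio.h>

#define PAGE_SHIFT		16	/* 64K pages */
#define ESB_PAGE_OFFSET		4	/* KVM_XIVE_ESB_PAGE_OFFSET */

int main(void)
{
	unsigned long irq = 3;

	/* Before the fix: byte offset ignores the 4-page ESB base. */
	unsigned long long old_off = irq * (2ull << PAGE_SHIFT);

	/* After the fix: page offset includes the ESB base, then shifted. */
	unsigned long long new_off =
		(unsigned long long)(ESB_PAGE_OFFSET + irq * 2) << PAGE_SHIFT;

	printf("buggy start page: %llu\n", old_off >> PAGE_SHIFT);	/* 6 */
	printf("fixed start page: %llu\n", new_off >> PAGE_SHIFT);	/* 10 */
	return 0;
}
```

With the old formula the unmap starts at page 6, which is where the ESB pages of IRQ 1 live, while the ESB pages of IRQ 3 actually start at page 10; the wrong IRQ's pages were being cleared.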
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Diffstat (limited to 'arch/powerpc/kvm/book3s_xive_native.c')
-rw-r--r-- | arch/powerpc/kvm/book3s_xive_native.c | 3 |
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
index 8b762e3ebbc5..5596c8ec221a 100644
--- a/arch/powerpc/kvm/book3s_xive_native.c
+++ b/arch/powerpc/kvm/book3s_xive_native.c
@@ -172,6 +172,7 @@ bail:
 static int kvmppc_xive_native_reset_mapped(struct kvm *kvm, unsigned long irq)
 {
 	struct kvmppc_xive *xive = kvm->arch.xive;
+	pgoff_t esb_pgoff = KVM_XIVE_ESB_PAGE_OFFSET + irq * 2;
 
 	if (irq >= KVMPPC_XIVE_NR_IRQS)
 		return -EINVAL;
@@ -185,7 +186,7 @@ static int kvmppc_xive_native_reset_mapped(struct kvm *kvm, unsigned long irq)
 	mutex_lock(&xive->mapping_lock);
 	if (xive->mapping)
 		unmap_mapping_range(xive->mapping,
-				    irq * (2ull << PAGE_SHIFT),
+				    esb_pgoff << PAGE_SHIFT,
 				    2ull << PAGE_SHIFT, 1);
 	mutex_unlock(&xive->mapping_lock);
 	return 0;