author    Sean Christopherson <sean.j.christopherson@intel.com>  2019-02-05 22:01:32 +0100
committer Paolo Bonzini <pbonzini@redhat.com>  2019-02-20 22:48:46 +0100
commit    8a674adc11cd4cc59e51eaea6f0cc4b3d5710411
tree      1e4188486bc028bd6c05afecfe01e7d8548a81b9
parent    Revert "KVM: x86: use the fast way to invalidate all pages"
KVM: x86/mmu: skip over invalid root pages when zapping all sptes
...to guarantee forward progress. When zapped, root pages are marked
invalid and moved to the head of the active pages list until they are
explicitly freed. Theoretically, having unzappable root pages at the
head of the list could prevent kvm_mmu_zap_all() from making forward
progress were a future patch to add a loop restart after processing a
page, e.g. to drop mmu_lock on contention.

Although kvm_mmu_prepare_zap_page() can theoretically take action on
invalid pages, e.g. to zap unsync children, functionally it's not
necessary (root pages will be re-zapped when freed) and practically
speaking the odds of e.g. @unsync or @unsync_children becoming %true
while zapping all pages are basically nil.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
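[Editor's note] The restart pattern is easier to see outside the kernel.
Below is a minimal userspace C sketch of the walk; struct page_stub,
prepare_zap() and zap_all() are hypothetical stand-ins for
struct kvm_mmu_page, kvm_mmu_prepare_zap_page() and kvm_mmu_zap_all(),
not kernel code. It illustrates why the skip guarantees forward
progress: an invalid-but-pinned root parked at the list head can never
be freed by the loop, so any restart that lands on it must step over it.

	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Hypothetical stand-in for struct kvm_mmu_page. */
	struct page_stub {
		bool invalid;            /* zapped but not yet freed        */
		int  root_count;         /* pinned as a root while nonzero  */
		struct page_stub *next;
	};

	static struct page_stub *head;

	/*
	 * Stand-in for kvm_mmu_prepare_zap_page(): mark the page invalid
	 * and, unless it is pinned as a root, unlink and free it.  Returns
	 * true when the list was modified, invalidating the caller's
	 * cursor and forcing a restart (the "goto restart" above).
	 */
	static bool prepare_zap(struct page_stub *sp)
	{
		struct page_stub **pp;

		sp->invalid = true;
		if (sp->root_count)
			return false;   /* stays listed until the root is freed */

		for (pp = &head; *pp; pp = &(*pp)->next) {
			if (*pp == sp) {
				*pp = sp->next;
				free(sp);
				return true;
			}
		}
		return false;
	}

	static void zap_all(void)
	{
		struct page_stub *sp, *next;

	restart:
		for (sp = head; sp; sp = next) {
			next = sp->next;
			/*
			 * The fix: an invalid-but-pinned root can never be
			 * freed by this loop, so revisiting it after a
			 * restart makes no progress.  Skip it.  This matters
			 * even more if a future change restarts after every
			 * page, e.g. to drop a lock on contention.
			 */
			if (sp->invalid && sp->root_count)
				continue;
			if (prepare_zap(sp))
				goto restart;
		}
	}

	static void push(bool invalid, int root_count)
	{
		struct page_stub *sp = malloc(sizeof(*sp));

		sp->invalid = invalid;
		sp->root_count = root_count;
		sp->next = head;
		head = sp;
	}

	int main(void)
	{
		/* Two ordinary pages, then an invalid root at the head. */
		push(false, 0);
		push(false, 0);
		push(true, 1);

		zap_all();

		for (struct page_stub *sp = head; sp; sp = sp->next)
			printf("survivor: invalid=%d root_count=%d\n",
			       sp->invalid, sp->root_count);
		return 0;
	}

Running the sketch frees both ordinary pages across two restarts and
prints one survivor, the pinned invalid root, which mirrors how real
root pages remain listed until they are explicitly freed.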
Diffstat (limited to 'arch/x86/kvm')
-rw-r--r--  arch/x86/kvm/mmu.c | 5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5cdeb88850f8..c79ad7f31fdb 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5853,9 +5853,12 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 
 	spin_lock(&kvm->mmu_lock);
 restart:
-	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link)
+	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
+		if (sp->role.invalid && sp->root_count)
+			continue;
 		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
 			goto restart;
+	}
 
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 	spin_unlock(&kvm->mmu_lock);
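[Editor's note] For reference, a sketch of kvm_mmu_zap_all() as it reads
with this hunk applied. The local declarations above the hunk (sp, node,
invalid_list) are assumed from the surrounding file and are not part of
this diff; the comment is editorial.

	void kvm_mmu_zap_all(struct kvm *kvm)
	{
		struct kvm_mmu_page *sp, *node;
		LIST_HEAD(invalid_list);

		spin_lock(&kvm->mmu_lock);
	restart:
		list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
			/*
			 * Invalid pages pinned as roots cannot be freed
			 * here; skip them so a restart cannot spin on the
			 * head of the list.
			 */
			if (sp->role.invalid && sp->root_count)
				continue;
			if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
				goto restart;
		}

		kvm_mmu_commit_zap_page(kvm, &invalid_list);
		spin_unlock(&kvm->mmu_lock);
	}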