author    Sean Christopherson <seanjc@google.com> 2024-10-10 20:23:09 +0200
committer Paolo Bonzini <pbonzini@redhat.com> 2024-10-25 18:54:42 +0200
commit    4e44ab0a777865610463f90d65c758533e27e1ca (patch)
tree      b749c67848ec51bdfffb1303813728a806133057 /arch/x86/kvm/mmu/spte.c
parent    KVM: x86/mmu: Invert @can_unsync and renamed to @synchronizing (diff)
KVM: x86/mmu: Mark new SPTE as Accessed when synchronizing existing SPTE
Set the Accessed bit when making a "new" SPTE during SPTE synchronization, as _clearing_ the Accessed bit is counter-productive, and even if the Accessed bit wasn't set in the old SPTE, odds are very good the guest will access the page in the near future, as the most common case where KVM synchronizes a shadow-present SPTE is when the guest is making the gPTE read-only for Copy-on-Write (CoW).

Preserving the Accessed bit will allow dropping the logic that propagates the Accessed bit to the underlying struct page when overwriting an existing SPTE, without undue risk of regressing page aging.

Note, KVM's current behavior is very deliberate, as SPTE synchronization was the only "speculative" access type as of commit 947da5383069 ("KVM: MMU: Set the accessed bit on non-speculative shadow ptes"). But, much has changed since 2008, and more changes are on the horizon.

Spurious clearing of the Accessed (and Dirty) bit was mitigated by commit e6722d9211b2 ("KVM: x86/mmu: Reduce the update to the spte in FNAME(sync_spte)"), which changed FNAME(sync_spte) to only overwrite SPTEs if the protections are actually changing. I.e. KVM is already preserving Accessed information for SPTEs that aren't dropping protections.

And with the aforementioned future change to NOT mark the page/folio as accessed, KVM's SPTEs will become the "source of truth" so to speak, in which case clearing the Accessed bit outside of page aging becomes very undesirable.

Suggested-by: Yan Zhao <yan.y.zhao@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20241010182427.1434605-8-seanjc@google.com>
Diffstat
-rw-r--r-- arch/x86/kvm/mmu/spte.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 0e47fea1a2d9..618059b30b8b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -178,7 +178,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		spte |= SPTE_TDP_AD_WRPROT_ONLY;
 
 	spte |= shadow_present_mask;
-	if (!prefetch)
+	if (!prefetch || synchronizing)
 		spte |= spte_shadow_accessed_mask(spte);
 
 	/*
@@ -259,7 +259,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		spte |= spte_shadow_dirty_mask(spte);
 
 out:
-	if (prefetch)
+	if (prefetch && !synchronizing)
 		spte = mark_spte_for_access_track(spte);
 
 	WARN_ONCE(is_rsvd_spte(&vcpu->arch.mmu->shadow_zero_check, spte, level),
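
For readers skimming the diff, here is a minimal standalone sketch of the Accessed-bit policy the two hunks combine to produce. Only the two conditionals mirror make_spte(); SPTE_ACCESSED_MASK, spte_mark_for_access_track(), and apply_accessed_policy() are hypothetical stand-ins invented for illustration, not the kernel's actual definitions.

/*
 * Sketch of make_spte()'s Accessed-bit policy after this patch.
 * SPTE_ACCESSED_MASK and spte_mark_for_access_track() are simplified
 * stand-ins, not the kernel's real helpers.
 */
#include <stdbool.h>
#include <stdint.h>

#define SPTE_ACCESSED_MASK (1ULL << 5)	/* hypothetical A-bit position */

static uint64_t spte_mark_for_access_track(uint64_t spte)
{
	/* Stand-in: the real helper stashes RWX bits and clears them. */
	return spte & ~SPTE_ACCESSED_MASK;
}

static uint64_t apply_accessed_policy(uint64_t spte, bool prefetch,
				      bool synchronizing)
{
	/*
	 * Post-patch: a synchronized SPTE is always marked Accessed, even
	 * when reached via a prefetch path, because the guest almost
	 * certainly touches the page again soon (the common case being
	 * write-protection of a gPTE for CoW).
	 */
	if (!prefetch || synchronizing)
		spte |= SPTE_ACCESSED_MASK;

	/* Only genuine prefetches go into the access-tracked state. */
	if (prefetch && !synchronizing)
		spte = spte_mark_for_access_track(spte);

	return spte;
}

Net effect: synchronizing now wins over prefetch in both decisions, so synchronization never produces an access-tracked, not-Accessed SPTE.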