| author | Gleb Natapov <gleb@redhat.com> | 2012-12-27 13:44:58 +0100 |
|---|---|---|
| committer | Marcelo Tosatti <mtosatti@redhat.com> | 2013-01-07 23:31:35 +0100 |
| commit | 908e7d7999bcce70ac52e7f390a8f5cbc55948de (patch) | |
| tree | 8d0a2b246e40d49e583d3ad04dd1d9f363ab1f7a | |
| parent | KVM: mmu: remove unused trace event (diff) | |
KVM: MMU: simplify folding of dirty bit into accessed_dirty
The MMU code tries to avoid if()s that HW is not able to predict reliably by
using bitwise operations to streamline code execution, but in the case of
dirty bit folding this gains us nothing, since write_fault is checked right
before the folding code. Let's just piggyback onto that if() to make the code clearer.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
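To see what the patch trades away, here is a minimal user-space sketch of both forms. It is illustrative, not kernel code: the constants are assumed to mirror the usual x86 definitions (PFERR_WRITE_MASK is bit 1 of the fault error code, accessed is PTE bit 5, dirty is PTE bit 6), and the harness assumes, as the guest page-table walker does, that accessed_dirty has already been ANDed with the pte, which is why the old shift-by-zero AND on a read fault was a no-op.

```c
#include <assert.h>
#include <stdint.h>

/* Constants assumed to mirror the x86/KVM definitions. */
#define PFERR_WRITE_MASK   (1u << 1)    /* write bit in the page-fault error code */
#define PT_ACCESSED_SHIFT  5            /* accessed bit position in a guest PTE */
#define PT_DIRTY_SHIFT     6            /* dirty bit position in a guest PTE */

/* Old, branchless form: shift is 0 on a read fault and
 * (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT) on a write fault. */
static uint64_t fold_branchless(uint64_t accessed_dirty, uint64_t pte,
                                uint32_t write_fault)
{
	uint32_t shift = write_fault >> 1;      /* ilog2(PFERR_WRITE_MASK) == 1 */
	shift *= PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT;
	return accessed_dirty & (pte >> shift);
}

/* New form from the patch: write_fault is tested anyway, so reuse the if(). */
static uint64_t fold_with_if(uint64_t accessed_dirty, uint64_t pte,
                             uint32_t write_fault)
{
	if (write_fault)
		accessed_dirty &= pte >> (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT);
	return accessed_dirty;
}

int main(void)
{
	/* Exhaust the low PTE bits for both read and write faults. */
	for (uint32_t wf = 0; wf <= 1; wf++) {
		uint32_t write_fault = wf ? PFERR_WRITE_MASK : 0;
		for (uint64_t pte = 0; pte < 256; pte++) {
			/* Like the walker, accessed_dirty has already been ANDed
			 * with the pte, so a shift-by-zero AND changes nothing. */
			uint64_t ad = (1u << PT_ACCESSED_SHIFT) & pte;
			assert(fold_branchless(ad, pte, write_fault) ==
			       fold_with_if(ad, pte, write_fault));
		}
	}
	return 0;
}
```

Both forms compute the same result; the patch simply drops the branch-avoidance trick because the branch on write_fault already exists one line above.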
-rw-r--r-- | arch/x86/kvm/paging_tmpl.h | 16 |
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 891eb6d93b8b..a7b24cf59a3c 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -249,16 +249,12 @@ retry_walk:
 	if (!write_fault)
 		protect_clean_gpte(&pte_access, pte);
-
-	/*
-	 * On a write fault, fold the dirty bit into accessed_dirty by shifting it one
-	 * place right.
-	 *
-	 * On a read fault, do nothing.
-	 */
-	shift = write_fault >> ilog2(PFERR_WRITE_MASK);
-	shift *= PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT;
-	accessed_dirty &= pte >> shift;
+	else
+		/*
+		 * On a write fault, fold the dirty bit into accessed_dirty by
+		 * shifting it one place right.
+		 */
+		accessed_dirty &= pte >> (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT);
 	if (unlikely(!accessed_dirty)) {
 		ret = FNAME(update_accessed_dirty_bits)(vcpu, mmu, walker, write_fault);
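For concreteness, a small worked example of the fold in the new else branch, using the same assumed bit positions (accessed = bit 5, dirty = bit 6). When the guest dirty bit is clear on a write fault, the fold zeroes accessed_dirty, which is what sends the walker into update_accessed_dirty_bits(); this harness is illustrative only.

```c
#include <stdint.h>
#include <stdio.h>

#define PT_ACCESSED_SHIFT 5     /* assumed accessed bit position */
#define PT_DIRTY_SHIFT    6     /* assumed dirty bit position */

int main(void)
{
	uint64_t accessed_dirty = 1u << PT_ACCESSED_SHIFT;  /* accumulated accessed bit */
	uint64_t pte_clean = 1u << PT_ACCESSED_SHIFT;                             /* A=1, D=0 */
	uint64_t pte_dirty = (1u << PT_ACCESSED_SHIFT) | (1u << PT_DIRTY_SHIFT);  /* A=1, D=1 */

	/* Write fault: shift the dirty bit down onto the accessed position and AND it in. */
	uint64_t folded_clean = accessed_dirty & (pte_clean >> (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT));
	uint64_t folded_dirty = accessed_dirty & (pte_dirty >> (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT));

	printf("D clear: %#llx (walker must set the dirty bit itself)\n",
	       (unsigned long long)folded_clean);      /* prints 0 */
	printf("D set:   %#llx (nothing more to do)\n",
	       (unsigned long long)folded_dirty);      /* prints 0x20 */
	return 0;
}
```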