path: root/arch/x86/include/asm/kvm_emulate.h
author     Paolo Bonzini <pbonzini@redhat.com>  2014-03-27 11:36:25 +0100
committer  Paolo Bonzini <pbonzini@redhat.com>  2014-07-11 09:13:58 +0200
commit     54cfdb3e95d4f70409a7d3432a42cffc9a232be7 (patch)
tree       01ad8c40d9c316b7359fb7e7605ba0c9b85a75eb /arch/x86/include/asm/kvm_emulate.h
parent     KVM: emulate: protect checks on ctxt->d by a common "if (unlikely())" (diff)
download   linux-54cfdb3e95d4f70409a7d3432a42cffc9a232be7.tar.xz
           linux-54cfdb3e95d4f70409a7d3432a42cffc9a232be7.zip
KVM: emulate: speed up emulated moves
We can just blindly move all 16 bytes of ctxt->src's value to ctxt->dst.
write_register_operand will take care of writing only the lower bytes.

Avoiding a call to memcpy (the compiler optimizes it out) gains about 200
cycles on kvm-unit-tests for register-to-register moves, and makes them
about as fast as arithmetic instructions.

We could perhaps get a larger speedup by moving all instructions _except_
moves out of x86_emulate_insn, removing opcode_len, and replacing the
switch statement with an inlined em_mov.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86/include/asm/kvm_emulate.h')
-rw-r--r--  arch/x86/include/asm/kvm_emulate.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index 0e0151c13b2c..432447370044 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -233,7 +233,7 @@ struct operand {
 	union {
 		unsigned long val;
 		u64 val64;
-		char valptr[sizeof(unsigned long) + 2];
+		char valptr[sizeof(sse128_t)];
 		sse128_t vec_val;
 		u64 mm_val;
 		void *data;