author     Ankur Arora <ankur.a.arora@oracle.com>   2017-06-03 02:05:59 +0200
committer  Juergen Gross <jgross@suse.com>          2017-06-13 16:05:17 +0200
commit     0b64ffb8db4e310f77a01079ca752d946a8526b5
tree       0975f7c4353148b0526752e84462945af44e7802 /arch/x86/xen/enlighten_hvm.c
parent     xen/vcpu: Simplify xen_vcpu related code
xen/pvh*: Support > 32 VCPUs at domain restore
When Xen restores a PVHVM or PVH guest, its shared_info only holds
vcpu_info structures for up to 32 VCPUs. The hypercall
VCPUOP_register_vcpu_info allows us to set up per-page areas for
VCPUs, which means we can boot PVH* guests with more than 32 VCPUs.
During restore, the per-cpu structure is freshly allocated by the
hypervisor (vcpu_info_mfn is set to INVALID_MFN) so that the newly
restored guest can make a VCPUOP_register_vcpu_info hypercall.
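For reference, here is a condensed sketch of such a registration,
modeled on the kernel's xen_vcpu_setup(); the wrapper name below is
hypothetical, and the shared_info fallback path and error handling
are elided:

    #include <xen/interface/vcpu.h>   /* VCPUOP_register_vcpu_info */

    /* Point Xen at the per-cpu vcpu_info area backing @cpu. */
    static int register_vcpu_info(int cpu)
    {
        struct vcpu_register_vcpu_info info;
        struct vcpu_info *vcpup = &per_cpu(xen_vcpu_info, cpu);

        /* Tell Xen which frame and offset hold this VCPU's vcpu_info. */
        info.mfn = arbitrary_virt_to_mfn(vcpup);
        info.offset = offset_in_page(vcpup);

        /* Xen refuses this for a remote VCPU that is still running. */
        return HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info,
                                  xen_vcpu_nr(cpu), &info);
    }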
However, we end up triggering this condition in Xen:
/* Run this command on yourself or on other offline VCPUS. */
if ( (v != current) && !test_bit(_VPF_down, &v->pause_flags) )
which means we are unable to set up the per-cpu VCPU structures
for running VCPUs. The Linux PV code path makes this work by
iterating over cpu_possible in xen_vcpu_restore() with the
following dance (sketched in code after this list):
1) check whether the target CPU is up (VCPUOP_is_up hypercall)
2) if yes, VCPUOP_down to pause it
3) VCPUOP_register_vcpu_info
4) if it was paused in step 2, VCPUOP_up to bring it back up
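Condensed, the dance looks roughly like the loop below, modeled on
xen_vcpu_restore() as it stands after this series (runstate setup
and the xen_vcpu_nr() validity checks are elided):

    /* Re-register vcpu_info for every possible CPU after restore. */
    for_each_possible_cpu(cpu) {
        bool other_cpu = (cpu != smp_processor_id());
        /* 1) Probe whether the target VCPU is up. */
        bool is_up = HYPERVISOR_vcpu_op(VCPUOP_is_up,
                                        xen_vcpu_nr(cpu), NULL) > 0;

        /* 2) Pause a running remote VCPU so Xen accepts... */
        if (other_cpu && is_up &&
            HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(cpu), NULL))
            BUG();

        /* 3) ...the vcpu_info registration... */
        if (xen_vcpu_setup(cpu))
            BUG();

        /* 4) ...and bring it back up if we paused it. */
        if (other_cpu && is_up &&
            HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL))
            BUG();
    }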
With Xen commit 192df6f9122d ("xen/x86: allow HVM guests to use
hypercalls to bring up vCPUs") this is available for non-PV guests.
As such, first check whether VCPUOP_is_up is actually available
before attempting this dance.
Since most of this dance is already done in xen_vcpu_restore(),
make it callable on PV, PVH and PVHVM.
Based-on-patch-by: Konrad Wilk <konrad.wilk@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Diffstat (limited to 'arch/x86/xen/enlighten_hvm.c')
-rw-r--r--   arch/x86/xen/enlighten_hvm.c   20
1 file changed, 7 insertions, 13 deletions
diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index eb53da6547ee..ba1afadb2512 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -20,7 +20,6 @@
 
 void __ref xen_hvm_init_shared_info(void)
 {
-	int cpu;
 	struct xen_add_to_physmap xatp;
 	static struct shared_info *shared_info_page;
 
@@ -35,18 +34,6 @@ void __ref xen_hvm_init_shared_info(void)
 		BUG();
 
 	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
-
-	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
-	 * page, we use it in the event channel upcall and in some pvclock
-	 * related functions. We don't need the vcpu_info placement
-	 * optimizations because we don't use any pv_mmu or pv_irq op on
-	 * HVM.
-	 * When xen_hvm_init_shared_info is run at boot time only vcpu 0 is
-	 * online but xen_hvm_init_shared_info is run at resume time too and
-	 * in that case multiple vcpus might be online. */
-	for_each_online_cpu(cpu) {
-		xen_vcpu_info_reset(cpu);
-	}
 }
 
 static void __init init_hvm_pv_info(void)
@@ -150,6 +137,13 @@ static void __init xen_hvm_guest_init(void)
 
 	xen_hvm_init_shared_info();
 
+	/*
+	 * xen_vcpu is a pointer to the vcpu_info struct in the shared_info
+	 * page, we use it in the event channel upcall and in some pvclock
+	 * related functions.
+	 */
+	xen_vcpu_info_reset(0);
+
 	xen_panic_handler_init();
 
 	if (xen_feature(XENFEAT_hvm_callback_vector))