* Merge branch 'x86/spinlocks' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into stable/for-linus-3.12 (Konrad Rzeszutek Wilk, 2013-09-09, 16 files, -380/+569)
    * 'x86/spinlocks' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86/kvm/guest: Fix sparse warning: "symbol 'klock_waiting' was not declared as static"
      kvm: Paravirtual ticketlocks support for linux guests running on KVM hypervisor
      kvm guest: Add configuration support to enable debug information for KVM Guests
      kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi
      xen, pvticketlock: Allow interrupts to be enabled while blocking
      x86, ticketlock: Add slowpath logic
      jump_label: Split jumplabel ratelimit
      x86, pvticketlock: When paravirtualizing ticket locks, increment by 2
      x86, pvticketlock: Use callee-save for lock_spinning
      xen, pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
      xen, pvticketlock: Xen implementation for PV ticket locks
      xen: Defer spinlock setup until boot CPU setup
      x86, ticketlock: Collapse a layer of functions
      x86, ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
      x86, spinlock: Replace pv spinlocks with pv ticketlocks
| * x86/kvm/guest: Fix sparse warning: "symbol 'klock_waiting' was not declared as static" (Raghavendra K T, 2013-08-19, 1 file, -1/+1)
    It was not declared as static since it was thought to be used by pv-flushtlb earlier.
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Cc: <gleb@redhat.com>
    Cc: <pbonzini@redhat.com>
    Cc: Jiri Kosina <trivial@kernel.org>
    Link: http://lkml.kernel.org/r/1376645921-8056-1-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * kvm: Paravirtual ticketlocks support for linux guests running on KVM hypervisor (Srivatsa Vaddagiri, 2013-08-14, 2 files, -2/+274)
    During smp_boot_cpus a paravirtualized KVM guest detects whether the hypervisor has the required feature (KVM_FEATURE_PV_UNHALT) to support pv-ticketlocks. If so, support for pv-ticketlocks is registered via pv_lock_ops. The KVM_HC_KICK_CPU hypercall is used to wake up a waiting/halted vcpu.
    Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
    Link: http://lkml.kernel.org/r/20130810193849.GA25260@linux.vnet.ibm.com
    Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
    [Raghu: check_zero race fix, enum for kvm_contention_stat, jumplabel related changes, addition of safe_halt for irq enabled case, bailout spinning in nmi case (Gleb)]
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Gleb Natapov <gleb@redhat.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * kvm guest: Add configuration support to enable debug information for KVM Guests (Srivatsa Vaddagiri, 2013-08-09, 1 file, -0/+9)
    Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
    Link: http://lkml.kernel.org/r/1376058122-8248-14-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Gleb Natapov <gleb@redhat.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi (Raghavendra K T, 2013-08-09, 2 files, -0/+2)
    These are needed by both guest and host.
    Originally-from: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Link: http://lkml.kernel.org/r/1376058122-8248-13-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Acked-by: Gleb Natapov <gleb@redhat.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * xen, pvticketlock: Allow interrupts to be enabled while blocking (Jeremy Fitzhardinge, 2013-08-09, 1 file, -6/+40)
    If interrupts were enabled when taking the spinlock, we can leave them enabled while blocking to get the lock.
    If we can enable interrupts while waiting for the lock to become available, and we take an interrupt before entering the poll, and the handler takes a spinlock which ends up going into the slow state (invalidating the per-cpu "lock" and "want" values), then when the interrupt handler returns the event channel will remain pending so the poll will return immediately, causing it to return out to the main spinlock loop.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/1376058122-8248-12-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, ticketlock: Add slowpath logic (Jeremy Fitzhardinge, 2013-08-09, 5 files, -25/+74)
    Maintain a flag in the LSB of the ticket lock tail which indicates whether anyone is in the lock slowpath and may need kicking when the current holder unlocks. The flags are set when the first locker enters the slowpath, and cleared when unlocking to an empty queue (ie, no contention).
    In the specific implementation of lock_spinning(), make sure to set the slowpath flags on the lock just before blocking. We must do this before the last-chance pickup test to prevent a deadlock with the unlocker:
        Unlocker                 Locker
                                 test for lock pickup
                                     -> fail
        unlock
        test slowpath
            -> false
                                 set slowpath flags
                                 block
    Whereas this works in any ordering:
        Unlocker                 Locker
                                 set slowpath flags
                                 test for lock pickup
                                     -> fail
                                 block
        unlock
        test slowpath
            -> true, kick
    If the unlocker finds that the lock has the slowpath flag set but it is actually uncontended (ie, head == tail, so nobody is waiting), then it clears the slowpath flag.
    The unlock code uses a locked add to update the head counter. This also acts as a full memory barrier so that it's safe to subsequently read back the slowpath flag state, knowing that the updated lock is visible to the other CPUs. If it were an unlocked add, then the flag read may just be forwarded from the store buffer before it was visible to the other CPUs, which could result in a deadlock.
    Unfortunately this means we need to do a locked instruction when unlocking with PV ticketlocks. However, if PV ticketlocks are not enabled, then the old non-locked "add" is the only unlocking code.
    Note: this code relies on gcc making sure that unlikely() code is out of line of the fastpath, which only happens when OPTIMIZE_SIZE=n. If it doesn't, the generated code isn't too bad, but it's definitely suboptimal.
    Thanks to Srivatsa Vaddagiri for providing a bugfix to the original version of this change, which has been folded in. Thanks to Stephan Diestelhorst for commenting on some code which relied on an inaccurate reading of the x86 memory ordering rules.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/1376058122-8248-11-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: Stephan Diestelhorst <stephan.diestelhorst@amd.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
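    As a rough illustration of the scheme in this commit, a minimal self-contained C11 model of the unlock side might look like the following (the names and the kick step are illustrative, not the kernel patch itself):
        #include <stdatomic.h>
        #include <stdint.h>

        #define TICKET_INC    2u   /* tickets advance by 2 ...          */
        #define SLOWPATH_FLAG 1u   /* ... leaving the LSB for this flag */

        struct ticketlock {
                _Atomic uint16_t head;
                _Atomic uint16_t tail;
        };

        static void ticket_unlock(struct ticketlock *lk)
        {
                /* Stands in for the kernel's locked add: bumps the head and
                 * acts as a full barrier, so the flag read below is ordered
                 * after the unlock is visible to other CPUs. */
                atomic_fetch_add_explicit(&lk->head, TICKET_INC, memory_order_seq_cst);

                uint16_t tail = atomic_load_explicit(&lk->tail, memory_order_relaxed);
                if (tail & SLOWPATH_FLAG) {
                        uint16_t head = atomic_load_explicit(&lk->head, memory_order_relaxed);
                        if (head == (uint16_t)(tail & ~SLOWPATH_FLAG)) {
                                /* Uncontended after all: clear the stale flag. */
                                atomic_compare_exchange_strong(&lk->tail, &tail,
                                                               tail & ~SLOWPATH_FLAG);
                        } else {
                                /* Someone is blocked waiting for ticket 'head':
                                 * kick it (hypervisor-specific, omitted here). */
                        }
                }
        }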
| * jump_label: Split jumplabel ratelimit (Andrew Jones, 2013-08-09, 4 files, -27/+37)
    Commit b202952075f62603bea9bfb6ebc6b0420db11949 ("perf, core: Rate limit perf_sched_events jump_label patching") introduced rate limiting for jump label disabling. The changes were made in the jump label code in order to be more widely available and to keep things tidier. This is all fine, except now jump_label.h includes linux/workqueue.h, which makes it impossible to include jump_label.h from anything that workqueue.h needs. For example, it's now impossible to include jump_label.h from asm/spinlock.h, which is done in proposed pv-ticketlock patches. This patch splits out the rate limiting related changes from jump_label.h into a new file, jump_label_ratelimit.h, to resolve the issue.
    Signed-off-by: Andrew Jones <drjones@redhat.com>
    Link: http://lkml.kernel.org/r/1376058122-8248-10-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, pvticketlock: When paravirtualizing ticket locks, increment by 2 (Jeremy Fitzhardinge, 2013-08-09, 2 files, -6/+14)
    Increment ticket head/tails by 2 rather than 1 to leave the LSB free to store a "is in slowpath state" bit. This halves the number of possible CPUs for a given ticket size, but this shouldn't matter in practice - kernels built for 32k+ CPU systems are probably specially built for the hardware rather than a generic distro kernel.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/1376058122-8248-9-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Tested-by: Attilio Rao <attilio.rao@citrix.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, pvticketlock: Use callee-save for lock_spinning (Jeremy Fitzhardinge, 2013-08-09, 4 files, -4/+5)
    Although the lock_spinning calls in the spinlock code are on the uncommon path, their presence can cause the compiler to generate many more register save/restores in the function pre/postamble, which is in the fast path. To avoid this, convert it to using the pvops callee-save calling convention, which defers all the save/restores until the actual function is called, keeping the fastpath clean.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/1376058122-8248-8-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Tested-by: Attilio Rao <attilio.rao@citrix.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * xen, pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks (Jeremy Fitzhardinge, 2013-08-09, 1 file, -0/+14)
    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/1376058122-8248-7-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * xen, pvticketlock: Xen implementation for PV ticket locks (Jeremy Fitzhardinge, 2013-08-09, 1 file, -269/+79)
    Replace the old Xen implementation of PV spinlocks with an implementation of xen_lock_spinning and xen_unlock_kick.
    xen_lock_spinning simply registers the cpu in its entry in lock_waiting, adds itself to the waiting_cpus set, and blocks on an event channel until the channel becomes pending.
    xen_unlock_kick searches the cpus in waiting_cpus looking for the one which next wants this lock with the next ticket, if any. If found, it kicks it by making its event channel pending, which wakes it up.
    We need to make sure interrupts are disabled while we're relying on the contents of the per-cpu lock_waiting values, otherwise an interrupt handler could come in, try to take some other lock, block, and overwrite our values.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/1376058122-8248-6-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    [Raghavendra: use function + enum instead of macro, cmpxchg for zero status reset; reintroduce break since we know the exact vCPU to send IPI, as suggested by Konrad.]
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
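    A kernel-style sketch of the per-cpu bookkeeping and the kick search described above (field and helper names follow the commit text and are not guaranteed to match the final patch):
        struct xen_lock_waiting {
                struct arch_spinlock *lock;   /* the lock this CPU is blocked on */
                __ticket_t want;              /* the ticket it is waiting for    */
        };

        static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
        static cpumask_t waiting_cpus;

        static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
        {
                int cpu;

                for_each_cpu(cpu, &waiting_cpus) {
                        const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);

                        /* Kick only the CPU that wants the next ticket of this
                         * lock; making its event channel pending wakes it out
                         * of the poll. */
                        if (w->lock == lock && w->want == next) {
                                xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
                                break;
                        }
                }
        }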
| * xen: Defer spinlock setup until boot CPU setup (Jeremy Fitzhardinge, 2013-08-09, 1 file, -1/+1)
    There's no need to do it at very early init, and doing it there makes it impossible to use the jump_label machinery.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/1376058122-8248-5-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, ticketlock: Collapse a layer of functions (Jeremy Fitzhardinge, 2013-08-09, 1 file, -30/+5)
    Now that the paravirtualization layer doesn't exist at the spinlock level any more, we can collapse the __ticket_ functions into the arch_ functions.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/1376058122-8248-4-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Tested-by: Attilio Rao <attilio.rao@citrix.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, ticketlock: Don't inline _spin_unlock when using paravirt spinlocks (Raghavendra K T, 2013-08-09, 1 file, -0/+1)
    The code size expands somewhat, and it's better to just call a function rather than inline it. Thanks to Jeremy for the original version of the ARCH_NOINLINE_SPIN_UNLOCK config patch, which has been simplified here.
    Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Link: http://lkml.kernel.org/r/1376058122-8248-3-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * x86, spinlock: Replace pv spinlocks with pv ticketlocks (Jeremy Fitzhardinge, 2013-08-09, 6 files, -61/+65)
    Rather than outright replacing the entire spinlock implementation in order to paravirtualize it, keep the ticket lock implementation but add a couple of pvops hooks on the slow path (long spin on lock, unlocking a contended lock).
    Ticket locks have a number of nice properties, but they also have some surprising behaviours in virtual environments. They enforce a strict FIFO ordering on cpus trying to take a lock; however, if the hypervisor scheduler does not schedule the cpus in the correct order, the system can waste a huge amount of time spinning until the next cpu can take the lock. (See Thomas Friebel's talk "Prevent Guests from Spinning Around" http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)
    To address this, we add two hooks:
     - __ticket_spin_lock, which is called after the cpu has been spinning on the lock for a significant number of iterations but has failed to take the lock (presumably because the cpu holding the lock has been descheduled). The lock_spinning pvop is expected to block the cpu until it has been kicked by the current lock holder.
     - __ticket_spin_unlock, which on releasing a contended lock (there are more cpus with tail tickets) looks to see if the next cpu is blocked and wakes it if so.
    When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub functions causes all the extra code to go away.
    Results:
    setup: 32 core machine with 32 vcpu KVM guest (HT off) with 8GB RAM; base = 3.11-rc, patched = base + pvspinlock V12
        +-----------------+----------------+--------+
          dbench (Throughput in MB/sec. Higher is better)
        +-----------------+----------------+--------+
        |  base  (stdev %)|patched (stdev%)| %gain  |
        +-----------------+----------------+--------+
        | 15035.3   (0.3) | 15150.0  (0.6) |   0.8  |
        |  1470.0   (2.2) |  1713.7  (1.9) |  16.6  |
        |   848.6   (4.3) |   967.8  (4.3) |  14.0  |
        |   652.9   (3.5) |   685.3  (3.7) |   5.0  |
        +-----------------+----------------+--------+
    pvspinlock shows benefits for overcommit ratio > 1 for PLE enabled cases, and undercommit results are flat.
    Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Link: http://lkml.kernel.org/r/1376058122-8248-2-git-send-email-raghavendra.kt@linux.vnet.ibm.com
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Tested-by: Attilio Rao <attilio.rao@citrix.com>
    [Raghavendra: Changed SPIN_THRESHOLD, fixed redefinition of arch_spinlock_t]
    Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
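    To make the hook placement concrete, here is a sketch of the lock-side path with the slow-path pvop (SPIN_THRESHOLD and the helper name follow the description above; details may differ from the final code):
        static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
        {
                struct __raw_tickets inc = { .tail = 1 };

                inc = xadd(&lock->tickets, inc);          /* take a ticket */

                for (;;) {
                        unsigned count = SPIN_THRESHOLD;

                        do {
                                if (inc.head == inc.tail)
                                        return;           /* our turn, lock taken */
                                cpu_relax();
                                inc.head = ACCESS_ONCE(lock->tickets.head);
                        } while (--count);

                        /* Spun too long: ask the hypervisor to block this vcpu
                         * until the holder kicks our ticket.  Compiles to a
                         * no-op when CONFIG_PARAVIRT_SPINLOCKS is disabled. */
                        __ticket_lock_spinning(lock, inc.tail);
                }
        }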
* | xen/arm: disable cpuidle and cpufreq when linux is running as dom0 (Julien Grall, 2013-09-09, 1 file, -0/+9)
    When linux is running as dom0, Xen doesn't show it the physical CPUs but virtual CPUs. On some ARM SoCs (for instance the Exynos 5250), linux registers callbacks for cpuidle and cpufreq. When these callbacks are called, they will directly modify the physical CPU, not the virtual one, which can impact the whole board instead of only dom0.
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | xen/p2m: Don't call get_balloon_scratch_page() twice, keep interrupts disabled for multicalls (Boris Ostrovsky, 2013-09-09, 1 file, -4/+6)
    m2p_remove_override() calls get_balloon_scratch_page() in MULTI_update_va_mapping() even though it already has a pointer to this page from the earlier call (in scratch_page). This second call doesn't have a matching put_balloon_scratch_page(), thus not restoring the preempt count back. (Also, there is no put_balloon_scratch_page() in the error path.)
    In addition, the second multicall uses __xen_mc_entry() which does not disable interrupts. Rearrange the xen_mc_* calls to keep interrupts off while performing multicalls.
    This commit fixes a regression introduced by:
        commit ee0726407feaf504dff304fb603652fb2d778b42
        Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
        Date:   Tue Jul 23 17:23:54 2013 +0000
            xen/m2p: use GNTTABOP_unmap_and_replace to reinstate the original mapping
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
* | ARM: xen: only set pm function ptrs for Xen guests (Rob Herring, 2013-09-09, 1 file, -1/+4)
    xen_pm_init was unconditionally setting the pm_power_off and arm_pm_restart function pointers. This breaks multi-platform kernels. Make this conditional on running as a Xen guest and make it a late_initcall to ensure it is setup after platform code for Dom0.
    Signed-off-by: Rob Herring <rob.herring@calxeda.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    CC: stable@vger.kernel.org
* | hvc_xen: Remove unnecessary __GFP_ZERO from kzalloc (Joe Perches, 2013-08-30, 1 file, -3/+3)
    kzalloc already adds __GFP_ZERO.
    Signed-off-by: Joe Perches <joe@perches.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | drivers/xen-tpmfront: Fix compile issue with missing option. (Konrad Rzeszutek Wilk, 2013-08-30, 1 file, -0/+1)
    Randy reports:
        x86_64:
        drivers/built-in.o: In function `xen_tpmfront_init':
        xen-tpmfront.c:(.init.text+0x257c): undefined reference to `xenbus_register_frontend'
    This is nicely fixed by selecting the XenBus frontend module.
    Reported-by: Randy Dunlap <rdunlap@infradead.org>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | xen/balloon: don't set P2M entry for auto translated guest (Wei Liu, 2013-08-30, 1 file, -2/+7)
    In commit cd9151e2 ("xen/balloon: set a mapping for ballooned out pages") we have the ballooned out page's mapping set to a scratch page. That commit also sets the P2M entry of a ballooned out page to the scratch page's MFN. This is necessary for PV guests but not for HVM guests. On the other hand, setting the P2M entry would trigger the BUG_ON in __set_phys_to_machine. The correct thing to do here is to avoid calling __set_phys_to_machine for auto translated guests.
    Signed-off-by: Wei Liu <wei.liu2@citrix.com>
    Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
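    The shape of the guard described above, as a sketch (the wrapper function is hypothetical; the point is the feature check in front of the p2m update):
        #include <xen/features.h>

        /* Hypothetical helper: record that pfn now points at the balloon
         * scratch page.  Auto-translated (HVM/ARM) guests have no p2m to
         * maintain, and writing it would trip the BUG_ON in
         * __set_phys_to_machine(). */
        static void balloon_set_scratch_p2m(unsigned long pfn, unsigned long scratch_mfn)
        {
                if (xen_feature(XENFEAT_auto_translated_physmap))
                        return;

                __set_phys_to_machine(pfn, scratch_mfn);
        }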
* | xen/evtchn: double free on error (Dan Carpenter, 2013-08-30, 1 file, -1/+0)
    The call to del_evtchn() frees "evtchn".
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | Xen: Fix retry calls into PRIVCMD_MMAPBATCH*. (Andres Lagar-Cavilla, 2013-08-30, 1 file, -20/+63)
    When a foreign mapper attempts to map guest frames that are paged out, the mapper receives an ENOENT response and will have to try again while a helper process pages the target frame back in.
    Gating checks on PRIVCMD_MMAPBATCH* ioctl args were preventing retries of mapping calls. Permit subsequent calls to update a sub-range of the VMA, iff nothing is yet mapped in that range.
    Since it is now valid to call PRIVCMD_MMAPBATCH* multiple times, only set vma->vm_private_data if the parameters are valid and (if necessary) the pages for the auto_translated_physmap case have been allocated. This prevents subsequent calls from incorrectly entering the 'retry' path when there are no pages allocated etc.
    Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | xen/pvhvm: Initialize xen panic handler for PVHVM guests (Vaughan Cao, 2013-08-20, 1 file, -0/+2)
    The kernel uses the callbacks linked on panic_notifier_list to notify others when a panic happens:
        NORET_TYPE void panic(const char * fmt, ...)
        {
                ...
                atomic_notifier_call_chain(&panic_notifier_list, 0, buf);
        }
    When the Xen side becomes aware of the panic, it calls xen_reboot(SHUTDOWN_crash) to send out an event with reason code SHUTDOWN_crash.
    xen_panic_handler_init() is defined to register on panic_notifier_list, but we only call it in xen_arch_setup, which is only called for PV guests. This patch adds the call for PVHVM.
    Without this patch, setting 'on_crash=coredump-restart' in a PVHVM guest config file won't lead to a vmcore being generated when the guest panics. It can be reproduced with 'echo c > /proc/sysrq-trigger'.
    Signed-off-by: Vaughan Cao <vaughan.cao@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
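    For reference, registering on panic_notifier_list looks roughly like this (modelled on the existing x86 Xen handler; treat it as a sketch rather than the exact code):
        #include <linux/kernel.h>
        #include <linux/notifier.h>

        static int xen_panic_event(struct notifier_block *this,
                                   unsigned long event, void *ptr)
        {
                /* Tell the hypervisor the guest has crashed. */
                xen_reboot(SHUTDOWN_crash);
                return NOTIFY_DONE;
        }

        static struct notifier_block xen_panic_block = {
                .notifier_call = xen_panic_event,
        };

        int xen_panic_handler_init(void)
        {
                atomic_notifier_chain_register(&panic_notifier_list, &xen_panic_block);
                return 0;
        }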
* | xen/m2p: use GNTTABOP_unmap_and_replace to reinstate the original mapping (Stefano Stabellini, 2013-08-20, 2 files, -15/+17)
    GNTTABOP_unmap_grant_ref unmaps a grant and replaces it with a 0 mapping instead of reinstating the original mapping. Doing so separately would be racy.
    To unmap a grant and reinstate the original mapping atomically we use GNTTABOP_unmap_and_replace. GNTTABOP_unmap_and_replace doesn't work with GNTMAP_contains_pte, so don't use it for kmaps. GNTTABOP_unmap_and_replace zeroes the mapping passed in new_addr so we have to reinstate it, however that is a per-cpu mapping only used for balloon scratch pages, so we can be sure that it's not going to be accessed while the mapping is not valid.
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    CC: alex@alex.org.uk
    CC: dcrisan@flexiant.com
    [v1: Konrad fixed up the conflicts]
    Conflicts:
        arch/x86/xen/p2m.c
* | xen: fix ARM build after 6efa20e4 (Stefano Stabellini, 2013-08-20, 1 file, -0/+2)
    The following commit:
        commit 6efa20e49b9cb1db1ab66870cc37323474a75a13
        Author: Konrad Rzeszutek Wilk <konrad@kernel.org>
        Date:   Fri Jul 19 11:51:31 2013 -0400
            xen: Support 64-bit PV guest receiving NMIs
    breaks the Xen ARM build:
        CC      drivers/xen/events.o
        drivers/xen/events.c: In function 'xen_send_IPI_one':
        drivers/xen/events.c:1218:6: error: 'XEN_NMI_VECTOR' undeclared (first use in this function)
    Simply ifdef the undeclared symbol in the code.
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
* | MAINTAINERS: Remove Jeremy from the Xen subsystem. (Konrad Rzeszutek Wilk, 2013-08-20, 2 files, -1/+1)
    Jeremy has been a key person in making Linux work with Xen. He has been enjoying the last year working on something different, so reflect that in the maintainers file.
    CC: Jeremy Fitzhardinge <jeremy@goop.org>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
* | xen/events: document behaviour when scanning the start word for events (David Vrabel, 2013-08-20, 1 file, -5/+12)
    The original comment on the scanning of the start word on the 2nd pass did not reflect the actual behaviour (the code was incorrectly masking bit_idx instead of the pending word itself).
    The documented behaviour is not actually required since if events were pending in the MSBs, they would be immediately scanned anyway as we go through the loop again. Update the documentation to reflect this (instead of trying to change the behaviour).
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
* | x86/xen: during early setup, only 1:1 map the ISA region (David Vrabel, 2013-08-20, 1 file, -5/+11)
    During early setup, when the reserved regions and MMIO holes are being set up as 1:1 in the p2m, clear any mappings instead of making them 1:1 (except for the ISA region, which is expected to be mapped).
    This fixes a regression introduced in 3.5 by 83d51ab473dd ("xen/setup: update VA mapping when releasing memory during setup") which caused hosts with tboot to fail to boot. tboot marks a region in the e820 map as unusable and the dom0 kernel would attempt to map this region, and Xen does not permit unusable regions to be mapped by guests.
        (XEN)  0000000000000000 - 0000000000060000 (usable)
        (XEN)  0000000000060000 - 0000000000068000 (reserved)
        (XEN)  0000000000068000 - 000000000009e000 (usable)
        (XEN)  0000000000100000 - 0000000000800000 (usable)
        (XEN)  0000000000800000 - 0000000000972000 (unusable)   <-- tboot marked this region as unusable
        (XEN)  0000000000972000 - 00000000cf200000 (usable)
        (XEN)  00000000cf200000 - 00000000cf38f000 (reserved)
        (XEN)  00000000cf38f000 - 00000000cf3ce000 (ACPI data)
        (XEN)  00000000cf3ce000 - 00000000d0000000 (reserved)
        (XEN)  00000000e0000000 - 00000000f0000000 (reserved)
        (XEN)  00000000fe000000 - 0000000100000000 (reserved)
        (XEN)  0000000100000000 - 0000000630000000 (usable)
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | x86/xen: disable preemption when enabling local irqs (David Vrabel, 2013-08-20, 1 file, -13/+12)
    If CONFIG_PREEMPT is enabled then xen_enable_irq() (and xen_restore_fl()) could be preempted and rescheduled on a different VCPU in between the clear of the mask and the check for pending events. This may result in events being lost as the upcall will check for pending events on the wrong VCPU.
    Fix this by disabling preemption around the unmask and check for events.
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
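    A sketch of the fixed pattern, based on the shape of xen_irq_enable() (the exact code differs):
        static void xen_irq_enable_sketch(void)
        {
                struct vcpu_info *vcpu;

                /* Don't let this task migrate between clearing the mask and
                 * checking for pending events, or we may check the wrong VCPU. */
                preempt_disable();

                vcpu = this_cpu_read(xen_vcpu);
                vcpu->evtchn_upcall_mask = 0;

                barrier();   /* unmask before checking for pending events */

                if (unlikely(vcpu->evtchn_upcall_pending))
                        xen_force_evtchn_callback();

                preempt_enable();
        }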
* | swiotlb-xen: replace dma_length with sg_dma_len() macro (Stefano Stabellini, 2013-08-09, 1 file, -4/+4)
    swiotlb-xen has an implicit dependency on CONFIG_NEED_SG_DMA_LENGTH. Remove it by replacing dma_length with sg_dma_len.
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | swiotlb: replace dma_length with sg_dma_len() macro (EunBong Song, 2013-08-09, 1 file, -4/+4)
    This patch replaces dma_length in lib/swiotlb.c with the sg_dma_len() macro, because a build error can occur if CONFIG_NEED_SG_DMA_LENGTH is not set and CONFIG_SWIOTLB is set.
    Signed-off-by: EunBong Song <eunb.song@samsung.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | xen/balloon: set a mapping for ballooned out pages (Stefano Stabellini, 2013-08-09, 2 files, -3/+69)
    Currently ballooned out pages are mapped to 0 and have INVALID_P2M_ENTRY in the p2m. These ballooned out pages are used to map foreign grants by gntdev and blkback (see alloc_xenballooned_pages).
    Allocate a page per cpu and map all the ballooned out pages to the corresponding mfn. Set the p2m accordingly. This way reading from a ballooned out page won't cause a kernel crash (see http://lists.xen.org/archives/html/xen-devel/2012-12/msg01154.html).
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
    CC: alex@alex.org.uk
    CC: dcrisan@flexiant.com
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | xen/evtchn: improve scalability by using per-user locks (David Vrabel, 2013-08-09, 1 file, -80/+112)
    The global array of port users and the port_user_lock limit the scalability of the evtchn device. Instead of the global array lookup, use a per-user (per-fd) tree of event channels bound by that user and protect the tree with a per-user lock.
    This is also a prerequisite for extending the number of supported event channels, by removing the fixed size, per-event channel array.
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
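    The per-user lookup can be pictured as a red-black tree keyed by port; a sketch of the find side (struct and function names are illustrative):
        #include <linux/rbtree.h>

        struct user_evtchn {
                struct rb_node node;
                unsigned port;
        };

        static struct user_evtchn *find_evtchn(struct rb_root *root, unsigned port)
        {
                struct rb_node *n = root->rb_node;

                while (n) {
                        struct user_evtchn *e = rb_entry(n, struct user_evtchn, node);

                        if (port < e->port)
                                n = n->rb_left;
                        else if (port > e->port)
                                n = n->rb_right;
                        else
                                return e;
                }
                return NULL;
        }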
* | xen/p2m: avoid unnecessary TLB flush in m2p_remove_override() (David Vrabel, 2013-08-09, 1 file, -1/+0)
    In m2p_remove_override(), when removing the grant map from the kernel mapping and replacing it with a mapping to the original page, the grant unmap will already have flushed the TLB and it is not necessary to do it again after updating the mapping.
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
* | MAINTAINERS: Add in two extra co-maintainers of the Xen tree. (Konrad Rzeszutek Wilk, 2013-08-09, 1 file, -0/+2)
    Both Boris and David have graciously volunteered to help in maintaining the Xen subsystem tree. Cementing this in the MAINTAINERS file so they are copied on Xen related patches.
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Acked-by: David Vrabel <david.vrabel@citrix.com>
* | MAINTAINERS: Update the Xen subsystem's entries with the proper mailing list. (Konrad Rzeszutek Wilk, 2013-08-09, 1 file, -7/+6)
    And also drop the virtualization one since we don't really use it.
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com> [for netback]
    Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
* | xen: replace strict_strtoul() with kstrtoul() (Jingoo Han, 2013-08-09, 1 file, -18/+36)
    The usage of strict_strtoul() is not preferred, because strict_strtoul() is obsolete. Thus, kstrtoul() should be used.
    Signed-off-by: Jingoo Han <jg1.han@samsung.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
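    The conversion pattern, as a sketch (the helper name is made up; the point is the strict_strtoul() to kstrtoul() swap and its error handling):
        #include <linux/kernel.h>

        static int parse_ulong(const char *buf, unsigned long *out)
        {
                /* old: if (strict_strtoul(buf, 0, out)) return -EINVAL;     */
                /* new: propagate kstrtoul()'s own error code (0 or -errno). */
                return kstrtoul(buf, 0, out);
        }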
* | xen-gnt: prevent adding duplicate gnt callbacks (Roger Pau Monne, 2013-08-09, 1 file, -2/+11)
    With the current implementation, the callback at the tail of the list can be added twice, because the check done in gnttab_request_free_callback is bogus: callback->next can be NULL if it is the last callback in the list. If we add the same callback twice we end up with an infinite loop, where callback == callback->next.
    Replace this check with a proper one that iterates over the list to see if the callback has already been added.
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: Matt Wilson <msw@amazon.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
    CC: stable@vger.kernel.org
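    A sketch of the corrected check, modelled on drivers/xen/grant-table.c (the list head and lock names are assumptions):
        void gnttab_request_free_callback(struct gnttab_free_callback *callback,
                                          void (*fn)(void *), void *arg, u16 count)
        {
                struct gnttab_free_callback *cb;
                unsigned long flags;

                spin_lock_irqsave(&gnttab_list_lock, flags);

                /* "callback->next != NULL" is not a valid already-queued test:
                 * the tail's next is NULL.  Walk the list instead. */
                for (cb = gnttab_free_callback_list; cb; cb = cb->next)
                        if (cb == callback)
                                goto out;

                callback->fn = fn;
                callback->arg = arg;
                callback->count = count;
                callback->next = gnttab_free_callback_list;
                gnttab_free_callback_list = callback;
                check_free_callbacks();
        out:
                spin_unlock_irqrestore(&gnttab_list_lock, flags);
        }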
* | drivers/tpm: add xen tpmfront interface (Daniel De Graaf, 2013-08-09, 5 files, -0/+650)
    This is a complete rewrite of the Xen TPM frontend driver, taking advantage of a simplified frontend/backend interface and adding support for cancellation and timeouts. The backend for this driver is provided by a vTPM stub domain using the interface in Xen 4.3.
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: Peter Huewe <peterhuewe@gmx.de>
    Reviewed-by: Peter Huewe <peterhuewe@gmx.de>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | xen: Support 64-bit PV guest receiving NMIs (Konrad Rzeszutek Wilk, 2013-08-09, 6 files, -8/+38)
    This is based on a patch that Zhenzhong Duan had sent, which was missing some of the remaining pieces. The kernel has the logic to handle Xen-type-exceptions using the paravirt interface in the assembler code (see PARAVIRT_ADJUST_EXCEPTION_FRAME - pv_irq_ops.adjust_exception_frame and INTERRUPT_RETURN - pv_cpu_ops.iret). That means the nmi handler (and other exception handlers) use the hypervisor iret.
    The other changes that would be necessary for this would be to translate the NMI_VECTOR to one of the entries on the ipi_vector and make xen_send_IPI_mask_allbutself use different events. Fortunately for us commit 1db01b4903639fcfaec213701a494fe3fb2c490b (xen: Clean up apic ipi interface) implemented this and we piggyback on the cleanup such that the apic IPI interface will pass the right vector value for NMI.
    With this patch we can trigger NMIs within a PV guest (only tested x86_64).
    For this to work with normal PV guests (not initial domain) we need the domain to be able to use the APIC ops - they are already implemented to use the Xen event channels. For that to be turned on in a PV domU we need to remove the masking of X86_FEATURE_APIC. Incidentally that means kgdb will also now work within a PV guest without using the 'nokgdbroundup' workaround.
    Note that the 32-bit version is different and this patch does not enable that.
    CC: Lisa Nguyen <lisa@xenapiadmin.com>
    CC: Ben Guthro <benjamin.guthro@citrix.com>
    CC: Zhenzhong Duan <zhenzhong.duan@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    [v1: Fixed up per David Vrabel comments]
    Reviewed-by: Ben Guthro <benjamin.guthro@citrix.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
* Linux 3.11-rc4 (tag: v3.11-rc4) (Linus Torvalds, 2013-08-04, 1 file, -1/+1)
* Merge branch 'fixes' of git://git.infradead.org/users/vkoul/slave-dma (Linus Torvalds, 2013-08-04, 2 files, -26/+68)
    Pull dmaengine fixes from Vinod Koul:
     "Two fixes for slave dmaengine. The first fixes cyclic dma transfers for pl330 and the second one makes us return the correct error code on probe"
    * 'fixes' of git://git.infradead.org/users/vkoul/slave-dma:
      dma: pl330: Fix cyclic transfers
      pch_dma: fix error return code in pch_dma_probe()
| * dma: pl330: Fix cyclic transfers (Lars-Peter Clausen, 2013-07-28, 1 file, -26/+67)
    Allocate a descriptor for each period of a cyclic transfer, not just the first. Also, since the callback needs to be called for each finished period, make sure to initialize the callback and callback_param fields of each descriptor in a cyclic transfer.
    Cc: stable@vger.kernel.org
    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * pch_dma: fix error return code in pch_dma_probe() (Wei Yongjun, 2013-07-22, 1 file, -0/+1)
    Fix to return -ENODEV in the error handling case when no proper base address is found, instead of 0, as done elsewhere in this function.
    Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* | Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux (Linus Torvalds, 2013-08-04, 1 file, -1/+1)
    Pull drm fix from Dave Airlie:
     "Just a quick fix that a few people have reported, be nice to have in asap"
    The drm tree seems to be very confused about 64-bit divides. Here it uses a slow 64-by-64 bit divide to divide by a small constant. Oh well. Doesn't look performance-critical, just stupid.
    * 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
      drm/radeon: fix 64 bit divide in SI spm code
| * | drm/radeon: fix 64 bit divide in SI spm code (Alex Deucher, 2013-08-04, 1 file, -1/+1)
    Forgot to use the appropriate math64 function.
    Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
    Signed-off-by: Dave Airlie <airlied@gmail.com>
* | | tmpfs: fix SEEK_DATA/SEEK_HOLE regression (Hugh Dickins, 2013-08-04, 1 file, -1/+2)
    Commit 46a1c2c7ae53 ("vfs: export lseek_execute() to modules") broke the tmpfs SEEK_DATA/SEEK_HOLE implementation, because vfs_setpos() converts the carefully prepared -ENXIO to -EINVAL. Other filesystems avoid it in error cases: do the same in tmpfs.
    Signed-off-by: Hugh Dickins <hughd@google.com>
    Cc: Jie Liu <jeff.liu@oracle.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
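    The essence of the fix, sketched as a hypothetical wrapper: only non-negative offsets are handed to vfs_setpos(), so an error such as -ENXIO reaches the caller untouched:
        #include <linux/fs.h>

        static loff_t seek_finish(struct file *file, loff_t offset)
        {
                if (offset >= 0)    /* errors such as -ENXIO pass straight through */
                        offset = vfs_setpos(file, offset, MAX_LFS_FILESIZE);
                return offset;
        }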
* | | Merge tag 'sound-3.11' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound (Linus Torvalds, 2013-08-04, 8 files, -21/+21)
    Pull sound fixes from Takashi Iwai:
     "All small regression or small fixes, nothing surprising at this stage.
      - regression fix for intel Mac Mini quirk
      - compress ioctl error fix
      - ASoC fixes for control change notifications, some UI fixes, driver-specific fixes (resource leak, build errors, etc)"
    * tag 'sound-3.11' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
      ALSA: hda - Fix missing fixup for Mac Mini with STAC9221
      ASoC: wm0010: Fix resource leak
      ASoC: au1x: Fix build
      ASoC: bf5xx-ac97: Fix compile error with SND_BF5XX_HAVE_COLD_RESET
      ASoC: bfin-ac97: Fix prototype error following AC'97 refactoring
      ALSA: compress: fix the return value for SNDRV_COMPRESS_VERSION
      ASoC: dapm: Fix return value of snd_soc_dapm_put_{volsw,enum_virt}()