path: root/drivers/cpuidle/coupled.c
Commit history (newest first): subject, author, date, diffstat
* cpuidle/coupled: Convert to hotplug state machine
  Sebastian Andrzej Siewior | 2016-09-06 | 1 file changed, +31/-44

  Install the callbacks via the state machine.

  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Cc: linux-pm@vger.kernel.org
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
  Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
  Cc: rt@linutronix.de
  Link: http://lkml.kernel.org/r/20160824091444.brdr5zpbxjvh6n3f@linutronix.de
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
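  For context, the hotplug state machine replaces the old notifier switch
  with a pair of per-cpu callbacks registered once. A minimal sketch of the
  pattern, assuming a dynamic online state and illustrative callback names
  (the actual commit defines its own state and callbacks):

      #include <linux/init.h>
      #include <linux/cpuhotplug.h>

      static int coupled_cpu_online(unsigned int cpu)
      {
              /* re-attach this cpu to its coupled idle set */
              return 0;
      }

      static int coupled_cpu_prepare_down(unsigned int cpu)
      {
              /* detach this cpu before it goes offline */
              return 0;
      }

      static int __init coupled_hotplug_init(void)
      {
              int ret;

              /* One call installs both callbacks and invokes the online
               * callback on every cpu that is already up, replacing the
               * old register_cpu_notifier() event switch. */
              ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
                                      "cpuidle/coupled:online",
                                      coupled_cpu_online,
                                      coupled_cpu_prepare_down);
              return ret < 0 ? ret : 0; /* DYN states return a positive id */
      }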
* cpuidle: coupled: remove unused define cpuidle_coupled_lock
  Anders Roxell | 2016-01-27 | 1 file changed, +0/-1

  This was found with the -RT patch enabled, but the fix should apply to
  non-RT as well. Building with multi_v7_defconfig plus
  PREEMPT_RT_FULL=y caused a compilation warning without this fix:

  ../drivers/cpuidle/coupled.c:122:21: warning: 'cpuidle_coupled_lock'
  defined but not used [-Wunused-variable]

  Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle/coupled: Add sanity check for safe_state_index
  Xunlei Pang | 2015-09-03 | 1 file changed, +22/-0

  Since we are using cpuidle_driver::safe_state_index directly as the
  target state index, it is better to add the sanity check at the point
  of registering the driver.

  Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
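  A hedged sketch of what such a registration-time check can look like;
  the function name and exact conditions are illustrative, not
  necessarily the verbatim code added by the commit:

      static int coupled_state_verify(struct cpuidle_driver *drv)
      {
              int i;

              for (i = 0; i < drv->state_count; i++) {
                      if (!cpuidle_state_is_coupled(drv, i))
                              continue;
                      /* A coupled state needs a valid, non-coupled safe
                       * state to wait in while the other cpus catch up. */
                      if (drv->safe_state_index < 0 ||
                          drv->safe_state_index >= drv->state_count ||
                          cpuidle_state_is_coupled(drv, drv->safe_state_index))
                              return -EINVAL;
              }
              return 0;
      }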
* cpuidle/coupled: Remove redundant 'dev' argument of cpuidle_state_is_coupled()
  Xunlei Pang | 2015-08-28 | 1 file changed, +1/-3

  For cpuidle_state_is_coupled(), 'dev' is not used, so remove it.

  Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle/coupled: Remove cpuidle_device::safe_state_index
  Xunlei Pang | 2015-08-28 | 1 file changed, +2/-2

  cpuidle_device::safe_state_index needs to be initialized before use,
  and it should be the same as cpuidle_driver::safe_state_index. Tackle
  this by removing safe_state_index from the cpuidle_device structure
  and using the one in the cpuidle_driver structure instead.

  Suggested-by: Daniel Lezcano <daniel.lezcano@linaro.org>
  Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* drivers: fix up obsolete cpu function usage.
  Rusty Russell | 2015-03-05 | 1 file changed, +3/-3

  Thanks to spatch, plus manual removal of "&*". Then a sweep for
  for_each_cpu_mask => for_each_cpu.

  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Cc: Jason Cooper <jason@lakedaemon.net>
  Cc: Chris Metcalf <cmetcalf@ezchip.com>
  Cc: netdev@vger.kernel.org
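  An illustrative before/after for that sweep (the variable names here
  are made up): the obsolete iterator took the cpumask by value, while
  the replacement takes a pointer, which is also where the stray "&*"
  removals came from:

      /* before */
      for_each_cpu_mask(cpu, coupled->coupled_cpus)
              cpumask_set_cpu(cpu, &cpus);

      /* after */
      for_each_cpu(cpu, &coupled->coupled_cpus)
              cpumask_set_cpu(cpu, &cpus);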
* arch: Mass conversion of smp_mb__*()
  Peter Zijlstra | 2014-04-18 | 1 file changed, +1/-1

  Mostly scripted conversion of the smp_mb__* barriers.

  Signed-off-by: Peter Zijlstra <peterz@infradead.org>
  Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Link: http://lkml.kernel.org/n/tip-55dhyhocezdw1dg7u19hmh1u@git.kernel.org
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: linux-arch@vger.kernel.org
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
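  The shape of that conversion, illustratively (the bit name and struct
  are made up): the operation-specific barrier names collapsed into the
  generic smp_mb__before_atomic() / smp_mb__after_atomic() pair:

      /* before */
      clear_bit(WAITING_BIT, &coupled->flags);
      smp_mb__after_clear_bit();

      /* after */
      clear_bit(WAITING_BIT, &coupled->flags);
      smp_mb__after_atomic();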
* smp: Rename __smp_call_function_single() to smp_call_function_single_async()
  Frederic Weisbecker | 2014-02-24 | 1 file changed, +1/-1

  The name __smp_call_function_single() doesn't tell much about the
  properties of this function, especially when compared to
  smp_call_function_single(). The comments above the implementation are
  also misleading.

  The main point of this function is actually not the ability to embed
  the csd in an object; that is a requirement which results from the
  purpose of this function, which is to raise an IPI asynchronously. As
  such it can be called with interrupts disabled. And this feature comes
  at the cost of the caller, who then needs to serialize the IPIs on
  this csd.

  Let's rename the function and enhance the comments so that they
  reflect these properties.

  Suggested-by: Christoph Hellwig <hch@infradead.org>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Christoph Hellwig <hch@infradead.org>
  Cc: Ingo Molnar <mingo@kernel.org>
  Cc: Jan Kara <jack@suse.cz>
  Cc: Jens Axboe <axboe@fb.com>
  Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  Signed-off-by: Jens Axboe <axboe@fb.com>
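  A hedged sketch of the asynchronous pattern the new name advertises: a
  csd embedded in an object, with the IPI raised without waiting. Struct
  and helper names here are made up; the type spelling follows current
  kernels (older trees used struct call_single_data directly):

      #include <linux/smp.h>

      struct poke {
              call_single_data_t csd;
              int payload;
      };

      static void poke_func(void *info)
      {
              struct poke *p = info; /* runs in IPI context on the target */
              (void)p->payload;
      }

      static void send_poke(struct poke *p, int cpu)
      {
              INIT_CSD(&p->csd, poke_func, p);
              /* Legal even with interrupts disabled; in exchange the
               * caller must not reuse this csd until the previous IPI
               * has completed. */
              smp_call_function_single_async(cpu, &p->csd);
      }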
* smp: Remove wait argument from __smp_call_function_single()
  Frederic Weisbecker | 2014-02-24 | 1 file changed, +1/-1

  The main point of calling __smp_call_function_single() is to send an
  IPI in a purely asynchronous way. By embedding a csd in an object, a
  caller can send the IPI without waiting for a previous one to
  complete, as is required by smp_call_function_single() for example. As
  such, sending this kind of IPI can be safe even when irqs are
  disabled. This flexibility comes at the expense of the caller, who
  then needs to synchronize the csd lifecycle himself and make sure that
  IPIs on a single csd are serialized.

  This is how __smp_call_function_single() works when wait = 0, and that
  usecase is relevant. Now there doesn't seem to be any usecase with
  wait = 1 that can't be covered by smp_call_function_single() instead,
  which is safer. Let's look at the two possible scenarios:

  1) The user calls __smp_call_function_single(wait = 1) on a csd
     embedded in an object. It looks like a nice and convenient pattern
     at first sight because we can then retrieve the object from the IPI
     handler easily. But actually it is a waste of memory space in the
     object, since the csd can be allocated from the stack by
     smp_call_function_single(wait = 1) and the object can be passed as
     the IPI argument. Besides that, embedding the csd in an object is
     more error prone because the caller must take care of the
     serialization of the IPIs for this csd.

  2) The user calls __smp_call_function_single(wait = 1) on a csd that
     is allocated on the stack. It's ok, but smp_call_function_single()
     can do it as well and it already takes care of the allocation on
     the stack. Again it's simpler and less error prone.

  Therefore, using the underscore-prepended API version with wait = 1 is
  a bad pattern and a sign that the caller can do something safer and
  simpler. There was a single user of that option, which has just been
  converted. So let's remove it to discourage further users.

  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Christoph Hellwig <hch@infradead.org>
  Cc: Ingo Molnar <mingo@kernel.org>
  Cc: Jan Kara <jack@suse.cz>
  Cc: Jens Axboe <axboe@fb.com>
  Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* cpuidle: Fix comments in cpuidle core
  Viresh Kumar | 2013-10-30 | 1 file changed, +1/-1

  Some comments in cpuidle core files contain trivial mistakes. This
  patch fixes them.

  Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: coupled: fix race condition between pokes and safe state
  Colin Cross | 2013-08-29 | 1 file changed, +14/-6

  The coupled cpuidle waiting loop clears pending pokes before entering
  the safe state. If a poke arrives just before the pokes are cleared,
  but after the while loop condition checks, the poke will be lost and
  the cpu will stay in the safe state until another interrupt arrives.
  This may cause the cpu that sent the poke to spin in the ready loop
  with interrupts off until another cpu receives an interrupt, and if no
  other cpus have interrupts routed to them it can spin forever.

  Change the return value of cpuidle_coupled_clear_pokes() to indicate
  whether a poke was cleared, and move the need_resched() checks into
  the callers. In the waiting loop, if a poke was cleared, restart the
  loop to repeat the while condition checks.

  Reported-by: Neil Zhang <zhangwm@marvell.com>
  Signed-off-by: Colin Cross <ccross@android.com>
  Cc: 3.6+ <stable@vger.kernel.org> # 3.6+
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
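  A hedged sketch of the repaired waiting-loop shape; the helper names
  follow the commit text, but the exact loop conditions in coupled.c
  differ:

      while (!cpuidle_coupled_cpus_waiting(coupled)) {
              /* A poke that raced in after the loop condition was
               * checked is caught here; restart the loop instead of
               * entering the safe state with the poke lost. */
              if (cpuidle_coupled_clear_pokes(dev->cpu))
                      continue;

              if (need_resched())
                      break; /* need_resched() handling now lives here,
                              * in the caller, not in clear_pokes() */

              entered_state = cpuidle_enter_state(dev, drv,
                                                  drv->safe_state_index);
      }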
* cpuidle: coupled: abort idle if pokes are pending
  Colin Cross | 2013-08-29 | 1 file changed, +82/-25

  Joseph Lo <josephl@nvidia.com> reported a lockup on Tegra20 caused by
  a race condition in coupled cpuidle. When two or more cpus enter idle
  at the same time, the first cpus to arrive may go to the ready loop
  without processing pending pokes from the last cpu to arrive.

  This patch adds a check for pending pokes once all cpus have been
  synchronized in the ready loop, and resets the coupled state and
  retries if any cpus failed to handle their pending poke.

  Retrying on all cpus may trigger the same issue again, so this patch
  also adds a check to ensure that each cpu has received at least one
  poke between when it enters the waiting loop and when it moves on to
  the ready loop.

  Reported-and-tested-by: Joseph Lo <josephl@nvidia.com>
  Tested-by: Stephen Warren <swarren@nvidia.com>
  Signed-off-by: Colin Cross <ccross@android.com>
  Cc: 3.6+ <stable@vger.kernel.org> # 3.6+
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: coupled: disable interrupts after entering safe state
  Colin Cross | 2013-08-29 | 1 file changed, +2/-0

  Calling cpuidle_enter_state() is expected to return with interrupts
  enabled, but interrupts must be disabled before starting the ready
  loop synchronization stage. Call local_irq_disable() after each call
  to cpuidle_enter_state() for the safe state.

  Tested-by: Stephen Warren <swarren@nvidia.com>
  Signed-off-by: Colin Cross <ccross@android.com>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
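  The two-line pattern this describes, as a sketch:

      entered_state = cpuidle_enter_state(dev, drv, drv->safe_state_index);
      /* cpuidle_enter_state() returned with interrupts enabled; the
       * ready loop synchronization that follows needs them off again. */
      local_irq_disable();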
* cpuidle / coupled: fix ready counter decrement
  Sivaram Nair | 2013-01-03 | 1 file changed, +1/-1

  The ready_waiting_counts atomic variable is compared against the wrong
  online cpu count. The latter is computed incorrectly using logical-OR
  instead of bit-OR. This patch fixes that.

  Signed-off-by: Sivaram Nair <sivaramn@nvidia.com>
  Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Acked-by: Colin Cross <ccross@android.com>
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
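  To illustrate the class of bug (this is not the actual kernel
  expression): logical-OR collapses a combined count to 0 or 1, while
  bit-OR preserves both packed fields:

      #include <stdio.h>

      int main(void)
      {
              unsigned int waiting = 2, ready = 3;

              unsigned int wrong = waiting || (ready << 16); /* == 1 */
              unsigned int right = waiting |  (ready << 16); /* == 0x30002 */

              printf("wrong=%#x right=%#x\n", wrong, right);
              return 0;
      }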
* cpuidle: Prevent null pointer dereference in cpuidle_coupled_cpu_notify
  Jon Medhurst (Tixy) | 2012-08-17 | 1 file changed, +1/-1

  When a kernel is built to support multiple hardware types, it's
  possible that CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is set but the
  hardware the kernel is run on doesn't support cpuidle and therefore
  doesn't load a driver for it. In this case, when the system is shut
  down, cpuidle_coupled_cpu_notify() gets called with cpuidle_devices
  set to NULL. There are quite possibly other circumstances where this
  situation can also occur, and we should check for it.

  Signed-off-by: Jon Medhurst <tixy@linaro.org>
  Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
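  A hedged sketch of the kind of guard this implies inside the notifier;
  the surrounding event handling is elided and the exact check in the
  commit may be placed differently:

      struct cpuidle_device *dev = per_cpu(cpuidle_devices, cpu);

      /* No cpuidle driver loaded on this hardware: nothing to do. */
      if (!dev || !dev->coupled)
              return NOTIFY_OK;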
* cpuidle: coupled: fix sleeping while atomic in cpu notifier
  Colin Cross | 2012-08-17 | 1 file changed, +12/-0

  The cpu hotplug notifier gets called in both atomic and non-atomic
  contexts, so it is not always safe to lock a mutex. Filter out all
  events except the six necessary ones, which are all sleepable, before
  taking the mutex.

  Signed-off-by: Colin Cross <ccross@android.com>
  Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
  Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
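  A hedged sketch of that filter in the old notifier API; the six events
  listed are my reading of the sleepable set, and the commit's actual
  list may differ:

      switch (action & ~CPU_TASKS_FROZEN) {
      case CPU_UP_PREPARE:
      case CPU_DOWN_PREPARE:
      case CPU_ONLINE:
      case CPU_DEAD:
      case CPU_UP_CANCELED:
      case CPU_DOWN_FAILED:
              break;            /* sleepable, safe to lock below */
      default:
              return NOTIFY_OK; /* possibly atomic context, don't lock */
      }

      mutex_lock(&cpuidle_lock);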
* cpuidle: coupled: add parallel barrier function
  Colin Cross | 2012-06-02 | 1 file changed, +37/-0

  Adds cpuidle_coupled_parallel_barrier, which can be used by coupled
  cpuidle state enter functions to handle resynchronization after
  determining if any cpu needs to abort. The normal use case will be:

      static bool abort_flag;
      static atomic_t abort_barrier;

      int arch_cpuidle_enter(struct cpuidle_device *dev, ...)
      {
              if (arch_turn_off_irq_controller()) {
                      /* returns an error if an irq is pending and would
                       * be lost if idle continued and turned off power */
                      abort_flag = true;
              }

              cpuidle_coupled_parallel_barrier(dev, &abort_barrier);

              if (abort_flag) {
                      /* One of the cpus didn't turn off its irq controller */
                      arch_turn_on_irq_controller();
                      return -EINTR;
              }

              /* continue with idle */
              ...
      }

  This will cause all cpus to abort idle together if one of them needs
  to abort.

  Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Reviewed-by: Kevin Hilman <khilman@ti.com>
  Tested-by: Kevin Hilman <khilman@ti.com>
  Signed-off-by: Colin Cross <ccross@android.com>
  Signed-off-by: Len Brown <len.brown@intel.com>
* cpuidle: add support for states that affect multiple cpus
  Colin Cross | 2012-06-02 | 1 file changed, +678/-0

  On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the cpus
  cannot be independently powered down, either due to sequencing
  restrictions (on Tegra 2, cpu 0 must be the last to power down) or due
  to HW bugs (on OMAP4460, a cpu powering up will corrupt the gic state
  unless the other cpu runs a work around).

  Each cpu has a power state that it can enter without coordinating with
  the other cpu (usually Wait For Interrupt, or WFI), and one or more
  "coupled" power states that affect blocks shared between the cpus (L2
  cache, interrupt controller, and sometimes the whole SoC). Entering a
  coupled power state must be tightly controlled on both cpus.

  The easiest solution to implementing coupled cpu power states is to
  hotplug all but one cpu whenever possible, usually using a cpufreq
  governor that looks at cpu load to determine when to enable the
  secondary cpus. This causes problems, as hotplug is an expensive
  operation, so the number of hotplug transitions must be minimized,
  leading to very slow response to loads, often on the order of seconds.

  This file implements an alternative solution, where each cpu will wait
  in the WFI state until all cpus are ready to enter a coupled state, at
  which point the coupled state function will be called on all cpus at
  approximately the same time.

  Once all cpus are ready to enter idle, they are woken by an smp cross
  call. At this point, there is a chance that one of the cpus will find
  work to do, and choose not to enter idle. A final pass is needed to
  guarantee that all cpus will call the power state enter function at
  the same time. During this pass, each cpu will increment the ready
  counter, and continue once the ready counter matches the number of
  online coupled cpus. If any cpu exits idle, the other cpus will
  decrement their counter and retry.

  To use coupled cpuidle states, a cpuidle driver must (see the sketch
  after this entry):

  - Set struct cpuidle_device.coupled_cpus to the mask of all coupled
    cpus, usually the same as cpu_possible_mask if all cpus are part of
    the same cluster. The coupled_cpus mask must be set in the struct
    cpuidle_device for each cpu.

  - Set struct cpuidle_device.safe_state to a state that is not a
    coupled state. This is usually WFI.

  - Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
    state that affects multiple cpus.

  - Provide a struct cpuidle_state.enter function for each state that
    affects multiple cpus. This function is guaranteed to be called on
    all cpus at approximately the same time. The driver should ensure
    that the cpus all abort together if any cpu tries to abort once the
    function is called.

  update1: cpuidle: coupled: fix count of online cpus

    online_count was never incremented on boot, and was also counting
    cpus that were not part of the coupled set. Fix both issues by
    introducing a new function that counts online coupled cpus, and call
    it from register as well as the hotplug notifier.

  update2: cpuidle: coupled: fix decrementing ready count

    cpuidle_coupled_set_not_ready sometimes refuses to decrement the
    ready count in order to prevent a race condition. This makes it
    unsuitable for use when finished with idle. Add a new function
    cpuidle_coupled_set_done that decrements both the ready count and
    waiting count, and call it after idle is complete.

  Cc: Amit Kucheria <amit.kucheria@linaro.org>
  Cc: Arjan van de Ven <arjan@linux.intel.com>
  Cc: Trinabh Gupta <g.trinabh@gmail.com>
  Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
  Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Reviewed-by: Kevin Hilman <khilman@ti.com>
  Tested-by: Kevin Hilman <khilman@ti.com>
  Signed-off-by: Colin Cross <ccross@android.com>
  Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
  Signed-off-by: Len Brown <len.brown@intel.com>
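  A hedged sketch of those driver-side steps in one place. The driver
  name, state names and indices, and the enter callback body are
  illustrative; note also that in current kernels the safe state index
  lives in struct cpuidle_driver (see the 2015 commits above), while
  this original message refers to the device-level field:

      #include <linux/init.h>
      #include <linux/cpuidle.h>
      #include <linux/cpumask.h>
      #include <linux/percpu.h>

      static DEFINE_PER_CPU(struct cpuidle_device, my_idle_dev);

      static int my_coupled_enter(struct cpuidle_device *dev,
                                  struct cpuidle_driver *drv, int index)
      {
              /* Called on all coupled cpus at approximately the same
               * time; must abort on all cpus together if any aborts. */
              return index;
      }

      static struct cpuidle_driver my_idle_driver = {
              .name = "my_coupled_idle",
              .states = {
                      /* per-cpu safe state; .enter, latencies, etc.
                       * elided in this sketch */
                      [0] = { .name = "WFI" },
                      [1] = {
                              .name  = "C2",
                              .flags = CPUIDLE_FLAG_COUPLED,
                              .enter = my_coupled_enter,
                      },
              },
              .state_count = 2,
              .safe_state_index = 0, /* must not be a coupled state */
      };

      static int __init my_idle_init(void)
      {
              int cpu, ret;

              ret = cpuidle_register_driver(&my_idle_driver);
              if (ret)
                      return ret;

              for_each_possible_cpu(cpu) {
                      struct cpuidle_device *dev = &per_cpu(my_idle_dev, cpu);

                      dev->cpu = cpu;
                      /* the same mask on every device in the cluster */
                      cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
                      cpuidle_register_device(dev);
              }
              return 0;
      }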