path: root/drivers/cpuidle/poll_state.c
Commit log (subject, author, date, files changed, -/+ lines)
* cpuidle: Fix poll_idle() noinstr annotation (Peter Zijlstra, 2023-01-31, 1 file, -2/+0)

    The instrumentation_begin()/end() annotations in poll_idle() were
    complete nonsense. Specifically, they caused tracing to happen in the
    middle of noinstr code, resulting in RCU splats.

    Now that local_clock() is noinstr, mark up the rest and let it rip.

    Fixes: 00717eb8c955 ("cpuidle: Annotate poll_idle()")
    Reported-by: kernel test robot <oliver.sang@intel.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Link: https://lore.kernel.org/oe-lkp/202301192148.58ece903-oliver.sang@intel.com
    Link: https://lore.kernel.org/r/20230126151323.819534689@infradead.org
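    A minimal sketch of the rule at stake, with hypothetical helpers
    (do_noinstr_work() and example_event() are illustrative, not kernel
    APIs): noinstr code may only call instrumentable code inside an
    explicit instrumentation window, and poll_idle() no longer needs such
    a window once local_clock() is noinstr itself.

        #include <linux/instrumentation.h>
        #include <linux/compiler_types.h>

        static noinstr void do_noinstr_work(void) { }   /* hypothetical noinstr callee */
        static void example_event(void) { }             /* hypothetical instrumentable call */

        static noinstr void example_poll(void)
        {
                do_noinstr_work();              /* fine: callee is also noinstr */

                instrumentation_begin();        /* open a window where tracing is legal */
                example_event();                /* instrumentable code belongs here only */
                instrumentation_end();
        }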
* cpuidle: Annotate poll_idle() (Peter Zijlstra, 2023-01-13, 1 file, -1/+5)

    The __cpuidle functions will become a noinstr class; as such, they
    need explicit annotations.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Tested-by: Tony Lindgren <tony@atomide.com>
    Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
    Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Acked-by: Frederic Weisbecker <frederic@kernel.org>
    Link: https://lore.kernel.org/r/20230112195540.312601331@infradead.org
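    A hedged sketch of the annotation itself; my_idle_wait() is a
    hypothetical helper. __cpuidle places the function in the dedicated
    cpuidle text section, and once that class is treated as noinstr,
    everything it calls must either be noinstr too or sit inside an
    instrumentation window as in the sketch above.

        #include <linux/cpu.h>          /* __cpuidle */
        #include <linux/sched.h>        /* need_resched() */
        #include <asm/processor.h>      /* cpu_relax() */

        static int __cpuidle my_idle_wait(void)         /* hypothetical helper */
        {
                while (!need_resched())
                        cpu_relax();                    /* low-power busy wait */
                return 0;
        }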
* cpuidle/poll: Ensure IRQs stay disabled after cpuidle_state::enter() calls (Peter Zijlstra, 2023-01-13, 1 file, -1/+3)

    Make cpuidle_state::enter() methods IRQ-state invariant on exit.

    Additionally, make sure to use raw_local_irq_*() methods, since this
    cpuidle callback will be called with RCU already disabled.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Tested-by: Tony Lindgren <tony@atomide.com>
    Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
    Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
    Link: https://lore.kernel.org/r/20230112195539.515253662@infradead.org
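    A simplified sketch of the resulting convention (not the exact diff):
    the callback is entered with IRQs disabled and must return with IRQs
    disabled, and the raw_ variants keep tracing out of a region where
    RCU is not watching. poll_enter_sketch() is an illustrative name.

        static int __cpuidle poll_enter_sketch(struct cpuidle_device *dev,
                                               struct cpuidle_driver *drv, int index)
        {
                raw_local_irq_enable();         /* poll with interrupts on, untraced */
                while (!need_resched())
                        cpu_relax();
                raw_local_irq_disable();        /* IRQ-state invariant: off on exit */
                return index;
        }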
* cpuidle: Drop disabled field from struct cpuidle_state (Rafael J. Wysocki, 2019-11-29, 1 file, -1/+0)

    After recent cpuidle updates, the "disabled" field in struct
    cpuidle_state is only used by two drivers (intel_idle and shmobile
    cpuidle) for marking unusable idle states, but that may as well be
    achieved with the help of a state flag. Therefore, define an
    "unusable" idle state flag, CPUIDLE_FLAG_UNUSABLE, make the drivers
    in question use it instead of the "disabled" field, and make the core
    set CPUIDLE_STATE_DISABLED_BY_DRIVER for the idle states with that
    flag set.

    After the above changes, the "disabled" field in struct cpuidle_state
    is not used any more, so drop it.

    No intentional functional impact.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
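    A hedged sketch of the two sides of the change; disable_deep_states()
    is a hypothetical helper, and the core-side translation is
    paraphrased rather than quoted from the kernel:

        #include <linux/cpuidle.h>

        static void disable_deep_states(struct cpuidle_driver *drv,
                                        struct cpuidle_device *dev)
        {
                int i;

                for (i = 1; i < drv->state_count; i++) {
                        /* Driver side: set a flag instead of ->disabled. */
                        drv->states[i].flags |= CPUIDLE_FLAG_UNUSABLE;

                        /* Core side (roughly): turn the flag into a disable bit. */
                        if (drv->states[i].flags & CPUIDLE_FLAG_UNUSABLE)
                                dev->states_usage[i].disable |=
                                        CPUIDLE_STATE_DISABLED_BY_DRIVER;
                }
        }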
* cpuidle: Use nanoseconds as the unit of time (Rafael J. Wysocki, 2019-11-11, 1 file, -0/+2)

    Currently, the cpuidle subsystem uses microseconds as the unit of
    time, which (among other things) causes the idle loop to incur some
    integer division overhead for no clear benefit.

    In order to allow cpuidle to measure time in nanoseconds, add two new
    fields, exit_latency_ns and target_residency_ns, to represent the
    exit latency and target residency of an idle state in nanoseconds,
    respectively, to struct cpuidle_state, and initialize them with the
    help of the corresponding values in microseconds provided by drivers.
    Additionally, change cpuidle_governor_latency_req() to return the
    idle state exit latency constraint in nanoseconds.

    Also measure idle state residency (last_residency_ns in struct
    cpuidle_device and time_ns in struct cpuidle_driver) in nanoseconds
    and update the cpuidle core and governors accordingly.

    However, the menu governor still computes typical intervals in
    microseconds to avoid integer overflows.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Doug Smythies <dsmythies@telus.net>
    Tested-by: Doug Smythies <dsmythies@telus.net>
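    A sketch of how the new nanosecond fields can be derived from the
    microsecond values drivers already provide; init_state_ns() is a
    hypothetical name for the initialization step described above:

        #include <linux/cpuidle.h>
        #include <linux/time64.h>       /* NSEC_PER_USEC */

        static void init_state_ns(struct cpuidle_state *s)
        {
                /* Drivers fill in exit_latency/target_residency in us. */
                s->exit_latency_ns = (u64)s->exit_latency * NSEC_PER_USEC;
                s->target_residency_ns = (u64)s->target_residency * NSEC_PER_USEC;
        }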
* cpuidle: add poll_limit_ns to cpuidle_device structure (Marcelo Tosatti, 2019-07-30, 1 file, -9/+2)

    Add a poll_limit_ns variable to the cpuidle_device structure.
    Calculate and configure it in the new cpuidle_poll_time() function,
    in case it is zero. Individual governors are allowed to override this
    value.

    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
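    A hedged sketch of the caching behavior described above;
    cpuidle_poll_time_sketch() is illustrative, not the exact kernel
    code. The limit is computed once (by whatever policy applies; the
    selection loop is sketched under the 2018-12 commit further down) and
    cached, and a governor that pre-sets poll_limit_ns wins, since any
    non-zero value is returned as-is:

        u64 cpuidle_poll_time_sketch(struct cpuidle_driver *drv,
                                     struct cpuidle_device *dev)
        {
                if (dev->poll_limit_ns)         /* cached, or governor-overridden */
                        return dev->poll_limit_ns;

                /* poll_limit_sketch() is the hypothetical policy helper
                 * sketched below for the 2018-12 commit. */
                dev->poll_limit_ns = poll_limit_sketch(drv, dev);
                return dev->poll_limit_ns;
        }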
* treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 428 (Thomas Gleixner, 2019-06-05, 1 file, -2/+1)

    Based on 1 normalized pattern(s):

      this file is released under the gplv2

    extracted by the scancode license scanner the SPDX license identifier

      GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 68 file(s).

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Armijn Hemel <armijn@tjaldur.nl>
    Reviewed-by: Allison Randal <allison@lohutok.net>
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190531190114.292346262@linutronix.de
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
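    After the change, the license is expressed by a single identifier
    line at the top of the file in place of the boilerplate comment:

        // SPDX-License-Identifier: GPL-2.0-only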
* cpuidle: poll_state: Fix default time limit (Doug Smythies, 2019-01-30, 1 file, -1/+1)

    The default time is declared in units of microseconds, but is used as
    nanoseconds, resulting in significant accounting errors for idle
    state 0 time when all idle states deeper than 0 are disabled.

    Under these unusual conditions, we don't really care about the poll
    time limit anyhow.

    Fixes: 800fb34a99ce ("cpuidle: poll_state: Disregard disable idle states")
    Signed-off-by: Doug Smythies <dsmythies@telus.net>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
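    A small sketch of why the units matter; poll_time_exceeded() is a
    hypothetical wrapper. local_clock() returns nanoseconds, so the
    default limit it is compared against must be in nanoseconds too:

        static bool poll_time_exceeded(u64 time_start)
        {
                u64 limit = TICK_NSEC;  /* the bug used TICK_USEC: 1000x too small */

                return local_clock() - time_start > limit;
        }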
* cpuidle: poll_state: Disregard disable idle states (Rafael J. Wysocki, 2018-12-11, 1 file, -1/+10)

    When computing the limit of time to spend in the loop in poll_idle(),
    use the target residency of the first enabled idle state deeper than
    state 0 instead of always using the target residency of state 1.

    This helps when state 1 is disabled for diagnostics, for instance.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
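    A hedged sketch of the selection logic described above;
    poll_limit_sketch() is a hypothetical name, and the disable check is
    paraphrased from the era's per-state disable bookkeeping:

        static u64 poll_limit_sketch(struct cpuidle_driver *drv,
                                     struct cpuidle_device *dev)
        {
                u64 limit = TICK_NSEC;  /* fallback if all deeper states are off */
                int i;

                for (i = 1; i < drv->state_count; i++) {
                        if (dev->states_usage[i].disable)
                                continue;       /* e.g. state 1 disabled for diagnostics */

                        /* First enabled state deeper than state 0. */
                        limit = (u64)drv->states[i].target_residency * NSEC_PER_USEC;
                        break;
                }
                return limit;
        }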
* cpuidle: poll_state: Revise loop termination condition (Rafael J. Wysocki, 2018-10-04, 1 file, -2/+2)

    If need_resched() returns "false", breaking out of the loop in
    poll_idle() will cause a new idle state to be selected, so in fact it
    usually doesn't make sense to spin in it longer than the target
    residency of the second state. [Note that the "polling" state is used
    only if there is at least one "real" state defined in addition to it,
    so the second state is always there.] On the other hand, breaking out
    of it early (say in case the next state is disabled) shouldn't hurt
    as it is polling anyway.

    For this reason, make the loop in poll_idle() break if the CPU has
    been spinning longer than the target residency of the second state
    (the "polling" state can only be state[0]).

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
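    A hedged one-line sketch of the revised bound; since the "polling"
    state is always states[0], the shallowest real state is states[1]:

        /* Spin no longer than it would take state 1 to pay for itself. */
        u64 limit = (u64)drv->states[1].target_residency * NSEC_PER_USEC;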
* cpuidle: menu: Fix wakeup statistics updates for polling state (Rafael J. Wysocki, 2018-10-04, 1 file, -1/+5)

    If the CPU exits the "polling" state due to the time limit in the
    loop in poll_idle(), this is not a real wakeup and it just means that
    the "polling" state selection was not adequate. The governor
    mispredicted short idle duration, but had a more suitable state been
    selected, the CPU might have spent more time in it. In fact, there is
    no reason to expect that there would have been a wakeup event earlier
    than the next timer in that case.

    Handling such cases as regular wakeups in menu_update() may cause the
    menu governor to make suboptimal decisions going forward, but
    ignoring them altogether would not be correct either, because every
    time menu_select() is invoked, it makes a separate new attempt to
    predict the idle duration, taking a distinct time to the closest
    timer event as input, and the outcomes of all those attempts should
    be recorded.

    For this reason, make menu_update() always assume that if the
    "polling" state was exited due to the time limit, the next proper
    wakeup event for the CPU would be the next timer event (not including
    the tick).

    Fixes: a37b969a61c1 ("cpuidle: poll_state: Add time limit to poll_idle()")
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org>
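    A hedged sketch of the update-path logic; menu_update_sketch() and
    the field names are approximations of the 2018-era menu governor
    (which still measured in microseconds), and poll_idle() is assumed to
    set dev->poll_time_limit when its time limit trips:

        static void menu_update_sketch(struct cpuidle_device *dev,
                                       struct menu_device *data)
        {
                unsigned int measured_us;

                if (dev->poll_time_limit) {
                        /*
                         * Not a real wakeup: polling merely timed out. Assume
                         * the CPU would have slept until the next timer event.
                         */
                        dev->poll_time_limit = false;
                        measured_us = data->next_timer_us;
                } else {
                        measured_us = dev->last_residency;      /* genuine wakeup */
                }

                /* ... feed measured_us into the governor's predictor ... */
        }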
* cpuidle: poll_state: Avoid invoking local_clock() too often (Rafael J. Wysocki, 2018-03-29, 1 file, -0/+6)

    Rik reports that he sees an increase in CPU use in one benchmark due
    to commit 612f1a22f067 "cpuidle: poll_state: Add time limit to
    poll_idle()" that caused poll_idle() to call local_clock() in every
    iteration of the loop. Utilization increase generally means more
    non-idle time with respect to total CPU time (on the average) which
    implies reduced CPU frequency.

    Doug reports that limiting the rate of local_clock() invocations in
    there causes much less power to be drawn during a CPU-intensive
    parallel workload (with idle states 1 and 2 disabled to enforce more
    state 0 residency).

    These two reports together suggest that executing local_clock() on
    multiple CPUs in parallel at a high rate may cause chips to get hot
    and trigger thermal/power limits on them to kick in, so reduce the
    rate of local_clock() invocations in poll_idle() to avoid that issue.

    Fixes: 612f1a22f067 ("cpuidle: poll_state: Add time limit to poll_idle()")
    Reported-by: Rik van Riel <riel@surriel.com>
    Reported-by: Doug Smythies <dsmythies@telus.net>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Tested-by: Rik van Riel <riel@surriel.com>
    Reviewed-by: Rik van Riel <riel@surriel.com>
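    A sketch of the rate limiting, simplified from the shape of the loop
    in poll_idle(); the check interval is illustrative, and
    poll_loop_sketch() is a hypothetical wrapper. Only every
    POLL_IDLE_RELAX_COUNT-th iteration pays for a local_clock() read:

        #define POLL_IDLE_RELAX_COUNT   200     /* illustrative check interval */

        static void poll_loop_sketch(u64 time_start, u64 limit)
        {
                unsigned int loop_count = 0;

                while (!need_resched()) {
                        cpu_relax();
                        if (loop_count++ < POLL_IDLE_RELAX_COUNT)
                                continue;       /* skip the expensive clock read */

                        loop_count = 0;
                        if (local_clock() - time_start > limit)
                                break;          /* time limit reached */
                }
        }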
* cpuidle: poll_state: Add time limit to poll_idle() (Rafael J. Wysocki, 2018-03-29, 1 file, -1/+10)

    If poll_idle() is allowed to spin until need_resched() returns
    'true', it may actually spin for a much longer time than expected by
    the idle governor, since set_tsk_need_resched() is not always called
    by the timer interrupt handler. If that happens, the CPU may spend
    much more time than anticipated in the "polling" state.

    To prevent that from happening, limit the time of the spinning loop
    in poll_idle().

    Suggested-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Tested-by: Doug Smythies <dsmythies@telus.net>
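    A simplified sketch of the bounded loop this commit introduces; the
    limit value and the name poll_idle_sketch() are illustrative (the
    kernel uses a fraction of the tick period):

        #define POLL_IDLE_TIME_LIMIT    (TICK_NSEC / 16)        /* illustrative */

        static int __cpuidle poll_idle_sketch(struct cpuidle_device *dev,
                                              struct cpuidle_driver *drv, int index)
        {
                u64 time_start = local_clock();

                local_irq_enable();
                while (!need_resched()) {
                        cpu_relax();
                        if (local_clock() - time_start > POLL_IDLE_TIME_LIMIT)
                                break;          /* governor will select again */
                }
                return index;
        }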
* cpuidle: Make drivers initialize polling state (Rafael J. Wysocki, 2017-08-30, 1 file, -1/+2)

    Make the drivers that want to include the polling state into their
    states table initialize it explicitly and drop the initialization of
    it (which in fact is conditional, but that is not obvious from the
    code) from the core.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Tested-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
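    A hedged sketch of the driver side after this change; my_driver and
    my_idle_init() are hypothetical, and cpuidle_poll_state_init() is the
    helper (from poll_state.c) that fills in states[0] explicitly:

        #include <linux/cpuidle.h>

        static struct cpuidle_driver my_driver = {      /* hypothetical driver */
                .name = "my_idle",
                .state_count = 2,       /* states[0] = POLL, states[1] = real */
        };

        static int __init my_idle_init(void)
        {
                cpuidle_poll_state_init(&my_driver);    /* explicit, no core magic */
                /* ... fill in my_driver.states[1] here ... */
                return cpuidle_register_driver(&my_driver);
        }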
* cpuidle: Move polling state initialization code to separate file (Rafael J. Wysocki, 2017-08-30, 1 file, -0/+36)

    Move the polling state initialization code to a separate file built
    conditionally on CONFIG_ARCH_HAS_CPU_RELAX to get rid of the #ifdef
    in driver.c.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Tested-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
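    A hedged sketch of what the new file provides, simplified from the
    original poll_state.c (the real poll_idle() also manages the TIF
    polling flag via current_set_polling_and_test()); because the file is
    only built when CONFIG_ARCH_HAS_CPU_RELAX is set, it carries no
    #ifdef of its own:

        #include <linux/cpuidle.h>
        #include <linux/sched.h>
        #include <linux/sched/clock.h>
        #include <linux/sched/idle.h>

        static int __cpuidle poll_idle(struct cpuidle_device *dev,
                                       struct cpuidle_driver *drv, int index)
        {
                local_irq_enable();
                while (!need_resched())
                        cpu_relax();
                return index;
        }

        void cpuidle_poll_state_init(struct cpuidle_driver *drv)
        {
                struct cpuidle_state *state = &drv->states[0];

                snprintf(state->name, CPUIDLE_NAME_LEN, "POLL");
                snprintf(state->desc, CPUIDLE_DESC_LEN, "CPUIDLE CORE POLL IDLE");
                state->exit_latency = 0;
                state->target_residency = 0;
                state->power_usage = -1;
                state->enter = poll_idle;
                state->flags = CPUIDLE_FLAG_POLLING;
        }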