path: drivers/cpufreq
Commit log (each entry: subject, author, date, files changed, lines removed/added)
* cpufreq: move policy kobj to policy->cpu at resume (Viresh Kumar, 2014-07-17; 1 file, -2/+4)

This is only relevant to implementations with multiple clusters, where clusters have separate clock lines but all CPUs within a cluster share it.

Consider a dual-cluster platform with 2 cores per cluster. During suspend we start hot-unplugging CPUs in order 1 to 3. When CPU2 is removed, policy->kobj is moved to CPU3, and when CPU3 goes down we don't free the policy or its kobj, as we want to retain permissions/values/etc.

Now on resume, we get CPU2 before CPU3 and call __cpufreq_add_dev(). We recover the old policy and update policy->cpu from 3 to 2 in update_policy_cpu(). But the kobj is still tied to CPU3 and isn't moved to CPU2. We don't create a link for CPU2, but try to do so for CPU3 while bringing it online, which reports errors because CPU3 already has a kobj assigned to it.

This bug was introduced by commit 42f921a, which overlooked this scenario. To fix it, let's move the kobj to the new policy->cpu while bringing the first CPU of a cluster back up. Also do a WARN_ON() if kobject_move() fails, as we only reach here for the first CPU of a non-boot cluster and cannot recover from a kobject_move() failure.

Fixes: 42f921a6f10c (cpufreq: remove sysfs files for CPUs which failed to come back after resume)
Cc: 3.13+ <stable@vger.kernel.org> # 3.13+
Reported-and-tested-by: Bu Yitian <ybu@qti.qualcomm.com>
Reported-by: Saravana Kannan <skannan@codeaurora.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa@mit.edu>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

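A minimal sketch of the kind of re-parenting described above, assuming only the commit text; get_cpu_device(), kobject_move() and policy->kobj are real kernel APIs/fields, while the helper name and call site below are illustrative, not the actual patch:

    /* Illustrative only: re-parent the policy kobject to the CPU that comes
     * back first in its cluster, so that later sysfs link creation for the
     * sibling CPUs does not collide with a stale kobject parent. */
    static void example_move_policy_kobj(struct cpufreq_policy *policy,
                                         unsigned int new_cpu)
    {
            struct device *cpu_dev = get_cpu_device(new_cpu);

            /* Only the first CPU of a non-boot cluster gets here, and a
             * failure cannot be recovered from, hence the loud warning. */
            WARN_ON(kobject_move(&policy->kobj, &cpu_dev->kobj));
    }
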
* cpufreq: cpu0: OPPs can be populated at runtime (Viresh Kumar, 2014-07-16; 1 file, -5/+2)

OPPs can be populated statically, via DT, or added at run time with dev_pm_opp_add(). While this driver handles the first case correctly, it fails to populate OPPs added at runtime, because the call to of_init_opp_table() fails when there are no OPPs in DT and probe returns early.

To fix this, remove the error checking and call dev_pm_opp_init_cpufreq_table() unconditionally. Update the bindings as well.

Suggested-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

* cpufreq: kirkwood: Reinstate cpufreq driver for ARCH_KIRKWOOD (Quentin Armitage, 2014-07-16; 1 file, -1/+1)

Commit ff1f0018cf66080d8e6f59791e552615648a033a ("drivers: Enable building of Kirkwood drivers for mach-mvebu") added Kirkwood into mach-mvebu, adding MACH_KIRKWOOD to ARCH_KIRKWOOD in the Kconfig files. The change for ARM_KIRKWOOD_CPUFREQ replaced ARCH_KIRKWOOD with MACH_KIRKWOOD, whereas all the other changes were ARCH_KIRKWOOD || MACH_KIRKWOOD.

As a consequence of this change, the cpufreq driver is no longer enabled for ARCH_KIRKWOOD. This patch reinstates ARM_KIRKWOOD_CPUFREQ for ARCH_KIRKWOOD.

Signed-off-by: Quentin Armitage <quentin@armitage.org.uk>
Acked-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

* cpufreq: imx6q: Select PM_OPP (Nicolas Del Piano, 2014-07-16; 1 file, -0/+1)

PM_OPP is a library used by several of the existing cpufreq drivers. The ARM IMX6Q cpufreq driver uses this library for its functionality. Thus, it should be selected in Kconfig.

Reported-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Signed-off-by: Nicolas Del Piano <ndel314@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

* cpufreq: sa1110: set memory type for h3600 (Linus Walleij, 2014-07-16; 1 file, -1/+1)

The Compaq iPAQ h3600 also has the K4S281632b-1H memory type. Verified by prying apart a broken board.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

* cpufreq: Makefile: fix compilation for davinci platform (Prabhakar Lad, 2014-07-09; 1 file, -1/+1)

Commit 8a7b1227e303 (cpufreq: davinci: move cpufreq driver to drivers/cpufreq) added a dependency only on CONFIG_ARCH_DAVINCI_DA850, whereas the davinci_cpufreq_init() call is used by all davinci platforms. This patch fixes the following build error:

    arch/arm/mach-davinci/built-in.o: In function `davinci_init_late':
    :(.init.text+0x928): undefined reference to `davinci_cpufreq_init'
    make: *** [vmlinux] Error 1

Fixes: 8a7b1227e303 (cpufreq: davinci: move cpufreq driver to drivers/cpufreq)
Signed-off-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 3.10+ <stable@vger.kernel.org> # 3.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

* intel_pstate: Set CPU number before accessing MSRs (Vincent Minet, 2014-07-07; 1 file, -2/+1)

Ensure that cpu->cpu is set before writing MSR_IA32_PERF_CTL during CPU initialization. Otherwise only cpu0 has its P-state set and all other cores are left with their values unchanged.

In most cases, this is not too serious because the P-states will be set correctly when the timer function is run. But when the default governor is set to performance, the per-CPU current_pstate stays the same forever and no attempts are made to write the MSRs again.

Signed-off-by: Vincent Minet <vincent@vincent-minet.net>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

* intel_pstate: don't touch turbo bit if turbo disabled or unavailable. (Dirk Brandewie, 2014-07-07; 1 file, -6/+16)

If turbo is disabled in the BIOS, bit 38 should be set in the MSR_IA32_MISC_ENABLE register per section 14.3.2.1 of the SDM Vol. 3, document 325384-050US, Feb 2014. If this bit is set, do *not* attempt to disable turbo via the MSR_IA32_PERF_CTL register. On some systems, trying to disable turbo via MSR_IA32_PERF_CTL will cause subsequent writes to MSR_IA32_PERF_CTL not to take effect; in fact, reading MSR_IA32_PERF_CTL will not show the IDA/Turbo DISENGAGE bit (32) as set. A write of bit 32 to zero returns to normal operation.

Also deal with the case where the processor does not support turbo and the BIOS does not report the fact in MSR_IA32_MISC_ENABLE, but does report the max and turbo P-states as the same value.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=64251
Cc: 3.13+ <stable@vger.kernel.org> # 3.13+
Signed-off-by: Dirk Brandewie <dirk.j.brandewie@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

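A rough illustration of the MSR check described above (not the driver's actual code; the helper name is made up, while MSR_IA32_MISC_ENABLE, MSR_IA32_PERF_CTL and rdmsrl() are real):

    #include <asm/msr.h>

    /* Illustrative only: if the BIOS disabled turbo, bit 38 of
     * MSR_IA32_MISC_ENABLE is set and MSR_IA32_PERF_CTL bit 32
     * (IDA/Turbo DISENGAGE) must be left alone. */
    static bool example_turbo_disabled_by_bios(void)
    {
            u64 misc_en;

            rdmsrl(MSR_IA32_MISC_ENABLE, misc_en);
            return misc_en & (1ULL << 38);
    }
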
* intel_pstate: Fix setting VID (Dirk Brandewie, 2014-07-07; 1 file, -5/+5)

Commit 21855ff5 (intel_pstate: Set turbo VID for BayTrail) introduced setting the turbo VID, which is required to prevent a machine check on some Baytrail SKUs under heavy graphics-based workloads. The documentation update that brought the requirement to light also changed the bit mask used for enumerating P-state and VID values from 0x7f to 0x3f.

This change returns the mask value to 0x7f.

Tested with the Intel NUC DN2820FYK, BIOS version FYBYT10H.86A.0034.2014.0513.1413, with v3.16-rc1 and v3.14.8 kernel versions.

Fixes: 21855ff5 (intel_pstate: Set turbo VID for BayTrail)
Link: https://bugzilla.kernel.org/show_bug.cgi?id=77951
Reported-and-tested-by: Rune Reterson <rune@megahurts.dk>
Reported-and-tested-by: Eric Eickmeyer <erich@ericheickmeyer.com>
Cc: 3.13+ <stable@vger.kernel.org> # 3.13+
Signed-off-by: Dirk Brandewie <dirk.j.brandewie@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

* cpufreq: unlock when failing cpufreq_update_policy() (Aaron Plattner, 2014-06-18; 1 file, -6/+4)

Commit bd0fa9bb455d introduced a failure path to cpufreq_update_policy() if cpufreq_driver->get(cpu) returns NULL. However, it jumps to the 'no_policy' label, which exits without unlocking any of the locks the function acquired earlier. This causes later calls into cpufreq to hang.

Fix this by creating a new 'unlock' label and jumping to that instead.

Fixes: bd0fa9bb455d ("cpufreq: Return error if ->get() failed in cpufreq_update_policy()")
Link: https://devtalk.nvidia.com/default/topic/751903/kernel-3-15-and-nv-drivers-337-340-failed-to-initialize-the-nvidia-kernel-module-gtx-550-ti-/
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Cc: 3.15+ <stable@vger.kernel.org> # 3.15+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

* intel_pstate: Correct rounding in busy calculation (Doug Smythies, 2014-06-17; 1 file, -4/+1)

There was a mistake in the actual rounding portion of the previous patch f0fe3cd7e12d (intel_pstate: Correct rounding in busy calculation), such that the rounding was asymmetric and incorrect.

Severity: not very serious, but it can increase the target P-state by one extra value. For real-world workflows the issue should self-correct (but I have no proof). It is the equivalent of different PID gains for positive and negative numbers.

Examples:
-3.000000 used to round to -4, rounds to -3 with this patch.
-3.503906 used to round to -5, rounds to -4 with this patch.

Fixes: f0fe3cd7e12d (intel_pstate: Correct rounding in busy calculation)
Signed-off-by: Doug Smythies <dsmythies@telus.net>
Cc: 3.14+ <stable@vger.kernel.org> # 3.14+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

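A small user-space illustration of the rounding behaviour (not the driver code); intel_pstate keeps values in 8.8 fixed point, and adding half before an arithmetic right shift reproduces the "rounds to" examples above:

    #include <stdio.h>
    #include <stdint.h>

    #define FRAC_BITS 8
    #define HALF      (1 << (FRAC_BITS - 1))

    /* Arithmetic right shift floors, so adding half before shifting rounds
     * to nearest for negative fixed-point values as well (gcc shifts signed
     * values arithmetically, as the kernel assumes). */
    static int32_t fp_round(int32_t fp)
    {
            return (fp + HALF) >> FRAC_BITS;
    }

    int main(void)
    {
            printf("%d\n", fp_round(-3 * 256));  /* -3.000000 -> -3 */
            printf("%d\n", fp_round(-897));      /* -3.503906 -> -4 */
            printf("%d\n", fp_round(897));       /*  3.503906 ->  4 */
            return 0;
    }
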
* cpufreq: cpufreq-cpu0: fix CPU_THERMAL dependency (Arnd Bergmann, 2014-06-16; 1 file, -0/+2)

5fbfbcd3e842d ("cpufreq: cpufreq-cpu0: remove dependency on THERMAL and REGULATOR") was a little too quick in completely removing the dependency on the THERMAL driver. The problem is that while there are inline wrappers to turn the thermal API calls into empty functions, those do not help if the cpu-thermal driver is a loadable module and cpufreq-cpu0 is builtin.

Since CONFIG_CPU_THERMAL is a bool option that decides whether the cpu code is built into the thermal module or not, we have to use a dependency on the thermal driver itself. However, if CPU_THERMAL is disabled, we don't need the dependency, hence the strange '!CPU_THERMAL || THERMAL' construct.

Fixes: 5fbfbcd3e842d ("cpufreq: cpufreq-cpu0: remove dependency on THERMAL and REGULATOR")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

* Merge branch 'pm-cpufreq' (Rafael J. Wysocki, 2014-06-12; 9 files, -59/+204)

* pm-cpufreq:
  cpufreq: cpufreq-cpu0: remove dependency on THERMAL and REGULATOR
  cpufreq: tegra: update comment for clarity
  cpufreq: intel_pstate: Remove duplicate CPU ID check
  cpufreq: Mark CPU0 driver with CPUFREQ_NEED_INITIAL_FREQ_CHECK flag
  cpufreq: governor: remove copy_prev_load from 'struct cpu_dbs_common_info'
  cpufreq: governor: Be friendly towards latency-sensitive bursty workloads
  cpufreq: ppc-corenet-cpu-freq: do_div use quotient
  Revert "cpufreq: Enable big.LITTLE cpufreq driver on arm64"
  cpufreq: Tegra: implement intermediate frequency callbacks
  cpufreq: add support for intermediate (stable) frequencies

  * cpufreq: cpufreq-cpu0: remove dependency on THERMAL and REGULATOR (Viresh Kumar, 2014-06-10; 1 file, -1/+1)

cpufreq-cpu0 uses the thermal framework to register a cooling device, but doesn't depend on it, as there are dummy calls provided by the thermal layer when CONFIG_THERMAL=n. And when these calls fail, the driver is still usable.

A similar explanation is valid for regulators as well. We do have dummy calls available for regulator APIs and the driver can work even when those calls fail.

So, we don't really need to mention thermal and regulators as a dependency for cpufreq-cpu0 in Kconfig, as platforms without support for thermal/regulator can also use this driver. Remove this dependency.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

  * cpufreq: tegra: update comment for clarity (Viresh Kumar, 2014-06-10; 1 file, -3/+6)

Tegra's driver got updated a bit (00917dd cpufreq: Tegra: implement intermediate frequency callbacks) and implements the new 'intermediate freq' infrastructure of the core. The above commit updated comments about when to call clk_prepare_enable(pll_x_clk), and Doug wasn't satisfied with those comments and said this:

> The "Though when target-freq is intermediate freq, we don't need to
> take this reference." makes me think that this function is actually
> called when target-freq is intermediate freq.  I don't think it is,
> right?

For better clarity, just make that comment more explicit about when we call tegra_target_intermediate().

Reviewed-by: Stephen Warren <swarren@nvidia.com>
Reported-and-reviewed-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

  * cpufreq: intel_pstate: Remove duplicate CPU ID check (Stratos Karafotis, 2014-06-10; 1 file, -6/+0)

We check the CPU ID during driver init. There is no need to do it again per logical CPU initialization. So, remove the duplicate check.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Dirk Brandewie <dirk.j.brandewie@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

  * cpufreq: Mark CPU0 driver with CPUFREQ_NEED_INITIAL_FREQ_CHECK flag (Viresh Kumar, 2014-06-09; 1 file, -1/+1)

Sometimes boot loaders set the CPU frequency to a value outside of the frequency table present with the cpufreq core. In such cases the CPU might be unstable if it has to run on that frequency for a long duration of time, so it is better to set it to a frequency which is specified in the frequency table. Sachin recently found this problem with the cpufreq-cpu0 driver when he was testing it for Exynos.

Set this flag for the cpufreq-cpu0 driver.

Reported-and-tested-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

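The flag itself is simply OR-ed into the driver's .flags; a minimal sketch, where everything apart from CPUFREQ_NEED_INITIAL_FREQ_CHECK (the other flag, callback names and struct name) is illustrative rather than copied from the cpufreq-cpu0 driver:

    static struct cpufreq_driver cpu0_cpufreq_driver = {
            /* Ask the core to sanity-check (and correct) the boot-time
             * frequency against the frequency table during init. */
            .flags        = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
            .verify       = cpufreq_generic_frequency_table_verify,
            .target_index = cpu0_set_target,
            /* ... */
    };
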
  * cpufreq: governor: remove copy_prev_load from 'struct cpu_dbs_common_info' (Viresh Kumar, 2014-06-09; 2 files, -9/+19)

'copy_prev_load' was recently added by commit 18b46ab (cpufreq: governor: Be friendly towards latency-sensitive bursty workloads). It is actually a bit redundant, as we also have 'prev_load', which can store any integer value and can be used instead of 'copy_prev_load' by setting it to zero.

True load can also turn out to be zero during long idle intervals (and hence the actual value of 'prev_load' and the overloaded value can clash). However this is not a problem because, if the true load was really zero in the previous interval, it makes sense to evaluate the load afresh for the current interval rather than copying the previous load.

So, drop 'copy_prev_load' and use 'prev_load' instead. Update comments as well to make it more clear.

There is another change here which was probably missed by Srivatsa during the last version of updates he made: the unlikely in the 'if' statement was covering only half of the condition and the whole line should actually come under it.

Also checkpatch is made more silent, as it was reporting this (--strict option):

    CHECK: Alignment should match open parenthesis
    +       if (unlikely(wall_time > (2 * sampling_rate) &&
    +                    j_cdbs->prev_load)) {

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

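A simplified sketch of the resulting load-estimation logic (field and variable names follow the commit text above; this is not the exact governor code):

    static unsigned int example_dbs_load(struct cpu_dbs_common_info *j_cdbs,
                                         unsigned int wall_time,
                                         unsigned int idle_time,
                                         unsigned int sampling_rate)
    {
            unsigned int load;

            if (unlikely(wall_time > (2 * sampling_rate) &&
                         j_cdbs->prev_load)) {
                    /* First sample after a long idle stretch: reuse the load
                     * seen before going idle, but only once, so consecutive
                     * idle windows are evaluated afresh. */
                    load = j_cdbs->prev_load;
                    j_cdbs->prev_load = 0;
            } else {
                    load = 100 * (wall_time - idle_time) / wall_time;
                    j_cdbs->prev_load = load;
            }

            return load;
    }
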
  * cpufreq: governor: Be friendly towards latency-sensitive bursty workloads (Srivatsa S. Bhat, 2014-06-07; 2 files, -3/+61)

Cpufreq governors like the ondemand governor calculate the load on the CPU periodically by employing deferrable timers. A deferrable timer won't fire if the CPU is completely idle (and there are no other timers to be run), in order to avoid unnecessary wakeups and thus save CPU power.

However, the load calculation logic is agnostic to all this, and this can lead to the problem described below.

    Time (ms)   CPU 1
    100         Task-A running
    110         Governor's timer fires, finds load as 100% in the last
                10ms interval and increases the CPU frequency.
    110.5       Task-A running
    120         Governor's timer fires, finds load as 100% in the last
                10ms interval and increases the CPU frequency.
    125         Task-A went to sleep. With nothing else to do, CPU 1
                went completely idle.
    200         Task-A woke up and started running again.
    200.5       Governor's deferred timer (which was originally programmed
                to fire at time 130) fires now. It calculates load for the
                time period 120 to 200.5, and finds the load is almost zero.
                Hence it decreases the CPU frequency to the minimum.
    210         Governor's timer fires, finds load as 100% in the last
                10ms interval and increases the CPU frequency.

So, after the workload woke up and started running, the frequency was suddenly dropped to absolute minimum, and after that, there was an unnecessary delay of 10ms (sampling period) to increase the CPU frequency back to a reasonable value. And this pattern repeats for every wake-up-from-cpu-idle for that workload. This can be quite undesirable for latency- or response-time sensitive bursty workloads. So we need to fix the governor's logic to detect such wake-up-from-cpu-idle scenarios and start the workload at a reasonably high CPU frequency.

One extreme solution would be to fake a load of 100% in such scenarios. But that might lead to undesirable side-effects such as frequency spikes (which might also need voltage changes) especially if the previous frequency happened to be very low.

We just want to avoid the stupidity of dropping down the frequency to a minimum and then enduring a needless (and long) delay before ramping it up back again. So, let us simply carry forward the previous load - that is, let us just pretend that the 'load' for the current time-window is the same as the load for the previous window. That way, the frequency and voltage will continue to be set to whatever values they were set at previously. This means that bursty workloads will get a chance to influence the CPU frequency at which they wake up from cpu-idle, based on their past execution history. Thus, they might be able to avoid suffering from slow wakeups and long response-times.

However, we should take care not to over-do this. For example, such a "copy previous load" logic will benefit cases like this: (where # represents busy and . represents idle)

    ##########.........#########.........###########...........##########........

but it will be detrimental in cases like the one shown below, because it will retain the high frequency (copied from the previous interval) even in a mostly idle system:

    ##########.........#.................#.....................#...............

(i.e., the workload finished and the remaining tasks are such that their busy periods are smaller than the sampling interval, which causes the timer to always get deferred. So, this will make the copy-previous-load logic copy the initial high load to subsequent idle periods over and over again, thus keeping the frequency high unnecessarily).

So, we modify this copy-previous-load logic such that it is used only once upon every wakeup-from-idle. Thus if we have 2 consecutive idle periods, the previous load won't get blindly copied over; cpufreq will freshly evaluate the load in the second idle interval, thus ensuring that the system comes back to its normal state.

[ The right way to solve this whole problem is to teach the CPU frequency governors to also track load on a per-task basis, not just a per-CPU basis, and then use both the data sources intelligently to set the appropriate frequency on the CPUs. But that involves redesigning the cpufreq subsystem, so this patch should make the situation bearable until then. ]

Experimental results:
+-------------------+

I ran a modified version of ebizzy (called 'sleeping-ebizzy') that sleeps in between its execution such that its total utilization can be a user-defined value, say 10% or 20% (higher the utilization specified, lesser the amount of sleeps injected). This ebizzy was run with a single-thread, tied to CPU 8.

Behavior observed with tracing (sample taken from 40% utilization runs):
------------------------------------------------------------------------

Without patch:
~~~~~~~~~~~~~~
    kworker/8:2-12137  416.335742: cpu_frequency: state=2061000 cpu_id=8
    kworker/8:2-12137  416.335744: sched_switch: prev_comm=kworker/8:2 ==> next_comm=ebizzy
    <...>-40753        416.345741: sched_switch: prev_comm=ebizzy ==> next_comm=kworker/8:2
    kworker/8:2-12137  416.345744: cpu_frequency: state=4123000 cpu_id=8
    kworker/8:2-12137  416.345746: sched_switch: prev_comm=kworker/8:2 ==> next_comm=ebizzy
    <...>-40753        416.355738: sched_switch: prev_comm=ebizzy ==> next_comm=kworker/8:2
    <snip>  ---------------------------------------------------------------------  <snip>
    <...>-40753        416.402202: sched_switch: prev_comm=ebizzy ==> next_comm=swapper/8
    <idle>-0           416.502130: sched_switch: prev_comm=swapper/8 ==> next_comm=ebizzy
    <...>-40753        416.505738: sched_switch: prev_comm=ebizzy ==> next_comm=kworker/8:2
    kworker/8:2-12137  416.505739: cpu_frequency: state=2061000 cpu_id=8
    kworker/8:2-12137  416.505741: sched_switch: prev_comm=kworker/8:2 ==> next_comm=ebizzy
    <...>-40753        416.515739: sched_switch: prev_comm=ebizzy ==> next_comm=kworker/8:2
    kworker/8:2-12137  416.515742: cpu_frequency: state=4123000 cpu_id=8
    kworker/8:2-12137  416.515744: sched_switch: prev_comm=kworker/8:2 ==> next_comm=ebizzy

Observation: Ebizzy went idle at 416.402202, and started running again at 416.502130. But cpufreq noticed the long idle period, and dropped the frequency at 416.505739, only to increase it back again at 416.515742, realizing that the workload is in-fact CPU bound. Thus ebizzy needlessly ran at the lowest frequency for almost 13 milliseconds (almost 1 full sample period), and this pattern repeats on every sleep-wakeup. This could hurt latency-sensitive workloads quite a lot.

With patch:
~~~~~~~~~~~
    kworker/8:2-29802  464.832535: cpu_frequency: state=2061000 cpu_id=8
    <snip>  ---------------------------------------------------------------------  <snip>
    kworker/8:2-29802  464.962538: sched_switch: prev_comm=kworker/8:2 ==> next_comm=ebizzy
    <...>-40738        464.972533: sched_switch: prev_comm=ebizzy ==> next_comm=kworker/8:2
    kworker/8:2-29802  464.972536: cpu_frequency: state=4123000 cpu_id=8
    kworker/8:2-29802  464.972538: sched_switch: prev_comm=kworker/8:2 ==> next_comm=ebizzy
    <...>-40738        464.982531: sched_switch: prev_comm=ebizzy ==> next_comm=kworker/8:2
    <snip>  ---------------------------------------------------------------------  <snip>
    kworker/8:2-29802  465.022533: sched_switch: prev_comm=kworker/8:2 ==> next_comm=ebizzy
    <...>-40738        465.032531: sched_switch: prev_comm=ebizzy ==> next_comm=kworker/8:2
    kworker/8:2-29802  465.032532: sched_switch: prev_comm=kworker/8:2 ==> next_comm=ebizzy
    <...>-40738        465.035797: sched_switch: prev_comm=ebizzy ==> next_comm=swapper/8
    <idle>-0           465.240178: sched_switch: prev_comm=swapper/8 ==> next_comm=ebizzy
    <...>-40738        465.242533: sched_switch: prev_comm=ebizzy ==> next_comm=kworker/8:2
    kworker/8:2-29802  465.242535: sched_switch: prev_comm=kworker/8:2 ==> next_comm=ebizzy
    <...>-40738        465.252531: sched_switch: prev_comm=ebizzy ==> next_comm=kworker/8:2

Observation: Ebizzy went idle at 465.035797, and started running again at 465.240178. Since ebizzy was the only real workload running on this CPU, cpufreq retained the frequency at 4.1Ghz throughout the run of ebizzy, no matter how many times ebizzy slept and woke-up in-between. Thus, ebizzy got the 10ms worth of 4.1 Ghz benefit during every sleep-wakeup (as compared to the run without the patch) and this boost gave a modest improvement in total throughput, as shown below.

Sleeping-ebizzy records-per-second:
-----------------------------------
    Utilization   Without patch   With patch   Difference (Absolute and % values)
    10%           274767          277046       +  2279 (+0.829%)
    20%           543429          553484       + 10055 (+1.850%)
    40%           1090744         1107959      + 17215 (+1.578%)
    60%           1634908         1662018      + 27110 (+1.658%)

A rudimentary and somewhat approximately latency-sensitive workload such as sleeping-ebizzy itself showed a consistent, noticeable performance improvement with this patch. Hence, workloads that are truly latency-sensitive will benefit quite a bit from this change. Moreover, this is an overall win-win since this patch does not hurt power-savings at all (because, this patch does not reduce the idle time or idle residency; and the high frequency of the CPU when it goes to cpu-idle does not affect/hurt the power-savings of deep idle states).

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

  * cpufreq: ppc-corenet-cpu-freq: do_div use quotient (Ed Swarthout, 2014-06-06; 1 file, -4/+5)

Commit 6712d2931933 (cpufreq: ppc-corenet-cpufreq: Fix __udivdi3 modpost error) used the remainder from do_div instead of the quotient. Fix that and add one to ensure the minimum is met.

Fixes: 6712d2931933 (cpufreq: ppc-corenet-cpufreq: Fix __udivdi3 modpost error)
Signed-off-by: Ed Swarthout <Ed.Swarthout@freescale.com>
Cc: 3.15+ <stable@vger.kernel.org> # 3.15+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

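The underlying pitfall: do_div() from asm/div64.h modifies its first argument in place to hold the quotient and returns the remainder, so using the return value as the result is exactly the bug being fixed. A sketch of the corrected pattern (the helper name is illustrative):

    #include <asm/div64.h>

    static u64 example_div_round_up(u64 dividend, u32 divisor)
    {
            /* After do_div(), 'dividend' holds the quotient and 'rem' the
             * remainder; add one when there is a remainder so the computed
             * minimum is still met. */
            u32 rem = do_div(dividend, divisor);

            return rem ? dividend + 1 : dividend;
    }
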
| * Revert "cpufreq: Enable big.LITTLE cpufreq driver on arm64"Rafael J. Wysocki2014-06-061-2/+1
| | | | | | | | | | | | | | | | | | This reverts commit 4920ab84979d (cpufreq: Enable big.LITTLE cpufreq driver on arm64) that breaks build on arm64. Fixes: 4920ab84979d (cpufreq: Enable big.LITTLE cpufreq driver on arm64) Reported-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  * cpufreq: Tegra: implement intermediate frequency callbacks (Viresh Kumar, 2014-06-05; 1 file, -35/+62)

Tegra has been switching to an intermediate frequency (pll_p_clk) forever. The cpufreq core now has better support for handling notifications for these frequencies, so we can adapt Tegra's driver to it.

Also do a WARN() if clk_set_parent() fails while moving back to pll_x, as we should have at least restored to the earlier frequency on error.

Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

  * cpufreq: add support for intermediate (stable) frequencies (Viresh Kumar, 2014-06-05; 1 file, -7/+60)

Douglas Anderson recently pointed out an interesting problem due to which udelay() was expiring earlier than it should. While transitioning between frequencies, a few platforms may temporarily switch to a stable frequency while waiting for the main PLL to stabilize.

For example: when we transition between very low frequencies on exynos, like between 200MHz and 300MHz, we may temporarily switch to a PLL running at 800MHz. No CPUFREQ notification is sent for that. That means there's a period of time when we're running at 800MHz but loops_per_jiffy is calibrated at between 200MHz and 300MHz. And so udelay behaves badly.

To get this fixed in a generic way, introduce another set of callbacks, get_intermediate() and target_intermediate(), only for drivers with target_index() and CPUFREQ_ASYNC_NOTIFICATION unset.

get_intermediate() should return a stable intermediate frequency the platform wants to switch to, and target_intermediate() should set the CPU to that frequency before jumping to the frequency corresponding to 'index'. The core will take care of sending notifications, and the driver doesn't have to handle them in target_intermediate() or target_index().

NOTE: ->target_index() should restore to policy->restore_freq in case of failures, as the core would send notifications for that.

Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

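A hedged sketch of how a driver could wire up the two new callbacks; the .get_intermediate/.target_intermediate fields are what this commit adds to struct cpufreq_driver, while the foo_* names and the clock handling are purely illustrative:

    /* Return a stable frequency to run at while the main PLL relocks,
     * or 0 if no intermediate step is needed for this transition. */
    static unsigned int foo_get_intermediate(struct cpufreq_policy *policy,
                                             unsigned int index)
    {
            return foo_intermediate_khz;
    }

    /* Switch to that intermediate frequency; the core has already sent
     * the corresponding PRECHANGE notification. */
    static int foo_target_intermediate(struct cpufreq_policy *policy,
                                       unsigned int index)
    {
            return clk_set_parent(foo_cpu_clk, foo_backup_clk);
    }

    static struct cpufreq_driver foo_cpufreq_driver = {
            .get_intermediate    = foo_get_intermediate,
            .target_intermediate = foo_target_intermediate,
            .target_index        = foo_target_index,
            /* ... */
    };
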
* Merge tag 'pm+acpi-3.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm into next (Linus Torvalds, 2014-06-04; 29 files, -352/+457)

Pull ACPI and power management updates from Rafael Wysocki:

 "ACPICA is the leader this time (63 commits), followed by cpufreq (28 commits), devfreq (15 commits), system suspend/hibernation (12 commits), ACPI video and ACPI device enumeration (10 commits each).

  We have no major new features this time, but there are a few significant changes of how things work. The most visible one will probably be that we are now going to create platform devices rather than PNP devices by default for ACPI device objects with _HID. That was long overdue and will be really necessary to be able to use the same drivers for the same hardware blocks on ACPI and DT-based systems going forward. We're not expecting fallout from this one (as usual), but it's something to watch nevertheless.

  The second change having a chance to be visible is that ACPI video will now default to using native backlight rather than the ACPI backlight interface which should generally help systems with broken Win8 BIOSes. We're hoping that all problems with the native backlight handling that we had previously have been addressed and we are in a good enough shape to flip the default, but this change should be easy enough to revert if need be.

  In addition to that, the system suspend core has a new mechanism to allow runtime-suspended devices to stay suspended throughout system suspend/resume transitions if some extra conditions are met (generally, they are related to coordination within device hierarchy). However, enabling this feature requires cooperation from the bus type layer and for now it has only been implemented for the ACPI PM domain (used by ACPI-enumerated platform devices mostly today).

  Also, the acpidump utility that was previously shipped as a separate tool will now be provided by the upstream ACPICA along with the rest of ACPICA code, which will allow it to be more up to date and better supported, and we have one new cpuidle driver (ARM clps711x).

  The rest is improvements related to certain specific use cases, cleanups and fixes all over the place.

  Specifics:

   - ACPICA update to upstream version 20140424. That includes a number of fixes and improvements related to things like GPE handling, table loading, headers, memory mapping and unmapping, DSDT/SSDT overriding, and the Unload() operator. The acpidump utility from upstream ACPICA is included too. From Bob Moore, Lv Zheng, David Box, David Binderman, and Colin Ian King.

   - Fixes and cleanups related to ACPI video and backlight interfaces from Hans de Goede. That includes blacklist entries for some new machines and using native backlight by default.

   - ACPI device enumeration changes to create platform devices rather than PNP devices for ACPI device objects with _HID by default. PNP devices will still be created for the ACPI device object with device IDs corresponding to real PNP devices, so that change should not break things left and right, and we're expecting to see more and more ACPI-enumerated platform devices in the future. From Zhang Rui and Rafael J Wysocki.

   - Updates for the ACPI LPSS (Low-Power Subsystem) driver allowing it to handle system suspend/resume on Asus T100 correctly. From Heikki Krogerus and Rafael J Wysocki.

   - PM core update introducing a mechanism to allow runtime-suspended devices to stay suspended over system suspend/resume transitions if certain additional conditions related to coordination within device hierarchy are met. Related PM documentation update and ACPI PM domain support for the new feature. From Rafael J Wysocki.

   - Fixes and improvements related to the "freeze" sleep state. They affect several places including cpuidle, PM core, ACPI core, and the ACPI battery driver. From Rafael J Wysocki and Zhang Rui.

   - Miscellaneous fixes and updates of the ACPI core from Aaron Lu, Bjørn Mork, Hanjun Guo, Lan Tianyu, and Rafael J Wysocki.

   - Fixes and cleanups for the ACPI processor and ACPI PAD (Processor Aggregator Device) drivers from Baoquan He, Manuel Schölling, Tony Camuso, and Toshi Kani.

   - System suspend/resume optimization in the ACPI battery driver from Lan Tianyu.

   - OPP (Operating Performance Points) subsystem updates from Chander Kashyap, Mark Brown, and Nishanth Menon.

   - cpufreq core fixes, updates and cleanups from Srivatsa S Bhat, Stratos Karafotis, and Viresh Kumar.

   - Updates, fixes and cleanups for the Tegra, powernow-k8, imx6q, s5pv210, nforce2, and powernv cpufreq drivers from Brian Norris, Jingoo Han, Paul Bolle, Philipp Zabel, Stratos Karafotis, and Viresh Kumar.

   - intel_pstate driver fixes and cleanups from Dirk Brandewie, Doug Smythies, and Stratos Karafotis.

   - Enabling the big.LITTLE cpufreq driver on arm64 from Mark Brown.

   - Fix for the cpuidle menu governor from Chander Kashyap.

   - New ARM clps711x cpuidle driver from Alexander Shiyan.

   - Hibernate core fixes and cleanups from Chen Gang, Dan Carpenter, Fabian Frederick, Pali Rohár, and Sebastian Capella.

   - Intel RAPL (Running Average Power Limit) driver updates from Jacob Pan.

   - PNP subsystem updates from Bjorn Helgaas and Fabian Frederick.

   - devfreq core updates from Chanwoo Choi and Paul Bolle.

   - devfreq updates for exynos4 and exynos5 from Chanwoo Choi and Bartlomiej Zolnierkiewicz.

   - turbostat tool fix from Jean Delvare.

   - cpupower tool updates from Prarit Bhargava, Ramkumar Ramachandra and Thomas Renninger.

   - New ACPI ec_access.c tool for poking at the EC in a safe way from Thomas Renninger"

* tag 'pm+acpi-3.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (187 commits)
  ACPICA: Namespace: Remove _PRP method support.
  intel_pstate: Improve initial busy calculation
  intel_pstate: add sample time scaling
  intel_pstate: Correct rounding in busy calculation
  intel_pstate: Remove C0 tracking
  PM / hibernate: fixed typo in comment
  ACPI: Fix x86 regression related to early mapping size limitation
  ACPICA: Tables: Add mechanism to control early table checksum verification.
  ACPI / scan: use platform bus type by default for _HID enumeration
  ACPI / scan: always register ACPI LPSS scan handler
  ACPI / scan: always register memory hotplug scan handler
  ACPI / scan: always register container scan handler
  ACPI / scan: Change the meaning of missing .attach() in scan handlers
  ACPI / scan: introduce platform_id device PNP type flag
  ACPI / scan: drop unsupported serial IDs from PNP ACPI scan handler ID list
  ACPI / scan: drop IDs that do not comply with the ACPI PNP ID rule
  ACPI / PNP: use device ID list for PNPACPI device enumeration
  ACPI / scan: .match() callback for ACPI scan handlers
  ACPI / battery: wakeup the system only when necessary
  power_supply: allow power supply devices registered w/o wakeup source
  ...

  * Merge back earlier cpufreq material. (Rafael J. Wysocki, 2014-06-03; 29 files, -331/+426)

Conflicts:
	arch/mips/loongson/lemote-2f/clock.c
	drivers/cpufreq/intel_pstate.c

    * cpufreq: handle calls to ->target_index() in separate routine (Viresh Kumar, 2014-05-29; 1 file, -23/+33)

Handling calls to ->target_index() has become complex over time and might become more complex. So, it's better to move the ->target_index() bits out into another routine, __target_index(), for better code readability.

This shouldn't have any functional impact.

Tested-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: s5pv210: drop check for CONFIG_PM_VERBOSE (Paul Bolle, 2014-05-27; 1 file, -4/+2)

A pr_err() was added in v3.1. It was guarded by a check for CONFIG_PM_VERBOSE. The Kconfig symbol PM_VERBOSE was removed in v3.0. So this pr_err() has never been used. Drop that check and clean up the message a bit.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: intel_pstate: Remove unused member name of cpudata (Stratos Karafotis, 2014-05-27; 1 file, -4/+0)

Although a value is assigned to the 'name' member of struct cpudata, it is never used. We can safely remove it.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Dirk Brandewie <dirk.j.brandewie@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: Break out early when frequency equals target_freq (Stratos Karafotis, 2014-05-20; 1 file, -2/+6)

Many drivers keep frequencies in the frequency table in ascending or descending order. When the governor tries to change to policy->min or policy->max respectively, cpufreq_frequency_table_target() can then return on the first iteration, saving some iteration cycles.

So, break out early when a frequency in the cpufreq_frequency_table equals the target one.

Testing this during kernel compilation using the ondemand governor with a frequency table in ascending order, cpufreq_frequency_table_target() returned early on the first iteration about 30% of the times it was called.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: Tegra: drop wrapper around tegra_update_cpu_speed() (Viresh Kumar, 2014-05-17; 1 file, -7/+2)

Tegra has implemented an unnecessary wrapper over tegra_update_cpu_speed(), i.e. tegra_target(), which wasn't doing anything apart from calling tegra_update_cpu_speed(). Get rid of that and use tegra_target() directly.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: imx6q: Remove unused include (Philipp Zabel, 2014-05-17; 1 file, -1/+0)

There is no need to include delay.h.

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Shawn Guo <shawn.guo@freescale.com>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: imx6q: Drop devm_clk/regulator_get usage (Philipp Zabel, 2014-05-17; 1 file, -14/+39)

This driver is using devres managed calls incorrectly, giving the cpu0 device as first parameter instead of the cpufreq platform device. This results in resources not being freed if the cpufreq platform device is unbound, for example if probing has to be deferred for a missing regulator.

Supporting probe deferral properly is a prerequisite to enabling the internal LDO bypass on i.MX6 and regulating the CPU voltage with an external regulator.

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Shawn Guo <shawn.guo@freescale.com>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: powernow-k8: Suppress checkpatch warnings (Stratos Karafotis, 2014-05-17; 2 files, -108/+74)

Suppress the following checkpatch.pl warnings:

- WARNING: Prefer pr_err(... to printk(KERN_ERR ...
- WARNING: Prefer pr_info(... to printk(KERN_INFO ...
- WARNING: Prefer pr_warn(... to printk(KERN_WARNING ...
- WARNING: quoted string split across lines
- WARNING: please, no spaces at the start of a line

Also, define the pr_fmt macro instead of PFX for the module name.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: powernv: make local function static (Brian Norris, 2014-05-17; 1 file, -1/+1)

powernv_cpufreq_get() is only referenced in this file.

Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org> on V2.
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: Enable big.LITTLE cpufreq driver on arm64 (Mark Brown, 2014-05-17; 1 file, -1/+2)

There are arm64 big.LITTLE systems so enable the big.LITTLE cpufreq driver. While IKS is not available for these systems the driver is still useful since it manages clusters with shared frequencies which is the common case for these systems.

Long term combining the cpufreq-cpu0 and big.LITTLE drivers may be a more sensible option but that is substantially more complex especially in the case of IKS.

Signed-off-by: Mark Brown <broonie@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: nforce2: remove DEFINE_PCI_DEVICE_TABLE macro (Jingoo Han, 2014-05-17; 1 file, -1/+1)

Don't use the DEFINE_PCI_DEVICE_TABLE macro, because this macro is deprecated.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * intel_pstate: Add CPU IDs for Broadwell processors (Dirk Brandewie, 2014-05-13; 1 file, -0/+3)

Add support for Broadwell processors.

Signed-off-by: Dirk Brandewie <dirk.j.brandewie@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: Fix build error on some platforms that use cpufreq_for_each_* (Stratos Karafotis, 2014-05-08; 1 file, -11/+0)

On platforms that use the cpufreq_for_each_* macros, the build fails if CONFIG_CPU_FREQ=n, e.g. ARM/shmobile/koelsch/non-multiplatform:

    drivers/built-in.o: In function `clk_round_parent':
    clkdev.c:(.text+0xcf168): undefined reference to `cpufreq_next_valid'
    drivers/built-in.o: In function `clk_rate_table_find':
    clkdev.c:(.text+0xcf820): undefined reference to `cpufreq_next_valid'
    make[3]: *** [vmlinux] Error 1

Fix this by making the cpufreq_next_valid() function inline and moving it to cpufreq.h.

Fixes: 27e289dce297 (cpufreq: Introduce macros for cpufreq_frequency_table iteration)
Reported-and-tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * PM / OPP: Move cpufreq specific OPP functions out of generic OPP library (Nishanth Menon, 2014-05-07; 2 files, -0/+112)

The CPUFreq-specific helper functions for OPP (Operating Performance Points) now use generic OPP functions, which allows them to be moved back into the CPUFreq framework. This allows for independent modifications or future enhancements as needed, isolated to just the CPUFreq framework alone. Here, we just move the relevant code and documentation to make this part of the CPUFreq infrastructure.

Cc: Kevin Hilman <khilman@deeprootsystems.com>
Signed-off-by: Nishanth Menon <nm@ti.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: Catch double invocations of cpufreq_freq_transition_begin/end (Srivatsa S. Bhat, 2014-05-07; 1 file, -0/+14)

Some cpufreq drivers were redundantly invoking the _begin() and _end() APIs around frequency transitions, and this double invocation (one from the cpufreq core and the other from the cpufreq driver) used to result in a self-deadlock, leading to system hangs during boot. (The _begin() API makes contending callers wait until the previous invocation is complete. Hence, the cpufreq driver would end up waiting on itself!).

Now all such drivers have been fixed, but debugging this issue was not very straight-forward (even lockdep didn't catch this). So let us add a debug infrastructure to the cpufreq core to catch such issues more easily in the future.

We add a new field called 'transition_task' to the policy structure, to keep track of the task which is performing the frequency transition. Using this field, we make note of this task during _begin() and print a warning if we find a case where the same task is calling _begin() again, before completing the previous frequency transition using the corresponding _end().

We have left out ASYNC_NOTIFICATION drivers from this debug infrastructure for 2 reasons:

1. At the moment, we have no way to avoid a particular scenario where this debug infrastructure can emit false-positive warnings for such drivers. The scenario is depicted below:

    Task A                                     Task B

    /* 1st freq transition */
    Invoke _begin() {
            ...
            ...
    }

    Change the frequency

    /* 2nd freq transition */
    Invoke _begin() {
            ... //waiting for B to
            ... //finish _end() for
            ... //the 1st transition
            ...    |                           Got interrupt for successful
            ...    |                           change of frequency (1st one).
            ...    |
            ...    |                           /* 1st freq transition */
            ...    |                           Invoke _end() {
            ...    |                                   ...
            ...    V                           }
            ...
            ...
    }

This scenario is actually deadlock-free because, once Task A changes the frequency, it is Task B's responsibility to invoke the corresponding _end() for the 1st frequency transition. Hence it is perfectly legal for Task A to go ahead and attempt another frequency transition in the meantime. (Of course it won't be able to proceed until Task B finishes the 1st _end(), but this doesn't cause a deadlock or a hang).

The debug infrastructure cannot handle this scenario and will treat it as a deadlock and print a warning. To avoid this, we exclude such drivers from the purview of this code.

2. Luckily, we don't _need_ this infrastructure for ASYNC_NOTIFICATION drivers at all! The cpufreq core does not automatically invoke the _begin() and _end() APIs during frequency transitions in such drivers. Thus, the driver alone is responsible for invoking _begin()/_end() and hence there shouldn't be any conflicts which lead to double invocations.

So, we can skip these drivers, since the probability that such drivers will hit this problem is extremely low, as outlined above.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

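A simplified sketch of the check described above (the real cpufreq_freq_transition_begin()/_end() also handle notifier bookkeeping and locking; 'transition_task' is the field this commit adds, and the other policy field names follow the existing transition machinery but are quoted from memory):

    void example_transition_begin(struct cpufreq_policy *policy)
    {
            /* The same task calling _begin() again before the matching
             * _end() would wait on itself below: warn instead of hanging. */
            WARN_ON(current == policy->transition_task);

            wait_event(policy->transition_wait, !policy->transition_ongoing);
            policy->transition_ongoing = true;
            policy->transition_task = current;
    }

    void example_transition_end(struct cpufreq_policy *policy)
    {
            policy->transition_task = NULL;
            policy->transition_ongoing = false;
            wake_up(&policy->transition_wait);
    }
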
    * intel_pstate: Remove sample parameter in intel_pstate_calc_busy (Stratos Karafotis, 2014-05-07; 1 file, -7/+4)

Since commit d37e2b7644 ("intel_pstate: remove unneeded sample buffers") we use only one sample. So, there is no need to pass the sample pointer to intel_pstate_calc_busy. Instead, get the pointer from cpudata. Also, remove the unused SAMPLE_COUNT macro.

While at it, reformat the first line in this function.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Dirk Brandewie <dirk.j.brandewie@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: Kconfig: Fix spelling errors (Stratos Karafotis, 2014-05-01; 2 files, -4/+4)

Fix 4 spelling errors in help sections.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: Make linux-pm@vger.kernel.org official mailing list (Viresh Kumar, 2014-05-01; 1 file, -1/+1)

There has been confusion all the time about which mailing list to follow for cpufreq activities, linux-pm@vger.kernel.org or cpufreq@vger.kernel.org.

Since patches sent to cpufreq@vger.kernel.org don't go to Patchwork, which is a maintenance workflow problem, make linux-pm@vger.kernel.org the official mailing list for cpufreq stuff and remove all references of cpufreq@vger.kernel.org from kernel source.

Later, we can request that the list be dropped entirely.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
[rjw: Changelog]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * cpufreq: exynos: Use dev_err/info function instead of pr_err/info (Chanwoo Choi, 2014-05-01; 2 files, -9/+13)

This patch uses the dev_err/info functions to show accurate log messages with the device name, instead of the pr_err/info functions.

Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

    * Merge branch 'cpufreq-macros' into pm-cpufreq (Rafael J. Wysocki, 2014-05-01; 15 files, -146/+127)

      * cpufreq: Use cpufreq_for_each_* macros for frequency table iteration (Stratos Karafotis, 2014-04-30; 14 files, -146/+116)

The cpufreq core now supports the cpufreq_for_each_entry and cpufreq_for_each_valid_entry macro helpers for iteration over the cpufreq_frequency_table, so use them.

It should have no functional changes.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

      * cpufreq: Introduce macros for cpufreq_frequency_table iteration (Stratos Karafotis, 2014-04-30; 1 file, -0/+11)

Many cpufreq drivers need to iterate over the cpufreq_frequency_table for various tasks. This patch introduces two macros which can be used for iteration over cpufreq_frequency_table keeping a common coding style across drivers:

- cpufreq_for_each_entry: iterate over each entry of the table
- cpufreq_for_each_valid_entry: iterate over each entry that contains a valid frequency

It should have no functional changes.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

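An illustrative use of the two macros; cpufreq_for_each_valid_entry and the cpufreq_frequency_table sentinels are the real interfaces added/used here, while the surrounding function is made up:

    #include <linux/cpufreq.h>

    /* Find the highest usable frequency in a table terminated by
     * CPUFREQ_TABLE_END, skipping CPUFREQ_ENTRY_INVALID entries. */
    static unsigned int example_max_freq(struct cpufreq_frequency_table *table)
    {
            struct cpufreq_frequency_table *pos;
            unsigned int max = 0;

            cpufreq_for_each_valid_entry(pos, table)
                    if (pos->frequency > max)
                            max = pos->frequency;

            return max;
    }
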
  * intel_pstate: Improve initial busy calculation (Doug Smythies, 2014-06-02; 1 file, -5/+8)

This change makes the busy calculation use 64 bit math, which prevents overflow for large values of aperf/mperf.

Cc: 3.14+ <stable@vger.kernel.org> # 3.14+
Signed-off-by: Doug Smythies <dsmythies@telus.net>
Signed-off-by: Dirk Brandewie <dirk.j.brandewie@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

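Roughly, the overflow being avoided looks like this (a sketch, not the driver's code; div64_u64() is the real helper from linux/math64.h):

    #include <linux/math64.h>

    /* With 32-bit math, aperf * 100 wraps once the counter deltas grow
     * large; keeping the product and division in 64 bits avoids that. */
    static u64 example_core_busy_pct(u64 aperf_delta, u64 mperf_delta)
    {
            return div64_u64(aperf_delta * 100, mperf_delta);
    }
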
  * intel_pstate: add sample time scaling (Dirk Brandewie, 2014-06-02; 1 file, -1/+17)

The PID assumes that samples are of equal time, which for deferrable timers is not true when the system goes idle. This causes the PID to take a long time to converge to the min P-state and, depending on the pattern of the idle load, can make the P-state appear stuck.

The hold-off value of three sample times before using the scaling is to give a grace period for applications that have high performance requirements and spend a lot of time idle. The poster child for this behavior is the ffmpeg benchmark in the Phoronix test suite.

Cc: 3.14+ <stable@vger.kernel.org> # 3.14+
Signed-off-by: Dirk Brandewie <dirk.j.brandewie@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

  * intel_pstate: Correct rounding in busy calculation (Dirk Brandewie, 2014-06-02; 1 file, -5/+7)

Changing to fixed point math throughout the busy calculation in commit e66c1768 (Change busy calculation to use fixed point math.) introduced some inaccuracies by rounding the busy value at two points in the calculation. This change removes those roundings and moves the rounding to the output of the PID, where the calculations are complete and the value is returned as an integer.

Fixes: e66c17683746 (intel_pstate: Change busy calculation to use fixed point math.)
Reported-by: Doug Smythies <dsmythies@telus.net>
Cc: 3.14+ <stable@vger.kernel.org> # 3.14+
Signed-off-by: Dirk Brandewie <dirk.j.brandewie@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>