author:    Prashanth Prakash <pprakash@codeaurora.org>    2017-11-15 18:11:50 +0100
committer: Catalin Marinas <catalin.marinas@arm.com>      2018-01-02 14:50:34 +0100
commit:    8b9951ed7e5b517bdd5743d71ac662885d3c7bfc
tree:      dacbbb58604af96ff6161869179d96706b8fb9d6 /arch/arm64
parent:    cpuidle: Add new macro to enter a retention idle state
ARM64 / cpuidle: Use new cpuidle macro for entering retention state
CPU_PM_CPU_IDLE_ENTER_RETENTION skips calling cpu_pm_enter() and
cpu_pm_exit(). By not calling cpu_pm functions in idle entry/exit
paths we can reduce the latency involved in entering and exiting
the low power idle state.
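For context, the difference between the two entry macros can be sketched roughly as below. This is a simplified sketch loosely modeled on include/linux/cpuidle.h after the parent commit, not a verbatim copy of the kernel source: the plain variant brackets the low-level idle call with the cpu_pm_enter()/cpu_pm_exit() notifier calls, while the retention variant goes straight to the low-level entry function.

/*
 * Simplified sketch of the two idle-entry macros, loosely based on
 * include/linux/cpuidle.h after the parent commit (not verbatim).
 */
#define CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx)	\
({								\
	int __ret;						\
								\
	if (!idx) {						\
		cpu_do_idle();	/* state 0: plain WFI */	\
		return idx;					\
	}							\
								\
	__ret = cpu_pm_enter();	/* CPU_PM_ENTER notifiers */	\
	if (!__ret) {						\
		__ret = low_level_idle_enter(idx);		\
		cpu_pm_exit();	/* CPU_PM_EXIT notifiers */	\
	}							\
	__ret ? -1 : idx;					\
})

#define CPU_PM_CPU_IDLE_ENTER_RETENTION(low_level_idle_enter, idx)	\
({									\
	int __ret;							\
									\
	if (!idx) {							\
		cpu_do_idle();						\
		return idx;						\
	}								\
									\
	/* Retention: CPU context is preserved, so the cpu_pm */	\
	/* notifier round trip can be skipped entirely. */		\
	__ret = low_level_idle_enter(idx);				\
	__ret ? -1 : idx;						\
})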
On an ARM64-based Qualcomm server platform we measured the following
overhead for calling cpu_pm_enter() and cpu_pm_exit() for retention states.
workload: stress --hdd #CPUs --hdd-bytes 32M -t 30
Average overhead of cpu_pm_enter - 1.2us
Average overhead of cpu_pm_exit - 3.1us
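The commit message does not show how these numbers were captured; purely as an illustration (a hypothetical sketch, not the authors' measurement method), per-call overhead of this kind can be recorded by timestamping around the notifier calls with ktime:

/*
 * Hypothetical instrumentation sketch (not part of this commit): time
 * the cpu_pm notifier calls that bracket the low-level idle entry.
 */
#include <linux/ktime.h>
#include <linux/cpu_pm.h>
#include <linux/printk.h>

static int timed_idle_enter(int (*low_level_idle_enter)(int), int idx)
{
	ktime_t t0, t1;
	int ret;

	t0 = ktime_get();
	ret = cpu_pm_enter();		/* overhead being measured */
	t1 = ktime_get();
	pr_debug("cpu_pm_enter took %lld ns\n",
		 ktime_to_ns(ktime_sub(t1, t0)));

	if (!ret) {
		ret = low_level_idle_enter(idx);

		t0 = ktime_get();
		cpu_pm_exit();		/* overhead being measured */
		t1 = ktime_get();
		pr_debug("cpu_pm_exit took %lld ns\n",
			 ktime_to_ns(ktime_sub(t1, t0)));
	}

	return ret;
}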
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Diffstat (limited to 'arch/arm64')
-rw-r--r--  arch/arm64/kernel/cpuidle.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/cpuidle.c b/arch/arm64/kernel/cpuidle.c
index fd691087dc9a..f2d13810daa8 100644
--- a/arch/arm64/kernel/cpuidle.c
+++ b/arch/arm64/kernel/cpuidle.c
@@ -47,6 +47,8 @@ int arm_cpuidle_suspend(int index)
 
 #include <acpi/processor.h>
 
+#define ARM64_LPI_IS_RETENTION_STATE(arch_flags) (!(arch_flags))
+
 int acpi_processor_ffh_lpi_probe(unsigned int cpu)
 {
 	return arm_cpuidle_init(cpu);
@@ -54,6 +56,10 @@ int acpi_processor_ffh_lpi_probe(unsigned int cpu)
 
 int acpi_processor_ffh_lpi_enter(struct acpi_lpi_state *lpi)
 {
-	return CPU_PM_CPU_IDLE_ENTER(arm_cpuidle_suspend, lpi->index);
+	if (ARM64_LPI_IS_RETENTION_STATE(lpi->arch_flags))
+		return CPU_PM_CPU_IDLE_ENTER_RETENTION(arm_cpuidle_suspend,
+						lpi->index);
+	else
+		return CPU_PM_CPU_IDLE_ENTER(arm_cpuidle_suspend, lpi->index);
 }
 #endif