commit d57d39431924d1628ac9b93a2de7f806fc80680a
author:    Linus Torvalds <torvalds@linux-foundation.org>  2016-05-17 04:17:22 +0200
committer: Linus Torvalds <torvalds@linux-foundation.org>  2016-05-17 04:17:22 +0200
tree:      8d630b5b22333a6368beb3531f20ae5c5eb72229 /drivers/base
parent:    Merge tag 'mmc-v4.7' of git://git.linaro.org/people/ulf.hansson/mmc
parent:    Merge branches 'pm-avs', 'pm-clk', 'powercap' and 'pm-tools'
Merge tag 'pm-4.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"The majority of changes go into the cpufreq subsystem this time.

To me, quite obviously, the biggest ticket item is the new "schedutil"
governor. Interestingly enough, it's the first new cpufreq governor
since the beginning of the git era (except for some out-of-the-tree
ones).

There are two main differences between it and the existing governors.
First, it uses the information provided by the scheduler directly for
making its decisions, so it doesn't have to track anything by itself.
Second, it can invoke drivers (supporting that feature) to adjust CPU
performance right away without having to spawn work items to be
executed in process context or similar. Currently, the acpi-cpufreq
driver is the only one supporting that mode of operation, but then it
is used on a large number of systems.
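
As a rough illustration of the driver-side hook involved (a simplified
sketch, not code from this merge; the my_set_pstate() helper and driver
name are made up), a cpufreq driver opts in to fast switching by
providing a callback that can reprogram the hardware without sleeping:

  /* Hedged sketch: a cpufreq driver advertising fast frequency switching. */
  #include <linux/cpufreq.h>

  static unsigned int my_fast_switch(struct cpufreq_policy *policy,
  				     unsigned int target_freq)
  {
  	/* Called from scheduler context: must not sleep or take mutexes. */
  	my_set_pstate(policy->cpu, target_freq);	/* hypothetical helper */
  	return target_freq;	/* frequency actually programmed */
  }

  static struct cpufreq_driver my_cpufreq_driver = {
  	.name		= "my-cpufreq",		/* hypothetical driver */
  	.fast_switch	= my_fast_switch,
  	/* ->init(), ->verify(), ->target_index() etc. omitted */
  };

Because no work-item round trip is needed, the governor can act on
scheduler events immediately, which is what the acpi-cpufreq support
mentioned above enables.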
The "schedutil" governor as included here is very simple and mostly
regarded as a foundation for future work on the integration of the
scheduler with CPU power management (in fact, there is work in
progress on top of it already). Nevertheless it works and the
preliminary results obtained with it are encouraging.
There also is some consolidation of CPU frequency management for ARM
platforms that can add their machine IDs the the new stub dt-platdev
driver now and that will take care of creating the requisite platform
device for cpufreq-dt, so it is not necessary to do that in platform
code any more. Several ARM platforms are switched over to using this
generic mechanism.
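
The pattern behind that stub is roughly the following (a hedged sketch
of the general shape; the compatible strings and exact contents of the
real cpufreq-dt-platdev driver may differ):

  /* Sketch: whitelist of machines for which "cpufreq-dt" is created. */
  static const struct of_device_id machines[] __initconst = {
  	{ .compatible = "samsung,exynos5250", },	/* example entries */
  	{ .compatible = "rockchip,rk3288", },
  	{ /* sentinel */ }
  };

  static int __init cpufreq_dt_platdev_init(void)
  {
  	struct device_node *np = of_find_node_by_path("/");
  	const struct of_device_id *match;

  	if (!np)
  		return -ENODEV;

  	match = of_match_node(machines, np);
  	of_node_put(np);
  	if (!match)
  		return -ENODEV;

  	return PTR_ERR_OR_ZERO(platform_device_register_simple(
  					"cpufreq-dt", -1, NULL, 0));
  }
  device_initcall(cpufreq_dt_platdev_init);

A platform then only needs one new line in the match table instead of
registering the platform device in its own board code.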
In addition to that, the intel_pstate driver is now going to respect
CPU frequency limits set by the platform firmware (or a BMC) and
provided via the ACPI _PPC object.

The devfreq subsystem is getting a new "passive" governor for SoC
subsystems that depend on somebody else to manage their voltage
rails, and its support for Samsung Exynos SoCs is consolidated.
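
Usage-wise the new governor follows the usual devfreq pattern; here is
a hedged sketch (the parent handle and the child's profile are
assumptions for illustration, only the "passive" governor name and
struct devfreq_passive_data come from this update):

  /* Sketch: a child device passively following its parent's frequency. */
  #include <linux/devfreq.h>

  static struct devfreq_passive_data passive_data = {
  	.parent = NULL,	/* set to the parent's struct devfreq * in probe() */
  };

  static struct devfreq_dev_profile child_profile = {
  	/* ->target() and friends for the child device */
  };

  /* In the child's probe(), with parent_devfreq obtained elsewhere: */
  passive_data.parent = parent_devfreq;
  devfreq = devm_devfreq_add_device(dev, &child_profile, "passive",
  				    &passive_data);

The child never runs its own load monitoring; it simply mirrors
whatever operating point is picked for the parent.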
The rest is support for new hardware (Intel Broxton support in
intel_idle for one example), bug fixes, optimizations and cleanups in
a number of places.

Specifics:

- New cpufreq "schedutil" governor (making decisions based on CPU
utilization information provided by the scheduler and capable of
switching CPU frequencies right away if the underlying driver
supports that) and support for fast frequency switching in the
acpi-cpufreq driver (Rafael Wysocki)
- Consolidation of CPU frequency management on ARM platforms allowing
them to get rid of some platform-specific boilerplate code if they
are going to use the cpufreq-dt driver (Viresh Kumar, Finley Xiao,
Marc Gonzalez)
- Support for ACPI _PPC and CPU frequency limits in the intel_pstate
driver (Srinivas Pandruvada)
- Fixes and cleanups in the cpufreq core and generic governor code
(Rafael Wysocki, Sai Gurrappadi)
- intel_pstate driver optimizations and cleanups (Rafael Wysocki,
Philippe Longepe, Chen Yu, Joe Perches)
- cpufreq powernv driver fixes and cleanups (Akshay Adiga, Shilpasri
Bhat)
- cpufreq qoriq driver fixes and cleanups (Jia Hongtao)
- ACPI cpufreq driver cleanups (Viresh Kumar)
- Assorted cpufreq driver updates (Ashwin Chaugule, Geliang Tang,
Javier Martinez Canillas, Paul Gortmaker, Sudeep Holla)
- Assorted cpufreq fixes and cleanups (Joe Perches, Arnd Bergmann)
- Fixes and cleanups in the OPP (Operating Performance Points)
framework, mostly related to OPP sharing, and reorganization of
OF-dependent code in it (Viresh Kumar, Arnd Bergmann, Sudeep Holla)
- New "passive" governor for devfreq (for SoC subsystems that will
rely on someone else for the management of their power resources)
and consolidation of devfreq support for Exynos platforms, coding
style and typo fixes for devfreq (Chanwoo Choi, MyungJoo Ham)
- PM core fixes and cleanups, mostly to make it work better with the
generic power domains (genpd) framework, and updates for that
framework (Ulf Hansson, Thierry Reding, Colin Ian King)
- Intel Broxton support for the intel_idle driver (Len Brown)
- cpuidle core optimization and fix (Daniel Lezcano, Dave Gerlach)
- ARM cpuidle cleanups (Jisheng Zhang)
- Intel Kabylake support for the RAPL power capping driver (Jacob
Pan)
- AVS (Adaptive Voltage Switching) rockchip-io driver update (Heiko
Stuebner)
- Updates for the cpupower tool (Arjun Sreedharan, Colin Ian King,
Mattia Dongili, Thomas Renninger)"
* tag 'pm-4.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (112 commits)
intel_pstate: Clean up get_target_pstate_use_performance()
intel_pstate: Use sample.core_avg_perf in get_avg_pstate()
intel_pstate: Clarify average performance computation
intel_pstate: Avoid unnecessary synchronize_sched() during initialization
cpufreq: schedutil: Make default depend on CONFIG_SMP
cpufreq: powernv: del_timer_sync when global and local pstate are equal
cpufreq: powernv: Move smp_call_function_any() out of irq safe block
intel_pstate: Clean up intel_pstate_get()
cpufreq: schedutil: Make it depend on CONFIG_SMP
cpufreq: governor: Fix handling of special cases in dbs_update()
PM / OPP: Move CONFIG_OF dependent code in a separate file
cpufreq: intel_pstate: Ignore _PPC processing under HWP
cpufreq: arm_big_little: use generic OPP functions for {init, free}_opp_table
PM / OPP: add non-OF versions of dev_pm_opp_{cpumask_, }remove_table
cpufreq: tango: Use generic platdev driver
PM / OPP: pass cpumask by reference
cpufreq: Fix GOV_LIMITS handling for the userspace governor
cpupower: fix potential memory leak
PM / devfreq: style/typo fixes
PM / devfreq: exynos: Add the detailed correlation for Exynos5422 bus
..
Diffstat (limited to 'drivers/base')
 drivers/base/power/clock_ops.c       |   2
 drivers/base/power/domain.c          | 145
 drivers/base/power/domain_governor.c |  20
 drivers/base/power/main.c            |  18
 drivers/base/power/opp/Makefile      |   1
 drivers/base/power/opp/core.c        | 440
 drivers/base/power/opp/cpu.c         | 199
 drivers/base/power/opp/of.c          | 591
 drivers/base/power/opp/opp.h         |  14
 drivers/base/power/runtime.c         |   9
 10 files changed, 806 insertions, 633 deletions
diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
index 0e64a1b5e62a..3657ac1cb801 100644
--- a/drivers/base/power/clock_ops.c
+++ b/drivers/base/power/clock_ops.c
@@ -159,7 +159,7 @@ int of_pm_clk_add_clks(struct device *dev)
 
 	count = of_count_phandle_with_args(dev->of_node, "clocks",
 					   "#clock-cells");
-	if (count == 0)
+	if (count <= 0)
 		return -ENODEV;
 
 	clks = kcalloc(count, sizeof(*clks), GFP_KERNEL);
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 56705b52758e..de23b648fce3 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -229,17 +229,6 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
 	return ret;
 }
 
-static int genpd_save_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, int, save_state, dev);
-}
-
-static int genpd_restore_dev(struct generic_pm_domain *genpd,
-			     struct device *dev)
-{
-	return GENPD_DEV_CALLBACK(genpd, int, restore_state, dev);
-}
-
 static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
 				     unsigned long val, void *ptr)
 {
@@ -372,17 +361,63 @@ static void genpd_power_off_work_fn(struct work_struct *work)
 }
 
 /**
- * pm_genpd_runtime_suspend - Suspend a device belonging to I/O PM domain.
+ * __genpd_runtime_suspend - walk the hierarchy of ->runtime_suspend() callbacks
+ * @dev: Device to handle.
+ */
+static int __genpd_runtime_suspend(struct device *dev)
+{
+	int (*cb)(struct device *__dev);
+
+	if (dev->type && dev->type->pm)
+		cb = dev->type->pm->runtime_suspend;
+	else if (dev->class && dev->class->pm)
+		cb = dev->class->pm->runtime_suspend;
+	else if (dev->bus && dev->bus->pm)
+		cb = dev->bus->pm->runtime_suspend;
+	else
+		cb = NULL;
+
+	if (!cb && dev->driver && dev->driver->pm)
+		cb = dev->driver->pm->runtime_suspend;
+
+	return cb ? cb(dev) : 0;
+}
+
+/**
+ * __genpd_runtime_resume - walk the hierarchy of ->runtime_resume() callbacks
+ * @dev: Device to handle.
+ */
+static int __genpd_runtime_resume(struct device *dev)
+{
+	int (*cb)(struct device *__dev);
+
+	if (dev->type && dev->type->pm)
+		cb = dev->type->pm->runtime_resume;
+	else if (dev->class && dev->class->pm)
+		cb = dev->class->pm->runtime_resume;
+	else if (dev->bus && dev->bus->pm)
+		cb = dev->bus->pm->runtime_resume;
+	else
+		cb = NULL;
+
+	if (!cb && dev->driver && dev->driver->pm)
+		cb = dev->driver->pm->runtime_resume;
+
+	return cb ? cb(dev) : 0;
+}
+
+/**
+ * genpd_runtime_suspend - Suspend a device belonging to I/O PM domain.
  * @dev: Device to suspend.
  *
 * Carry out a runtime suspend of a device under the assumption that its
 * pm_domain field points to the domain member of an object of type
 * struct generic_pm_domain representing a PM domain consisting of I/O devices.
  */
-static int pm_genpd_runtime_suspend(struct device *dev)
+static int genpd_runtime_suspend(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
-	bool (*stop_ok)(struct device *__dev);
+	bool (*suspend_ok)(struct device *__dev);
 	struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
 	bool runtime_pm = pm_runtime_enabled(dev);
 	ktime_t time_start;
@@ -401,21 +436,21 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 	 * runtime PM is disabled. Under these circumstances, we shall skip
 	 * validating/measuring the PM QoS latency.
 	 */
-	stop_ok = genpd->gov ? genpd->gov->stop_ok : NULL;
-	if (runtime_pm && stop_ok && !stop_ok(dev))
+	suspend_ok = genpd->gov ? genpd->gov->suspend_ok : NULL;
+	if (runtime_pm && suspend_ok && !suspend_ok(dev))
 		return -EBUSY;
 
 	/* Measure suspend latency. */
 	if (runtime_pm)
 		time_start = ktime_get();
 
-	ret = genpd_save_dev(genpd, dev);
+	ret = __genpd_runtime_suspend(dev);
 	if (ret)
 		return ret;
 
 	ret = genpd_stop_dev(genpd, dev);
 	if (ret) {
-		genpd_restore_dev(genpd, dev);
+		__genpd_runtime_resume(dev);
 		return ret;
 	}
@@ -446,14 +481,14 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 }
 
 /**
- * pm_genpd_runtime_resume - Resume a device belonging to I/O PM domain.
+ * genpd_runtime_resume - Resume a device belonging to I/O PM domain.
  * @dev: Device to resume.
  *
 * Carry out a runtime resume of a device under the assumption that its
 * pm_domain field points to the domain member of an object of type
 * struct generic_pm_domain representing a PM domain consisting of I/O devices.
  */
-static int pm_genpd_runtime_resume(struct device *dev)
+static int genpd_runtime_resume(struct device *dev)
 {
 	struct generic_pm_domain *genpd;
 	struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
@@ -491,7 +526,7 @@ static int pm_genpd_runtime_resume(struct device *dev)
 	if (ret)
 		goto err_poweroff;
 
-	ret = genpd_restore_dev(genpd, dev);
+	ret = __genpd_runtime_resume(dev);
 	if (ret)
 		goto err_stop;
@@ -695,15 +730,6 @@ static int pm_genpd_prepare(struct device *dev)
 	 * at this point and a system wakeup event should be reported if it's
 	 * set up to wake up the system from sleep states.
 	 */
-	pm_runtime_get_noresume(dev);
-	if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
-		pm_wakeup_event(dev, 0);
-
-	if (pm_wakeup_pending()) {
-		pm_runtime_put(dev);
-		return -EBUSY;
-	}
-
 	if (resume_needed(dev, genpd))
 		pm_runtime_resume(dev);
@@ -716,10 +742,8 @@ static int pm_genpd_prepare(struct device *dev)
 
 	mutex_unlock(&genpd->lock);
 
-	if (genpd->suspend_power_off) {
-		pm_runtime_put_noidle(dev);
+	if (genpd->suspend_power_off)
 		return 0;
-	}
 
 	/*
 	 * The PM domain must be in the GPD_STATE_ACTIVE state at this point,
@@ -741,7 +765,6 @@ static int pm_genpd_prepare(struct device *dev)
 		pm_runtime_enable(dev);
 	}
 
-	pm_runtime_put(dev);
 	return ret;
 }
@@ -1427,54 +1450,6 @@ out:
 }
 EXPORT_SYMBOL_GPL(pm_genpd_remove_subdomain);
 
-/* Default device callbacks for generic PM domains. */
-
-/**
- * pm_genpd_default_save_state - Default "save device state" for PM domains.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_save_state(struct device *dev)
-{
-	int (*cb)(struct device *__dev);
-
-	if (dev->type && dev->type->pm)
-		cb = dev->type->pm->runtime_suspend;
-	else if (dev->class && dev->class->pm)
-		cb = dev->class->pm->runtime_suspend;
-	else if (dev->bus && dev->bus->pm)
-		cb = dev->bus->pm->runtime_suspend;
-	else
-		cb = NULL;
-
-	if (!cb && dev->driver && dev->driver->pm)
-		cb = dev->driver->pm->runtime_suspend;
-
-	return cb ? cb(dev) : 0;
-}
-
-/**
- * pm_genpd_default_restore_state - Default PM domains "restore device state".
- * @dev: Device to handle.
- */
-static int pm_genpd_default_restore_state(struct device *dev)
-{
-	int (*cb)(struct device *__dev);
-
-	if (dev->type && dev->type->pm)
-		cb = dev->type->pm->runtime_resume;
-	else if (dev->class && dev->class->pm)
-		cb = dev->class->pm->runtime_resume;
-	else if (dev->bus && dev->bus->pm)
-		cb = dev->bus->pm->runtime_resume;
-	else
-		cb = NULL;
-
-	if (!cb && dev->driver && dev->driver->pm)
-		cb = dev->driver->pm->runtime_resume;
-
-	return cb ? cb(dev) : 0;
-}
-
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1498,8 +1473,8 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
 	genpd->device_count = 0;
 	genpd->max_off_time_ns = -1;
 	genpd->max_off_time_changed = true;
-	genpd->domain.ops.runtime_suspend = pm_genpd_runtime_suspend;
-	genpd->domain.ops.runtime_resume = pm_genpd_runtime_resume;
+	genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
+	genpd->domain.ops.runtime_resume = genpd_runtime_resume;
 	genpd->domain.ops.prepare = pm_genpd_prepare;
 	genpd->domain.ops.suspend = pm_genpd_suspend;
 	genpd->domain.ops.suspend_late = pm_genpd_suspend_late;
@@ -1520,8 +1495,6 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
 	genpd->domain.ops.restore_early = pm_genpd_resume_early;
 	genpd->domain.ops.restore = pm_genpd_resume;
 	genpd->domain.ops.complete = pm_genpd_complete;
-	genpd->dev_ops.save_state = pm_genpd_default_save_state;
-	genpd->dev_ops.restore_state = pm_genpd_default_restore_state;
 
 	if (genpd->flags & GENPD_FLAG_PM_CLK) {
 		genpd->dev_ops.stop = pm_clk_suspend;
diff --git a/drivers/base/power/domain_governor.c b/drivers/base/power/domain_governor.c
index 00a5436dd44b..2e0fce711135 100644
--- a/drivers/base/power/domain_governor.c
+++ b/drivers/base/power/domain_governor.c
@@ -37,10 +37,10 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
 }
 
 /**
- * default_stop_ok - Default PM domain governor routine for stopping devices.
+ * default_suspend_ok - Default PM domain governor routine to suspend devices.
  * @dev: Device to check.
  */
-static bool default_stop_ok(struct device *dev)
+static bool default_suspend_ok(struct device *dev)
 {
 	struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
 	unsigned long flags;
@@ -51,13 +51,13 @@ static bool default_stop_ok(struct device *dev)
 	spin_lock_irqsave(&dev->power.lock, flags);
 
 	if (!td->constraint_changed) {
-		bool ret = td->cached_stop_ok;
+		bool ret = td->cached_suspend_ok;
 
 		spin_unlock_irqrestore(&dev->power.lock, flags);
 		return ret;
 	}
 	td->constraint_changed = false;
-	td->cached_stop_ok = false;
+	td->cached_suspend_ok = false;
 	td->effective_constraint_ns = -1;
 	constraint_ns = __dev_pm_qos_read_value(dev);
 
@@ -83,13 +83,13 @@ static bool default_stop_ok(struct device *dev)
 		return false;
 	}
 	td->effective_constraint_ns = constraint_ns;
-	td->cached_stop_ok = constraint_ns >= 0;
+	td->cached_suspend_ok = constraint_ns >= 0;
 
 	/*
 	 * The children have been suspended already, so we don't need to take
-	 * their stop latencies into account here.
+	 * their suspend latencies into account here.
 	 */
-	return td->cached_stop_ok;
+	return td->cached_suspend_ok;
 }
 
 /**
@@ -150,7 +150,7 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
 		 */
 		td = &to_gpd_data(pdd)->td;
 		constraint_ns = td->effective_constraint_ns;
-		/* default_stop_ok() need not be called before us. */
+		/* default_suspend_ok() need not be called before us. */
 		if (constraint_ns < 0) {
 			constraint_ns = dev_pm_qos_read_value(pdd->dev);
 			constraint_ns *= NSEC_PER_USEC;
@@ -227,7 +227,7 @@ static bool always_on_power_down_ok(struct dev_pm_domain *domain)
 }
 
 struct dev_power_governor simple_qos_governor = {
-	.stop_ok = default_stop_ok,
+	.suspend_ok = default_suspend_ok,
 	.power_down_ok = default_power_down_ok,
 };
 
@@ -236,5 +236,5 @@ struct dev_power_governor simple_qos_governor = {
  */
 struct dev_power_governor pm_domain_always_on_gov = {
 	.power_down_ok = always_on_power_down_ok,
-	.stop_ok = default_stop_ok,
+	.suspend_ok = default_suspend_ok,
 };
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 6e7c3ccea24b..c81667d4bb60 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -1556,7 +1556,6 @@ int dpm_suspend(pm_message_t state)
 static int device_prepare(struct device *dev, pm_message_t state)
 {
 	int (*callback)(struct device *) = NULL;
-	char *info = NULL;
 	int ret = 0;
 
 	if (dev->power.syscore)
@@ -1579,24 +1578,17 @@ static int device_prepare(struct device *dev, pm_message_t state)
 		goto unlock;
 	}
 
-	if (dev->pm_domain) {
-		info = "preparing power domain ";
+	if (dev->pm_domain)
 		callback = dev->pm_domain->ops.prepare;
-	} else if (dev->type && dev->type->pm) {
-		info = "preparing type ";
+	else if (dev->type && dev->type->pm)
 		callback = dev->type->pm->prepare;
-	} else if (dev->class && dev->class->pm) {
-		info = "preparing class ";
+	else if (dev->class && dev->class->pm)
 		callback = dev->class->pm->prepare;
-	} else if (dev->bus && dev->bus->pm) {
-		info = "preparing bus ";
+	else if (dev->bus && dev->bus->pm)
 		callback = dev->bus->pm->prepare;
-	}
 
-	if (!callback && dev->driver && dev->driver->pm) {
-		info = "preparing driver ";
+	if (!callback && dev->driver && dev->driver->pm)
 		callback = dev->driver->pm->prepare;
-	}
 
 	if (callback)
 		ret = callback(dev);
diff --git a/drivers/base/power/opp/Makefile b/drivers/base/power/opp/Makefile
index 19837ef04d8e..e70ceb406fe9 100644
--- a/drivers/base/power/opp/Makefile
+++ b/drivers/base/power/opp/Makefile
@@ -1,3 +1,4 @@
 ccflags-$(CONFIG_DEBUG_DRIVER)	:= -DDEBUG
 obj-y				+= core.o cpu.o
+obj-$(CONFIG_OF)		+= of.o
 obj-$(CONFIG_DEBUG_FS)		+= debugfs.o
diff --git a/drivers/base/power/opp/core.c b/drivers/base/power/opp/core.c
index d8f4cc22856c..7c04c87738a6 100644
--- a/drivers/base/power/opp/core.c
+++ b/drivers/base/power/opp/core.c
@@ -18,7 +18,6 @@
 #include <linux/err.h>
 #include <linux/slab.h>
 #include <linux/device.h>
-#include <linux/of.h>
 #include <linux/export.h>
 #include <linux/regulator/consumer.h>
 
@@ -29,7 +28,7 @@
 * from here, with each opp_table containing the list of opps it supports in
 * various states of availability.
  */
-static LIST_HEAD(opp_tables);
+LIST_HEAD(opp_tables);
 
 /* Lock to allow exclusive modification to the device and opp lists */
 DEFINE_MUTEX(opp_table_lock);
@@ -53,26 +52,6 @@ static struct opp_device *_find_opp_dev(const struct device *dev,
 	return NULL;
 }
 
-static struct opp_table *_managed_opp(const struct device_node *np)
-{
-	struct opp_table *opp_table;
-
-	list_for_each_entry_rcu(opp_table, &opp_tables, node) {
-		if (opp_table->np == np) {
-			/*
-			 * Multiple devices can point to the same OPP table and
-			 * so will have same node-pointer, np.
-			 *
-			 * But the OPPs will be considered as shared only if the
-			 * OPP table contains a "opp-shared" property.
-			 */
-			return opp_table->shared_opp ? opp_table : NULL;
-		}
-	}
-
-	return NULL;
-}
-
 /**
  * _find_opp_table() - find opp_table struct using device pointer
  * @dev:	device pointer used to lookup OPP table
@@ -757,7 +736,6 @@ static struct opp_table *_add_opp_table(struct device *dev)
 {
 	struct opp_table *opp_table;
 	struct opp_device *opp_dev;
-	struct device_node *np;
 	int ret;
 
 	/* Check for existing table for 'dev' first */
@@ -781,20 +759,7 @@ static struct opp_table *_add_opp_table(struct device *dev)
 		return NULL;
 	}
 
-	/*
-	 * Only required for backward compatibility with v1 bindings, but isn't
-	 * harmful for other cases. And so we do it unconditionally.
-	 */
-	np = of_node_get(dev->of_node);
-	if (np) {
-		u32 val;
-
-		if (!of_property_read_u32(np, "clock-latency", &val))
-			opp_table->clock_latency_ns_max = val;
-		of_property_read_u32(np, "voltage-tolerance",
-				     &opp_table->voltage_tolerance_v1);
-		of_node_put(np);
-	}
+	_of_init_opp_table(opp_table, dev);
 
 	/* Set regulator to a non-NULL error value */
 	opp_table->regulator = ERR_PTR(-ENXIO);
@@ -890,8 +855,8 @@ static void _kfree_opp_rcu(struct rcu_head *head)
 * It is assumed that the caller holds required mutex for an RCU updater
 * strategy.
  */
-static void _opp_remove(struct opp_table *opp_table,
-			struct dev_pm_opp *opp, bool notify)
+void _opp_remove(struct opp_table *opp_table, struct dev_pm_opp *opp,
+		 bool notify)
 {
 	/*
 	 * Notify the changes in the availability of the operable
@@ -952,8 +917,8 @@ unlock:
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_remove);
 
-static struct dev_pm_opp *_allocate_opp(struct device *dev,
-					struct opp_table **opp_table)
+struct dev_pm_opp *_allocate_opp(struct device *dev,
+				 struct opp_table **opp_table)
 {
 	struct dev_pm_opp *opp;
 
@@ -989,8 +954,8 @@ static bool _opp_supported_by_regulators(struct dev_pm_opp *opp,
 	return true;
 }
 
-static int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
-		    struct opp_table *opp_table)
+int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
+	     struct opp_table *opp_table)
 {
 	struct dev_pm_opp *opp;
 	struct list_head *head = &opp_table->opp_list;
@@ -1066,8 +1031,8 @@ static int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 *		Duplicate OPPs (both freq and volt are same) and !opp->available
 * -ENOMEM	Memory allocation failure
  */
-static int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
-		       bool dynamic)
+int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
+		bool dynamic)
 {
 	struct opp_table *opp_table;
 	struct dev_pm_opp *new_opp;
@@ -1112,83 +1077,6 @@ unlock:
 	return ret;
 }
 
-/* TODO: Support multiple regulators */
-static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
-			      struct opp_table *opp_table)
-{
-	u32 microvolt[3] = {0};
-	u32 val;
-	int count, ret;
-	struct property *prop = NULL;
-	char name[NAME_MAX];
-
-	/* Search for "opp-microvolt-<name>" */
-	if (opp_table->prop_name) {
-		snprintf(name, sizeof(name), "opp-microvolt-%s",
-			 opp_table->prop_name);
-		prop = of_find_property(opp->np, name, NULL);
-	}
-
-	if (!prop) {
-		/* Search for "opp-microvolt" */
-		sprintf(name, "opp-microvolt");
-		prop = of_find_property(opp->np, name, NULL);
-
-		/* Missing property isn't a problem, but an invalid entry is */
-		if (!prop)
-			return 0;
-	}
-
-	count = of_property_count_u32_elems(opp->np, name);
-	if (count < 0) {
-		dev_err(dev, "%s: Invalid %s property (%d)\n",
-			__func__, name, count);
-		return count;
-	}
-
-	/* There can be one or three elements here */
-	if (count != 1 && count != 3) {
-		dev_err(dev, "%s: Invalid number of elements in %s property (%d)\n",
-			__func__, name, count);
-		return -EINVAL;
-	}
-
-	ret = of_property_read_u32_array(opp->np, name, microvolt, count);
-	if (ret) {
-		dev_err(dev, "%s: error parsing %s: %d\n", __func__, name, ret);
-		return -EINVAL;
-	}
-
-	opp->u_volt = microvolt[0];
-
-	if (count == 1) {
-		opp->u_volt_min = opp->u_volt;
-		opp->u_volt_max = opp->u_volt;
-	} else {
-		opp->u_volt_min = microvolt[1];
-		opp->u_volt_max = microvolt[2];
-	}
-
-	/* Search for "opp-microamp-<name>" */
-	prop = NULL;
-	if (opp_table->prop_name) {
-		snprintf(name, sizeof(name), "opp-microamp-%s",
-			 opp_table->prop_name);
-		prop = of_find_property(opp->np, name, NULL);
-	}
-
-	if (!prop) {
-		/* Search for "opp-microamp" */
-		sprintf(name, "opp-microamp");
-		prop = of_find_property(opp->np, name, NULL);
-	}
-
-	if (prop && !of_property_read_u32(opp->np, name, &val))
-		opp->u_amp = val;
-
-	return 0;
-}
-
 /**
  * dev_pm_opp_set_supported_hw() - Set supported platforms
  * @dev: Device for which supported-hw has to be set.
@@ -1517,144 +1405,6 @@ unlock:
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_regulator);
 
-static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
-			      struct device_node *np)
-{
-	unsigned int count = opp_table->supported_hw_count;
-	u32 version;
-	int ret;
-
-	if (!opp_table->supported_hw)
-		return true;
-
-	while (count--) {
-		ret = of_property_read_u32_index(np, "opp-supported-hw", count,
-						 &version);
-		if (ret) {
-			dev_warn(dev, "%s: failed to read opp-supported-hw property at index %d: %d\n",
-				 __func__, count, ret);
-			return false;
-		}
-
-		/* Both of these are bitwise masks of the versions */
-		if (!(version & opp_table->supported_hw[count]))
-			return false;
-	}
-
-	return true;
-}
-
-/**
- * _opp_add_static_v2() - Allocate static OPPs (As per 'v2' DT bindings)
- * @dev:	device for which we do this operation
- * @np:		device node
- *
- * This function adds an opp definition to the opp table and returns status. The
- * opp can be controlled using dev_pm_opp_enable/disable functions and may be
- * removed by dev_pm_opp_remove.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
- * Return:
- * 0		On success OR
- *		Duplicate OPPs (both freq and volt are same) and opp->available
- * -EEXIST	Freq are same and volt are different OR
- *		Duplicate OPPs (both freq and volt are same) and !opp->available
- * -ENOMEM	Memory allocation failure
- * -EINVAL	Failed parsing the OPP node
- */
-static int _opp_add_static_v2(struct device *dev, struct device_node *np)
-{
-	struct opp_table *opp_table;
-	struct dev_pm_opp *new_opp;
-	u64 rate;
-	u32 val;
-	int ret;
-
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
-	new_opp = _allocate_opp(dev, &opp_table);
-	if (!new_opp) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
-
-	ret = of_property_read_u64(np, "opp-hz", &rate);
-	if (ret < 0) {
-		dev_err(dev, "%s: opp-hz not found\n", __func__);
-		goto free_opp;
-	}
-
-	/* Check if the OPP supports hardware's hierarchy of versions or not */
-	if (!_opp_is_supported(dev, opp_table, np)) {
-		dev_dbg(dev, "OPP not supported by hardware: %llu\n", rate);
-		goto free_opp;
-	}
-
-	/*
-	 * Rate is defined as an unsigned long in clk API, and so casting
-	 * explicitly to its type. Must be fixed once rate is 64 bit
-	 * guaranteed in clk API.
-	 */
-	new_opp->rate = (unsigned long)rate;
-	new_opp->turbo = of_property_read_bool(np, "turbo-mode");
-
-	new_opp->np = np;
-	new_opp->dynamic = false;
-	new_opp->available = true;
-
-	if (!of_property_read_u32(np, "clock-latency-ns", &val))
-		new_opp->clock_latency_ns = val;
-
-	ret = opp_parse_supplies(new_opp, dev, opp_table);
-	if (ret)
-		goto free_opp;
-
-	ret = _opp_add(dev, new_opp, opp_table);
-	if (ret)
-		goto free_opp;
-
-	/* OPP to select on device suspend */
-	if (of_property_read_bool(np, "opp-suspend")) {
-		if (opp_table->suspend_opp) {
-			dev_warn(dev, "%s: Multiple suspend OPPs found (%lu %lu)\n",
-				 __func__, opp_table->suspend_opp->rate,
-				 new_opp->rate);
-		} else {
-			new_opp->suspend = true;
-			opp_table->suspend_opp = new_opp;
-		}
-	}
-
-	if (new_opp->clock_latency_ns > opp_table->clock_latency_ns_max)
-		opp_table->clock_latency_ns_max = new_opp->clock_latency_ns;
-
-	mutex_unlock(&opp_table_lock);
-
-	pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu\n",
-		 __func__, new_opp->turbo, new_opp->rate, new_opp->u_volt,
-		 new_opp->u_volt_min, new_opp->u_volt_max,
-		 new_opp->clock_latency_ns);
-
-	/*
-	 * Notify the changes in the availability of the operable
-	 * frequency/voltage list.
-	 */
-	srcu_notifier_call_chain(&opp_table->srcu_head, OPP_EVENT_ADD, new_opp);
-	return 0;
-
-free_opp:
-	_opp_remove(opp_table, new_opp, false);
-unlock:
-	mutex_unlock(&opp_table_lock);
-	return ret;
-}
-
 /**
  * dev_pm_opp_add()  - Add an OPP table from a table definitions
  * @dev:	device for which we do this operation
@@ -1842,21 +1592,11 @@ struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_get_notifier);
 
-#ifdef CONFIG_OF
-/**
- * dev_pm_opp_of_remove_table() - Free OPP table entries created from static DT
- *				  entries
- * @dev:	device pointer used to lookup OPP table.
- *
- * Free OPPs created using static entries present in DT.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
+/*
+ * Free OPPs either created using static entries present in DT or even the
+ * dynamically added entries based on remove_all param.
  */
-void dev_pm_opp_of_remove_table(struct device *dev)
+void _dev_pm_opp_remove_table(struct device *dev, bool remove_all)
 {
 	struct opp_table *opp_table;
 	struct dev_pm_opp *opp, *tmp;
@@ -1881,7 +1621,7 @@ void dev_pm_opp_of_remove_table(struct device *dev)
 	if (list_is_singular(&opp_table->dev_list)) {
 		/* Free static OPPs */
 		list_for_each_entry_safe(opp, tmp, &opp_table->opp_list, node) {
-			if (!opp->dynamic)
+			if (remove_all || !opp->dynamic)
 				_opp_remove(opp_table, opp, true);
 		}
 	} else {
@@ -1891,160 +1631,22 @@ void dev_pm_opp_of_remove_table(struct device *dev)
 unlock:
 	mutex_unlock(&opp_table_lock);
 }
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
-
-/* Returns opp descriptor node for a device, caller must do of_node_put() */
-struct device_node *_of_get_opp_desc_node(struct device *dev)
-{
-	/*
-	 * TODO: Support for multiple OPP tables.
-	 *
-	 * There should be only ONE phandle present in "operating-points-v2"
-	 * property.
-	 */
-
-	return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
-}
-
-/* Initializes OPP tables based on new bindings */
-static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
-{
-	struct device_node *np;
-	struct opp_table *opp_table;
-	int ret = 0, count = 0;
-
-	mutex_lock(&opp_table_lock);
-
-	opp_table = _managed_opp(opp_np);
-	if (opp_table) {
-		/* OPPs are already managed */
-		if (!_add_opp_dev(dev, opp_table))
-			ret = -ENOMEM;
-		mutex_unlock(&opp_table_lock);
-		return ret;
-	}
-	mutex_unlock(&opp_table_lock);
-
-	/* We have opp-table node now, iterate over it and add OPPs */
-	for_each_available_child_of_node(opp_np, np) {
-		count++;
-
-		ret = _opp_add_static_v2(dev, np);
-		if (ret) {
-			dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
-				ret);
-			goto free_table;
-		}
-	}
-
-	/* There should be one of more OPP defined */
-	if (WARN_ON(!count))
-		return -ENOENT;
-
-	mutex_lock(&opp_table_lock);
-
-	opp_table = _find_opp_table(dev);
-	if (WARN_ON(IS_ERR(opp_table))) {
-		ret = PTR_ERR(opp_table);
-		mutex_unlock(&opp_table_lock);
-		goto free_table;
-	}
-
-	opp_table->np = opp_np;
-	opp_table->shared_opp = of_property_read_bool(opp_np, "opp-shared");
-
-	mutex_unlock(&opp_table_lock);
-
-	return 0;
-
-free_table:
-	dev_pm_opp_of_remove_table(dev);
-
-	return ret;
-}
-
-/* Initializes OPP tables based on old-deprecated bindings */
-static int _of_add_opp_table_v1(struct device *dev)
-{
-	const struct property *prop;
-	const __be32 *val;
-	int nr;
-
-	prop = of_find_property(dev->of_node, "operating-points", NULL);
-	if (!prop)
-		return -ENODEV;
-	if (!prop->value)
-		return -ENODATA;
-
-	/*
-	 * Each OPP is a set of tuples consisting of frequency and
-	 * voltage like <freq-kHz vol-uV>.
-	 */
-	nr = prop->length / sizeof(u32);
-	if (nr % 2) {
-		dev_err(dev, "%s: Invalid OPP table\n", __func__);
-		return -EINVAL;
-	}
-
-	val = prop->value;
-	while (nr) {
-		unsigned long freq = be32_to_cpup(val++) * 1000;
-		unsigned long volt = be32_to_cpup(val++);
-
-		if (_opp_add_v1(dev, freq, volt, false))
-			dev_warn(dev, "%s: Failed to add OPP %ld\n",
-				 __func__, freq);
-		nr -= 2;
-	}
-
-	return 0;
-}
 
 /**
- * dev_pm_opp_of_add_table() - Initialize opp table from device tree
+ * dev_pm_opp_remove_table() - Free all OPPs associated with the device
  * @dev:	device pointer used to lookup OPP table.
  *
- * Register the initial OPP table with the OPP library for given device.
+ * Free both OPPs created using static entries present in DT and the
+ * dynamically added entries.
  *
 * Locking: The internal opp_table and opp structures are RCU protected.
 * Hence this function indirectly uses RCU updater strategy with mutex locks
 * to keep the integrity of the internal data structures. Callers should ensure
 * that this function is *NOT* called under RCU protection or in contexts where
 * mutex cannot be locked.
- *
- * Return:
- * 0		On success OR
- *		Duplicate OPPs (both freq and volt are same) and opp->available
- * -EEXIST	Freq are same and volt are different OR
- *		Duplicate OPPs (both freq and volt are same) and !opp->available
- * -ENOMEM	Memory allocation failure
- * -ENODEV	when 'operating-points' property is not found or is invalid data
- *		in device node.
- * -ENODATA	when empty 'operating-points' property is found
- * -EINVAL	when invalid entries are found in opp-v2 table
  */
-int dev_pm_opp_of_add_table(struct device *dev)
+void dev_pm_opp_remove_table(struct device *dev)
 {
-	struct device_node *opp_np;
-	int ret;
-
-	/*
-	 * OPPs have two version of bindings now. The older one is deprecated,
-	 * try for the new binding first.
-	 */
-	opp_np = _of_get_opp_desc_node(dev);
-	if (!opp_np) {
-		/*
-		 * Try old-deprecated bindings for backward compatibility with
-		 * older dtbs.
-		 */
-		return _of_add_opp_table_v1(dev);
-	}
-
-	ret = _of_add_opp_table_v2(dev, opp_np);
-	of_node_put(opp_np);
-
-	return ret;
+	_dev_pm_opp_remove_table(dev, true);
 }
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
-#endif
+EXPORT_SYMBOL_GPL(dev_pm_opp_remove_table);
diff --git a/drivers/base/power/opp/cpu.c b/drivers/base/power/opp/cpu.c
index ba2bdbd932ef..83d6e7ba1a34 100644
--- a/drivers/base/power/opp/cpu.c
+++ b/drivers/base/power/opp/cpu.c
@@ -18,7 +18,6 @@
 #include <linux/err.h>
 #include <linux/errno.h>
 #include <linux/export.h>
-#include <linux/of.h>
 #include <linux/slab.h>
 
 #include "opp.h"
@@ -119,8 +118,66 @@ void dev_pm_opp_free_cpufreq_table(struct device *dev,
 EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table);
 #endif	/* CONFIG_CPU_FREQ */
 
-/* Required only for V1 bindings, as v2 can manage it from DT itself */
-int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
+void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of)
+{
+	struct device *cpu_dev;
+	int cpu;
+
+	WARN_ON(cpumask_empty(cpumask));
+
+	for_each_cpu(cpu, cpumask) {
+		cpu_dev = get_cpu_device(cpu);
+		if (!cpu_dev) {
+			pr_err("%s: failed to get cpu%d device\n", __func__,
+			       cpu);
+			continue;
+		}
+
+		if (of)
+			dev_pm_opp_of_remove_table(cpu_dev);
+		else
+			dev_pm_opp_remove_table(cpu_dev);
+	}
+}
+
+/**
+ * dev_pm_opp_cpumask_remove_table() - Removes OPP table for @cpumask
+ * @cpumask:	cpumask for which OPP table needs to be removed
+ *
+ * This removes the OPP tables for CPUs present in the @cpumask.
+ * This should be used to remove all the OPPs entries associated with
+ * the cpus in @cpumask.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+void dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask)
+{
+	_dev_pm_opp_cpumask_remove_table(cpumask, false);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_cpumask_remove_table);
+
+/**
+ * dev_pm_opp_set_sharing_cpus() - Mark OPP table as shared by few CPUs
+ * @cpu_dev:	CPU device for which we do this operation
+ * @cpumask:	cpumask of the CPUs which share the OPP table with @cpu_dev
+ *
+ * This marks OPP table of the @cpu_dev as shared by the CPUs present in
+ * @cpumask.
+ *
+ * Returns -ENODEV if OPP table isn't already present.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
+				const struct cpumask *cpumask)
 {
 	struct opp_device *opp_dev;
 	struct opp_table *opp_table;
@@ -131,7 +188,7 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
 
 	opp_table = _find_opp_table(cpu_dev);
 	if (IS_ERR(opp_table)) {
-		ret = -EINVAL;
+		ret = PTR_ERR(opp_table);
 		goto unlock;
 	}
 
@@ -152,6 +209,9 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
 				__func__, cpu);
 			continue;
 		}
+
+		/* Mark opp-table as multiple CPUs are sharing it now */
+		opp_table->shared_opp = true;
 	}
 unlock:
 	mutex_unlock(&opp_table_lock);
@@ -160,112 +220,47 @@ unlock:
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus);
 
-#ifdef CONFIG_OF
-void dev_pm_opp_of_cpumask_remove_table(cpumask_var_t cpumask)
-{
-	struct device *cpu_dev;
-	int cpu;
-
-	WARN_ON(cpumask_empty(cpumask));
-
-	for_each_cpu(cpu, cpumask) {
-		cpu_dev = get_cpu_device(cpu);
-		if (!cpu_dev) {
-			pr_err("%s: failed to get cpu%d device\n", __func__,
-			       cpu);
-			continue;
-		}
-
-		dev_pm_opp_of_remove_table(cpu_dev);
-	}
-}
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table);
-
-int dev_pm_opp_of_cpumask_add_table(cpumask_var_t cpumask)
-{
-	struct device *cpu_dev;
-	int cpu, ret = 0;
-
-	WARN_ON(cpumask_empty(cpumask));
-
-	for_each_cpu(cpu, cpumask) {
-		cpu_dev = get_cpu_device(cpu);
-		if (!cpu_dev) {
-			pr_err("%s: failed to get cpu%d device\n", __func__,
-			       cpu);
-			continue;
-		}
-
-		ret = dev_pm_opp_of_add_table(cpu_dev);
-		if (ret) {
-			pr_err("%s: couldn't find opp table for cpu:%d, %d\n",
-			       __func__, cpu, ret);
-
-			/* Free all other OPPs */
-			dev_pm_opp_of_cpumask_remove_table(cpumask);
-			break;
-		}
-	}
-
-	return ret;
-}
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
-
-/*
- * Works only for OPP v2 bindings.
+/**
+ * dev_pm_opp_get_sharing_cpus() - Get cpumask of CPUs sharing OPPs with @cpu_dev
+ * @cpu_dev:	CPU device for which we do this operation
+ * @cpumask:	cpumask to update with information of sharing CPUs
+ *
+ * This updates the @cpumask with CPUs that are sharing OPPs with @cpu_dev.
  *
- * Returns -ENOENT if operating-points-v2 bindings aren't supported.
+ * Returns -ENODEV if OPP table isn't already present.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
  */
-int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
+int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
 {
-	struct device_node *np, *tmp_np;
-	struct device *tcpu_dev;
-	int cpu, ret = 0;
-
-	/* Get OPP descriptor node */
-	np = _of_get_opp_desc_node(cpu_dev);
-	if (!np) {
-		dev_dbg(cpu_dev, "%s: Couldn't find cpu_dev node.\n", __func__);
-		return -ENOENT;
-	}
-
-	cpumask_set_cpu(cpu_dev->id, cpumask);
-
-	/* OPPs are shared ? */
-	if (!of_property_read_bool(np, "opp-shared"))
-		goto put_cpu_node;
-
-	for_each_possible_cpu(cpu) {
-		if (cpu == cpu_dev->id)
-			continue;
+	struct opp_device *opp_dev;
+	struct opp_table *opp_table;
+	int ret = 0;
 
-		tcpu_dev = get_cpu_device(cpu);
-		if (!tcpu_dev) {
-			dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
-				__func__, cpu);
-			ret = -ENODEV;
-			goto put_cpu_node;
-		}
+	mutex_lock(&opp_table_lock);
 
-		/* Get OPP descriptor node */
-		tmp_np = _of_get_opp_desc_node(tcpu_dev);
-		if (!tmp_np) {
-			dev_err(tcpu_dev, "%s: Couldn't find tcpu_dev node.\n",
-				__func__);
-			ret = -ENOENT;
-			goto put_cpu_node;
-		}
+	opp_table = _find_opp_table(cpu_dev);
+	if (IS_ERR(opp_table)) {
+		ret = PTR_ERR(opp_table);
+		goto unlock;
+	}
 
-		/* CPUs are sharing opp node */
-		if (np == tmp_np)
-			cpumask_set_cpu(cpu, cpumask);
+	cpumask_clear(cpumask);
 
-		of_node_put(tmp_np);
+	if (opp_table->shared_opp) {
+		list_for_each_entry(opp_dev, &opp_table->dev_list, node)
+			cpumask_set_cpu(opp_dev->dev->id, cpumask);
+	} else {
+		cpumask_set_cpu(cpu_dev->id, cpumask);
 	}
 
-put_cpu_node:
-	of_node_put(np);
+unlock:
+	mutex_unlock(&opp_table_lock);
+
 	return ret;
 }
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_sharing_cpus);
-#endif
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_sharing_cpus);
diff --git a/drivers/base/power/opp/of.c b/drivers/base/power/opp/of.c
new file mode 100644
index 000000000000..94d2010558e3
--- /dev/null
+++ b/drivers/base/power/opp/of.c
@@ -0,0 +1,591 @@
+/*
+ * Generic OPP OF helpers
+ *
+ * Copyright (C) 2009-2010 Texas Instruments Incorporated.
+ *	Nishanth Menon
+ *	Romit Dasgupta
+ *	Kevin Hilman
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/cpu.h>
+#include <linux/errno.h>
+#include <linux/device.h>
+#include <linux/of.h>
+#include <linux/export.h>
+
+#include "opp.h"
+
+static struct opp_table *_managed_opp(const struct device_node *np)
+{
+	struct opp_table *opp_table;
+
+	list_for_each_entry_rcu(opp_table, &opp_tables, node) {
+		if (opp_table->np == np) {
+			/*
+			 * Multiple devices can point to the same OPP table and
+			 * so will have same node-pointer, np.
+			 *
+			 * But the OPPs will be considered as shared only if the
+			 * OPP table contains a "opp-shared" property.
+			 */
+			return opp_table->shared_opp ? opp_table : NULL;
+		}
+	}
+
+	return NULL;
+}
+
+void _of_init_opp_table(struct opp_table *opp_table, struct device *dev)
+{
+	struct device_node *np;
+
+	/*
+	 * Only required for backward compatibility with v1 bindings, but isn't
+	 * harmful for other cases. And so we do it unconditionally.
+	 */
+	np = of_node_get(dev->of_node);
+	if (np) {
+		u32 val;
+
+		if (!of_property_read_u32(np, "clock-latency", &val))
+			opp_table->clock_latency_ns_max = val;
+		of_property_read_u32(np, "voltage-tolerance",
+				     &opp_table->voltage_tolerance_v1);
+		of_node_put(np);
+	}
+}
+
+static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
+			      struct device_node *np)
+{
+	unsigned int count = opp_table->supported_hw_count;
+	u32 version;
+	int ret;
+
+	if (!opp_table->supported_hw)
+		return true;
+
+	while (count--) {
+		ret = of_property_read_u32_index(np, "opp-supported-hw", count,
+						 &version);
+		if (ret) {
+			dev_warn(dev, "%s: failed to read opp-supported-hw property at index %d: %d\n",
+				 __func__, count, ret);
+			return false;
+		}
+
+		/* Both of these are bitwise masks of the versions */
+		if (!(version & opp_table->supported_hw[count]))
+			return false;
+	}
+
+	return true;
+}
+
+/* TODO: Support multiple regulators */
+static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
+			      struct opp_table *opp_table)
+{
+	u32 microvolt[3] = {0};
+	u32 val;
+	int count, ret;
+	struct property *prop = NULL;
+	char name[NAME_MAX];
+
+	/* Search for "opp-microvolt-<name>" */
+	if (opp_table->prop_name) {
+		snprintf(name, sizeof(name), "opp-microvolt-%s",
+			 opp_table->prop_name);
+		prop = of_find_property(opp->np, name, NULL);
+	}
+
+	if (!prop) {
+		/* Search for "opp-microvolt" */
+		sprintf(name, "opp-microvolt");
+		prop = of_find_property(opp->np, name, NULL);
+
+		/* Missing property isn't a problem, but an invalid entry is */
+		if (!prop)
+			return 0;
+	}
+
+	count = of_property_count_u32_elems(opp->np, name);
+	if (count < 0) {
+		dev_err(dev, "%s: Invalid %s property (%d)\n",
+			__func__, name, count);
+		return count;
+	}
+
+	/* There can be one or three elements here */
+	if (count != 1 && count != 3) {
+		dev_err(dev, "%s: Invalid number of elements in %s property (%d)\n",
+			__func__, name, count);
+		return -EINVAL;
+	}
+
+	ret = of_property_read_u32_array(opp->np, name, microvolt, count);
+	if (ret) {
+		dev_err(dev, "%s: error parsing %s: %d\n", __func__, name, ret);
+		return -EINVAL;
+	}
+
+	opp->u_volt = microvolt[0];
+
+	if (count == 1) {
+		opp->u_volt_min = opp->u_volt;
+		opp->u_volt_max = opp->u_volt;
+	} else {
+		opp->u_volt_min = microvolt[1];
+		opp->u_volt_max = microvolt[2];
+	}
+
+	/* Search for "opp-microamp-<name>" */
+	prop = NULL;
+	if (opp_table->prop_name) {
+		snprintf(name, sizeof(name), "opp-microamp-%s",
+			 opp_table->prop_name);
+		prop = of_find_property(opp->np, name, NULL);
+	}
+
+	if (!prop) {
+		/* Search for "opp-microamp" */
+		sprintf(name, "opp-microamp");
+		prop = of_find_property(opp->np, name, NULL);
+	}
+
+	if (prop && !of_property_read_u32(opp->np, name, &val))
+		opp->u_amp = val;
+
+	return 0;
+}
+
+/**
+ * dev_pm_opp_of_remove_table() - Free OPP table entries created from static DT
+ *				  entries
+ * @dev:	device pointer used to lookup OPP table.
+ *
+ * Free OPPs created using static entries present in DT.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function indirectly uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+void dev_pm_opp_of_remove_table(struct device *dev)
+{
+	_dev_pm_opp_remove_table(dev, false);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
+
+/* Returns opp descriptor node for a device, caller must do of_node_put() */
+struct device_node *_of_get_opp_desc_node(struct device *dev)
+{
+	/*
+	 * TODO: Support for multiple OPP tables.
+	 *
+	 * There should be only ONE phandle present in "operating-points-v2"
+	 * property.
+	 */
+
+	return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
+}
+
+/**
+ * _opp_add_static_v2() - Allocate static OPPs (As per 'v2' DT bindings)
+ * @dev:	device for which we do this operation
+ * @np:		device node
+ *
+ * This function adds an opp definition to the opp table and returns status. The
+ * opp can be controlled using dev_pm_opp_enable/disable functions and may be
+ * removed by dev_pm_opp_remove.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ *
+ * Return:
+ * 0		On success OR
+ *		Duplicate OPPs (both freq and volt are same) and opp->available
+ * -EEXIST	Freq are same and volt are different OR
+ *		Duplicate OPPs (both freq and volt are same) and !opp->available
+ * -ENOMEM	Memory allocation failure
+ * -EINVAL	Failed parsing the OPP node
+ */
+static int _opp_add_static_v2(struct device *dev, struct device_node *np)
+{
+	struct opp_table *opp_table;
+	struct dev_pm_opp *new_opp;
+	u64 rate;
+	u32 val;
+	int ret;
+
+	/* Hold our table modification lock here */
+	mutex_lock(&opp_table_lock);
+
+	new_opp = _allocate_opp(dev, &opp_table);
+	if (!new_opp) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	ret = of_property_read_u64(np, "opp-hz", &rate);
+	if (ret < 0) {
+		dev_err(dev, "%s: opp-hz not found\n", __func__);
+		goto free_opp;
+	}
+
+	/* Check if the OPP supports hardware's hierarchy of versions or not */
+	if (!_opp_is_supported(dev, opp_table, np)) {
+		dev_dbg(dev, "OPP not supported by hardware: %llu\n", rate);
+		goto free_opp;
+	}
+
+	/*
+	 * Rate is defined as an unsigned long in clk API, and so casting
+	 * explicitly to its type. Must be fixed once rate is 64 bit
+	 * guaranteed in clk API.
+	 */
+	new_opp->rate = (unsigned long)rate;
+	new_opp->turbo = of_property_read_bool(np, "turbo-mode");
+
+	new_opp->np = np;
+	new_opp->dynamic = false;
+	new_opp->available = true;
+
+	if (!of_property_read_u32(np, "clock-latency-ns", &val))
+		new_opp->clock_latency_ns = val;
+
+	ret = opp_parse_supplies(new_opp, dev, opp_table);
+	if (ret)
+		goto free_opp;
+
+	ret = _opp_add(dev, new_opp, opp_table);
+	if (ret)
+		goto free_opp;
+
+	/* OPP to select on device suspend */
+	if (of_property_read_bool(np, "opp-suspend")) {
+		if (opp_table->suspend_opp) {
+			dev_warn(dev, "%s: Multiple suspend OPPs found (%lu %lu)\n",
+				 __func__, opp_table->suspend_opp->rate,
+				 new_opp->rate);
+		} else {
+			new_opp->suspend = true;
+			opp_table->suspend_opp = new_opp;
+		}
+	}
+
+	if (new_opp->clock_latency_ns > opp_table->clock_latency_ns_max)
+		opp_table->clock_latency_ns_max = new_opp->clock_latency_ns;
+
+	mutex_unlock(&opp_table_lock);
+
+	pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu\n",
+		 __func__, new_opp->turbo, new_opp->rate, new_opp->u_volt,
+		 new_opp->u_volt_min, new_opp->u_volt_max,
+		 new_opp->clock_latency_ns);
+
+	/*
+	 * Notify the changes in the availability of the operable
+	 * frequency/voltage list.
+	 */
+	srcu_notifier_call_chain(&opp_table->srcu_head, OPP_EVENT_ADD, new_opp);
+	return 0;
+
+free_opp:
+	_opp_remove(opp_table, new_opp, false);
+unlock:
+	mutex_unlock(&opp_table_lock);
+	return ret;
+}
+
+/* Initializes OPP tables based on new bindings */
+static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
+{
+	struct device_node *np;
+	struct opp_table *opp_table;
+	int ret = 0, count = 0;
+
+	mutex_lock(&opp_table_lock);
+
+	opp_table = _managed_opp(opp_np);
+	if (opp_table) {
+		/* OPPs are already managed */
+		if (!_add_opp_dev(dev, opp_table))
+			ret = -ENOMEM;
+		mutex_unlock(&opp_table_lock);
+		return ret;
+	}
+	mutex_unlock(&opp_table_lock);
+
+	/* We have opp-table node now, iterate over it and add OPPs */
+	for_each_available_child_of_node(opp_np, np) {
+		count++;
+
+		ret = _opp_add_static_v2(dev, np);
+		if (ret) {
+			dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
+				ret);
+			goto free_table;
+		}
+	}
+
+	/* There should be one of more OPP defined */
+	if (WARN_ON(!count))
+		return -ENOENT;
+
+	mutex_lock(&opp_table_lock);
+
+	opp_table = _find_opp_table(dev);
+	if (WARN_ON(IS_ERR(opp_table))) {
+		ret = PTR_ERR(opp_table);
+		mutex_unlock(&opp_table_lock);
+		goto free_table;
+	}
+
+	opp_table->np = opp_np;
+	opp_table->shared_opp = of_property_read_bool(opp_np, "opp-shared");
+
+	mutex_unlock(&opp_table_lock);
+
+	return 0;
+
+free_table:
+	dev_pm_opp_of_remove_table(dev);
+
+	return ret;
+}
+
+/* Initializes OPP tables based on old-deprecated bindings */
+static int _of_add_opp_table_v1(struct device *dev)
+{
+	const struct property *prop;
+	const __be32 *val;
+	int nr;
+
+	prop = of_find_property(dev->of_node, "operating-points", NULL);
+	if (!prop)
+		return -ENODEV;
+	if (!prop->value)
+		return -ENODATA;
+
+	/*
+	 * Each OPP is a set of tuples consisting of frequency and
+	 * voltage like <freq-kHz vol-uV>.
+	 */
+	nr = prop->length / sizeof(u32);
+	if (nr % 2) {
+		dev_err(dev, "%s: Invalid OPP table\n", __func__);
+		return -EINVAL;
+	}
+
+	val = prop->value;
+	while (nr) {
+		unsigned long freq = be32_to_cpup(val++) * 1000;
+		unsigned long volt = be32_to_cpup(val++);
+
+		if (_opp_add_v1(dev, freq, volt, false))
+			dev_warn(dev, "%s: Failed to add OPP %ld\n",
+				 __func__, freq);
+		nr -= 2;
+	}
+
+	return 0;
+}
+
+/**
+ * dev_pm_opp_of_add_table() - Initialize opp table from device tree
+ * @dev:	device pointer used to lookup OPP table.
+ *
+ * Register the initial OPP table with the OPP library for given device.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function indirectly uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ *
+ * Return:
+ * 0		On success OR
+ *		Duplicate OPPs (both freq and volt are same) and opp->available
+ * -EEXIST	Freq are same and volt are different OR
+ *		Duplicate OPPs (both freq and volt are same) and !opp->available
+ * -ENOMEM	Memory allocation failure
+ * -ENODEV	when 'operating-points' property is not found or is invalid data
+ *		in device node.
+ * -ENODATA	when empty 'operating-points' property is found
+ * -EINVAL	when invalid entries are found in opp-v2 table
+ */
+int dev_pm_opp_of_add_table(struct device *dev)
+{
+	struct device_node *opp_np;
+	int ret;
+
+	/*
+	 * OPPs have two version of bindings now. The older one is deprecated,
+	 * try for the new binding first.
+	 */
+	opp_np = _of_get_opp_desc_node(dev);
+	if (!opp_np) {
+		/*
+		 * Try old-deprecated bindings for backward compatibility with
+		 * older dtbs.
+		 */
+		return _of_add_opp_table_v1(dev);
+	}
+
+	ret = _of_add_opp_table_v2(dev, opp_np);
+	of_node_put(opp_np);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
+
+/* CPU device specific helpers */
+
+/**
+ * dev_pm_opp_of_cpumask_remove_table() - Removes OPP table for @cpumask
+ * @cpumask:	cpumask for which OPP table needs to be removed
+ *
+ * This removes the OPP tables for CPUs present in the @cpumask.
+ * This should be used only to remove static entries created from DT.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+void dev_pm_opp_of_cpumask_remove_table(const struct cpumask *cpumask)
+{
+	_dev_pm_opp_cpumask_remove_table(cpumask, true);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table);
+
+/**
+ * dev_pm_opp_of_cpumask_add_table() - Adds OPP table for @cpumask
+ * @cpumask:	cpumask for which OPP table needs to be added.
+ *
+ * This adds the OPP tables for CPUs present in the @cpumask.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask)
+{
+	struct device *cpu_dev;
+	int cpu, ret = 0;
+
+	WARN_ON(cpumask_empty(cpumask));
+
+	for_each_cpu(cpu, cpumask) {
+		cpu_dev = get_cpu_device(cpu);
+		if (!cpu_dev) {
+			pr_err("%s: failed to get cpu%d device\n", __func__,
+			       cpu);
+			continue;
+		}
+
+		ret = dev_pm_opp_of_add_table(cpu_dev);
+		if (ret) {
+			pr_err("%s: couldn't find opp table for cpu:%d, %d\n",
+			       __func__, cpu, ret);
+
+			/* Free all other OPPs */
+			dev_pm_opp_of_cpumask_remove_table(cpumask);
+			break;
+		}
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
+
+/*
+ * Works only for OPP v2 bindings.
+ *
+ * Returns -ENOENT if operating-points-v2 bindings aren't supported.
+ */
+/**
+ * dev_pm_opp_of_get_sharing_cpus() - Get cpumask of CPUs sharing OPPs with
+ *				      @cpu_dev using operating-points-v2
+ *				      bindings.
+ *
+ * @cpu_dev:	CPU device for which we do this operation
+ * @cpumask:	cpumask to update with information of sharing CPUs
+ *
+ * This updates the @cpumask with CPUs that are sharing OPPs with @cpu_dev.
+ *
+ * Returns -ENOENT if operating-points-v2 isn't present for @cpu_dev.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
+				   struct cpumask *cpumask)
+{
+	struct device_node *np, *tmp_np;
+	struct device *tcpu_dev;
+	int cpu, ret = 0;
+
+	/* Get OPP descriptor node */
+	np = _of_get_opp_desc_node(cpu_dev);
+	if (!np) {
+		dev_dbg(cpu_dev, "%s: Couldn't find cpu_dev node.\n", __func__);
+		return -ENOENT;
+	}
+
+	cpumask_set_cpu(cpu_dev->id, cpumask);
+
+	/* OPPs are shared ? */
+	if (!of_property_read_bool(np, "opp-shared"))
+		goto put_cpu_node;
+
+	for_each_possible_cpu(cpu) {
+		if (cpu == cpu_dev->id)
+			continue;
+
+		tcpu_dev = get_cpu_device(cpu);
+		if (!tcpu_dev) {
+			dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
+				__func__, cpu);
+			ret = -ENODEV;
+			goto put_cpu_node;
+		}
+
+		/* Get OPP descriptor node */
+		tmp_np = _of_get_opp_desc_node(tcpu_dev);
+		if (!tmp_np) {
+			dev_err(tcpu_dev, "%s: Couldn't find tcpu_dev node.\n",
+				__func__);
+			ret = -ENOENT;
+			goto put_cpu_node;
+		}
+
+		/* CPUs are sharing opp node */
+		if (np == tmp_np)
+			cpumask_set_cpu(cpu, cpumask);
+
+		of_node_put(tmp_np);
+	}
+
+put_cpu_node:
+	of_node_put(np);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_sharing_cpus);
diff --git a/drivers/base/power/opp/opp.h b/drivers/base/power/opp/opp.h
index f67f806fcf3a..20f3be22e060 100644
--- a/drivers/base/power/opp/opp.h
+++ b/drivers/base/power/opp/opp.h
@@ -28,6 +28,8 @@ struct regulator;
 /* Lock to allow exclusive modification to the device and opp lists */
 extern struct mutex opp_table_lock;
 
+extern struct list_head opp_tables;
+
 /*
 * Internal data structure organization with the OPP layer library is as
 * follows:
@@ -183,6 +185,18 @@ struct opp_table {
 struct opp_table *_find_opp_table(struct device *dev);
 struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table);
 struct device_node *_of_get_opp_desc_node(struct device *dev);
+void _dev_pm_opp_remove_table(struct device *dev, bool remove_all);
+struct dev_pm_opp *_allocate_opp(struct device *dev, struct opp_table **opp_table);
+int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *opp_table);
+void _opp_remove(struct opp_table *opp_table, struct dev_pm_opp *opp, bool notify);
+int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt, bool dynamic);
+void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of);
+
+#ifdef CONFIG_OF
+void _of_init_opp_table(struct opp_table *opp_table, struct device *dev);
+#else
+static inline void _of_init_opp_table(struct opp_table *opp_table, struct device *dev) {}
+#endif
 
 #ifdef CONFIG_DEBUG_FS
 void opp_debug_remove_one(struct dev_pm_opp *opp);
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 4c7055009bd6..b74690418504 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -1506,11 +1506,16 @@ int pm_runtime_force_resume(struct device *dev)
 		goto out;
 	}
 
-	ret = callback(dev);
+	ret = pm_runtime_set_active(dev);
 	if (ret)
 		goto out;
 
-	pm_runtime_set_active(dev);
+	ret = callback(dev);
+	if (ret) {
+		pm_runtime_set_suspended(dev);
+		goto out;
+	}
+
 	pm_runtime_mark_last_busy(dev);
 out:
 	pm_runtime_enable(dev);