More power management updates for 5.9-rc1
- Add adaptive voltage scaling (AVS) support to the brcmstb cpufreq
driver and clean it up (Florian Fainelli, Markus Mayer).
- Add a new Tegra cpufreq driver and clean up the existing one (Jon
Hunter, Sumit Gupta).
- Add bandwidth level support to the Qcom cpufreq driver along with
OPP changes (Sibi Sankar).
- Clean up the sti, cpufreq-dt, ap806, CPPC cpufreq drivers (Viresh
Kumar, Lee Jones, Ivan Kokshaysky, Sven Auhagen, Xin Hao).
- Make schedutil the default governor for ARM (Valentin Schneider).
- Fix dependency issues for the imx cpufreq driver (Walter Lozano).
- Clean up cached_resolved_idx handling in the cpufreq core (Viresh
Kumar).
- Fix the intel_pstate driver to use the correct maximum frequency
value when MSR_TURBO_RATIO_LIMIT is 0 (Srinivas Pandruvada).
- Provide kerneldoc comments for multiple runtime PM helpers and
improve the pm_runtime_get_if_active() kerneldoc (Rafael Wysocki).
-----BEGIN PGP SIGNATURE-----
iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAl8tlk4SHHJqd0Byand5
c29ja2kubmV0AAoJEILEb/54YlRxyl8QAIHknuudPrtt1yiZM2dpKCwi1fpdZjWL
GkGNlS4I1AkzMaVnGdsaiJb8ek1aukcl3w3vkj3tMCfPuBLT3P5f5mNagtwPgwdG
ZpNoU+OJUK1zBeVaYH8OL0Vb8dQQjOqk3RUx8MmnHkIRo4EpixxFoEOuo+6eKGZZ
G6KG5u3r+ZrY3nWmoBEVJ8ytM/ovDS0uv/j+uPR5qB1GCGuQJuW4ngA/0CgIBClS
Rk+/r7enmhGPylFp74UJD6S1rVhypLzEAX7JfqfQB3T0918ZTkYFuaQpb7JJqwj7
5rbyZX0xWjVMoypW7JaWDctcywdQ9aslWLHo0rmEdZCKKDDT5bPduXhb+4HAKqFg
j62eCbQzkz7swk1jPDMPuDLFVweAqKEoU2OOram9rGrzevOPvm2t1awNSiLDhmxx
TQL6COs1rbwOPuBT23NUa5jTc7us8xYQh13bI4zKrf1pGxju1s+QwOe7HbnQixuA
TngK62e3fl5qxSaq4yTKITX2y2/SxAIjgqy5FXTC7aU0rob3YVtSjMISZxJmKKN8
vbkkF+ZeGJn7TJycwurq7HgDCbggUopM8upaPj4+BVLZamiHcNEUP4V2/Q2jVQ1s
9/wbMPetycGnM0fd0drtcV0TxVz/cGWAkKfE12lWM8rxUEMqeblVTiOXYaYM2w8V
Dlxm6D4as+gE
=70pc
-----END PGP SIGNATURE-----
Merge tag 'pm-5.9-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull more power management updates from Rafael Wysocki:
"These are mostly ARM cpufreq driver updates plus a cpufreq core
cleanup, an ARM-wide change to make schedutil the default scaling
governor, an intel_pstate driver fix and some runtime PM changes
regarding kerneldoc comments.
Specifics:
- Add adaptive voltage scaling (AVS) support to the brcmstb cpufreq
driver and clean it up (Florian Fainelli, Markus Mayer).
- Add a new Tegra cpufreq driver and clean up the existing one (Jon
Hunter, Sumit Gupta).
- Add bandwidth level support to the Qcom cpufreq driver along with
OPP changes (Sibi Sankar).
- Clean up the sti, cpufreq-dt, ap806, CPPC cpufreq drivers (Viresh
Kumar, Lee Jones, Ivan Kokshaysky, Sven Auhagen, Xin Hao).
- Make schedutil the default governor for ARM (Valentin Schneider).
- Fix dependency issues for the imx cpufreq driver (Walter Lozano).
- Clean up cached_resolved_idx handling in the cpufreq core (Viresh
Kumar).
- Fix the intel_pstate driver to use the correct maximum frequency
value when MSR_TURBO_RATIO_LIMIT is 0 (Srinivas Pandruvada).
- Provide kerneldoc comments for multiple runtime PM helpers and
improve the pm_runtime_get_if_active() kerneldoc (Rafael Wysocki)"
* tag 'pm-5.9-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (22 commits)
cpufreq: intel_pstate: Fix cpuinfo_max_freq when MSR_TURBO_RATIO_LIMIT is 0
PM: runtime: Improve kerneldoc of pm_runtime_get_if_active()
PM: runtime: Add kerneldoc comments to multiple helpers
cpufreq: make schedutil the default for arm and arm64
cpufreq: cached_resolved_idx can not be negative
cpufreq: Add Tegra194 cpufreq driver
dt-bindings: arm: Add NVIDIA Tegra194 CPU Complex binding
cpufreq: imx: Select NVMEM_IMX_OCOTP
cpufreq: sti-cpufreq: Fix some formatting and misspelling issues
cpufreq: tegra186: Simplify probe return path
cpufreq: CPPC: Reuse caps variable in few routines
cpufreq: ap806: fix cpufreq driver needs ap cpu clk
cpufreq: cppc: Reorder code and remove apply_hisi_workaround variable
cpufreq: dt: fix oops on armada37xx
cpufreq: brcmstb-avs-cpufreq: send S2_ENTER / S2_EXIT commands to AVS
cpufreq: brcmstb-avs-cpufreq: Support polling AVS firmware
cpufreq: brcmstb-avs-cpufreq: more flexible interface for __issue_avs_command()
cpufreq: qcom: Disable fast switch when scaling DDR/L3
cpufreq: qcom: Update the bandwidth levels on frequency change
OPP: Add and export helper to set bandwidth
...
This commit is contained in commit f6235eb189.
19 changed files with 963 additions and 114 deletions.
include/linux/cpufreq.h
@@ -127,7 +127,7 @@ struct cpufreq_policy {

 	/* Cached frequency lookup from cpufreq_driver_resolve_freq. */
 	unsigned int cached_target_freq;
-	int cached_resolved_idx;
+	unsigned int cached_resolved_idx;

 	/* Synchronization for frequency transitions */
 	bool transition_ongoing;	/* Tracks transition status */
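The type change above reflects the core cleanup "cached_resolved_idx can not be negative": the field is only ever used as an index into the driver's frequency table, so it is never negative. A minimal user-space sketch of the same caching pattern (all names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative mirror of the cpufreq cache: resolve a target frequency
 * to a table index once, then reuse the cached index while the target
 * frequency is unchanged. */
struct freq_policy {
	const unsigned int *freq_table;   /* ascending, in kHz */
	size_t len;
	unsigned int cached_target_freq;  /* last requested frequency */
	unsigned int cached_resolved_idx; /* an index, never negative */
};

/* Pick the lowest table entry >= target, clamping to the top entry. */
static unsigned int resolve_freq(struct freq_policy *p, unsigned int target)
{
	if (target == p->cached_target_freq)
		return p->freq_table[p->cached_resolved_idx]; /* cache hit */

	size_t i = 0;
	while (i + 1 < p->len && p->freq_table[i] < target)
		i++;
	p->cached_target_freq = target;
	p->cached_resolved_idx = (unsigned int)i;
	return p->freq_table[i];
}
```

Because the cached value is always a valid index, an unsigned type documents the invariant and avoids a pointless signed/unsigned comparison at every lookup.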
include/linux/pm_opp.h
@@ -152,6 +152,7 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names
 void dev_pm_opp_detach_genpd(struct opp_table *opp_table);
 int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, struct opp_table *dst_table, unsigned int pstate);
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
+int dev_pm_opp_set_bw(struct device *dev, struct dev_pm_opp *opp);
 int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask);
 int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
 void dev_pm_opp_remove_table(struct device *dev);
@@ -343,6 +344,11 @@ static inline int dev_pm_opp_set_rate(struct device *dev, unsigned long target_f
 	return -ENOTSUPP;
 }

+static inline int dev_pm_opp_set_bw(struct device *dev, struct dev_pm_opp *opp)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask)
 {
 	return -ENOTSUPP;
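The stub added above follows the usual kernel header pattern: when the feature is compiled out, an inline stub fails with an "operation not supported" code, so callers can call it unconditionally instead of wrapping every call site in #ifdefs. A self-contained user-space sketch of the same pattern (the config macro and function name are invented for illustration):

```c
#include <assert.h>
#include <errno.h>

/* Illustrative stand-in for a kernel config option; leave it undefined
 * to get the compiled-out stub below. */
/* #define CONFIG_FEATURE_BW 1 */

#ifdef CONFIG_FEATURE_BW
int feature_set_bw(int handle);	/* real implementation lives elsewhere */
#else
/* Compiled-out stub: the caller handles one error code instead of
 * sprinkling preprocessor conditionals at every call site. */
static inline int feature_set_bw(int handle)
{
	(void)handle;
	return -EOPNOTSUPP;
}
#endif
```

The negative-errno return convention lets callers treat the "feature absent" case the same way as any other runtime failure.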
include/linux/pm_runtime.h
@@ -60,58 +60,151 @@ extern void pm_runtime_put_suppliers(struct device *dev);
 extern void pm_runtime_new_link(struct device *dev);
 extern void pm_runtime_drop_link(struct device *dev);

+/**
+ * pm_runtime_get_if_in_use - Conditionally bump up runtime PM usage counter.
+ * @dev: Target device.
+ *
+ * Increment the runtime PM usage counter of @dev if its runtime PM status is
+ * %RPM_ACTIVE and its runtime PM usage counter is greater than 0.
+ */
 static inline int pm_runtime_get_if_in_use(struct device *dev)
 {
 	return pm_runtime_get_if_active(dev, false);
 }

+/**
+ * pm_suspend_ignore_children - Set runtime PM behavior regarding children.
+ * @dev: Target device.
+ * @enable: Whether or not to ignore possible dependencies on children.
+ *
+ * The dependencies of @dev on its children will not be taken into account by
+ * the runtime PM framework going forward if @enable is %true, or they will
+ * be taken into account otherwise.
+ */
 static inline void pm_suspend_ignore_children(struct device *dev, bool enable)
 {
 	dev->power.ignore_children = enable;
 }

+/**
+ * pm_runtime_get_noresume - Bump up runtime PM usage counter of a device.
+ * @dev: Target device.
+ */
 static inline void pm_runtime_get_noresume(struct device *dev)
 {
 	atomic_inc(&dev->power.usage_count);
 }

+/**
+ * pm_runtime_put_noidle - Drop runtime PM usage counter of a device.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev unless it is 0 already.
+ */
 static inline void pm_runtime_put_noidle(struct device *dev)
 {
 	atomic_add_unless(&dev->power.usage_count, -1, 0);
 }

+/**
+ * pm_runtime_suspended - Check whether or not a device is runtime-suspended.
+ * @dev: Target device.
+ *
+ * Return %true if runtime PM is enabled for @dev and its runtime PM status is
+ * %RPM_SUSPENDED, or %false otherwise.
+ *
+ * Note that the return value of this function can only be trusted if it is
+ * called under the runtime PM lock of @dev or under conditions in which
+ * runtime PM cannot be either disabled or enabled for @dev and its runtime PM
+ * status cannot change.
+ */
 static inline bool pm_runtime_suspended(struct device *dev)
 {
 	return dev->power.runtime_status == RPM_SUSPENDED
 		&& !dev->power.disable_depth;
 }

+/**
+ * pm_runtime_active - Check whether or not a device is runtime-active.
+ * @dev: Target device.
+ *
+ * Return %true if runtime PM is enabled for @dev and its runtime PM status is
+ * %RPM_ACTIVE, or %false otherwise.
+ *
+ * Note that the return value of this function can only be trusted if it is
+ * called under the runtime PM lock of @dev or under conditions in which
+ * runtime PM cannot be either disabled or enabled for @dev and its runtime PM
+ * status cannot change.
+ */
 static inline bool pm_runtime_active(struct device *dev)
 {
 	return dev->power.runtime_status == RPM_ACTIVE
 		|| dev->power.disable_depth;
 }

+/**
+ * pm_runtime_status_suspended - Check if runtime PM status is "suspended".
+ * @dev: Target device.
+ *
+ * Return %true if the runtime PM status of @dev is %RPM_SUSPENDED, or %false
+ * otherwise, regardless of whether or not runtime PM has been enabled for @dev.
+ *
+ * Note that the return value of this function can only be trusted if it is
+ * called under the runtime PM lock of @dev or under conditions in which the
+ * runtime PM status of @dev cannot change.
+ */
 static inline bool pm_runtime_status_suspended(struct device *dev)
 {
 	return dev->power.runtime_status == RPM_SUSPENDED;
 }

+/**
+ * pm_runtime_enabled - Check if runtime PM is enabled.
+ * @dev: Target device.
+ *
+ * Return %true if runtime PM is enabled for @dev or %false otherwise.
+ *
+ * Note that the return value of this function can only be trusted if it is
+ * called under the runtime PM lock of @dev or under conditions in which
+ * runtime PM cannot be either disabled or enabled for @dev.
+ */
 static inline bool pm_runtime_enabled(struct device *dev)
 {
 	return !dev->power.disable_depth;
 }

+/**
+ * pm_runtime_has_no_callbacks - Check if runtime PM callbacks may be present.
+ * @dev: Target device.
+ *
+ * Return %true if @dev is a special device without runtime PM callbacks or
+ * %false otherwise.
+ */
 static inline bool pm_runtime_has_no_callbacks(struct device *dev)
 {
 	return dev->power.no_callbacks;
 }

+/**
+ * pm_runtime_mark_last_busy - Update the last access time of a device.
+ * @dev: Target device.
+ *
+ * Update the last access time of @dev used by the runtime PM autosuspend
+ * mechanism to the current time as returned by ktime_get_mono_fast_ns().
+ */
 static inline void pm_runtime_mark_last_busy(struct device *dev)
 {
 	WRITE_ONCE(dev->power.last_busy, ktime_get_mono_fast_ns());
 }

+/**
+ * pm_runtime_is_irq_safe - Check if runtime PM can work in interrupt context.
+ * @dev: Target device.
+ *
+ * Return %true if @dev has been marked as an "IRQ-safe" device (with respect
+ * to runtime PM), in which case its runtime PM callbacks can be expected to
+ * work correctly when invoked from interrupt handlers.
+ */
 static inline bool pm_runtime_is_irq_safe(struct device *dev)
 {
 	return dev->power.irq_safe;
 }
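The get_noresume/put_noidle pair above adjusts the usage counter without triggering any device state change, and the put side uses atomic_add_unless() so unbalanced puts can never drive the counter below zero. A user-space sketch of that counter discipline using C11 atomics (names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int usage_count = 0;

/* Like pm_runtime_get_noresume(): unconditional increment. */
static void get_noresume(void)
{
	atomic_fetch_add(&usage_count, 1);
}

/* Like pm_runtime_put_noidle(): decrement unless the counter is
 * already 0, mirroring atomic_add_unless(&count, -1, 0) in the kernel.
 * Returns true if a decrement actually happened. */
static bool put_noidle(void)
{
	int old = atomic_load(&usage_count);

	while (old != 0) {
		/* On failure, old is reloaded and the 0 check is redone. */
		if (atomic_compare_exchange_weak(&usage_count, &old, old - 1))
			return true;
	}
	return false;
}
```

The compare-exchange loop is what makes "decrement unless zero" atomic: a plain load-check-store would race with a concurrent put and could underflow.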
@@ -191,97 +284,250 @@ static inline void pm_runtime_drop_link(struct device *dev) {}

 #endif /* !CONFIG_PM */

+/**
+ * pm_runtime_idle - Conditionally set up autosuspend of a device or suspend it.
+ * @dev: Target device.
+ *
+ * Invoke the "idle check" callback of @dev and, depending on its return value,
+ * set up autosuspend of @dev or suspend it (depending on whether or not
+ * autosuspend has been enabled for it).
+ */
 static inline int pm_runtime_idle(struct device *dev)
 {
 	return __pm_runtime_idle(dev, 0);
 }

+/**
+ * pm_runtime_suspend - Suspend a device synchronously.
+ * @dev: Target device.
+ */
 static inline int pm_runtime_suspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev, 0);
 }

+/**
+ * pm_runtime_autosuspend - Set up autosuspend of a device or suspend it.
+ * @dev: Target device.
+ *
+ * Set up autosuspend of @dev or suspend it (depending on whether or not
+ * autosuspend is enabled for it) without engaging its "idle check" callback.
+ */
 static inline int pm_runtime_autosuspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev, RPM_AUTO);
 }

+/**
+ * pm_runtime_resume - Resume a device synchronously.
+ * @dev: Target device.
+ */
 static inline int pm_runtime_resume(struct device *dev)
 {
 	return __pm_runtime_resume(dev, 0);
 }

+/**
+ * pm_request_idle - Queue up "idle check" execution for a device.
+ * @dev: Target device.
+ *
+ * Queue up a work item to run an equivalent of pm_runtime_idle() for @dev
+ * asynchronously.
+ */
 static inline int pm_request_idle(struct device *dev)
 {
 	return __pm_runtime_idle(dev, RPM_ASYNC);
 }

+/**
+ * pm_request_resume - Queue up runtime-resume of a device.
+ * @dev: Target device.
+ */
 static inline int pm_request_resume(struct device *dev)
 {
 	return __pm_runtime_resume(dev, RPM_ASYNC);
 }

+/**
+ * pm_request_autosuspend - Queue up autosuspend of a device.
+ * @dev: Target device.
+ *
+ * Queue up a work item to run an equivalent pm_runtime_autosuspend() for @dev
+ * asynchronously.
+ */
 static inline int pm_request_autosuspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev, RPM_ASYNC | RPM_AUTO);
 }

+/**
+ * pm_runtime_get - Bump up usage counter and queue up resume of a device.
+ * @dev: Target device.
+ *
+ * Bump up the runtime PM usage counter of @dev and queue up a work item to
+ * carry out runtime-resume of it.
+ */
 static inline int pm_runtime_get(struct device *dev)
 {
 	return __pm_runtime_resume(dev, RPM_GET_PUT | RPM_ASYNC);
 }

+/**
+ * pm_runtime_get_sync - Bump up usage counter of a device and resume it.
+ * @dev: Target device.
+ *
+ * Bump up the runtime PM usage counter of @dev and carry out runtime-resume of
+ * it synchronously.
+ *
+ * The possible return values of this function are the same as for
+ * pm_runtime_resume() and the runtime PM usage counter of @dev remains
+ * incremented in all cases, even if it returns an error code.
+ */
 static inline int pm_runtime_get_sync(struct device *dev)
 {
 	return __pm_runtime_resume(dev, RPM_GET_PUT);
 }

+/**
+ * pm_runtime_put - Drop device usage counter and queue up "idle check" if 0.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev and if it turns out to be
+ * equal to 0, queue up a work item for @dev like in pm_request_idle().
+ */
 static inline int pm_runtime_put(struct device *dev)
 {
 	return __pm_runtime_idle(dev, RPM_GET_PUT | RPM_ASYNC);
 }

+/**
+ * pm_runtime_put_autosuspend - Drop device usage counter and queue autosuspend if 0.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev and if it turns out to be
+ * equal to 0, queue up a work item for @dev like in pm_request_autosuspend().
+ */
 static inline int pm_runtime_put_autosuspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev,
 	    RPM_GET_PUT | RPM_ASYNC | RPM_AUTO);
 }

+/**
+ * pm_runtime_put_sync - Drop device usage counter and run "idle check" if 0.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev and if it turns out to be
+ * equal to 0, invoke the "idle check" callback of @dev and, depending on its
+ * return value, set up autosuspend of @dev or suspend it (depending on whether
+ * or not autosuspend has been enabled for it).
+ *
+ * The possible return values of this function are the same as for
+ * pm_runtime_idle() and the runtime PM usage counter of @dev remains
+ * decremented in all cases, even if it returns an error code.
+ */
 static inline int pm_runtime_put_sync(struct device *dev)
 {
 	return __pm_runtime_idle(dev, RPM_GET_PUT);
 }

+/**
+ * pm_runtime_put_sync_suspend - Drop device usage counter and suspend if 0.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev and if it turns out to be
+ * equal to 0, carry out runtime-suspend of @dev synchronously.
+ *
+ * The possible return values of this function are the same as for
+ * pm_runtime_suspend() and the runtime PM usage counter of @dev remains
+ * decremented in all cases, even if it returns an error code.
+ */
 static inline int pm_runtime_put_sync_suspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev, RPM_GET_PUT);
 }

+/**
+ * pm_runtime_put_sync_autosuspend - Drop device usage counter and autosuspend if 0.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev and if it turns out to be
+ * equal to 0, set up autosuspend of @dev or suspend it synchronously (depending
+ * on whether or not autosuspend has been enabled for it).
+ *
+ * The possible return values of this function are the same as for
+ * pm_runtime_autosuspend() and the runtime PM usage counter of @dev remains
+ * decremented in all cases, even if it returns an error code.
+ */
 static inline int pm_runtime_put_sync_autosuspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev, RPM_GET_PUT | RPM_AUTO);
 }

+/**
+ * pm_runtime_set_active - Set runtime PM status to "active".
+ * @dev: Target device.
+ *
+ * Set the runtime PM status of @dev to %RPM_ACTIVE and ensure that dependencies
+ * of it will be taken into account.
+ *
+ * It is not valid to call this function for devices with runtime PM enabled.
+ */
 static inline int pm_runtime_set_active(struct device *dev)
 {
 	return __pm_runtime_set_status(dev, RPM_ACTIVE);
 }

+/**
+ * pm_runtime_set_suspended - Set runtime PM status to "suspended".
+ * @dev: Target device.
+ *
+ * Set the runtime PM status of @dev to %RPM_SUSPENDED and ensure that
+ * dependencies of it will be taken into account.
+ *
+ * It is not valid to call this function for devices with runtime PM enabled.
+ */
 static inline int pm_runtime_set_suspended(struct device *dev)
 {
 	return __pm_runtime_set_status(dev, RPM_SUSPENDED);
 }

+/**
+ * pm_runtime_disable - Disable runtime PM for a device.
+ * @dev: Target device.
+ *
+ * Prevent the runtime PM framework from working with @dev (by incrementing its
+ * "blocking" counter).
+ *
+ * For each invocation of this function for @dev there must be a matching
+ * pm_runtime_enable() call in order for runtime PM to be enabled for it.
+ */
 static inline void pm_runtime_disable(struct device *dev)
 {
 	__pm_runtime_disable(dev, true);
 }

+/**
+ * pm_runtime_use_autosuspend - Allow autosuspend to be used for a device.
+ * @dev: Target device.
+ *
+ * Allow the runtime PM autosuspend mechanism to be used for @dev whenever
+ * requested (or "autosuspend" will be handled as direct runtime-suspend for
+ * it).
+ */
 static inline void pm_runtime_use_autosuspend(struct device *dev)
 {
 	__pm_runtime_use_autosuspend(dev, true);
 }

+/**
+ * pm_runtime_dont_use_autosuspend - Prevent autosuspend from being used.
+ * @dev: Target device.
+ *
+ * Prevent the runtime PM autosuspend mechanism from being used for @dev which
+ * means that "autosuspend" will be handled as direct runtime-suspend for it
+ * going forward.
+ */
 static inline void pm_runtime_dont_use_autosuspend(struct device *dev)
 {
 	__pm_runtime_use_autosuspend(dev, false);
 }
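All of the helpers above are one-line wrappers around a small set of workers (__pm_runtime_idle(), __pm_runtime_suspend(), __pm_runtime_resume()) and differ only in the flag combination they pass. A compressed user-space sketch of that flag-composition design (function names are illustrative and the flag values stand in for the pattern, not the kernel's actual RPM_* definitions):

```c
#include <assert.h>

/* Flag bits mirroring the RPM_* pattern (values illustrative). */
#define RPM_ASYNC   0x01	/* queue the request instead of running it now */
#define RPM_GET_PUT 0x04	/* adjust the usage counter as part of the call */
#define RPM_AUTO    0x08	/* honor the autosuspend delay, if enabled */

/* One worker interprets the flags; every public helper just composes
 * them, which keeps all the state-machine logic in a single place. */
static int last_flags;

static int worker_suspend(int flags)
{
	last_flags = flags;	/* record what the wrapper asked for */
	return 0;
}

static inline int runtime_autosuspend(void)
{
	return worker_suspend(RPM_AUTO);
}

static inline int request_autosuspend(void)	/* async variant */
{
	return worker_suspend(RPM_ASYNC | RPM_AUTO);
}

static inline int put_autosuspend(void)		/* counter + async */
{
	return worker_suspend(RPM_GET_PUT | RPM_ASYNC | RPM_AUTO);
}

static inline int put_sync_autosuspend(void)	/* counter, synchronous */
{
	return worker_suspend(RPM_GET_PUT | RPM_AUTO);
}
```

The payoff of this design is that each sync/async and get/put variant is a trivially verifiable wrapper, while the actual suspend/resume logic lives in one worker per direction.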