author     Mike Tipton <mdtipton@codeaurora.org>       2021-07-21 19:54:31 +0200
committer  Georgi Djakov <djakov@kernel.org>           2021-07-30 15:50:40 +0200
commit     ce5a595744126be4f1327e29e3c5ae9aac6b38d5 (patch)
tree       1b742c41661390f2e5ba8df41476010e3185bc96 /drivers/interconnect
parent     interconnect: Always call pre_aggregate before aggregate (diff)
interconnect: qcom: icc-rpmh: Ensure floor BW is enforced for all nodes
We currently only enforce BW floors for a subset of nodes in a path.
All BCMs that need updating are queued in the pre_aggregate/aggregate
phase. The first set() commits all queued BCMs and subsequent set()
calls short-circuit without committing anything. Since the floor BW
isn't set in sum_avg/max_peak until set(), some BCMs are committed
before their associated nodes reflect the floor.
Set the floor as each node is being aggregated. This ensures that all
relevant floors are set before the BCMs are committed.
Fixes: 266cd33b5913 ("interconnect: qcom: Ensure that the floor bandwidth value is enforced")
Signed-off-by: Mike Tipton <mdtipton@codeaurora.org>
Link: https://lore.kernel.org/r/20210721175432.2119-4-mdtipton@codeaurora.org
[georgi: Removed unused variable]
Signed-off-by: Georgi Djakov <djakov@kernel.org>
Diffstat (limited to 'drivers/interconnect')
-rw-r--r--  drivers/interconnect/qcom/icc-rpmh.c  12
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c
index bf01d09dba6c..f6fae64861ce 100644
--- a/drivers/interconnect/qcom/icc-rpmh.c
+++ b/drivers/interconnect/qcom/icc-rpmh.c
@@ -57,6 +57,11 @@ int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
 			qn->sum_avg[i] += avg_bw;
 			qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw);
 		}
+
+		if (node->init_avg || node->init_peak) {
+			qn->sum_avg[i] = max_t(u64, qn->sum_avg[i], node->init_avg);
+			qn->max_peak[i] = max_t(u64, qn->max_peak[i], node->init_peak);
+		}
 	}
 
 	*agg_avg += avg_bw;
@@ -79,7 +84,6 @@ EXPORT_SYMBOL_GPL(qcom_icc_aggregate);
 int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 {
 	struct qcom_icc_provider *qp;
-	struct qcom_icc_node *qn;
 	struct icc_node *node;
 
 	if (!src)
@@ -88,12 +92,6 @@ int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 	node = src;
 	qp = to_qcom_provider(node->provider);
 
-	qn = node->data;
-
-	qn->sum_avg[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->sum_avg[QCOM_ICC_BUCKET_AMC],
-						 node->avg_bw);
-	qn->max_peak[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->max_peak[QCOM_ICC_BUCKET_AMC],
-						 node->peak_bw);
-
 	qcom_icc_bcm_voter_commit(qp->voter);
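
For readers unfamiliar with the bcm-voter flow, the sketch below is a simplified,
stand-alone model of what the patch changes, not the driver code itself: struct node,
NUM_BUCKETS, max_u64() and aggregate() are illustrative stand-ins for
struct qcom_icc_node, QCOM_ICC_NUM_BUCKETS, max_t(u64, ...) and qcom_icc_aggregate().
It shows the key point of the fix: the init_avg/init_peak floor is clamped into every
bucket during aggregation, so the values are already in place when the first set()
call commits the queued BCMs.

/*
 * Simplified, user-space model of the fix. Names and structure are
 * illustrative stand-ins for the icc-rpmh internals, not the real API.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_BUCKETS 3	/* stand-in for QCOM_ICC_NUM_BUCKETS */

struct node {
	uint64_t sum_avg[NUM_BUCKETS];
	uint64_t max_peak[NUM_BUCKETS];
	uint32_t init_avg;	/* floor bandwidth requested at init */
	uint32_t init_peak;
};

static uint64_t max_u64(uint64_t a, uint64_t b)
{
	return a > b ? a : b;
}

/* Aggregate one request and clamp every bucket to the floor immediately. */
static void aggregate(struct node *qn, uint32_t tag, uint32_t avg, uint32_t peak)
{
	for (int i = 0; i < NUM_BUCKETS; i++) {
		if (tag & (1u << i)) {
			qn->sum_avg[i] += avg;
			qn->max_peak[i] = max_u64(qn->max_peak[i], peak);
		}

		/* The fix: apply the floor here, before any BCM commit. */
		if (qn->init_avg || qn->init_peak) {
			qn->sum_avg[i] = max_u64(qn->sum_avg[i], qn->init_avg);
			qn->max_peak[i] = max_u64(qn->max_peak[i], qn->init_peak);
		}
	}
}

int main(void)
{
	struct node qn = { .init_avg = 800, .init_peak = 1600 };

	/* A request smaller than the floor still lands at the floor. */
	aggregate(&qn, 0x1, 100, 200);
	printf("bucket 0: avg=%llu peak=%llu\n",
	       (unsigned long long)qn.sum_avg[0],
	       (unsigned long long)qn.max_peak[0]);
	return 0;
}

Before this patch, the corresponding clamp lived in qcom_icc_set() and covered only
the QCOM_ICC_BUCKET_AMC bucket, so nodes whose BCMs were flushed by an earlier node's
set() call never had the floor applied first, which is the ordering problem the
commit message describes.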