| author | Tejun Heo <tj@kernel.org> | 2020-09-14 17:05:13 +0200 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2020-09-15 01:25:39 +0200 |
| commit | aa67db24b6761277d731cc3434420283479927ca | |
| tree | d0f2619c33c90a1f333b3d67e65a4f23d133cf91 /block/blk-iocost.c | |
| parent | blk-iocost: fix divide-by-zero in transfer_surpluses() | |
iocost: fix infinite loop bug in adjust_inuse_and_calc_cost()
adjust_inuse_and_calc_cost() is responsible for dynamically reducing the
amount of donated weight mid-period as the budget runs low. Because we don't
want to do a full donation calculation in-period, we keep latching up inuse
by INUSE_ADJ_STEP_PCT of the cgroup's active weight until the resulting
hweight_inuse is satisfactory.
Unfortunately, the adj_step calculation was reading the active weight before
acquiring ioc->lock. Because the current thread could have lost the race to
activate the iocg to another thread before entering this function, it may
read the active weight as zero before acquiring ioc->lock. When this
happens, adj_step is calculated as zero and the incremental adjustment loop
becomes an infinite one.
Fix it by fetching the active weight after acquiring ioc->lock.
Fixes: b0853ab4a238 ("blk-iocost: revamp in-period donation snapbacks")
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-iocost.c')
-rw-r--r-- block/blk-iocost.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 6e29b4dcf356..ef9476fca1d8 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -2323,9 +2323,8 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
 {
 	struct ioc *ioc = iocg->ioc;
 	struct ioc_margins *margins = &ioc->margins;
-	u32 adj_step = DIV_ROUND_UP(iocg->active * INUSE_ADJ_STEP_PCT, 100);
 	u32 __maybe_unused old_inuse = iocg->inuse, __maybe_unused old_hwi;
-	u32 hwi;
+	u32 hwi, adj_step;
 	s64 margin;
 	u64 cost, new_inuse;
 
@@ -2354,8 +2353,15 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
 		return cost;
 	}
 
-	/* bump up inuse till @abs_cost fits in the existing budget */
+	/*
+	 * Bump up inuse till @abs_cost fits in the existing budget.
+	 * adj_step must be determined after acquiring ioc->lock - we might
+	 * have raced and lost to another thread for activation and could
+	 * be reading 0 iocg->active before ioc->lock which will lead to
+	 * infinite loop.
+	 */
 	new_inuse = iocg->inuse;
+	adj_step = DIV_ROUND_UP(iocg->active * INUSE_ADJ_STEP_PCT, 100);
 	do {
 		new_inuse = new_inuse + adj_step;
 		propagate_weights(iocg, iocg->active, new_inuse, true, now);