author     Tao Su <tao1.su@linux.intel.com>           2023-04-28 06:51:49 +0200
committer  Jens Axboe <axboe@kernel.dk>               2023-04-28 19:23:58 +0200
commit     8176080d59e6d4ff9fc97ae534063073b4f7a715
tree       8e07178019071c8092214cef26c6453cf0ee10c0   /block/blk-cgroup.c
parent     writeback: fix call of incorrect macro
block: Skip destroyed blkg when restart in blkg_destroy_all()
The kernel hangs in blkg_destroy_all() when the total number of blkgs is
greater than BLKG_DESTROY_BATCH_SIZE, because destroyed blkgs are not
removed from blkg_list. The size of blkg_list is therefore unchanged after
destroying a batch of blkgs, and the loop restarts forever.

Since a blkg should stay on the queue list until blkg_free_workfn(), skip
already-destroyed blkgs when restarting a new round. This resolves the
kernel hang while preserving the original intent of the restart.
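To see why the unchanged list size causes the hang, here is a minimal
user-space model of the batched restart loop. The names (destroy_all,
BATCH_SIZE, destroyed[]) are illustrative only and not the kernel code;
the destroyed flag stands in for hlist_unhashed(&blkg->blkcg_node), i.e.
an entry that has been destroyed but is still linked on the queue list.

```c
/*
 * Simplified, user-space sketch of the batched restart in
 * blkg_destroy_all(). Illustrative names only, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define BATCH_SIZE   64    /* stands in for BLKG_DESTROY_BATCH_SIZE */
#define TOTAL_NODES  200   /* more than one batch, so a restart is needed */
#define MAX_RESTARTS 1000  /* guard so the broken variant terminates */

static bool destroyed[TOTAL_NODES];

static void destroy_all(bool skip_destroyed)
{
	int count = BATCH_SIZE;
	int restarts = 0;

restart:
	for (int i = 0; i < TOTAL_NODES; i++) {
		if (skip_destroyed && destroyed[i])
			continue;          /* the fix: ignore already-destroyed entries */

		destroyed[i] = true;       /* "destroy" the entry; it stays on the list */

		/* Drop the lock after each batch and start over from the head. */
		if (!(--count)) {
			count = BATCH_SIZE;
			if (++restarts > MAX_RESTARTS) {
				printf("no forward progress after %d restarts (hang)\n",
				       restarts);
				return;
			}
			goto restart;
		}
	}
	printf("finished after %d restarts\n", restarts);
}

int main(void)
{
	destroy_all(false);                /* pre-fix behaviour: never completes */

	for (int i = 0; i < TOTAL_NODES; i++)
		destroyed[i] = false;
	destroy_all(true);                 /* with the skip: finishes after a few restarts */

	return 0;
}
```

Without the skip, every restart walks the same entries, exhausts the batch
budget again, and never reaches the end of the list; with the skip, each
round makes forward progress past already-destroyed entries, which mirrors
the hlist_unhashed() check added by this patch.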
Reported-by: Xiangfei Ma <xiangfeix.ma@intel.com>
Tested-by: Xiangfei Ma <xiangfeix.ma@intel.com>
Tested-by: Farrah Chen <farrah.chen@intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Fixes: f1c006f1c685 ("blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()")
Suggested-and-reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20230428045149.1310073-1-tao1.su@linux.intel.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-cgroup.c')
-rw-r--r--   block/blk-cgroup.c | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
```diff
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 1c1ebeb51003..0ecb4cce8af2 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -527,6 +527,9 @@ restart:
 	list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
 		struct blkcg *blkcg = blkg->blkcg;
 
+		if (hlist_unhashed(&blkg->blkcg_node))
+			continue;
+
 		spin_lock(&blkcg->lock);
 		blkg_destroy(blkg);
 		spin_unlock(&blkcg->lock);
```