| author | Josef Bacik <josef@toxicpanda.com> | 2020-01-17 15:07:38 +0100 |
|---|---|---|
| committer | David Sterba <dsterba@suse.com> | 2020-01-31 14:01:55 +0100 |
| commit | a7a63acc6575ded6f48ab293e275e8b903325e54 | |
| tree | 1509f3939da6c5787fcd31c1e831f505e8cad4bd /fs | |
| parent | btrfs: Correctly handle empty trees in find_first_clear_extent_bit | |
btrfs: fix force usage in inc_block_group_ro
For some reason we've been translating the do_chunk_alloc flag passed to btrfs_inc_block_group_ro() into the force argument of inc_block_group_ro(), but these are two different things.
force for inc_block_group_ro() is used when we must mark the block group read only no matter what, for example when the underlying chunk is marked read only. In that case we must skip the free space check, because the block group has to become read only regardless of the space situation.
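As a hedged illustration of that semantic, here is a small standalone model (not the kernel code itself; the struct and field names merely mirror those used in fs/btrfs/block-group.c, and model_inc_block_group_ro is a made-up helper) showing how force bypasses the space check:

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified model of the space_info bookkeeping used by the check. */
struct space_info {
	unsigned long long total_bytes;
	unsigned long long bytes_readonly;
};

/*
 * Model of inc_block_group_ro()'s decision after the fix: with force
 * set, the free space check is skipped and the group is marked read
 * only no matter what.
 */
static int model_inc_block_group_ro(struct space_info *sinfo,
				    unsigned long long sinfo_used,
				    unsigned long long num_bytes,
				    bool force)
{
	if (force || (sinfo_used + num_bytes <= sinfo->total_bytes)) {
		sinfo->bytes_readonly += num_bytes;
		return 0;
	}
	return -1; /* would be -ENOSPC in the kernel */
}

int main(void)
{
	struct space_info sinfo = { .total_bytes = 100, .bytes_readonly = 0 };

	/* Not enough room: fails without force, succeeds with force. */
	printf("%d\n", model_inc_block_group_ro(&sinfo, 90, 20, false)); /* -1 */
	printf("%d\n", model_inc_block_group_ro(&sinfo, 90, 20, true));  /* 0 */
	return 0;
}
```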
btrfs_inc_block_group_ro() has a do_chunk_alloc flag that indicates that we need to pre-allocate a chunk before marking the block group read only. This has nothing to do with forcing, and in fact we _always_ want to do the space check in this case, so unconditionally pass false for force here.
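Concretely, the call site then looks like this (condensed from the second hunk of the patch below; the surrounding retry logic is elided):

```c
	/*
	 * In btrfs_inc_block_group_ro(), do_chunk_alloc only decides
	 * whether to pre-allocate a chunk and retry on failure; it
	 * never implies force, so force is always 0 here.
	 */
	ret = inc_block_group_ro(cache, 0);
	if (!do_chunk_alloc)
		goto unlock_out;
```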
Then fix up inc_block_group_ro() to honor force as it is expected and documented to do.
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Diffstat (limited to 'fs')
-rw-r--r-- | fs/btrfs/block-group.c | 4 |
1 file changed, 2 insertions, 2 deletions
```diff
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 14851584e245..c12e91ba7d7a 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1213,7 +1213,7 @@ static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
 	 * Here we make sure if we mark this bg RO, we still have enough
 	 * free space as buffer.
 	 */
-	if (sinfo_used + num_bytes <= sinfo->total_bytes) {
+	if (force || (sinfo_used + num_bytes <= sinfo->total_bytes)) {
 		sinfo->bytes_readonly += num_bytes;
 		cache->ro++;
 		list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
@@ -2225,7 +2225,7 @@ again:
 		}
 	}
 
-	ret = inc_block_group_ro(cache, !do_chunk_alloc);
+	ret = inc_block_group_ro(cache, 0);
 	if (!do_chunk_alloc)
 		goto unlock_out;
 	if (!ret)
```