author     Qu Wenruo <wqu@suse.com>           2023-02-17 06:36:59 +0100
committer  David Sterba <dsterba@suse.com>    2023-04-17 18:01:14 +0200
commit     6ded22c1bfe6a8a91216046d5d4c01fd1442988b (patch)
tree       ee6565fe8ceaaea3a6a88c4c31cd52fec90fa446 /fs/btrfs/tree-checker.c
parent     btrfs: replace map_lookup->stripe_len by BTRFS_STRIPE_LEN (diff)
btrfs: reduce div64 calls by limiting the number of stripes of a chunk to u32
There are quite a few div64 calls inside btrfs_map_block() and its
variants. Such calls are for @stripe_nr, where @stripe_nr is the number
of stripes before our logical bytenr inside a chunk.

However we can eliminate those div64 calls by just reducing the width of
@stripe_nr from 64 to 32 bits. This can be done because our chunk size
limit is already 10G, with a fixed stripe length of 64K, so a u32 is
definitely enough to contain the number of stripes.

With this width reduction we can get rid of the slower div64 calls, and
of the extra warning on certain 32-bit arches.

This patch does the following:

- Add a new tree-checker validation on chunk length.
  Make sure no chunk can reach 256G, which can also act as a bitflip
  checker.

- Reduce the width from u64 to u32 for @stripe_nr variables.

- Replace unnecessary div64 calls with regular modulo and division.
  32-bit division and modulo are much faster than 64-bit operations, and
  we are finally free of the div64 fear, at least in the involved
  functions.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
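To illustrate the pattern the commit message describes (the volumes.c hunks are
not part of this page), here is a minimal standalone userspace sketch, not the
actual btrfs_map_block() code. It assumes BTRFS_STRIPE_LEN_SHIFT is 16 (log2 of
the fixed 64K stripe length); chunk_offset and num_stripes are made-up example
values. For scale, with the existing 10G chunk limit a chunk holds at most
10G / 64K = 163,840 stripes, far below U32_MAX, so a u32 stripe number is
plenty.

/*
 * Sketch only -- NOT kernel code.  Shows why narrowing stripe_nr to 32 bits
 * lets plain '%' and '/' replace the div64_u64()/div64_u64_rem() helpers.
 */
#include <stdint.h>
#include <stdio.h>

#define BTRFS_STRIPE_LEN	(64 * 1024ULL)	/* fixed 64K stripe length */
#define BTRFS_STRIPE_LEN_SHIFT	16		/* log2(BTRFS_STRIPE_LEN) */

int main(void)
{
	/* Byte offset of our logical address inside the chunk (< 10G limit). */
	uint64_t chunk_offset = 5ULL * 1024 * 1024 * 1024;
	uint32_t num_stripes = 3;	/* e.g. a chunk striped over 3 devices */

	/*
	 * Old pattern: stripe_nr was u64, so dividing by num_stripes needed
	 * the 64-bit division helpers, which are slower (and can warn) on
	 * 32-bit architectures.
	 *
	 * New pattern: once the tree checker guarantees the chunk length is
	 * below U32_MAX << BTRFS_STRIPE_LEN_SHIFT, the stripe number fits in
	 * a u32 and 32-bit modulo and division are sufficient.
	 */
	uint32_t stripe_nr = chunk_offset >> BTRFS_STRIPE_LEN_SHIFT;
	uint32_t stripe_index = stripe_nr % num_stripes; /* which device stripe */

	stripe_nr /= num_stripes;	/* stripe number within that device */

	printf("stripe_nr=%u stripe_index=%u\n", stripe_nr, stripe_index);
	return 0;
}

The new check added to btrfs_check_chunk_valid() in the diff below is what
makes this u32 narrowing safe at chunk-read time.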
Diffstat (limited to 'fs/btrfs/tree-checker.c')
-rw-r--r--  fs/btrfs/tree-checker.c | 14
1 file changed, 14 insertions(+), 0 deletions(-)
diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
index baad1ed7e111..e2b54793bf0c 100644
--- a/fs/btrfs/tree-checker.c
+++ b/fs/btrfs/tree-checker.c
@@ -849,6 +849,20 @@ int btrfs_check_chunk_valid(struct extent_buffer *leaf,
 			  stripe_len);
 		return -EUCLEAN;
 	}
+	/*
+	 * We artificially limit the chunk size, so that the number of stripes
+	 * inside a chunk can be fit into a U32.  The current limit (256G) is
+	 * way too large for real world usage anyway, and it's also much larger
+	 * than our existing limit (10G).
+	 *
+	 * Thus it should be a good way to catch obvious bitflips.
+	 */
+	if (unlikely(length >= ((u64)U32_MAX << BTRFS_STRIPE_LEN_SHIFT))) {
+		chunk_err(leaf, chunk, logical,
+			  "chunk length too large: have %llu limit %llu",
+			  length, (u64)U32_MAX << BTRFS_STRIPE_LEN_SHIFT);
+		return -EUCLEAN;
+	}
 	if (unlikely(type & ~(BTRFS_BLOCK_GROUP_TYPE_MASK |
 			      BTRFS_BLOCK_GROUP_PROFILE_MASK))) {
 		chunk_err(leaf, chunk, logical,