author     Josef Bacik <jbacik@fb.com>  2014-01-23 16:54:11 +0100
committer  Chris Mason <clm@fb.com>     2014-01-28 22:20:26 +0100
commit     0a2b2a844af616addc87cac3cc18dcaba2a9d0fb (patch)
tree       d81e13b3388df4a66e3a2af6ff2df82f532d5c9e /fs/btrfs/disk-io.c
parent     Btrfs: attach delayed ref updates to delayed ref heads (diff)
download   linux-0a2b2a844af616addc87cac3cc18dcaba2a9d0fb.tar.xz
           linux-0a2b2a844af616addc87cac3cc18dcaba2a9d0fb.zip
Btrfs: throttle delayed refs better
On one of our Gluster clusters we noticed some pretty big lag spikes, which turned out to be transaction commits taking around 3 minutes to complete. We have around 30 GiB of metadata, so our global reserve ends up at its maximum of 512 MiB, and with that the throttling code allows a ridiculous number of delayed refs to build up. They then all get run at transaction commit time, which on a cold-mounted filesystem can take up to 3 minutes.

Fix the throttling to be based on both the size of the global reserve and on how long it takes us to run delayed refs. This patch tracks the time it takes to run delayed refs and then only allows one second's worth of outstanding delayed refs at a time. This way it auto-tunes itself from a cold cache up to the point where everything is in memory and no longer has to go to disk, and our transaction commits take much less time to run. Thanks,

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
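
The idea can be sketched in plain C (a user-space, illustrative sketch only, not the kernel code from this patch): keep a weighted moving average of how long a delayed ref takes to run, and throttle once the queued refs represent more than roughly one second of work. The function names below (sketch_update_avg_runtime, sketch_refs_budget, sketch_should_throttle) are hypothetical; the only value lifted from the diff is the NSEC_PER_SEC / 64 seed for the average.

/*
 * Illustrative sketch: model "one second's worth of delayed refs" with a
 * weighted moving average of per-ref runtime.  Hypothetical names; only
 * the NSEC_PER_SEC / 64 seed comes from the diff below.
 */
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

static uint64_t avg_delayed_ref_runtime = NSEC_PER_SEC / 64; /* initial guess */

/* Fold the measured runtime for a batch of @count refs into the average. */
static void sketch_update_avg_runtime(uint64_t runtime_ns, uint64_t count)
{
	uint64_t per_ref = count ? runtime_ns / count : runtime_ns;

	/* weighted average: three parts history, one part new sample */
	avg_delayed_ref_runtime = (avg_delayed_ref_runtime * 3 + per_ref) / 4;
}

/* How many refs fit into a one-second budget at the current average rate? */
static uint64_t sketch_refs_budget(void)
{
	uint64_t avg = avg_delayed_ref_runtime ? avg_delayed_ref_runtime : 1;

	return NSEC_PER_SEC / avg;
}

/* Throttle once more refs are queued than one second of work allows. */
static int sketch_should_throttle(uint64_t outstanding_refs)
{
	return outstanding_refs > sketch_refs_budget();
}

int main(void)
{
	/* Pretend a run of 1000 refs took 50 ms, i.e. 50 us per ref. */
	sketch_update_avg_runtime(50ULL * 1000 * 1000, 1000);

	printf("budget: %llu refs/s, throttle 100000 queued refs: %d\n",
	       (unsigned long long)sketch_refs_budget(),
	       sketch_should_throttle(100000));
	return 0;
}

Seeding the average at NSEC_PER_SEC / 64 (about 15.6 ms) simply gives the throttle a sane starting point until measurements from real delayed ref runs replace it.
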
Diffstat (limited to 'fs/btrfs/disk-io.c')
-rw-r--r--  fs/btrfs/disk-io.c |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index ed23127a4b02..f0e7bbe14823 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2185,7 +2185,7 @@ int open_ctree(struct super_block *sb,
fs_info->free_chunk_space = 0;
fs_info->tree_mod_log = RB_ROOT;
fs_info->commit_interval = BTRFS_DEFAULT_COMMIT_INTERVAL;
-
+ fs_info->avg_delayed_ref_runtime = div64_u64(NSEC_PER_SEC, 64);
/* readahead state */
INIT_RADIX_TREE(&fs_info->reada_tree, GFP_NOFS & ~__GFP_WAIT);
spin_lock_init(&fs_info->reada_lock);