author     Qu Wenruo <quwenruo@cn.fujitsu.com>   2016-07-20 09:04:18 +0200
committer  Chris Mason <clm@fb.com>              2016-08-25 12:58:16 +0200
commit     d8422ba334f9df16e071bc77707e55fd7f8446ae
tree       cfcef8f2e9b45840f48eec4e0a8fc57c952c5dbc /fs
parent     Btrfs: fix memory leak of reloc_root
btrfs: backref: Fix soft lockup in __merge_refs function
When over 1000 file extents refer to one extent, find_parent_nodes()
becomes obviously slow, due to the O(n^2)~O(n^3) loops inside
__merge_refs().
The following ftrace output shows the growth of execution time:
256 refs
5) + 91.768 us | __add_keyed_refs.isra.12 [btrfs]();
5) 1.447 us | __add_missing_keys.isra.13 [btrfs]();
5) ! 114.544 us | __merge_refs [btrfs]();
5) ! 136.399 us | __merge_refs [btrfs]();
512 refs
6) ! 279.859 us | __add_keyed_refs.isra.12 [btrfs]();
6) 3.164 us | __add_missing_keys.isra.13 [btrfs]();
6) ! 442.498 us | __merge_refs [btrfs]();
6) # 2091.073 us | __merge_refs [btrfs]();
and 1024 refs
7) ! 368.683 us | __add_keyed_refs.isra.12 [btrfs]();
7) 4.810 us | __add_missing_keys.isra.13 [btrfs]();
7) # 2043.428 us | __merge_refs [btrfs]();
7) * 18964.23 us | __merge_refs [btrfs]();
Sorted into the following chart:
(Unit: us)
------------------------------------------------------------------------
Trace function        | 256 refs | 512 refs | 1024 refs |
------------------------------------------------------------------------
__add_keyed_refs      |       91 |      279 |       368 |
__add_missing_keys    |        1 |        3 |         4 |
__merge_refs 1st call |      114 |      442 |      2043 |
__merge_refs 2nd call |      136 |     2091 |     18964 |
------------------------------------------------------------------------
We can see that __add_keyed_refs() grows almost linearly, while
__add_missing_keys() takes little time and barely changes in this case.
The 1st __merge_refs() call, however, shows quadratic growth, and the
2nd call shows cubic growth: doubling the refs from 512 to 1024
multiplies the 1st call's time by 2043/442 ~= 4.6 (close to 2^2) and
the 2nd call's time by 18964/2091 ~= 9.1 (close to 2^3).
There is no doubt that __merge_refs() will take a very long time to
execute if the number of refs keeps growing.
So add a cond_resched() into the loop of __merge_refs().
Although this solves the soft lockup, we still need the new rb_tree
based structure introduced by Lu Fengqi to really address the long
execution time.
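For illustration, here is a minimal userspace sketch of the pairwise
merge pattern and of where the rescheduling point lands. All struct and
function names below are hypothetical stand-ins, not the kernel code;
the real __merge_refs() in fs/btrfs/backref.c walks a list of
preliminary refs and compares roots, parents and keys depending on mode.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the kernel's preliminary ref; the real
 * struct also carries a root id, key and level. */
struct ref {
    unsigned long parent;
    int count;
    struct ref *next;
};

/*
 * Pairwise merge: every ref1 scans all refs after it, so the scan is
 * O(n^2) on its own; when the per-pair work also walks a list, the
 * total cost approaches O(n^3). With ~1000 refs this runs long enough
 * on one CPU to trigger the soft lockup watchdog unless the task
 * reschedules.
 */
static void merge_refs(struct ref *head)
{
    for (struct ref *ref1 = head; ref1; ref1 = ref1->next) {
        struct ref *prev = ref1;

        for (struct ref *ref2 = ref1->next; ref2; ref2 = prev->next) {
            if (ref2->parent == ref1->parent) {
                ref1->count += ref2->count;
                prev->next = ref2->next; /* list_del() in the kernel */
                free(ref2);              /* kmem_cache_free() in the kernel */
                /* The patch adds cond_resched() right here,
                 * yielding the CPU inside the hot loop. */
            } else {
                prev = ref2;
            }
        }
    }
}

int main(void)
{
    struct ref *head = NULL;

    /* Build 8 refs spread over 4 distinct parents. */
    for (int i = 0; i < 8; i++) {
        struct ref *r = malloc(sizeof(*r));

        if (!r)
            return 1;
        r->parent = (unsigned long)(i % 4);
        r->count = 1;
        r->next = head;
        head = r;
    }

    merge_refs(head);

    while (head) {
        struct ref *next = head->next;

        printf("parent=%lu count=%d\n", head->parent, head->count);
        free(head);
        head = next;
    }
    return 0;
}

Compiled with a plain cc and fed a larger list, the quadratic shape of
the merge shows up directly in the run time.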
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Diffstat (limited to 'fs')
 fs/btrfs/backref.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 2b88439c2ee8..455a6b2fd539 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -589,6 +589,7 @@ static void __merge_refs(struct list_head *head, int mode)
 
 			list_del(&ref2->list);
 			kmem_cache_free(btrfs_prelim_ref_cache, ref2);
+			cond_resched();
 		}
 	}