author | Chao Yu <chao2.yu@samsung.com> | 2015-10-05 16:19:24 +0200
---|---|---
committer | Jaegeuk Kim <jaegeuk@kernel.org> | 2015-10-10 01:20:55 +0200
commit | a43f7ec327b0d86cbb80d0841673038c0706e714 (patch)
tree | dd4c3359f53c5c5a5bb86d4455a7ac718f676f54 /fs/f2fs/gc.c
parent | f2fs: use atomic64_t for extent cache hit stat (diff)
download | linux-a43f7ec327b0d86cbb80d0841673038c0706e714.tar.xz linux-a43f7ec327b0d86cbb80d0841673038c0706e714.zip
f2fs: fix to avoid redundant searching in dirty map during gc
When doing GC, we search for a victim in the dirty map starting from the
position of the last victim. If no candidate is found before we reach the
end of the dirty map, we reset the search position to zero and scan the
whole dirty map again, so the range [last_victim, end] gets searched twice.
That second pass over the already-covered range is redundant; this patch
avoids it by bounding the wrap-around search at the last victim.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Diffstat (limited to 'fs/f2fs/gc.c')
-rw-r--r-- | fs/f2fs/gc.c | 6
1 file changed, 4 insertions, 2 deletions
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index 343b096cb654..e5c255ba227b 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -257,6 +257,7 @@ static int get_victim_by_default(struct f2fs_sb_info *sbi,
 	struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
 	struct victim_sel_policy p;
 	unsigned int secno, max_cost;
+	unsigned int last_segment = MAIN_SEGS(sbi);
 	int nsearched = 0;
 
 	mutex_lock(&dirty_i->seglist_lock);
@@ -277,9 +278,10 @@ static int get_victim_by_default(struct f2fs_sb_info *sbi,
 		unsigned long cost;
 		unsigned int segno;
 
-		segno = find_next_bit(p.dirty_segmap, MAIN_SEGS(sbi), p.offset);
-		if (segno >= MAIN_SEGS(sbi)) {
+		segno = find_next_bit(p.dirty_segmap, last_segment, p.offset);
+		if (segno >= last_segment) {
 			if (sbi->last_victim[p.gc_mode]) {
+				last_segment = sbi->last_victim[p.gc_mode];
 				sbi->last_victim[p.gc_mode] = 0;
 				p.offset = 0;
 				continue;
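As a rough illustration only (not taken from the kernel sources), below is a
minimal, self-contained C sketch of the bounded wrap-around search the patch
implements. NR_SEGMENTS, dirty[], find_next_dirty() and pick_victim() are
made-up stand-ins for MAIN_SEGS(sbi), p.dirty_segmap, find_next_bit() and
get_victim_by_default(); the real function also cost-compares candidates
rather than returning the first dirty segment it finds.

/*
 * Stand-alone sketch of the bounded wrap-around victim search.
 * All names here are illustrative; they are not the kernel's.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_SEGMENTS 16	/* stand-in for MAIN_SEGS(sbi) */

/* find_next_bit() equivalent for a plain bool array */
static unsigned int find_next_dirty(const bool *dirty, unsigned int limit,
				    unsigned int offset)
{
	for (unsigned int i = offset; i < limit; i++)
		if (dirty[i])
			return i;
	return limit;
}

static int pick_victim(const bool *dirty, unsigned int last_victim)
{
	unsigned int last_segment = NR_SEGMENTS;	/* search upper bound */
	unsigned int offset = last_victim;		/* resume after last victim */

	for (;;) {
		unsigned int segno = find_next_dirty(dirty, last_segment, offset);

		if (segno >= last_segment) {
			if (last_victim) {
				/*
				 * Wrap around, but only rescan [0, last_victim):
				 * [last_victim, NR_SEGMENTS) was just covered,
				 * so shrinking the bound avoids the redundant
				 * second pass the patch removes.
				 */
				last_segment = last_victim;
				last_victim = 0;
				offset = 0;
				continue;
			}
			return -1;	/* no dirty segment found */
		}
		return (int)segno;	/* real code would cost-compare candidates */
	}
}

int main(void)
{
	bool dirty[NR_SEGMENTS] = { [2] = true, [5] = true };

	printf("victim starting after segment 9: %d\n", pick_victim(dirty, 9));
	return 0;
}

With the bound trimmed to the previous victim on wrap-around, each bit of the
dirty map is visited at most once per call, which is exactly the redundancy
the patch removes.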