author | Jaegeuk Kim <jaegeuk@kernel.org> | 2016-06-17 01:41:49 +0200
---|---|---
committer | Jaegeuk Kim <jaegeuk@kernel.org> | 2016-07-06 19:44:08 +0200
commit | ad4edb83143fdeef9e6fdd9daaa735b59476565b (patch) |
tree | c2691e320b927f9dce1ae9df62875f3123f908d9 /fs/f2fs/segment.c |
parent | f2fs: detect host-managed SMR by feature flag (diff) |
download | linux-ad4edb83143fdeef9e6fdd9daaa735b59476565b.tar.xz linux-ad4edb83143fdeef9e6fdd9daaa735b59476565b.zip |
f2fs: produce more nids and reduce readahead nats
The readahead nat pages are likely to be reclaimed quickly, so it is better
to gather more free nids in advance.
And let's keep as many free nids cached as possible.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Diffstat (limited to 'fs/f2fs/segment.c')
-rw-r--r-- | fs/f2fs/segment.c | 4 |
1 file changed, 3 insertions(+), 1 deletion(-)
```diff
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 782975e791f1..6d16ecf9d29e 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -371,7 +371,9 @@ void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi)
 		try_to_free_nats(sbi, NAT_ENTRY_PER_BLOCK);
 
 	if (!available_free_memory(sbi, FREE_NIDS))
-		try_to_free_nids(sbi, NAT_ENTRY_PER_BLOCK * FREE_NID_PAGES);
+		try_to_free_nids(sbi, MAX_FREE_NIDS);
+	else
+		build_free_nids(sbi);
 
 	/* checkpoint is the only way to shrink partial cached entries */
 	if (!available_free_memory(sbi, NAT_ENTRIES) ||
```
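For context, the policy after this patch is: while free-nid memory is still within its limit, keep building free nids in the background instead of doing nothing; only when memory is tight does the cached list get trimmed. The sketch below is a minimal standalone C model of that branch, not the kernel code: the struct, the cap value, and the helper bodies are stand-ins invented for illustration, and only the if/else shape mirrors the changed hunk in f2fs_balance_fs_bg().

```c
/*
 * Standalone userspace sketch of the balancing policy -- NOT the kernel code.
 * The names echo f2fs helpers, but every type, value, and body here is a
 * placeholder chosen only to show the control flow of the new branch.
 */
#include <stdbool.h>
#include <stdio.h>

/* Illustrative cap on cached free nids (the kernel uses MAX_FREE_NIDS). */
#define SKETCH_MAX_FREE_NIDS 3640

struct sketch_sb_info {
	unsigned int free_nid_cnt;   /* free nids currently cached */
	bool free_nid_mem_ok;        /* stand-in for available_free_memory(sbi, FREE_NIDS) */
};

/* Drop cached free nids until at most 'max' remain (models try_to_free_nids()). */
static void try_to_free_nids(struct sketch_sb_info *sbi, unsigned int max)
{
	if (sbi->free_nid_cnt > max)
		sbi->free_nid_cnt = max;
}

/* Scan ahead and cache more free nids (models build_free_nids()). */
static void build_free_nids(struct sketch_sb_info *sbi)
{
	sbi->free_nid_cnt += 400;	/* pretend one batch was gathered; the number is arbitrary */
}

/*
 * The branch this commit changes: trim the cache only when free-nid memory
 * is tight; otherwise (the new 'else' arm) build more free nids in advance.
 */
static void balance_free_nids(struct sketch_sb_info *sbi)
{
	if (!sbi->free_nid_mem_ok)
		try_to_free_nids(sbi, SKETCH_MAX_FREE_NIDS);
	else
		build_free_nids(sbi);
}

int main(void)
{
	struct sketch_sb_info sbi = { .free_nid_cnt = 100, .free_nid_mem_ok = true };

	balance_free_nids(&sbi);	/* memory is fine: gather more nids */
	printf("after build: %u free nids cached\n", sbi.free_nid_cnt);

	sbi.free_nid_cnt = 5000;
	sbi.free_nid_mem_ok = false;
	balance_free_nids(&sbi);	/* memory is tight: trim to the cap */
	printf("after trim: %u free nids cached\n", sbi.free_nid_cnt);
	return 0;
}
```

The point of the new else arm follows directly from the commit message: free nids gathered now remain usable even after the readahead nat pages that produced them are reclaimed, so the work of reading those pages is not wasted.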