author      Sahitya Tummala <stummala@codeaurora.org>    2019-11-13 11:31:03 +0100
committer   Jaegeuk Kim <jaegeuk@kernel.org>             2019-11-19 23:41:21 +0100
commit      677017d196ba2a4cfff13626b951cc9a206b8c7c
tree        e892d3c5ea3e54572efc436cde028bccff138ee0 /fs/f2fs/file.c
parent      f2fs: show f2fs instance in printk_ratelimited
f2fs: Fix deadlock in f2fs_gc() context during atomic files handling
The FS got stuck in the stack below when the storage is in an almost
full/dirty condition (i.e. while FG_GC is being done).
schedule_timeout
io_schedule_timeout
congestion_wait
f2fs_drop_inmem_pages_all
f2fs_gc
f2fs_balance_fs
__write_node_page
f2fs_fsync_node_pages
f2fs_do_sync_file
f2fs_ioctl
The root cause of this issue is a potential infinite loop in
f2fs_drop_inmem_pages_all() for the case where gc_failure is true
and there is an inode whose i_gc_failures[GC_FAILURE_ATOMIC] is
not set. Fix this by keeping track of the total number of atomic
files currently open and using that count to exit the loop.
Fix-suggested-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
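
To make the bounded-walk idea concrete, here is a small userspace model
(a sketch only: struct ainode, the looped counter, and the simplified
drop_inmem_pages_all() below are illustrative stand-ins, not the kernel
code; the real loop is in f2fs_drop_inmem_pages_all() and the counter
added by this patch is sbi->atomic_files):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for an inode opened for atomic writes. */
struct ainode {
	bool gc_failed;	/* models i_gc_failures[GC_FAILURE_ATOMIC] != 0 */
};

/*
 * Walk the open atomic files and drop the ones that failed GC.  Inodes
 * that have not failed GC are skipped and revisited later, which is what
 * used to spin forever; bounding the walk by the number of atomic files
 * currently open guarantees termination.
 */
static void drop_inmem_pages_all(struct ainode *inodes,
				 unsigned int atomic_files, bool gc_failure)
{
	unsigned int looped = 0, i = 0;

	if (!atomic_files)
		return;

	for (;;) {
		struct ainode *fi = &inodes[i % atomic_files];

		if (!gc_failure || fi->gc_failed)
			printf("dropping inode %u\n", i % atomic_files);
		else
			printf("skipping inode %u (no GC failure recorded)\n",
			       i % atomic_files);
		i++;

		/* exit condition added by the fix: never walk more
		 * entries than there are atomic files currently open */
		if (++looped >= atomic_files)
			break;
	}
}

int main(void)
{
	/* neither file has failed GC: the old logic would spin forever here */
	struct ainode files[2] = { { .gc_failed = false }, { .gc_failed = false } };

	drop_inmem_pages_all(files, 2, true);
	return 0;
}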
Diffstat (limited to 'fs/f2fs/file.c')
-rw-r--r--   fs/f2fs/file.c | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index f9f3a417a0cd..c0560d62dbee 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -1922,6 +1922,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
 	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
 	if (list_empty(&fi->inmem_ilist))
 		list_add_tail(&fi->inmem_ilist, &sbi->inode_list[ATOMIC_FILE]);
+	sbi->atomic_files++;
 	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
 
 	/* add inode in inmem_list first and set atomic_file */
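
Note that the increment sits inside the sbi->inode_lock[ATOMIC_FILE]
critical section, so the new sbi->atomic_files counter stays consistent
with membership of sbi->inode_list[ATOMIC_FILE], the list that
f2fs_drop_inmem_pages_all() walks. The matching decrement when an atomic
file is dropped or committed is presumably made under the same lock; it
does not appear in this hunk because this page is filtered to
fs/f2fs/file.c only.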