author     Konstantin Khlebnikov <khlebnikov@yandex-team.ru>   2016-08-04 20:36:05 +0200
committer  Jens Axboe <axboe@fb.com>                           2016-08-10 03:58:06 +0200
commit     51350ea0d7f355dfc03deb343a665802d3d5cbba
tree       775b636093a744285f6226337a16d99020d1ee6d /fs/fs-writeback.c
parent     Merge branch 'nvmf-4.8-rc' of git://git.infradead.org/nvme-fabrics into for-l...
mm, writeback: flush plugged IO in wakeup_flusher_threads()
I've found a funny live-lock between raid10 barriers during resync and
memory controller hard limits. Inside mpage_readpages() the task holds on to
its plugged bio, which blocks the barrier in raid10. Its memory cgroup has
no free memory, so the task goes into the reclaimer, but all reclaimable
pages are dirty and cannot be written back because raid10 is rebuilding and
stuck on the barrier.
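For context, the plugging involved follows the usual blk_start_plug()/blk_finish_plug()
pattern around the readahead path; a minimal sketch (simplified, the wrapper name and
parameters are illustrative, not the exact kernel call chain):

/*
 * Simplified sketch of plugged readahead: while the plug is active,
 * bios built by mpage_readpages() sit on current->plug instead of
 * reaching the driver, so raid10 keeps waiting for them.
 */
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/mpage.h>

static int plugged_readpages(struct address_space *mapping,
			     struct list_head *pages, unsigned nr_pages,
			     get_block_t get_block)
{
	struct blk_plug plug;
	int ret;

	blk_start_plug(&plug);		/* bios now queue on the task's plug list */
	ret = mpage_readpages(mapping, pages, nr_pages, get_block);
	blk_finish_plug(&plug);		/* queued bios are dispatched only here */

	return ret;
}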
The common flush of such plugged IO in schedule() never happens, because the
caller doesn't go to sleep.
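That common flush lives on the scheduler side; roughly (a paraphrased sketch of
the sched_submit_work() logic, not a verbatim copy):

/*
 * Paraphrased sketch of the flush the scheduler performs on the way
 * to sleep. A task that spins in reclaim without sleeping never gets here.
 */
#include <linux/blkdev.h>
#include <linux/sched.h>

static void sched_submit_work_sketch(struct task_struct *tsk)
{
	if (!tsk->state || tsk_is_pi_blocked(tsk))
		return;

	/* Plugged IO is submitted only when the task really goes to sleep. */
	if (blk_needs_flush_plug(tsk))
		blk_schedule_flush_plug(tsk);
}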
The lock is 'live' because changing the memory limit or killing the tasks
which hold the stuck bio unblocks the whole thing.
That is what happened on 3.18.x, but I see no difference in the upstream
logic. Theoretically this might happen even without a memory cgroup.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Jens Axboe <axboe@fb.com>
Diffstat (limited to 'fs/fs-writeback.c')
-rw-r--r--   fs/fs-writeback.c   6
1 file changed, 6 insertions, 0 deletions
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 4d09d4441e3e..05713a5da083 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -1949,6 +1949,12 @@ void wakeup_flusher_threads(long nr_pages, enum wb_reason reason)
 {
 	struct backing_dev_info *bdi;
 
+	/*
+	 * If we are expecting writeback progress we must submit plugged IO.
+	 */
+	if (blk_needs_flush_plug(current))
+		blk_schedule_flush_plug(current);
+
 	if (!nr_pages)
 		nr_pages = get_nr_dirty_pages();
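For illustration, a hedged sketch of how a reclaim-style caller benefits from the
change (the helper name and argument are assumptions, not the actual mm/vmscan.c
call site):

/*
 * Illustrative caller (name and argument are assumptions): after this
 * patch, asking the flusher threads for progress also submits the
 * caller's own plugged IO, so its bios can complete even though the
 * task never sleeps.
 */
#include <linux/writeback.h>

static void ask_for_writeback_progress(long nr_scanned)
{
	wakeup_flusher_threads(nr_scanned, WB_REASON_TRY_TO_FREE_PAGES);
}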