author     Joe Thornber <ejt@redhat.com>      2017-11-08 12:41:43 +0100
committer  Mike Snitzer <snitzer@redhat.com>  2017-11-10 21:45:03 +0100
commit     64748b1645b81399d01ad86657c5bbe097c1701c
tree       aa903c1a0cc40df6b51f61b8a595c897c52f7697 /drivers/md
parent     dm cache policy smq: take origin idle status into account when queuing writeb...
dm cache background tracker: limit amount of background work that may be issued at once
On large systems the cache policy can be over-enthusiastic and queue far
too much dirty data to be written back. This consumes memory.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Diffstat (limited to '')
-rw-r--r--  drivers/md/dm-cache-background-tracker.c | 18
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/md/dm-cache-background-tracker.c b/drivers/md/dm-cache-background-tracker.c
index 707233891291..1d0af0a21fc7 100644
--- a/drivers/md/dm-cache-background-tracker.c
+++ b/drivers/md/dm-cache-background-tracker.c
@@ -161,8 +161,17 @@ EXPORT_SYMBOL_GPL(btracker_nr_demotions_queued);
 
 static bool max_work_reached(struct background_tracker *b)
 {
-	// FIXME: finish
-	return false;
+	return atomic_read(&b->pending_promotes) +
+		atomic_read(&b->pending_writebacks) +
+		atomic_read(&b->pending_demotes) >= b->max_work;
+}
+
+struct bt_work *alloc_work(struct background_tracker *b)
+{
+	if (max_work_reached(b))
+		return NULL;
+
+	return kmem_cache_alloc(b->work_cache, GFP_NOWAIT);
 }
 
 int btracker_queue(struct background_tracker *b,
@@ -174,10 +183,7 @@ int btracker_queue(struct background_tracker *b,
 	if (pwork)
 		*pwork = NULL;
 
-	if (max_work_reached(b))
-		return -ENOMEM;
-
-	w = kmem_cache_alloc(b->work_cache, GFP_NOWAIT);
+	w = alloc_work(b);
 	if (!w)
 		return -ENOMEM;
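The throttling the patch introduces can be sketched as a minimal user-space model. This is an illustration, not the kernel code: C11 atomic_int and malloc stand in for the kernel's atomic_t and kmem_cache_alloc, and struct bt_work here is a dummy placeholder; only the field and function names mirror the patch.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* User-space model of the tracker's work accounting. In the kernel,
 * these counters are atomic_t and are bumped as work is queued. */
struct background_tracker {
	unsigned max_work;              /* cap on total outstanding work */
	atomic_int pending_promotes;
	atomic_int pending_writebacks;
	atomic_int pending_demotes;
};

struct bt_work { int dummy; };      /* placeholder work item */

/* True once the sum of all pending work categories hits the cap. */
static bool max_work_reached(struct background_tracker *b)
{
	return atomic_load(&b->pending_promotes) +
	       atomic_load(&b->pending_writebacks) +
	       atomic_load(&b->pending_demotes) >= (int)b->max_work;
}

/* Refuse to hand out a new work item when the tracker is full,
 * mirroring alloc_work() in the patch (malloc stands in for
 * kmem_cache_alloc(b->work_cache, GFP_NOWAIT)). */
static struct bt_work *alloc_work(struct background_tracker *b)
{
	if (max_work_reached(b))
		return NULL;
	return malloc(sizeof(struct bt_work));
}
```

The point of folding the check into allocation is that every queuing path now gets NULL (and hence -ENOMEM) once the cap is hit, so the policy cannot queue writebacks faster than the tracker is willing to account for them.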