author    Mikulas Patocka <mpatocka@redhat.com>  2013-09-19 01:14:22 +0200
committer Mike Snitzer <snitzer@redhat.com>      2013-09-20 16:36:34 +0200
commit    5ea330a75bd86b2b2a01d7b85c516983238306fb (patch)
tree      8579f306b45641432a1f4b7f3f4e73e6d2bb9ce7 /drivers/md/dm-snap-persistent.c
parent    dm stats: fix possible counter corruption on 32-bit systems (diff)
dm snapshot: workaround for a false positive lockdep warning
The kernel reports a lockdep warning if a snapshot is invalidated because
it runs out of space. The lockdep warning was triggered by commit
0976dfc1d0cd80a4e9dfaf87bd87 ("workqueue: Catch more locking problems
with flush_work()") in v3.5. The warning is a false positive: the real
cause is that the lockdep engine treats different instances of md->lock
as a single lock.

This patch is a workaround - we use flush_workqueue instead of
flush_work. This code path is not performance sensitive (it is called
only on initialization or invalidation), so it doesn't matter that we
flush the whole workqueue. The real fix would be to teach the lockdep
engine to treat different instances of md->lock as separate locks.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Acked-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # 3.5+
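For context, the pattern chunk_io() relies on looks roughly like the
sketch below. This is a paraphrase of the surrounding code in
drivers/md/dm-snap-persistent.c, not a verbatim copy: the caller builds
a work item on its own stack, queues it on the pstore's dedicated
metadata workqueue, and blocks until it completes so it can read back
req.result.

/*
 * Minimal sketch of the hand-off pattern (simplified from the kernel
 * source; struct layout and the dm_io() call are paraphrased).
 */
#include <linux/workqueue.h>
#include <linux/dm-io.h>

struct mdata_req {
	struct dm_io_region *where;
	struct dm_io_request *io_req;
	struct work_struct work;
	int result;
};

static void do_metadata(struct work_struct *work)
{
	struct mdata_req *req = container_of(work, struct mdata_req, work);

	/* Perform the actual metadata I/O on the worker thread. */
	req->result = dm_io(req->io_req, 1, req->where, NULL);
}

/* Inside chunk_io(), after req.where and req.io_req are filled in: */
	INIT_WORK_ONSTACK(&req.work, do_metadata);
	queue_work(ps->metadata_wq, &req.work);
	/*
	 * flush_work(&req.work) would also wait for just this item, but
	 * trips the false-positive lockdep report described above;
	 * flushing the whole (otherwise idle) workqueue sidesteps it.
	 */
	flush_workqueue(ps->metadata_wq);
	return req.result;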
Diffstat (limited to 'drivers/md/dm-snap-persistent.c')
-rw-r--r-- drivers/md/dm-snap-persistent.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
index 3ac415675b6c..4caa8e6d59d7 100644
--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -256,7 +256,7 @@ static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int rw,
 	 */
 	INIT_WORK_ONSTACK(&req.work, do_metadata);
 	queue_work(ps->metadata_wq, &req.work);
-	flush_work(&req.work);
+	flush_workqueue(ps->metadata_wq);
 	return req.result;
 }
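Note on the design choice: ps->metadata_wq is a workqueue dedicated to
this persistent store and only ever carries these synchronous metadata
requests, so flushing the whole queue here waits for at most the one
item just queued. That, together with this path running only on
initialization or invalidation, is why the commit message can say the
broader flush costs nothing in practice.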