author | Neil Brown <neilb@suse.de> | 2008-05-15 01:05:54 +0200 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2008-05-15 04:11:15 +0200 |
commit | e7e72bf641b1fc7b9df6f40bd2c36dfccd8d647c (patch) | |
tree | 81b1db5434c9635bf23fb40415056e10390cd692 /drivers/md/raid0.c | |
parent | char: select fw_loader by moxa (diff) | |
download | linux-e7e72bf641b1fc7b9df6f40bd2c36dfccd8d647c.tar.xz linux-e7e72bf641b1fc7b9df6f40bd2c36dfccd8d647c.zip |
Remove blkdev warning triggered by using md
As setting and clearing queue flags now requires that we hold a spinlock
on the queue, and as blk_queue_stack_limits is called without that lock,
get the lock inside blk_queue_stack_limits.
For blk_queue_stack_limits to be able to find the right lock, each md
personality needs to set q->queue_lock to point to the appropriate lock.
Those personalities which didn't previously use a spin_lock use
q->__queue_lock, so that lock is now always initialised when the queue is allocated.
With this in place, setting/clearing of the QUEUE_FLAG_PLUGGED bit will no
longer cause warnings as it will be clear that the proper lock is held.
Thanks to Dan Williams for review and fixing the silly bugs.
Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Alistair John Strachan <alistair@devzero.co.uk>
Cc: Nick Piggin <npiggin@suse.de>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Jacek Luczak <difrost.kernel@gmail.com>
Cc: Prakash Punnoor <prakash@punnoor.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
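The diffstat below is limited to drivers/md/raid0.c, so the corresponding block-layer change is not shown here. As a rough sketch of the approach described in the message (taking the queue lock inside blk_queue_stack_limits() around the flag update), using 2.6.26-era names and not necessarily matching the exact in-tree patch:

/* Sketch only: blk_queue_stack_limits() acquiring t->queue_lock itself
 * before touching queue flags, as described in the commit message.
 * Limit-stacking details are elided. */
void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
{
	/* ... min()-merging of sector/segment limits elided ... */

	if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags)) {
		unsigned long flags;

		/* queue_flag_clear() expects queue_lock to be held, so take
		 * it here rather than relying on the caller. */
		spin_lock_irqsave(t->queue_lock, flags);
		queue_flag_clear(QUEUE_FLAG_CLUSTER, t);
		spin_unlock_irqrestore(t->queue_lock, flags);
	}
}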
Diffstat (limited to 'drivers/md/raid0.c')
-rw-r--r-- | drivers/md/raid0.c | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 818b48284096..914c04ddec7c 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -280,6 +280,7 @@ static int raid0_run (mddev_t *mddev)
 				   (mddev->chunk_size>>1)-1);
 	blk_queue_max_sectors(mddev->queue, mddev->chunk_size >> 9);
 	blk_queue_segment_boundary(mddev->queue, (mddev->chunk_size>>1) - 1);
+	mddev->queue->queue_lock = &mddev->queue->__queue_lock;
 
 	conf = kmalloc(sizeof (raid0_conf_t), GFP_KERNEL);
 	if (!conf)
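raid0 keeps no spinlock of its own, so it points queue_lock at the queue's built-in __queue_lock, as shown above. For comparison, a personality that already maintains its own spinlock would expose that lock instead; a hypothetical illustration (conf and device_lock are placeholder names, not taken from this diff):

	/* Hypothetical illustration: a personality that owns a spinlock
	 * (here called conf->device_lock) hands that lock to the block
	 * layer instead of falling back to __queue_lock. */
	mddev->queue->queue_lock = &conf->device_lock;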