diff options
author | NeilBrown <neilb@suse.de> | 2014-12-15 02:56:56 +0100
---|---|---
committer | NeilBrown <neilb@suse.de> | 2015-02-03 22:35:52 +0100
commit | 5c675f83c68fbdf9c0e103c1090b06be747fa62c (patch)
tree | 9a03f84c7a3bcef7d5e757dc28ce7bd5d205b26a /drivers/md/raid1.c
parent | md: rename mddev->write_lock to mddev->lock (diff)
download | linux-5c675f83c68fbdf9c0e103c1090b06be747fa62c.tar.xz linux-5c675f83c68fbdf9c0e103c1090b06be747fa62c.zip
md: make ->congested robust against personality changes.
There is currently no locking around calls to the 'congested'
bdi function. If called at an awkward time while an array is
being converted from one level (or personality) to another, there
is a tiny chance of running code in a module that is no longer
referenced.
So add a 'congested' function to the md_personality operations
structure, and call it with appropriate locking from a central
'mddev_congested'.
When the array personality is changing the array will be 'suspended'
so no IO is processed.
If mddev_congested detects this, it simply reports that the
array is congested, which is a safe guess.
As mddev_suspend calls synchronize_rcu(), mddev_congested can
avoid races by including the whole call inside an rcu_read_lock()
region.
This requires that the congested functions for all subordinate devices
can be run under rcu_read_lock(). Fortunately this is the case.
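The central gatekeeper described above can be sketched roughly as follows. This is a sketch of what mddev_congested in md.c plausibly looks like after this change, not the verbatim source; the exact field names are assumptions based on the commit message and the md driver's conventions:

```c
/* Sketch (not verbatim source): central congested gatekeeper in md.c.
 * All bdi congested queries funnel through here instead of going
 * through a per-personality function pointer in backing_dev_info. */
int mddev_congested(struct mddev *mddev, int bits)
{
	struct md_personality *pers;
	int ret = 0;

	rcu_read_lock();
	pers = mddev->pers;
	if (mddev->suspended)
		ret = 1;	/* safe guess while the array is quiesced */
	else if (pers && pers->congested)
		ret = pers->congested(mddev, bits);
	rcu_read_unlock();
	return ret;
}
```

Because a personality change suspends the array (and mddev_suspend calls synchronize_rcu()), the read-side critical section guarantees the personality's module cannot be unloaded while its congested function is running.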
Signed-off-by: NeilBrown <neilb@suse.de>
Diffstat (limited to 'drivers/md/raid1.c')
-rw-r--r-- | drivers/md/raid1.c | 14 |
1 file changed, 2 insertions(+), 12 deletions(-)
```diff
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 40b35be34f8d..9ad7ce7091be 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -734,7 +734,7 @@ static int raid1_mergeable_bvec(struct request_queue *q,
 }
 
-int md_raid1_congested(struct mddev *mddev, int bits)
+static int raid1_congested(struct mddev *mddev, int bits)
 {
 	struct r1conf *conf = mddev->private;
 	int i, ret = 0;
@@ -763,15 +763,6 @@ int md_raid1_congested(struct mddev *mddev, int bits)
 	rcu_read_unlock();
 	return ret;
 }
-EXPORT_SYMBOL_GPL(md_raid1_congested);
-
-static int raid1_congested(void *data, int bits)
-{
-	struct mddev *mddev = data;
-
-	return mddev_congested(mddev, bits) ||
-	       md_raid1_congested(mddev, bits);
-}
 
 static void flush_pending_writes(struct r1conf *conf)
 {
@@ -2955,8 +2946,6 @@ static int run(struct mddev *mddev)
 	md_set_array_sectors(mddev, raid1_size(mddev, 0, 0));
 
 	if (mddev->queue) {
-		mddev->queue->backing_dev_info.congested_fn = raid1_congested;
-		mddev->queue->backing_dev_info.congested_data = mddev;
 		blk_queue_merge_bvec(mddev->queue, raid1_mergeable_bvec);
 
 		if (discard_supported)
@@ -3193,6 +3182,7 @@ static struct md_personality raid1_personality =
 	.check_reshape	= raid1_reshape,
 	.quiesce	= raid1_quiesce,
 	.takeover	= raid1_takeover,
+	.congested	= raid1_congested,
 };
 
 static int __init raid_init(void)
```