author    | NeilBrown <neilb@suse.com> | 2016-11-04 06:46:03 +0100
committer | Shaohua Li <shli@fb.com>   | 2016-11-08 00:08:23 +0100
commit    | 5e2c7a3611977b69ae0531e8fbdeab5dad17925a
tree      | d7618a423b48af572c9f0af614bb9d4d764394dc /drivers/md/raid1.c
parent    | md: perform async updates for metadata where possible.
md/raid1: abort delayed writes when device fails.
When writing to an array with a bitmap enabled, the writes are grouped
in batches which are preceded by an update to the bitmap.
It is quite likely, if a drive develops a problem which is not
media related, that the bitmap write will be the first to report an
error and cause the device to be marked faulty (as the bitmap write is
at the start of a batch).
In this case, there is no point submitting the subsequent writes to the
failed device - that just wastes time.
So re-check the Faulty state of a device before submitting a
delayed write.
This requires that we keep the 'rdev', rather than the 'bdev', in the
bio, and swap in the bdev just before final submission.
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Diffstat (limited to 'drivers/md/raid1.c')
-rw-r--r-- | drivers/md/raid1.c | 20 |
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 15f0b552bf48..aac2a05cf8d1 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -742,9 +742,14 @@ static void flush_pending_writes(struct r1conf *conf)
 	while (bio) { /* submit pending writes */
 		struct bio *next = bio->bi_next;
+		struct md_rdev *rdev = (void*)bio->bi_bdev;
 		bio->bi_next = NULL;
-		if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
-		    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
+		bio->bi_bdev = rdev->bdev;
+		if (test_bit(Faulty, &rdev->flags)) {
+			bio->bi_error = -EIO;
+			bio_endio(bio);
+		} else if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
+			    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
 			/* Just ignore it */
 			bio_endio(bio);
 		else
@@ -1016,9 +1021,14 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
 	while (bio) { /* submit pending writes */
 		struct bio *next = bio->bi_next;
+		struct md_rdev *rdev = (void*)bio->bi_bdev;
 		bio->bi_next = NULL;
-		if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
-		    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
+		bio->bi_bdev = rdev->bdev;
+		if (test_bit(Faulty, &rdev->flags)) {
+			bio->bi_error = -EIO;
+			bio_endio(bio);
+		} else if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
+			    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
 			/* Just ignore it */
 			bio_endio(bio);
 		else
@@ -1357,7 +1367,7 @@ read_again:
 		mbio->bi_iter.bi_sector	= (r1_bio->sector +
 				   conf->mirrors[i].rdev->data_offset);
-		mbio->bi_bdev = conf->mirrors[i].rdev->bdev;
+		mbio->bi_bdev = (void*)conf->mirrors[i].rdev;
 		mbio->bi_end_io	= raid1_end_write_request;
 		bio_set_op_attrs(mbio, op, do_flush_fua | do_sync);
 		mbio->bi_private = r1_bio;
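
The pattern is small enough to show outside the kernel. Below is a minimal standalone sketch in plain C of the same idea, using hypothetical stand-in types and names (pending_bio, fake_rdev, flush_pending) rather than the kernel's struct bio, struct md_rdev and flush_pending_writes(): each queued entry carries the rdev pointer in its bi_bdev slot, the real bdev is only swapped in at submission time, and a write whose device has since gone Faulty is completed immediately with -EIO instead of being submitted.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-ins, not the kernel structures. */
struct fake_rdev {
	bool faulty;		/* stands in for test_bit(Faulty, &rdev->flags) */
	void *bdev;		/* the real block device, swapped in late */
};

struct pending_bio {
	void *bi_bdev;		/* holds the rdev pointer until final submission */
	int bi_error;
	struct pending_bio *bi_next;
};

static void bio_complete(struct pending_bio *bio)
{
	printf("completed with error %d\n", bio->bi_error);
}

static void submit(struct pending_bio *bio)
{
	printf("submitted to bdev %p\n", bio->bi_bdev);
}

/* Flush a list of delayed writes, aborting those whose device went Faulty. */
static void flush_pending(struct pending_bio *bio)
{
	while (bio) {
		struct pending_bio *next = bio->bi_next;
		/* The queued bio carries the rdev, not the bdev... */
		struct fake_rdev *rdev = bio->bi_bdev;

		bio->bi_next = NULL;
		/* ...so swap in the real bdev only now, at submission time. */
		bio->bi_bdev = rdev->bdev;
		if (rdev->faulty) {
			/* Device failed while the write was queued: abort it. */
			bio->bi_error = -5;	/* -EIO */
			bio_complete(bio);
		} else {
			submit(bio);
		}
		bio = next;
	}
}

int main(void)
{
	char disk;				/* dummy "bdev" */
	struct fake_rdev rdev = { false, &disk };
	struct pending_bio b2 = { &rdev, 0, NULL };
	struct pending_bio b1 = { &rdev, 0, &b2 };

	rdev.faulty = true;	/* device marked faulty after the writes were queued */
	flush_pending(&b1);	/* both delayed writes are aborted with -EIO */
	return 0;
}

The re-check costs nothing on the fast path (one flag test per bio) and avoids issuing writes that are guaranteed to fail once the device has already been marked Faulty by the earlier bitmap write.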