author	Shaohua Li <shli@kernel.org>	2014-04-09 05:27:42 +0200
committer	NeilBrown <neilb@suse.de>	2014-04-09 06:42:42 +0200
commit	e240c1839d11152b0355442f8ac6d2d2d921be36 (patch)
tree	bb2f80fd9a3be90a710e2e2053c246ff1dedf6f7 /drivers/md
parent	raid5: make_request does less prepare wait (diff)
raid5: get_active_stripe avoids device_lock
For a sequential workload (or a workload with large requests), get_active_stripe can find the stripe already cached. In that case we always take device_lock, which causes a lot of lock contention under such workloads. If the stripe count isn't 0, we don't actually need to hold the lock, since we only increase the count, and this is the hot code path for such workloads. Unfortunately we must delete the BUG_ON.

Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Diffstat (limited to 'drivers/md')
-rw-r--r--	drivers/md/raid5.c	9
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index a904a2c80fc8..25247a852912 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -679,14 +679,9 @@ get_active_stripe(struct r5conf *conf, sector_t sector,
 				init_stripe(sh, sector, previous);
 				atomic_inc(&sh->count);
 			}
-		} else {
+		} else if (!atomic_inc_not_zero(&sh->count)) {
 			spin_lock(&conf->device_lock);
-			if (atomic_read(&sh->count)) {
-				BUG_ON(!list_empty(&sh->lru)
-				    && !test_bit(STRIPE_EXPANDING, &sh->state)
-				    && !test_bit(STRIPE_ON_UNPLUG_LIST, &sh->state)
-					);
-			} else {
+			if (!atomic_read(&sh->count)) {
 				if (!test_bit(STRIPE_HANDLE, &sh->state))
 					atomic_inc(&conf->active_stripes);
 				BUG_ON(list_empty(&sh->lru) &&
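
The key change in the hunk above is the fast path: atomic_inc_not_zero takes a reference without touching device_lock whenever the stripe's count is already non-zero, and the lock is only taken when the count is zero. The following is a minimal, stand-alone user-space sketch of that pattern using C11 atomics; struct stripe, inc_not_zero() and get_stripe_ref() are hypothetical names for illustration, not the kernel code.

/*
 * Illustration (not the kernel code) of the lockless fast path used by
 * the patch: take a reference with a compare-and-swap loop that refuses
 * to revive a zero count, and fall back to the lock only when the count
 * is zero.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct stripe {
	atomic_int count;            /* reference count of the cached stripe */
	pthread_mutex_t lock;        /* stands in for conf->device_lock */
};

/* Increment *v unless it is zero; return true if a reference was taken. */
static bool inc_not_zero(atomic_int *v)
{
	int old = atomic_load(v);

	while (old != 0) {
		if (atomic_compare_exchange_weak(v, &old, old + 1))
			return true;         /* hot path: no lock needed */
	}
	return false;                        /* count was zero: caller locks */
}

static void get_stripe_ref(struct stripe *sh)
{
	if (inc_not_zero(&sh->count))
		return;                      /* common case, lock avoided */

	/*
	 * Slow path: the count is zero, so re-check and fix up under the
	 * lock (in the kernel this is where the stripe is pulled off the
	 * lru and active_stripes is bumped).
	 */
	pthread_mutex_lock(&sh->lock);
	if (atomic_load(&sh->count) == 0) {
		/* ... reactivate the stripe ... */
	}
	atomic_fetch_add(&sh->count, 1);
	pthread_mutex_unlock(&sh->lock);
}

In the patch the same structure appears as "} else if (!atomic_inc_not_zero(&sh->count))", so only the cold path (count already zero) pays for device_lock; the unconditional BUG_ON sanity check on the old fast path had to be dropped to make this possible.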