author	Jens Axboe <axboe@suse.de>	2006-07-21 20:30:28 +0200
committer	Jens Axboe <axboe@nelson.home.kernel.dk>	2006-09-30 20:29:41 +0200
commit	da20a20f3b5c175648fa797c899dd577e4dacb51 (patch)
tree	690ba6f8f4f62a9deaa2b6d5d3cf6bd3220dac1b /block
parent	[PATCH] cfq-iosched: improve queue preemption (diff)
[PATCH] ll_rw_blk: allow more flexibility for read_ahead_kb store
It can make sense to set read-ahead larger than a single request, and
we should not be enforcing such a policy on the user. Additionally, the
BLKRASET ioctl does not impose this restriction, so drop the cap and
make the two interfaces behave identically.

Issue also reported by Anton <cbou@mail.ru>
Signed-off-by: Jens Axboe <axboe@suse.de>
Diffstat (limited to 'block')
-rw-r--r--	block/ll_rw_blk.c	3
1 file changed, 0 insertions(+), 3 deletions(-)
diff --git a/block/ll_rw_blk.c b/block/ll_rw_blk.c
index 346be9ae31f6..e3980ec747c1 100644
--- a/block/ll_rw_blk.c
+++ b/block/ll_rw_blk.c
@@ -3806,9 +3806,6 @@ queue_ra_store(struct request_queue *q, const char *page, size_t count)
 	ssize_t ret = queue_var_store(&ra_kb, page, count);
 
 	spin_lock_irq(q->queue_lock);
-	if (ra_kb > (q->max_sectors >> 1))
-		ra_kb = (q->max_sectors >> 1);
-
 	q->backing_dev_info.ra_pages = ra_kb >> (PAGE_CACHE_SHIFT - 10);
 	spin_unlock_irq(q->queue_lock);