| author | Miquel van Smoorenburg <mikevs@xs4all.net> | 2009-01-06 23:39:02 +0100 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2009-01-07 00:58:59 +0100 |
| commit | 38c8e6180939e5619140b2e9e479cb26029ff8b1 | |
| tree | 1980f3dadfa02ac6c1fc2ad7236205af54f7972a /fs/mpage.c | |
| parent | oom: print triggering task's cpuset and mems allowed | |
do_mpage_readpage(): don't submit lots of small bios on boundary
While tracing I/O patterns with blktrace (a great tool) a few weeks ago, I identified a minor issue in fs/mpage.c.
As the comment above mpage_readpages() says, a filesystem's get_block function
will set BH_Boundary when it maps a block just before a block for which
extra I/O (typically a metadata read) is required.
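For readers unfamiliar with that convention, here is a minimal sketch of a get_block that behaves this way. The my_fs_* lookup helpers are hypothetical; map_bh(), set_buffer_boundary() and b_size are the real <linux/buffer_head.h> interfaces that mpage_readpages() relies on.

```c
/*
 * Illustrative sketch only -- not taken from any real filesystem.
 * my_fs_extent_lookup() and my_fs_next_mapping_needs_io() are hypothetical;
 * map_bh(), set_buffer_boundary() and bh->b_size are the real buffer_head
 * interfaces.
 */
static int my_fs_get_block(struct inode *inode, sector_t iblock,
			   struct buffer_head *bh_result, int create)
{
	sector_t phys;		/* first physical block of the mapping */
	unsigned int nblocks;	/* contiguous blocks mapped from iblock on */
	int ret;

	ret = my_fs_extent_lookup(inode, iblock, &phys, &nblocks);
	if (ret)
		return ret;

	map_bh(bh_result, inode->i_sb, phys);
	/*
	 * Report the size of the whole contiguous mapping; a real fs would
	 * also cap this at the size the caller asked for in bh_result->b_size.
	 */
	bh_result->b_size = (size_t)nblocks << inode->i_blkbits;

	/*
	 * The block following this mapping needs extra I/O (for example a
	 * metadata read) before it can be mapped, so mark the last mapped
	 * block as a boundary, as described above.
	 */
	if (my_fs_next_mapping_needs_io(inode, iblock + nblocks))
		set_buffer_boundary(bh_result);

	return 0;
}
```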
Since a single get_block() call can map a range of blocks spanning several
pages, the BH_Boundary flag will be seen for all of those pages. But the I/O
we have accumulated only needs to be pushed out at the last block of that range.
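To make that reasoning concrete, here is a worked example with made-up geometry (4 KB pages and 1 KB blocks, so blocks_per_page = 4, and one get_block() call that maps a 16-block extent and flags it BH_Boundary), using the variable names from do_mpage_readpage():

```c
/*
 * Made-up geometry, for illustration only:
 *   PAGE_SIZE = 4096, blocksize = 1024  =>  blocks_per_page = 4
 *   one get_block() call maps 16 contiguous blocks starting at
 *   *first_logical_block and sets BH_Boundary on map_bh.
 *
 * At the end of each page, do_mpage_readpage() can compute
 *   nblocks        = map_bh->b_size >> blkbits;             -- 16
 *   relative_block = block_in_file - *first_logical_block;  -- blocks consumed
 *
 * Pages 1..3 of the extent end with relative_block = 4, 8, 12: the boundary
 * block has not been reached yet, so the bio can keep growing across pages.
 * Page 4 ends with relative_block = 16 == nblocks: its last block is the
 * boundary block, and only here does the accumulated bio need to be
 * submitted -- which is exactly the test the patch below adds.
 */
```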
This makes do_mpage_readpage() send out the largest possible bio instead
of a bunch of page-sized ones in the BH_Boundary case.
Signed-off-by: Miquel van Smoorenburg <mikevs@xs4all.net>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat

| -rw-r--r-- | fs/mpage.c | 5 |

1 file changed, 4 insertions, 1 deletion
diff --git a/fs/mpage.c b/fs/mpage.c
index 552b80b3facc..46e977efd50a 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -308,7 +308,10 @@ alloc_new:
 		goto alloc_new;
 	}
 
-	if (buffer_boundary(map_bh) || (first_hole != blocks_per_page))
+	relative_block = block_in_file - *first_logical_block;
+	nblocks = map_bh->b_size >> blkbits;
+	if ((buffer_boundary(map_bh) && relative_block == nblocks) ||
+	    (first_hole != blocks_per_page))
 		bio = mpage_bio_submit(READ, bio);
 	else
 		*last_block_in_bio = blocks[blocks_per_page - 1];
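For readability, the same test as it reads after the patch, with editorial comments added (the identifiers are exactly those used in do_mpage_readpage()):

```c
	relative_block = block_in_file - *first_logical_block;	/* extent blocks consumed so far */
	nblocks = map_bh->b_size >> blkbits;	/* blocks covered by the last get_block() mapping */
	if ((buffer_boundary(map_bh) && relative_block == nblocks) ||
	    (first_hole != blocks_per_page))
		/* Boundary block reached (or the page has a hole): submit the accumulated bio now. */
		bio = mpage_bio_submit(READ, bio);
	else
		/* Otherwise remember where the bio ends so the next page can be appended to it. */
		*last_block_in_bio = blocks[blocks_per_page - 1];
```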