author     Wu Fengguang <fengguang.wu@intel.com>            2009-06-17 00:31:38 +0200
committer  Linus Torvalds <torvalds@linux-foundation.org>   2009-06-17 04:47:30 +0200
commit     7ffc59b4d0bdfa00e882339f85b8a969bb7021e2
tree       6b6d96208f08bc394c8e64efed6767b9a95e7a6d
parent     readahead: remove redundant test in shrink_readahead_size_eio()
readahead: enforce full sync mmap readahead size
Now that we do readahead for sequential mmap reads, here is a simple
evaluation of the impacts, and one further optimization.
The test box is an NFS-root Debian desktop system, readahead size = 60 pages.
The numbers were collected after a fresh boot into the console.
approach   pgmajfault   RA miss ratio   mmap IO count   avg IO size(pages)
    A          383          31.6%            383                11
    B          225          32.4%            390                11
    C          224          32.6%            307                13
case A: mmap sync/async readahead disabled
case B: mmap sync/async readahead enabled, with enforced full async readahead size
case C: mmap sync/async readahead enabled, with enforced full sync/async readahead size
or:
A = vanilla 2.6.30-rc1
B = A plus mmap readahead
C = B plus this patch
The numbers show that
- random mmap reads have a good chance of triggering readahead
- 'pgmajfault' is reduced by 1/3, due to the _async_ nature of readahead
- case C can further reduce IO count by 1/4
- readahead miss ratios are largely unaffected
The theory is
- readahead is _good_ for clustered random reads, and can perform
  _better_ than readaround because it can be _async_
- the async readahead size is guaranteed to be larger than the readaround
  size, and being _async_, it will mostly behave better
However, for B
- the sync readahead size could be smaller than the readaround size, and
  hence may make things worse by producing more, smaller IOs
which is what this patch fixes (a minimal sketch of the fixed path follows).
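As a rough sketch of the fixed path in do_sync_mmap_readahead() in
mm/filemap.c (condensed and paraphrased; only the page_cache_sync_readahead()
call is taken from the diff below, and the readaround fallback for random
faults is elided):

	static void do_sync_mmap_readahead(struct vm_area_struct *vma,
					   struct file_ra_state *ra,
					   struct file *file,
					   pgoff_t offset)
	{
		struct address_space *mapping = file->f_mapping;

		/*
		 * Explicit sequential hint (e.g. madvise(MADV_SEQUENTIAL)),
		 * or this fault directly follows the previous one: treat the
		 * vma as a sequential stream.
		 */
		if (VM_SequentialReadHint(vma) ||
		    offset - 1 == (ra->prev_pos >> PAGE_CACHE_SHIFT)) {
			/*
			 * Request a full ra_pages-sized sync readahead
			 * instead of a single page (the old "1"), so even
			 * the first IO of the stream is no smaller than a
			 * readaround would have been.
			 */
			page_cache_sync_readahead(mapping, ra, file, offset,
						  ra->ra_pages);
			return;
		}

		/* ... readaround handling for random faults elided ... */
	}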
Final conclusion:
- mmap readahead reduced major faults by 1/3 with no obvious overhead;
- mmap IO can be further reduced by 1/4 with this patch.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--   mm/filemap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 2e9bcc2e7977..6846a902f5cf 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1471,7 +1471,8 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	if (VM_SequentialReadHint(vma) ||
 			offset - 1 == (ra->prev_pos >> PAGE_CACHE_SHIFT)) {
-		page_cache_sync_readahead(mapping, ra, file, offset, 1);
+		page_cache_sync_readahead(mapping, ra, file, offset,
+					  ra->ra_pages);
 		return;
 	}
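For context, the VM_SequentialReadHint() checked above is normally set by
userspace calling madvise(MADV_SEQUENTIAL) on the mapping. The program below
is a hypothetical illustration (not part of this patch): it maps a file,
declares sequential access, and faults the pages in order, which exercises
the sync readahead path changed here.

	/* seqread.c - hypothetical demo: sequentially fault an mmap'ed file */
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/stat.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		if (argc < 2) {
			fprintf(stderr, "usage: %s <file>\n", argv[0]);
			return 1;
		}

		int fd = open(argv[1], O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		struct stat st;
		if (fstat(fd, &st) < 0) {
			perror("fstat");
			return 1;
		}

		char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* Set the sequential read hint checked by do_sync_mmap_readahead(). */
		if (madvise(p, st.st_size, MADV_SEQUENTIAL) < 0)
			perror("madvise");

		/* Touch one byte per page, in order, generating sequential faults. */
		long page = sysconf(_SC_PAGESIZE);
		unsigned long sum = 0;
		for (off_t i = 0; i < st.st_size; i += page)
			sum += (unsigned char)p[i];

		printf("touched %lld bytes, checksum %lu\n",
		       (long long)st.st_size, sum);
		munmap(p, st.st_size);
		close(fd);
		return 0;
	}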