author     Fengguang Wu <wfg@mail.ustc.edu.cn>                   2007-07-19 10:48:07 +0200
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-07-19 19:04:44 +0200
commit     fe3cba17c49471e99d3421e675fc8b3deaaf0b70 (patch)
tree       df696c4584c6db2e439f068d2474fcb946ca587d /mm
parent     readahead: pass real splice size (diff)
mm: share PG_readahead and PG_reclaim
Share the same page flag bit for PG_readahead and PG_reclaim.

One is used only on file reads, the other only on emergency writes. One is
used mostly for fresh/young pages, the other for old pages.

Combinations of possible interactions are:

a) clear PG_reclaim => implicit clear of PG_readahead
	it will delay an asynchronous readahead into a synchronous one
	it actually does _good_ for readahead:
		the pages will be reclaimed soon, it's readahead thrashing!
		in this case, synchronous readahead makes more sense.

b) clear PG_readahead => implicit clear of PG_reclaim
	one (and only one) page will not be reclaimed in time
	it can be avoided by checking PageWriteback(page) in readahead first

c) set PG_reclaim => implicit set of PG_readahead
	will confuse readahead and make it restart the size rampup process
	it's a trivial problem, and can mostly be avoided by checking
	PageWriteback(page) first in readahead

d) set PG_readahead => implicit set of PG_reclaim
	PG_readahead will never be set on already cached pages.
	PG_reclaim will always be cleared on dirtying a page.
	so not a problem.

In summary,
	a)   we get better behavior
	b,d) possible interactions can be avoided
	c)   racy condition exists that might affect readahead, but the
	     chance is _really_ low, and the hurt on readahead is trivial.

Compound pages also use PG_reclaim, but for now they do not interact with
reclaim/readahead code.

Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
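The mm-only diff below relies on the two flags aliasing the same bit, but the
aliasing itself is defined outside mm/ and is not shown here. The following is
a minimal userspace sketch of the shared-bit idea, illustrating why readahead
must test PageWriteback() before interpreting the bit (cases a and c above).
The DEMO_* names, the demo_page struct, and the bit helpers are illustrative
assumptions, not kernel code.

/*
 * Illustrative sketch only: one bit serves as PG_readahead on clean
 * file-read pages and as PG_reclaim on pages queued for writeback.
 */
#include <stdio.h>

enum demo_pageflags {
	DEMO_PG_writeback = 0,
	DEMO_PG_reclaim   = 1,
	DEMO_PG_readahead = DEMO_PG_reclaim,	/* shared bit, as in this patch */
};

struct demo_page { unsigned long flags; };

static int  demo_test(const struct demo_page *p, int bit) { return (int)((p->flags >> bit) & 1UL); }
static void demo_set(struct demo_page *p, int bit)        { p->flags |= 1UL << bit; }
static void demo_clear(struct demo_page *p, int bit)      { p->flags &= ~(1UL << bit); }

int main(void)
{
	struct demo_page page = { 0 };

	/* Readahead marks a freshly read page... */
	demo_set(&page, DEMO_PG_readahead);

	/* ...later, reclaim tags the same page for writeback + reclaim. */
	demo_set(&page, DEMO_PG_writeback);
	demo_set(&page, DEMO_PG_reclaim);

	/*
	 * Because the bit is shared, its meaning is ambiguous on its own:
	 * a page under writeback is carrying PG_reclaim, so readahead must
	 * bail out instead of treating the bit as PG_readahead.
	 */
	if (demo_test(&page, DEMO_PG_writeback))
		printf("bit acts as PG_reclaim -> skip async readahead\n");
	else if (demo_test(&page, DEMO_PG_readahead))
		printf("bit acts as PG_readahead -> trigger async readahead\n");

	demo_clear(&page, DEMO_PG_readahead);
	return 0;
}

This mirrors the readahead.c hunk at the end of the diff, where the new
PageWriteback(page) check is performed before ClearPageReadahead(page).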
Diffstat (limited to 'mm')
-rw-r--r--   mm/page-writeback.c   1
-rw-r--r--   mm/page_alloc.c       7
-rw-r--r--   mm/readahead.c        6
3 files changed, 7 insertions, 7 deletions
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index e62482718012..51b3eb6ab445 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -920,6 +920,7 @@ int clear_page_dirty_for_io(struct page *page)
BUG_ON(!PageLocked(page));
+ ClearPageReclaim(page);
if (mapping && mapping_cap_account_dirty(mapping)) {
/*
* Yes, Virginia, this is indeed insane.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2165be9462c0..43cb3b3e1679 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -453,12 +453,6 @@ static inline int free_pages_check(struct page *page)
1 << PG_reserved |
1 << PG_buddy ))))
bad_page(page);
- /*
- * PageReclaim == PageTail. It is only an error
- * for PageReclaim to be set if PageCompound is clear.
- */
- if (unlikely(!PageCompound(page) && PageReclaim(page)))
- bad_page(page);
if (PageDirty(page))
__ClearPageDirty(page);
/*
@@ -602,7 +596,6 @@ static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
1 << PG_locked |
1 << PG_active |
1 << PG_dirty |
- 1 << PG_reclaim |
1 << PG_slab |
1 << PG_swapcache |
1 << PG_writeback |
diff --git a/mm/readahead.c b/mm/readahead.c
index 5b3c9b7d70fa..205a4a431516 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -448,6 +448,12 @@ page_cache_readahead_ondemand(struct address_space *mapping,
return 0;
if (page) {
+ /*
+ * It can be PG_reclaim.
+ */
+ if (PageWriteback(page))
+ return 0;
+
ClearPageReadahead(page);
/*