author    Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-11-16 01:41:52 +0100
committer Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-11-16 01:41:52 +0100
commit    8c0863403f109a43d7000b4646da4818220d501f (patch)
tree      925a87846bda5a0f427cbd19b65c9ea0375ebdd3 /mm/page-writeback.c
parent    Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/ne... (diff)
dirty page balancing: Get rid of broken unmapped_ratio logic
This code harks back to the days when we didn't count dirty mapped pages, which led us to try to balance the number of dirty unmapped pages by how much unmapped memory there was in the system. That makes no sense any more, since now the dirty counts include the mapped pages.

Not to mention that the math doesn't work with HIGHMEM machines anyway, and causes the unmapped_ratio to potentially turn negative (which we do catch thanks to clamping it at a minimum value, but I mention that as an indication of how broken the code is).

The code also was written at a time when the default dirty ratio was much larger, and the unmapped_ratio logic effectively capped that large dirty ratio a bit. Again, we've since lowered the dirty ratio rather aggressively, further lessening the point of that code.

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
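To see how the removed arithmetic goes wrong on a HIGHMEM box, here is a minimal userspace sketch with made-up page counts. The numbers are purely illustrative; the point is that available_memory (from determine_dirtyable_memory()) excludes highmem, while the NR_FILE_MAPPED and NR_ANON_PAGES counters cover the whole machine, so the subtraction can exceed 100.

#include <stdio.h>

int main(void)
{
	/*
	 * Made-up page counts (4K pages) for a 4GB x86 box with HIGHMEM:
	 * available_memory excludes highmem, but the mapped/anon counters
	 * cover every page in the machine.
	 */
	long available_memory = 229376;		/* ~896MB of dirtyable lowmem */
	long mapped_plus_anon = 500000;		/* NR_FILE_MAPPED + NR_ANON_PAGES */

	/* The calculation this patch removes, written with signed types: */
	long unmapped_ratio = 100 - (mapped_plus_anon * 100) / available_memory;

	/*
	 * Prints "unmapped_ratio = -117": more mapped pages than dirtyable
	 * pages, so the "ratio" is negative and only the later 5% clamp
	 * keeps dirty_ratio sane.
	 */
	printf("unmapped_ratio = %ld\n", unmapped_ratio);
	return 0;
}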
Diffstat (limited to 'mm/page-writeback.c')
-rw-r--r--  mm/page-writeback.c  8
1 file changed, 0 insertions(+), 8 deletions(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 81a91e6f1f99..d55cfcae2ef1 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -297,20 +297,12 @@ get_dirty_limits(long *pbackground, long *pdirty, long *pbdi_dirty,
 {
 	int background_ratio;		/* Percentages */
 	int dirty_ratio;
-	int unmapped_ratio;
 	long background;
 	long dirty;
 	unsigned long available_memory = determine_dirtyable_memory();
 	struct task_struct *tsk;
 
-	unmapped_ratio = 100 - ((global_page_state(NR_FILE_MAPPED) +
-				global_page_state(NR_ANON_PAGES)) * 100) /
-					available_memory;
-
 	dirty_ratio = vm_dirty_ratio;
-	if (dirty_ratio > unmapped_ratio / 2)
-		dirty_ratio = unmapped_ratio / 2;
-
 	if (dirty_ratio < 5)
 		dirty_ratio = 5;
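To make the effect of the surviving clamp concrete, here is a small sketch of how a percentage ratio becomes a page limit. The helper name and the final scaling step are illustrative simplifications, not a verbatim copy of get_dirty_limits(); only the clamp itself comes from the context lines above.

/* Illustrative only: roughly how a dirty ratio becomes a page limit. */
static long dirty_limit_pages(int vm_dirty_ratio, unsigned long available_memory)
{
	int dirty_ratio = vm_dirty_ratio;

	/* The clamp kept by this patch: never let the ratio fall below 5%. */
	if (dirty_ratio < 5)
		dirty_ratio = 5;

	/* Scale the percentage into a number of dirtyable pages. */
	return (dirty_ratio * available_memory) / 100;
}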