author:    Wu Fengguang <fengguang.wu@intel.com>  2011-12-06 20:17:17 +0100
committer: Wu Fengguang <fengguang.wu@intel.com>  2011-12-18 07:20:30 +0100
commit:    5b9b357435a51ff14835c06d8b00765a4c68f313 (patch)
tree:      858bdc6ce0984aa0a9abc88d4c53931e6b299312 /Makefile
parent:    writeback: max, min and target dirty pause time (diff)
download:  linux-5b9b357435a51ff14835c06d8b00765a4c68f313.tar.xz
           linux-5b9b357435a51ff14835c06d8b00765a4c68f313.zip
writeback: avoid tiny dirty poll intervals
The LKP tests see a big 56% regression for the case fio_mmap_randwrite_64k. Shaohua root-caused it to the much smaller dirty pause times and hence much more frequent invocations of the IO-less balance_dirty_pages(). Since fio_mmap_randwrite_64k effectively contains both reads and writes, the more frequent pauses triggered more idling in the cfq IO scheduler.

The solution is to increase the pause time all the way up to the max 200ms in this case, which is found to restore most of the performance. This will help reduce CPU overheads in other cases, too.

Note that I don't expect many performance-critical workloads to run this access pattern: the mmap read-on-write is rather inefficient and could be avoided by doing normal write() syscalls.

CC: Jan Kara <jack@suse.cz>
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reported-by: Li Shaohua <shaohua.li@intel.com>
Tested-by: Li Shaohua <shaohua.li@intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
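The fix, in effect, lets a throttled task dirty more pages between pauses so that each pause can be longer. The sketch below is a userspace illustration of that math only, not the patch itself: HZ, MIN_PAUSE, pick_pause() and the nr_dirtied_pause out-parameter are hypothetical stand-ins, and only the 200ms cap comes from the changelog above. The real logic lives in balance_dirty_pages() in mm/page-writeback.c.

    /*
     * Sketch of the idea only (userspace, hypothetical names); the real
     * logic is in mm/page-writeback.c.  Rather than napping for a few
     * jiffies on every call, a dirtier whose computed pause would be
     * tiny is given a proportionally larger dirty budget and takes one
     * longer nap, capped at MAX_PAUSE (the 200ms from the changelog).
     */
    #include <stdio.h>

    #define HZ        100           /* jiffies per second (assumed) */
    #define MAX_PAUSE (HZ / 5)      /* 200ms cap from the changelog */
    #define MIN_PAUSE (HZ / 25)     /* 40ms: below this, poll overhead
                                       dominates (illustrative value) */

    /*
     * Given the per-task throttle rate (pages/sec) and the pages dirtied
     * since the last pause, return how long to sleep (in jiffies) and
     * how many pages the task may dirty before it must pause again.
     */
    static long pick_pause(unsigned long task_ratelimit,  /* pages/sec */
                           unsigned long pages_dirtied,
                           unsigned long *nr_dirtied_pause)
    {
        long pause = HZ * pages_dirtied / task_ratelimit;

        /*
         * Too-small pause: bump it to the 200ms cap.  The dirty budget
         * below grows in the same proportion, so the effective throttle
         * rate is unchanged.
         */
        if (pause < MIN_PAUSE)
            pause = MAX_PAUSE;
        if (pause > MAX_PAUSE)
            pause = MAX_PAUSE;

        *nr_dirtied_pause = task_ratelimit * pause / HZ;
        return pause;
    }

    int main(void)
    {
        unsigned long budget;
        long pause = pick_pause(1600 /* pages/s */, 8 /* dirtied */,
                                &budget);

        /* 8 pages at 1600 pages/s would mean a 5ms nap; the sketch
         * stretches that to the 200ms cap and a 320-page budget. */
        printf("pause=%ldms budget=%lu pages\n",
               pause * 1000 / HZ, budget);
        return 0;
    }

With these example numbers, a 5ms nap every 8 pages becomes one 200ms nap every 320 pages: the throttle rate stays the same, but balance_dirty_pages() runs 40x less often, which is the effect the changelog describes as keeping cfq from idling between pauses.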
Diffstat (limited to 'Makefile')
0 files changed, 0 insertions, 0 deletions