author | Oleg Drokin <green@linuxhacker.ru> | 2016-06-09 00:33:59 +0200 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2016-06-09 23:23:11 +0200 |
commit | 18aba41cbfbcd138e9f6d8d446427d8b7691c194 (patch) | |
tree | c75be3a413a4fca8c2e0058d75df06a2c22610fa /mm | |
parent | mm: introduce dedicated WQ_MEM_RECLAIM workqueue to do lru_add_drain_all (diff) | |
mm/fadvise.c: do not discard partial pages with POSIX_FADV_DONTNEED
I noticed that the logic in the fadvise64_64 syscall is incorrect for
partial pages. While the first page of the region is correctly skipped if
it is partial, the last page of the region is mistakenly discarded.
This causes problems for applications that read data in
non-page-aligned chunks and discard already-processed data between
reads.
A somewhat misguided application that does something like "write(XX bytes,
non-page-aligned); drop the data it just wrote; repeat" takes a
significant performance penalty as a result.
Link: http://lkml.kernel.org/r/1464917140-1506698-1-git-send-email-green@linuxhacker.ru
Signed-off-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/fadvise.c | 11 |
1 file changed, 11 insertions(+), 0 deletions(-)
```diff
diff --git a/mm/fadvise.c b/mm/fadvise.c
index b8024fa7101d..6c707bfe02fd 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -126,6 +126,17 @@ SYSCALL_DEFINE4(fadvise64_64, int, fd, loff_t, offset, loff_t, len, int, advice)
 		 */
 		start_index = (offset+(PAGE_SIZE-1)) >> PAGE_SHIFT;
 		end_index = (endbyte >> PAGE_SHIFT);
+		if ((endbyte & ~PAGE_MASK) != ~PAGE_MASK) {
+			/* First page is tricky as 0 - 1 = -1, but pgoff_t
+			 * is unsigned, so the end_index >= start_index
+			 * check below would be true and we'll discard the whole
+			 * file cache which is not what was asked.
+			 */
+			if (end_index == 0)
+				break;
+
+			end_index--;
+		}
 
 		if (end_index >= start_index) {
 			unsigned long count = invalidate_mapping_pages(mapping,
```
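For concreteness, here is a small standalone sketch of the arithmetic the hunk above performs, assuming a 4096-byte page and a made-up 6000-byte DONTNEED range starting at offset 0; the constants and printed labels are illustrative only and mirror, rather than reuse, the kernel code.

```c
/* Illustration of the end_index adjustment; not kernel code. */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long offset = 0, len = 6000;		/* made-up range */
	unsigned long endbyte = offset + len - 1;	/* inclusive last byte */

	unsigned long start_index = (offset + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
	unsigned long end_index = endbyte >> PAGE_SHIFT;

	/* Old behaviour: pages 0..1 invalidated, including the partial page 1. */
	printf("before fix: invalidate pages %lu..%lu\n", start_index, end_index);

	/* New behaviour: back off the partial last page, guarding against
	 * pgoff_t wraparound when the whole range sits inside page 0. */
	if ((endbyte & ~PAGE_MASK) != ~PAGE_MASK) {
		if (end_index == 0)
			return 0;	/* nothing to drop */
		end_index--;
	}

	if (end_index >= start_index)
		printf("after fix:  invalidate pages %lu..%lu\n", start_index, end_index);
	return 0;
}
```

With these numbers, endbyte is 5999, so the range ends mid-page: the old code invalidated pages 0..1, while the fixed code invalidates only page 0 and keeps the partially covered page 1 in cache.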