path: root/fs
* Merge branch 'for-linus' of git://git.samba.org/sfrench/cifs-2.6
  Linus Torvalds, 2014-10-03 (3 files changed, -8/+4)

  Pull cifs/smb3 fixes from Steve French:
   "Fix for CIFS/SMB3 oops on reconnect during readpages (3.17 regression)
    and for incorrectly closing file handle in symlink error cases"

  * 'for-linus' of git://git.samba.org/sfrench/cifs-2.6:
    CIFS: Fix readpages retrying on reconnects
    Fix problem recognizing symlinks
  * CIFS: Fix readpages retrying on reconnects
    Pavel Shilovsky, 2014-10-02 (1 file changed, -7/+1)

    If we get a reconnect error from async readv we re-add the pages to
    page_list and continue the loop. That is wrong, because these pages
    have already been added to the pagecache, but page_list holds pages
    that have not been added to the pagecache yet. This ends up with a
    general protection fault in put_pages after readpages. Fix it by not
    retrying the read of these pages and falling back to readpage instead.

    Fixes debian bug 762306

    Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
    Signed-off-by: Steve French <smfrench@gmail.com>
    Tested-by: Arthur Marsh <arthur.marsh@internode.on.net>
  * Fix problem recognizing symlinks
    Steve French, 2014-10-02 (2 files changed, -1/+3)

    Changeset eb85d94bd introduced a problem where if a cifs open fails
    during query info of a file we will still try to close the file
    (happens with certain types of reparse points) even though the file
    handle is not valid.

    In addition for SMB2/SMB3 we were not mapping the return code returned
    by Windows when trying to open a file (like a Windows NFS symlink)
    which is a reparse point.

    Signed-off-by: Steve French <smfrench@gmail.com>
    Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org>
    CC: stable <stable@vger.kernel.org> #v3.13+
* ocfs2/dlm: should put mle when goto kill in dlm_assert_master_handler
  alex chen, 2014-10-03 (1 file changed, -0/+4)

  In dlm_assert_master_handler, the mle obtained via dlm_find_mle should
  be put when we goto kill; otherwise this mle will never be released.

  Signed-off-by: Alex Chen <alex.chen@huawei.com>
  Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
  Reviewed-by: joyce.xue <xuejiufei@huawei.com>
  Cc: Mark Fasheh <mfasheh@suse.com>
  Cc: Joel Becker <jlbec@evilplan.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* nfsd4: fix corruption of NFSv4 read data
  J. Bruce Fields, 2014-09-30 (1 file changed, -1/+2)

  The calculation of page_ptr here is wrong in the case the read doesn't
  start at an offset that is a multiple of a page.

  The result is that nfs4svc_encode_compoundres sets rq_next_page to a
  value one too small, and then the loop in svc_free_res_pages may
  incorrectly fail to clear a page pointer in rq_respages[].

  Pages left in rq_respages[] are available for the next rpc request to
  use, so xdr data may be written to that page, which may hold data still
  waiting to be transmitted to the client or data in the page cache.

  The observed result was silent data corruption seen on an NFSv4 client.

  We tag this as "fixing" 05638dc73af2 because that commit exposed this
  bug, though the incorrect calculation predates it.

  Particular thanks to Andrea Arcangeli and David Gilbert for analysis
  and testing.

  Fixes: 05638dc73af2 "nfsd4: simplify server xdr->next_page use"
  Cc: stable@vger.kernel.org
  Reported-by: Andrea Arcangeli <aarcange@redhat.com>
  Tested-by: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
  Linus Torvalds, 2014-09-28 (5 files changed, -78/+47)

  Pull vfs fixes from Al Viro:
   "Assorted fixes + unifying __d_move() and __d_materialise_dentry() +
    minimal regression fix for d_path() of victims of overwriting rename()
    ported on top of that"

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    vfs: Don't exchange "short" filenames unconditionally.
    fold swapping ->d_name.hash into switch_names()
    fold unlocking the children into dentry_unlock_parents_for_move()
    kill __d_materialise_dentry()
    __d_materialise_dentry(): flip the order of arguments
    __d_move(): fold manipulations with ->d_child/->d_subdirs
    don't open-code d_rehash() in d_materialise_unique()
    pull rehashing and unlocking the target dentry into __d_materialise_dentry()
    ufs: deal with nfsd/iget races
    fuse: honour max_read and max_write in direct_io mode
    shmem: fix nlink for rename overwrite directory
  * vfs: Don't exchange "short" filenames unconditionally.
    Mikhail Efremov, 2014-09-27 (1 file changed, -9/+18)

    Only exchange source and destination filenames if flags contain
    RENAME_EXCHANGE. In case an executable file was running and got
    replaced by another file, /proc/PID/exe should still show the correct
    file name, not the old name of the file by which it was replaced.

    The scenario when this bug manifests itself was like this:
    * ALT Linux uses rpm and start-stop-daemon;
    * during a package upgrade rpm creates a temporary file for an
      executable to rename it upon successful unpacking;
    * start-stop-daemon is run subsequently and it obtains the
      (nonexistent) temporary filename via /proc/PID/exe, thus failing to
      identify the running process.

    Note that "long" filenames (> DNAME_INLINE_LEN) are still exchanged
    without RENAME_EXCHANGE and this behaviour has existed long enough
    (it should be fixed too apparently). So this patch is just an interim
    workaround that restores behavior for "short" names as it was before
    the changes introduced by commit da1ce0670c14 ("vfs: add cross-rename").

    See https://lkml.org/lkml/2014/9/7/6 for details.

    AV: the comments about being more careful with ->d_name.hash than with
    ->d_name.name are from back in 2.3.40s; they became obsolete by
    2.3.60s, when we started to unhash the target instead of swapping hash
    chain positions followed by d_delete() as we used to do when dcache
    was first introduced.

    Acked-by: Miklos Szeredi <mszeredi@suse.cz>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Alexander Viro <viro@zeniv.linux.org.uk>
    Cc: linux-fsdevel@vger.kernel.org
    Cc: stable@vger.kernel.org
    Fixes: da1ce0670c14 "vfs: add cross-rename"
    Signed-off-by: Mikhail Efremov <sem@altlinux.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  * fold swapping ->d_name.hash into switch_names()
    Linus Torvalds, 2014-09-27 (1 file changed, -2/+1)

    and do it along with ->d_name.len there

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  * fold unlocking the children into dentry_unlock_parents_for_move()
    Al Viro, 2014-09-27 (1 file changed, -5/+4)

    ... renaming it into dentry_unlock_for_move() and making it more
    symmetric with dentry_lock_for_move().

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  * kill __d_materialise_dentry()
    Al Viro, 2014-09-27 (1 file changed, -44/+10)

    it folds into __d_move() now

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  * __d_materialise_dentry(): flip the order of arguments
    Al Viro, 2014-09-27 (1 file changed, -24/+20)

    ... thus making it much closer to (now unreachable, BTW) IS_ROOT(dentry)
    case in __d_move(). A bit more and it'll fold in.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  * __d_move(): fold manipulations with ->d_child/->d_subdirs
    Al Viro, 2014-09-27 (1 file changed, -5/+3)

    list_del() + list_add() is a slightly pessimised list_move()
    list_del() + INIT_LIST_HEAD() is a slightly pessimised list_del_init()

    Interleaving those makes the resulting code even worse. And harder to
    follow...

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  * don't open-code d_rehash() in d_materialise_unique()
    Al Viro, 2014-09-27 (1 file changed, -5/+1)

    ... and get rid of duplicate BUG_ON() there

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  * pull rehashing and unlocking the target dentry into __d_materialise_dentry()
    Al Viro, 2014-09-27 (1 file changed, -7/+4)

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  * ufs: deal with nfsd/iget races
    Al Viro, 2014-09-27 (2 files changed, -1/+9)

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  * fuse: honour max_read and max_write in direct_io mode
    Miklos Szeredi, 2014-09-27 (2 files changed, -1/+2)

    The third argument of fuse_get_user_pages(), "nbytesp", refers to the
    number of bytes the caller asked to pack into the fuse request. This
    value may be less than the capacity of the fuse request or of the
    iov_iter. So fuse_get_user_pages() must ensure that *nbytesp won't
    grow.

    Now that the helper iov_iter_get_pages() performs all the hard work of
    extracting pages from the iov_iter, this can be done by passing a
    properly calculated "maxsize" to the helper.

    The other caller of iov_iter_get_pages() (dio_refill_pages()) doesn't
    need this capability, so pass LONG_MAX as the maxsize argument here.

    Fixes: c9c37e2e6378 ("fuse: switch to iov_iter_get_pages()")
    Reported-by: Werner Baumann <werner.baumann@onlinehome.de>
    Tested-by: Maxim Patlasov <mpatlasov@parallels.com>
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs/cachefiles: add missing \n to kerror conversions
  Fabian Frederick, 2014-09-26 (6 files changed, -33/+33)

  Commit 0227d6abb378 ("fs/cachefiles: replace kerror by pr_err") didn't
  include the newline featured in the original kerror definition.

  Signed-off-by: Fabian Frederick <fabf@skynet.be>
  Reported-by: David Howells <dhowells@redhat.com>
  Acked-by: David Howells <dhowells@redhat.com>
  Cc: <stable@vger.kernel.org> [3.16.x]
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mm: softdirty: addresses before VMAs in PTE holes aren't softdirty
  Peter Feiner, 2014-09-26 (1 file changed, -9/+18)

  In PTE holes that contain VM_SOFTDIRTY VMAs, unmapped addresses before
  VM_SOFTDIRTY VMAs are reported as softdirty by /proc/pid/pagemap. This
  bug was introduced in commit 68b5a6524856 ("mm: softdirty: respect
  VM_SOFTDIRTY in PTE holes"). That commit made /proc/pid/pagemap look at
  VM_SOFTDIRTY in PTE holes but neglected to observe the start of VMAs
  returned by find_vma.

  Tested: Wrote a selftest that creates a PMD-sized VMA then unmaps the
  first page and asserts that the page is not softdirty. I'm going to
  send the pagemap selftest in a later commit.

  Signed-off-by: Peter Feiner <pfeiner@google.com>
  Cc: Cyrill Gorcunov <gorcunov@openvz.org>
  Cc: Pavel Emelyanov <xemul@parallels.com>
  Cc: Hugh Dickins <hughd@google.com>
  Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
  Cc: Jamie Liu <jamieliu@google.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
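
  A rough standalone sketch of the clamping described above (report_range(),
  report_hole() and the vma struct here are illustrative stand-ins, not the
  kernel's pagemap code): the VM_SOFTDIRTY flag of the VMA returned by
  find_vma() is applied only from that VMA's start, so hole addresses before
  vma->vm_start stay clean.

  #include <stdbool.h>
  #include <stdio.h>

  struct vma { unsigned long vm_start, vm_end; bool softdirty; };

  /* Stand-in for emitting pagemap entries for a range of addresses. */
  static void report_range(unsigned long start, unsigned long end, bool sd)
  {
          printf("[%#lx, %#lx) softdirty=%d\n", start, end, sd);
  }

  /* Fill a PTE hole [start, end); vma is what find_vma(start) returned. */
  static void report_hole(unsigned long start, unsigned long end,
                          const struct vma *vma)
  {
          unsigned long boundary = end;

          if (vma) {
                  boundary = vma->vm_start;
                  if (boundary < start)
                          boundary = start;
                  if (boundary > end)
                          boundary = end;
          }

          /* Addresses before the VMA's start are never soft-dirty. */
          if (boundary > start)
                  report_range(start, boundary, false);

          /* Only the part covered by the VMA inherits VM_SOFTDIRTY. */
          if (boundary < end)
                  report_range(boundary, end, vma && vma->softdirty);
  }

  int main(void)
  {
          struct vma v = { 0x7000, 0x9000, true };

          /* The first two pages of the hole must be reported clean. */
          report_hole(0x5000, 0x9000, &v);
          return 0;
  }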
* ocfs2/dlm: do not get resource spinlock if lockres is new
  Joseph Qi, 2014-09-26 (1 file changed, -8/+10)

  There is a deadlock case which was reported by Guozhonghua:
  https://oss.oracle.com/pipermail/ocfs2-devel/2014-September/010079.html

  This case is caused by &res->spinlock and &dlm->master_lock misordering
  in different threads. It was introduced by commit 8d400b81cc83
  ("ocfs2/dlm: Clean up refmap helpers"). Since the lockres is new, it
  does not require &res->spinlock. So remove it.

  Fixes: 8d400b81cc83 ("ocfs2/dlm: Clean up refmap helpers")
  Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
  Reviewed-by: joyce.xue <xuejiufei@huawei.com>
  Reported-by: Guozhonghua <guozhonghua@h3c.com>
  Cc: Joel Becker <jlbec@evilplan.org>
  Cc: Mark Fasheh <mfasheh@suse.com>
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* nilfs2: fix data loss with mmap()
  Andreas Rohner, 2014-09-26 (1 file changed, -1/+6)

  This bug leads to reproducible silent data loss, despite the use of
  msync(), sync() and a clean unmount of the file system. It is easily
  reproducible with the following script:

  ----------------[BEGIN SCRIPT]--------------------
  mkfs.nilfs2 -f /dev/sdb
  mount /dev/sdb /mnt
  dd if=/dev/zero bs=1M count=30 of=/mnt/testfile
  umount /mnt
  mount /dev/sdb /mnt
  CHECKSUM_BEFORE="$(md5sum /mnt/testfile)"
  /root/mmaptest/mmaptest /mnt/testfile 30 10 5
  sync
  CHECKSUM_AFTER="$(md5sum /mnt/testfile)"
  umount /mnt
  mount /dev/sdb /mnt
  CHECKSUM_AFTER_REMOUNT="$(md5sum /mnt/testfile)"
  umount /mnt
  echo "BEFORE MMAP:\t$CHECKSUM_BEFORE"
  echo "AFTER MMAP:\t$CHECKSUM_AFTER"
  echo "AFTER REMOUNT:\t$CHECKSUM_AFTER_REMOUNT"
  ----------------[END SCRIPT]--------------------

  The mmaptest tool looks something like this (very simplified, with
  error checking removed):

  ----------------[BEGIN mmaptest]--------------------
  data = mmap(NULL, file_size - file_offset, PROT_READ | PROT_WRITE,
              MAP_SHARED, fd, file_offset);

  for (i = 0; i < write_count; ++i) {
          memcpy(data + i * 4096, buf, sizeof(buf));
          msync(data, file_size - file_offset, MS_SYNC);
  }
  ----------------[END mmaptest]--------------------

  The output of the script looks something like this:

  BEFORE MMAP:    281ed1d5ae50e8419f9b978aab16de83  /mnt/testfile
  AFTER MMAP:     6604a1c31f10780331a6850371b3a313  /mnt/testfile
  AFTER REMOUNT:  281ed1d5ae50e8419f9b978aab16de83  /mnt/testfile

  So it is clear that the changes done using mmap() do not survive a
  remount. This can be reproduced 100% of the time. The problem was
  introduced in commit 136e8770cd5d ("nilfs2: fix issue of
  nilfs_set_page_dirty() for page at EOF boundary").

  If the page was read with mpage_readpage() or mpage_readpages() for
  example, then it has no buffers attached to it. In that case
  page_has_buffers(page) in nilfs_set_page_dirty() will be false.
  Therefore nilfs_set_file_dirty() is never called and the pages are
  never collected and never written to disk.

  This patch fixes the problem by also calling nilfs_set_file_dirty() if
  the page has no buffers attached to it.

  [akpm@linux-foundation.org: s/PAGE_SHIFT/PAGE_CACHE_SHIFT/]
  Signed-off-by: Andreas Rohner <andreas.rohner@gmx.net>
  Tested-by: Andreas Rohner <andreas.rohner@gmx.net>
  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ocfs2: free vol_label in ocfs2_delete_osb()
  Joseph Qi, 2014-09-26 (1 file changed, -0/+1)

  osb->vol_label is malloced in ocfs2_initialize_super but not freed if
  an error occurs or during umount, thus causing a memory leak.

  Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
  Reviewed-by: joyce.xue <xuejiufei@huawei.com>
  Cc: Mark Fasheh <mfasheh@suse.com>
  Cc: Joel Becker <jlbec@evilplan.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge tag 'fscache-fixes-20140917' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
  Linus Torvalds, 2014-09-23 (4 files changed, -11/+24)

  Pull fs-cache fixes from David Howells:

   - Put a timeout in releasepage() to deal with a recursive hang between
     the memory allocator, writeback, ext4 and fscache under memory
     pressure.

   - Fix a pair of refcount bugs in the fscache error handling.

   - Remove a couple of unused pagevecs.

   - The cachefiles requirement that the base directory support rename
     should permit rename2 as an alternative - otherwise certain
     filesystems cannot now be used as backing stores (such as ext4).

  * tag 'fscache-fixes-20140917' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs:
    CacheFiles: Handle rename2
    cachefiles: remove two unused pagevecs.
    FS-Cache: refcount becomes corrupt under vma pressure.
    FS-Cache: Reduce cookie ref count if submit fails.
    FS-Cache: Timeout for releasepage()
  * CacheFiles: Handle rename2
    David Howells, 2014-09-18 (1 file changed, -1/+2)

    Not all filesystems now provide the rename i_op - ext4 for one - but
    rather provide the rename2 i_op. CacheFiles checks that the filesystem
    has rename and so will reject ext4 now with EPERM:

        CacheFiles: Failed to register: -1

    Fix this by checking for rename2 as an alternative. The call to
    vfs_rename() actually handles selection of the appropriate function,
    so we needn't worry about that.

    Turning on debugging shows:

        [cachef] ==> cachefiles_get_directory(,,cache)
        [cachef] subdir -> ffff88000b22b778 positive
        [cachef] <== cachefiles_get_directory() = -1 [check]

    where -1 is EPERM.

    Signed-off-by: David Howells <dhowells@redhat.com>
  * cachefiles: remove two unused pagevecs.
    NeilBrown, 2014-09-18 (1 file changed, -6/+0)

    These two have been unused since commit
    c4d6d8dbf335c7fa47341654a37c53a512b519bb ("CacheFiles: Fix the marking
    of cached pages") in 3.8.

    Signed-off-by: NeilBrown <neilb@suse.de>
    Signed-off-by: David Howells <dhowells@redhat.com>
  * FS-Cache: refcount becomes corrupt under vma pressure.
    Milosz Tanski, 2014-09-17 (1 file changed, -3/+4)

    In rare cases under heavy VMA pressure the ref count for an fscache
    cookie becomes corrupt. In this case we decrement the ref count even
    if we fail before incrementing the refcount.

    FS-Cache: Assertion failed
    bnode-eca5f9c6/syslog 0 > 0 is false
    ------------[ cut here ]------------
    kernel BUG at fs/fscache/cookie.c:519!
    invalid opcode: 0000 [#1] SMP
    Call Trace:
     [<ffffffffa01ba060>] __fscache_relinquish_cookie+0x50/0x220 [fscache]
     [<ffffffffa02d64ce>] ceph_fscache_unregister_inode_cookie+0x3e/0x50 [ceph]
     [<ffffffffa02ae1d3>] ceph_destroy_inode+0x33/0x200 [ceph]
     [<ffffffff811cf67e>] ? __fsnotify_inode_delete+0xe/0x10
     [<ffffffff811a9e0c>] destroy_inode+0x3c/0x70
     [<ffffffff811a9f51>] evict+0x111/0x180
     [<ffffffff811aa763>] iput+0x103/0x190
     [<ffffffff811a5de8>] __dentry_kill+0x1c8/0x220
     [<ffffffff811a5f31>] shrink_dentry_list+0xf1/0x250
     [<ffffffff811a762c>] prune_dcache_sb+0x4c/0x60
     [<ffffffff811930af>] super_cache_scan+0xff/0x170
     [<ffffffff8113d7a0>] shrink_slab_node+0x140/0x2c0
     [<ffffffff8113f2da>] shrink_slab+0x8a/0x130
     [<ffffffff81142572>] balance_pgdat+0x3e2/0x5d0
     [<ffffffff811428ca>] kswapd+0x16a/0x4a0
     [<ffffffff810a43f0>] ? __wake_up_sync+0x20/0x20
     [<ffffffff81142760>] ? balance_pgdat+0x5d0/0x5d0
     [<ffffffff81083e09>] kthread+0xc9/0xe0
     [<ffffffff81010000>] ? ftrace_raw_event_xen_mmu_release_ptpage+0x70/0x90
     [<ffffffff81083d40>] ? flush_kthread_worker+0xb0/0xb0
     [<ffffffff8159f63c>] ret_from_fork+0x7c/0xb0
     [<ffffffff81083d40>] ? flush_kthread_worker+0xb0/0xb0
    RIP [<ffffffffa01b984b>] __fscache_disable_cookie+0x1db/0x210 [fscache]
    RSP <ffff8803bc85f9b8>
    ---[ end trace 254d0d7c74a01f25 ]---

    Signed-off-by: Milosz Tanski <milosz@adfin.com>
    Signed-off-by: David Howells <dhowells@redhat.com>
  * FS-Cache: Reduce cookie ref count if submit fails.
    Milosz Tanski, 2014-08-27 (1 file changed, -0/+1)

    I've been seeing issues with disposing cookies under vma pressure. The
    symptom is that the refcount gets out of sync. In this case we fail to
    decrement the refcount if submit fails. I found this while auditing
    the error handling in and around cookie operations.

    Signed-off-by: Milosz Tanski <milosz@adfin.com>
    Signed-off-by: David Howells <dhowells@redhat.com>
  * FS-Cache: Timeout for releasepage()
    Milosz Tanski, 2014-08-27 (1 file changed, -1/+17)

    This is meant to avoid a recursive hang caused by the underlying
    filesystem trying to grab a free page and causing a write-out.

    INFO: task kworker/u30:7:28375 blocked for more than 120 seconds.
          Not tainted 3.15.0-virtual #74
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kworker/u30:7   D 0000000000000000     0 28375      2 0x00000000
    Workqueue: fscache_operation fscache_op_work_func [fscache]
     ffff88000b147148 0000000000000046 0000000000000000 ffff88000b1471c8
     ffff8807aa031820 0000000000014040 ffff88000b147fd8 0000000000014040
     ffff880f0c50c860 ffff8807aa031820 ffff88000b147158 ffff88007be59cd0
    Call Trace:
     [<ffffffff815930e9>] schedule+0x29/0x70
     [<ffffffffa018bed5>] __fscache_wait_on_page_write+0x55/0x90 [fscache]
     [<ffffffff810a4350>] ? __wake_up_sync+0x20/0x20
     [<ffffffffa018c135>] __fscache_maybe_release_page+0x65/0x1e0 [fscache]
     [<ffffffffa02ad813>] ceph_releasepage+0x83/0x100 [ceph]
     [<ffffffff811635b0>] ? anon_vma_fork+0x130/0x130
     [<ffffffff8112cdd2>] try_to_release_page+0x32/0x50
     [<ffffffff81140096>] shrink_page_list+0x7e6/0x9d0
     [<ffffffff8113f278>] ? isolate_lru_pages.isra.73+0x78/0x1e0
     [<ffffffff81140932>] shrink_inactive_list+0x252/0x4c0
     [<ffffffff811412b1>] shrink_lruvec+0x3e1/0x670
     [<ffffffff8114157f>] shrink_zone+0x3f/0x110
     [<ffffffff81141b06>] do_try_to_free_pages+0x1d6/0x450
     [<ffffffff8114a939>] ? zone_statistics+0x99/0xc0
     [<ffffffff81141e44>] try_to_free_pages+0xc4/0x180
     [<ffffffff81136982>] __alloc_pages_nodemask+0x6b2/0xa60
     [<ffffffff811c1d4e>] ? __find_get_block+0xbe/0x250
     [<ffffffff810a405e>] ? wake_up_bit+0x2e/0x40
     [<ffffffff811740c3>] alloc_pages_current+0xb3/0x180
     [<ffffffff8112cf07>] __page_cache_alloc+0xb7/0xd0
     [<ffffffff8112da6c>] grab_cache_page_write_begin+0x7c/0xe0
     [<ffffffff81214072>] ? ext4_mark_inode_dirty+0x82/0x220
     [<ffffffff81214a89>] ext4_da_write_begin+0x89/0x2d0
     [<ffffffff8112c6ee>] generic_perform_write+0xbe/0x1d0
     [<ffffffff811a96b1>] ? update_time+0x81/0xc0
     [<ffffffff811ad4c2>] ? mnt_clone_write+0x12/0x30
     [<ffffffff8112e80e>] __generic_file_aio_write+0x1ce/0x3f0
     [<ffffffff8112ea8e>] generic_file_aio_write+0x5e/0xe0
     [<ffffffff8120b94f>] ext4_file_write+0x9f/0x410
     [<ffffffff8120af56>] ? ext4_file_open+0x66/0x180
     [<ffffffff8118f0da>] do_sync_write+0x5a/0x90
     [<ffffffffa025c6c9>] cachefiles_write_page+0x149/0x430 [cachefiles]
     [<ffffffff812cf439>] ? radix_tree_gang_lookup_tag+0x89/0xd0
     [<ffffffffa018c512>] fscache_write_op+0x222/0x3b0 [fscache]
     [<ffffffffa018b35a>] fscache_op_work_func+0x3a/0x100 [fscache]
     [<ffffffff8107bfe9>] process_one_work+0x179/0x4a0
     [<ffffffff8107d47b>] worker_thread+0x11b/0x370
     [<ffffffff8107d360>] ? manage_workers.isra.21+0x2e0/0x2e0
     [<ffffffff81083d69>] kthread+0xc9/0xe0
     [<ffffffff81010000>] ? ftrace_raw_event_xen_mmu_release_ptpage+0x70/0x90
     [<ffffffff81083ca0>] ? flush_kthread_worker+0xb0/0xb0
     [<ffffffff8159eefc>] ret_from_fork+0x7c/0xb0
     [<ffffffff81083ca0>] ? flush_kthread_worker+0xb0/0xb0

    Signed-off-by: Milosz Tanski <milosz@adfin.com>
    Signed-off-by: David Howells <dhowells@redhat.com>
* Fix nasty 32-bit overflow bug in buffer i/o code.
  Anton Altaparmakov, 2014-09-22 (1 file changed, -2/+4)

  On 32-bit architectures, the legacy buffer_head functions are not always
  handling the sector number with the proper 64-bit types, and will thus
  fail on 4TB+ disks.

  Any code that uses __getblk() (and thus bread(), breadahead(),
  sb_bread(), sb_breadahead(), sb_getblk()), and calls it using a 64-bit
  block on a 32-bit arch (where "long" is 32-bit) causes an infinite loop
  in __getblk_slow() with an infinite stream of errors logged to dmesg
  like this:

    __find_get_block_slow() failed. block=6740375944, b_blocknr=2445408648
    b_state=0x00000020, b_size=512
    device sda1 blocksize: 512

  Note how in hex block is 0x191C1F988 and b_blocknr is 0x91C1F988, i.e.
  the top 32 bits are missing (in this case the 0x1 at the top).

  This is because grow_dev_page() is broken and has a 32-bit overflow due
  to shifting the page index value (a pgoff_t - which is just 32 bits on
  32-bit architectures) left-shifted as the block number. But the top
  bits get lost as the pgoff_t is not type cast to sector_t / 64-bit
  before the shift.

  This patch fixes this issue by type casting "index" to sector_t before
  doing the left shift.

  Note this is not a theoretical bug but has been seen in the field on a
  4TiB hard drive with logical sector size 512 bytes.

  This patch has been verified to fix the infinite loop problem on a
  3.17-rc5 kernel using a 4TB disk image mounted using "-o loop". Without
  this patch doing a "find /nt" where /nt is an NTFS volume causes the
  infinite loop 100% reproducibly whilst with the patch it works fine as
  expected.

  Signed-off-by: Anton Altaparmakov <aia21@cantab.net>
  Cc: stable@vger.kernel.org
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
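
  The arithmetic is easy to reproduce in userspace. A minimal standalone
  sketch of the overflow, using the block/b_blocknr values quoted above
  (this is an illustration, not the kernel's grow_dev_page() code):

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          /* 512-byte blocks on a 4 KiB page: block = index << 3 */
          const unsigned int sizebits = 3;

          /* Page index corresponding to block 6740375944 (0x191C1F988) */
          uint32_t index = 842546993;   /* pgoff_t is 32-bit on a 32-bit arch */

          /* Broken: the shift happens in 32-bit arithmetic, top bits are lost */
          uint64_t blocknr_bad  = index << sizebits;

          /* Fixed: widen the index to 64 bits (sector_t) before shifting */
          uint64_t blocknr_good = (uint64_t)index << sizebits;

          printf("broken: %" PRIu64 "\n", blocknr_bad);   /* 2445408648 (0x91C1F988) */
          printf("fixed:  %" PRIu64 "\n", blocknr_good);  /* 6740375944 (0x191C1F988) */
          return 0;
  }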
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs
  Linus Torvalds, 2014-09-19 (3 files changed, -21/+19)

  Pull btrfs fixes from Chris Mason:
   "I've got a revert to fix a regression with btrfs device registration,
    and Filipe has part two of his fsync fix from last week"

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
    Revert "Btrfs: device_list_add() should not update list when mounted"
    Btrfs: set inode's logged_trans/last_log_commit after ranged fsync
| * | Revert "Btrfs: device_list_add() should not update list when mounted"Chris Mason2014-09-181-7/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit b96de000bc8bc9688b3a2abea4332bd57648a49f. This commit is triggering failures to mount by subvolume id in some configurations. The main problem is how many different ways this scanning function is used, both for scanning while mounted and unmounted. A proper cleanup is too big for late rcs. For now, just revert the commit and we'll put a better fix into a later merge window. Signed-off-by: Chris Mason <clm@fb.com>
  * Btrfs: set inode's logged_trans/last_log_commit after ranged fsync
    Filipe Manana, 2014-09-17 (2 files changed, -14/+13)

    When a ranged fsync finishes, if there are still extent maps in the
    modified list, still set the inode's logged_trans and last_log_commit.
    This is important in case an inode is fsync'ed and unlinked in the
    same transaction, to ensure its inode ref gets deleted from the log
    and the respective dentries in its parent are deleted too from the
    log (if the parent directory was fsync'ed in the same transaction).

    Instead make btrfs_inode_in_log() return false if the list of modified
    extent maps isn't empty.

    This is an incremental on top of the v4 version of the patch
    "Btrfs: fix fsync data loss after a ranged fsync", which was added to
    its v5, but didn't make it on time.

    Signed-off-by: Filipe Manana <fdmanana@suse.com>
    Signed-off-by: Chris Mason <clm@fb.com>
* Merge tag 'nfs-for-3.17-5' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
  Linus Torvalds, 2014-09-19 (2 files changed, -36/+42)

  Pull NFS client fixes from Trond Myklebust:
   "Highlights:
     - fix an Oops in nfs4_open_and_get_state
     - fix an Oops in the nfs4_state_manager
     - fix another bug in the close/open_downgrade code"

  * tag 'nfs-for-3.17-5' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
    NFSv4: Fix another bug in the close/open_downgrade code
    NFSv4: nfs4_state_manager() vs. nfs_server_remove_lists()
    NFS: remove BUG possibility in nfs4_open_and_get_state
  * NFSv4: Fix another bug in the close/open_downgrade code
    Trond Myklebust, 2014-09-18 (1 file changed, -15/+15)

    James Drews reports another bug whereby the NFS client is now sending
    an OPEN_DOWNGRADE in a situation where it should really have sent a
    CLOSE: the client is opening the file for O_RDWR, but then trying to
    do a downgrade to O_RDONLY, which is not allowed by the NFSv4 spec.

    Reported-by: James Drews <drews@engr.wisc.edu>
    Link: http://lkml.kernel.org/r/541AD7E5.8020409@engr.wisc.edu
    Fixes: aee7af356e15 (NFSv4: Fix problems with close in the presence...)
    Cc: stable@vger.kernel.org # 2.6.33+
    Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
  * NFSv4: nfs4_state_manager() vs. nfs_server_remove_lists()
    Steve Dickson, 2014-09-18 (1 file changed, -18/+20)

    There is a race between nfs4_state_manager() and
    nfs_server_remove_lists() that happens during an nfsv3 mount.

    The v3 mount notices there is already a super block, so
    nfs_server_remove_lists() is called, which uses the nfs_client_lock
    spin lock to synchronize access to the client list.

    At the same time nfs4_state_manager() is running through the client
    list looking for work to do, using the same lock. When
    nfs4_state_manager() wins the race to the list, a v3 client pointer is
    found and not ignored properly, which causes the panic.

    Moving some protocol checks before the state checking avoids the panic.

    CC: Stable Tree <stable@vger.kernel.org>
    Signed-off-by: Steve Dickson <steved@redhat.com>
    Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
  * NFS: remove BUG possibility in nfs4_open_and_get_state
    NeilBrown, 2014-09-12 (1 file changed, -3/+7)

    Commit 4fa2c54b5198d09607a534e2fd436581064587ed ("NFS: nfs4_do_open
    should add negative results to the dcache") used "d_drop(); d_add();"
    to ensure that a dentry was hashed as a negative cached entry.

    This is not safe if the dentry has a non-NULL ->d_inode. It will
    trigger a BUG_ON in d_instantiate(). In that case, d_delete() is
    needed.

    Also, only d_add if the dentry is currently unhashed; it seems
    pointless to remove and re-add it unchanged.

    Reported-by: Christoph Hellwig <hch@infradead.org>
    Fixes: 4fa2c54b5198d09607a534e2fd436581064587ed
    Cc: Jeff Layton <jeff.layton@primarydata.com>
    Link: http://lkml.kernel.org/r/20140908144525.GB19811@infradead.org
    Signed-off-by: NeilBrown <neilb@suse.de>
    Acked-by: Jeff Layton <jlayton@primarydata.com>
    Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
* Merge branch 'for-linus' of git://git.samba.org/sfrench/cifs-2.6
  Linus Torvalds, 2014-09-18 (5 files changed, -25/+45)

  Pull cifs/smb3 fixes from Steve French:
   "Fixes for problems found during testing and debugging at the SMB3
    storage test event (plugfest) this week"

  * 'for-linus' of git://git.samba.org/sfrench/cifs-2.6:
    Fix mfsymlinks file size check
    Update version number displayed by modinfo for cifs.ko
    cifs: remove dead code
    Revert "cifs: No need to send SIGKILL to demux_thread during umount"
    [SMB3] Fix oops when creating symlinks on smb3
    [CIFS] Fix setting time before epoch (negative time values)
  * Fix mfsymlinks file size check
    Steve French, 2014-09-16 (1 file changed, -1/+3)

    If the mfsymlinks file size has changed (e.g. the file no longer
    represents an emulated symlink) we were not returning an error
    properly.

    Signed-off-by: Steve French <smfrench@gmail.com>
    Reviewed-by: Stefan Metzmacher <metze@samba.org>
  * Update version number displayed by modinfo for cifs.ko
    Steve French, 2014-09-16 (1 file changed, -1/+1)

    Update cifs.ko version to 2.05

    Signed-off-by: Steve French <smfrench@gmail.com>
  * cifs: remove dead code
    Arnd Bergmann, 2014-09-16 (1 file changed, -17/+0)

    cifs provides two dummy functions 'sess_auth_lanman' and
    'sess_auth_kerberos' for the case in which the respective features are
    not defined. However, the caller is also under an #ifdef, so we just
    get warnings about unused code:

      fs/cifs/sess.c:1109:1: warning: 'sess_auth_kerberos' defined but not used [-Wunused-function]
       sess_auth_kerberos(struct sess_data *sess_data)

    Removing the dead functions gets rid of the warnings without any
    downsides that I can see.

    (Yalin Wang reported the identical problem and fix so added him)

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
    Signed-off-by: Steve French <smfrench@gmail.com>
| * | | | Revert "cifs: No need to send SIGKILL to demux_thread during umount"Steve French2014-09-161-0/+19
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit 52a36244443eabb594bdb63622ff2dd7a083f0e2. Causes rmmod to fail for at least 7 seconds after unmount which makes automated testing a little harder when reloading cifs.ko between test runs. Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> CC: Jeff Layton <jlayton@primarydata.com> Signed-off-by: Steve French <smfrench@gmail.com>
  * [SMB3] Fix oops when creating symlinks on smb3
    Steve French, 2014-09-15 (1 file changed, -2/+6)

    We were not checking for symlink support properly for SMB2/SMB3
    mounts, so we could oops when trying to create a symlink on an
    smb2/smb3 mount that uses mfsymlinks.

    Signed-off-by: Steve French <smfrench@gmail.com>
    Cc: <stable@vger.kernel.org> # 3.14+
    CC: Sachin Prabhu <sprabhu@redhat.com>
  * [CIFS] Fix setting time before epoch (negative time values)
    Steve French, 2014-09-15 (1 file changed, -4/+16)

    xfstest generic/258 sets the time on a file to a negative value
    (before 1970), which fails since do_div can not handle negative
    numbers. In addition, 'normal' division of 64-bit values does not
    build on 32-bit arch, so we have to work around this by special-casing
    negative values in cifs_NTtimeToUnix.

    Samba server also has a bug with this (see samba bugzilla 7771), but
    it works to a Windows server.

    Signed-off-by: Steve French <smfrench@gmail.com>
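
    A minimal userspace sketch of the special-casing idea (nt_to_unix() and
    its constants are illustrative stand-ins, not the cifs_NTtimeToUnix
    code): NT times count 100 ns ticks since 1601, so a pre-1970 timestamp
    becomes a negative Unix time, and since an unsigned 64/32 divide
    (do_div-style) cannot take a negative numerator, the magnitude is
    divided and the sign restored afterwards.

    #include <stdint.h>
    #include <stdio.h>

    /* 100 ns ticks between 1601-01-01 and 1970-01-01 */
    #define NT_EPOCH_DELTA (11644473600ULL * 10000000ULL)

    struct ts { int64_t sec; long nsec; };

    static struct ts nt_to_unix(uint64_t nt)
    {
            struct ts t;

            if (nt >= NT_EPOCH_DELTA) {
                    /* On or after the epoch: plain unsigned division */
                    uint64_t d = nt - NT_EPOCH_DELTA;

                    t.sec  = (int64_t)(d / 10000000ULL);
                    t.nsec = (long)(d % 10000000ULL) * 100;
            } else {
                    /* Before the epoch: divide the magnitude, then negate,
                     * normalising so that 0 <= nsec < 1e9 */
                    uint64_t d = NT_EPOCH_DELTA - nt;
                    uint64_t s = d / 10000000ULL;
                    uint64_t r = d % 10000000ULL;

                    if (r) {
                            s++;
                            r = 10000000ULL - r;
                    }
                    t.sec  = -(int64_t)s;
                    t.nsec = (long)r * 100;
            }
            return t;
    }

    int main(void)
    {
            /* Half a second before the epoch: 1969-12-31 23:59:59.5 UTC */
            struct ts t = nt_to_unix(NT_EPOCH_DELTA - 5000000ULL);

            printf("%lld.%09ld\n", (long long)t.sec, t.nsec);  /* -1.500000000 */
            return 0;
    }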
* Merge tag 'gfs2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-fixes
  Linus Torvalds, 2014-09-16 (5 files changed, -22/+38)

  Pull gfs2 fixes from Steven Whitehouse:
   "Here are a number of small fixes for GFS2. There is a fix for FIEMAP
    on large sparse files, a negative dentry hashing fix, a fix for flock,
    and a bug fix relating to d_splice_alias usage.

    There are also (patches 1 and 5) a couple of updates which are less
    critical, but small and low risk"

  * tag 'gfs2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-fixes:
    GFS2: fix d_splice_alias() misuses
    GFS2: Don't use MAXQUOTAS value
    GFS2: Hash the negative dentry during inode lookup
    GFS2: Request demote when a "try" flock fails
    GFS2: Change maxlen variables to size_t
    GFS2: fs/gfs2/super.c: replace seq_printf by seq_puts
  * GFS2: fix d_splice_alias() misuses
    Al Viro, 2014-09-12 (1 file changed, -2/+3)

    Callers of d_splice_alias(dentry, inode) don't need iput(), neither on
    success nor on failure. Either the reference to inode is stored in a
    previously negative dentry, or it's dropped. In either case the inode
    reference the caller used to hold is consumed.

    __gfs2_lookup() does iput() in case when d_splice_alias() has failed.
    Double iput() if we ever hit that. And gfs2_create_inode() ends up not
    only with double iput(), but with link count dropped to zero - on an
    inode it has just found in directory.

    Cc: stable@vger.kernel.org # v3.14+
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  * GFS2: Don't use MAXQUOTAS value
    Jan Kara, 2014-09-11 (1 file changed, -2/+5)

    MAXQUOTAS value defines maximum number of quota types VFS supports.
    This isn't necessarily the number of types gfs2 supports and with
    addition of project quotas these two numbers stop matching. So make
    gfs2 use its private definition.

    CC: cluster-devel@redhat.com
    Signed-off-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  * GFS2: Hash the negative dentry during inode lookup
    Benjamin Coddington, 2014-09-11 (1 file changed, -1/+3)

    Fix a regression introduced by commit 6d4ade986f9c8df31e68
    ("GFS2: Add atomic_open support"), where an early return misses
    d_splice_alias() which had been adding the negative dentry.

    Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
    Signed-off-by: Bob Peterson <rpeterso@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  * GFS2: Request demote when a "try" flock fails
    Bob Peterson, 2014-08-21 (1 file changed, -3/+12)

    This patch changes the flock code so that it uses the TRY_1CB flag
    instead of the TRY flag on the first attempt. That forces any holding
    nodes to issue a dlm callback, which requests a demote of the glock.
    Then, if the "try" failed, it sleeps a small amount of time for the
    demote to occur. Then it tries again, for an increasing amount of
    time. Subsequent attempts to gain the "try" lock don't use "_1CB" so
    that only one callback is issued.

    Signed-off-by: Bob Peterson <rpeterso@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  * GFS2: Change maxlen variables to size_t
    Bob Peterson, 2014-08-21 (1 file changed, -4/+5)

    This patch changes some variables (especially maxlen in function
    gfs2_block_map) from unsigned int to size_t. We need 64-bit arithmetic
    for very large files (e.g. 1PB) where the variables otherwise get
    shifted to all 0's.

    Signed-off-by: Bob Peterson <rpeterso@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  * GFS2: fs/gfs2/super.c: replace seq_printf by seq_puts
    Fabian Frederick, 2014-08-21 (1 file changed, -10/+10)

    Fix checkpatch warnings:
    "WARNING: Prefer seq_puts to seq_printf"

    Cc: cluster-devel@redhat.com
    Signed-off-by: Fabian Frederick <fabf@skynet.be>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* vfs: workaround gcc <4.6 build error in link_path_walk()
  James Hogan, 2014-09-16 (1 file changed, -1/+1)

  Commit d6bb3e9075bb ("vfs: simplify and shrink stack frame of
  link_path_walk()") introduced build problems with GCC versions older
  than 4.6 due to the initialisation of a member of an anonymous union in
  struct qstr without enclosing braces.

  This hits GCC bug 10676 [1] (which was fixed in GCC 4.6 by [2]), and
  causes the following build error:

    fs/namei.c: In function 'link_path_walk':
    fs/namei.c:1778: error: unknown field 'hash_len' specified in initializer

  This is worked around by adding explicit braces.

  [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=10676
  [2] https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=159206

  Fixes: d6bb3e9075bb (vfs: simplify and shrink stack frame of link_path_walk())
  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Alexander Viro <viro@zeniv.linux.org.uk>
  Cc: Geert Uytterhoeven <geert@linux-m68k.org>
  Cc: linux-fsdevel@vger.kernel.org
  Cc: linux-metag@vger.kernel.org
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
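
  For illustration, a minimal standalone program showing the GCC < 4.6
  quirk and the extra-braces workaround (struct name_ref is a simplified
  stand-in with the same union shape as struct qstr, not the kernel
  definition):

  #include <stdint.h>
  #include <stdio.h>

  struct name_ref {
          union {
                  struct {
                          uint32_t hash;
                          uint32_t len;
                  };
                  uint64_t hash_len;
          };
          const char *name;
  };

  int main(void)
  {
          /*
           * GCC older than 4.6 (PR 10676) rejects a designated initializer
           * for a member of an anonymous union unless it is wrapped in its
           * own braces, i.e. this form fails to compile there:
           *
           *     struct name_ref q = { .hash_len = 42, .name = "x" };
           *
           * The workaround used by the commit above is to add the braces:
           */
          struct name_ref q = { { .hash_len = 42 }, .name = "x" };

          printf("%s: hash_len=%llu\n", q.name, (unsigned long long)q.hash_len);
          return 0;
  }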