author     Linus Torvalds <torvalds@linux-foundation.org>  2016-03-20 02:52:29 +0100
committer  Linus Torvalds <torvalds@linux-foundation.org>  2016-03-20 02:52:29 +0100
commit     3c2de27d793bf55167804fc47954711e94f27be7 (patch)
tree       b554e41e350adc47cf983b3103f4b4b79451f67b /arch/x86/entry/syscalls/syscall_64.tbl
parent     Merge branch 'stable-4.6' of git://git.infradead.org/users/pcmoore/audit (diff)
parent     Merge branches 'work.lookups', 'work.misc' and 'work.preadv2' into for-next (diff)
download   linux-3c2de27d793bf55167804fc47954711e94f27be7.tar.xz
           linux-3c2de27d793bf55167804fc47954711e94f27be7.zip
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull vfs updates from Al Viro:

 - Preparations of parallel lookups (the remaining main obstacle is the
   need to move security_d_instantiate(); once that becomes safe, the
   rest will be a matter of rather short series local to fs/*.c)

 - preadv2/pwritev2 series from Christoph

 - assorted fixes

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (32 commits)
  splice: handle zero nr_pages in splice_to_pipe()
  vfs: show_vfsstat: do not ignore errors from show_devname method
  dcache.c: new helper: __d_add()
  don't bother with __d_instantiate(dentry, NULL)
  untangle fsnotify_d_instantiate() a bit
  uninline d_add()
  replace d_add_unique() with saner primitive
  quota: use lookup_one_len_unlocked()
  cifs_get_root(): use lookup_one_len_unlocked()
  nfs_lookup: don't bother with d_instantiate(dentry, NULL)
  kill dentry_unhash()
  ceph_fill_trace(): don't bother with d_instantiate(dn, NULL)
  autofs4: don't bother with d_instantiate(dentry, NULL) in ->lookup()
  configfs: move d_rehash() into configfs_create() for regular files
  ceph: don't bother with d_rehash() in splice_dentry()
  namei: teach lookup_slow() to skip revalidate
  namei: massage lookup_slow() to be usable by lookup_one_len_unlocked()
  lookup_one_len_unlocked(): use lookup_dcache()
  namei: simplify invalidation logics in lookup_dcache()
  namei: change calling conventions for lookup_{fast,slow} and follow_managed()
  ...
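The preadv2/pwritev2 series mentioned above adds a per-call flags argument to the vectored positional read/write interface. As an illustrative sketch (not part of this merge; the glibc wrappers appeared later, and the RWF_HIPRI value is an assumption about the initial flag set), the userspace-visible prototypes look roughly like this:

/* Illustrative prototypes for the new calls; the trailing flags
 * argument is what distinguishes them from preadv/pwritev. */
#include <sys/types.h>
#include <sys/uio.h>    /* struct iovec */

ssize_t preadv2(int fd, const struct iovec *iov, int iovcnt,
                off_t offset, int flags);
ssize_t pwritev2(int fd, const struct iovec *iov, int iovcnt,
                 off_t offset, int flags);

/* High-priority (polled) I/O hint introduced alongside the syscalls;
 * the numeric value is shown only as an assumption. */
#define RWF_HIPRI 0x00000001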
Diffstat (limited to 'arch/x86/entry/syscalls/syscall_64.tbl')
-rw-r--r--  arch/x86/entry/syscalls/syscall_64.tbl | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 2e5b565adacc..cac6d17ce5db 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -333,6 +333,8 @@
324 common membarrier sys_membarrier
325 common mlock2 sys_mlock2
326 common copy_file_range sys_copy_file_range
+327 64 preadv2 sys_preadv2
+328 64 pwritev2 sys_pwritev2
#
# x32-specific system call numbers start at 512 to avoid cache impact
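
With the two table slots above wired up, the new calls can be exercised from userspace by raw syscall number even before a libc wrapper exists. A minimal sketch, assuming the 327 slot from this diff and the kernel convention of passing the file offset as a low/high pair:

/* Minimal sketch: call preadv2 through its new x86-64 table entry.
 * Assumes __NR_preadv2 == 327 as added above; flags = 0 behaves like preadv. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/uio.h>
#include <sys/syscall.h>

#ifndef __NR_preadv2
#define __NR_preadv2 327    /* from the table entry added in this diff */
#endif

int main(void)
{
    char buf[64];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
    int fd = open("/etc/hostname", O_RDONLY);   /* any readable file works */

    if (fd < 0)
        return 1;

    /* Kernel-side argument order: (fd, vec, vlen, pos_low, pos_high, flags). */
    long n = syscall(__NR_preadv2, fd, &iov, 1, 0L, 0L, 0);
    if (n > 0)
        printf("read %ld bytes starting with: %.*s\n", n, (int)n, buf);

    close(fd);
    return 0;
}

On kernels without these table entries the raw call fails with ENOSYS, which doubles as a simple runtime feature check.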