path: root/fs/io_uring.c
Commit log; each entry shows the commit message, author, date, and files/lines changed.

* Merge tag 'pull-work.fd-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
    Linus Torvalds, 2022-06-06 (1 file changed, -4/+1)

    Pull file descriptor fix from Al Viro:
     "Fix for breakage in #work.fd this window"

    * tag 'pull-work.fd-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
      fix the breakage in close_fd_get_file() calling conventions change

| * fix the breakage in close_fd_get_file() calling conventions change
    Al Viro, 2022-06-05 (1 file changed, -4/+1)

    It used to grab an extra reference to struct file rather than just
    transferring to caller the one it had removed from descriptor table.
    New variant doesn't, and callers need to be adjusted.

    Reported-and-tested-by: syzbot+47dd250f527cb7bebf24@syzkaller.appspotmail.com
    Fixes: 6319194ec57b ("Unify the primitives for file descriptor closing")
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

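For reference, a minimal sketch of the adjusted caller pattern, assuming the post-change prototype struct file *close_fd_get_file(unsigned int fd) implied by the commits above; this is an illustrative fragment, not the actual io_uring hunk, the consume_fd() wrapper is invented, and kernel includes are omitted:

        static int consume_fd(unsigned int fd)
        {
            struct file *file = close_fd_get_file(fd);

            if (!file)
                return -EBADF;      /* descriptor was not open */

            /*
             * The caller now owns the single reference that was removed from
             * the descriptor table; exactly one fput() balances it, with no
             * extra reference to drop as there was before this change.
             */
            fput(file);
            return 0;
        }
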
* Merge tag 'pull-18-rc1-work.fd' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
    Linus Torvalds, 2022-06-05 (1 file changed, -11/+7)

    Pull file descriptor updates from Al Viro.

     - Descriptor handling cleanups

    * tag 'pull-18-rc1-work.fd' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
      Unify the primitives for file descriptor closing
      fs: remove fget_many and fput_many interface
      io_uring_enter(): don't leave f.flags uninitialized

| * Unify the primitives for file descriptor closing
    Al Viro, 2022-05-15 (1 file changed, -5/+2)

    Currently we have 3 primitives for removing an opened file from descriptor
    table - pick_file(), __close_fd_get_file() and close_fd_get_file(). Their
    calling conventions are rather odd and there's a code duplication for no
    good reason. They can be unified -

    1) have __range_close() cap max_fd in the very beginning; that way we
       don't need a separate way for pick_file() to report being past the end
       of the descriptor table.

    2) make {__,}close_fd_get_file() return the file (or NULL) directly,
       rather than returning it via a struct file ** argument. Don't bother
       with the (bogus) return value - nobody wants that -ENOENT.

    3) make pick_file() return NULL on an unopened descriptor - the only
       caller that used to care about the distinction between a descriptor
       past the end of the descriptor table and finding NULL in the
       descriptor table doesn't give a damn after (1).

    4) lift ->files_lock out of pick_file()

    That actually simplifies the callers, as well as the primitives
    themselves. Code duplication is also gone.

    Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

| * io_uring_enter(): don't leave f.flags uninitialized
    Al Viro, 2022-05-12 (1 file changed, -6/+5)

    Simplifies the logic on cleanup as well.

    Reviewed-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

* io_uring: reinstate the inflight tracking
    Jens Axboe, 2022-06-02 (1 file changed, -26/+56)

    After some debugging, it was realized that we really do still need the
    old inflight tracking for any file type that has io_uring_fops assigned.
    If we don't, then trivial circular references will mean that we never get
    the ctx cleaned up and hence it'll leak.

    Just bring back the inflight tracking, which then also means we can
    eliminate the conditional dropping of the file when task_work is queued.

    Fixes: d5361233e9ab ("io_uring: drop the old style inflight file tracking")
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* io_uring: fix deadlock on iowq file slot alloc
    Pavel Begunkov, 2022-06-01 (1 file changed, -21/+15)

    io_fixed_fd_install() can grab uring_lock in the slot allocation path when
    called from io-wq, and then call into io_install_fixed_file(), which will
    lock it again. Pull all locking out of io_install_fixed_file() into
    io_fixed_fd_install().

    Fixes: 1339f24b336db ("io_uring: allow allocated fixed files for openat/openat2")
    Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
    Link: https://lore.kernel.org/r/64116172a9d0b85b85300346bb280f3657aafc26.1654087283.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* io_uring: let IORING_OP_FILES_UPDATE support choosing fixed file slots
    Xiaoguang Wang, 2022-05-31 (1 file changed, -10/+62)

    One big issue with the file registration feature is that it requires user
    space apps to maintain free-slot information about io_uring's fixed file
    table, which is a real burden for development. io_uring now supports
    choosing a free file slot for user space apps via the
    IORING_FILE_INDEX_ALLOC flag in the accept, open, and socket operations,
    but those require the app to use direct accept or direct open, which not
    all apps are prepared to adopt yet.

    To make the registration feature easier for apps that still need real
    fds, let IORING_OP_FILES_UPDATE support choosing fixed file slots: the
    picked slots are stored in the fd array and the cqe returns the number of
    slots allocated.

    Suggested-by: Hao Xu <howeyxu@tencent.com>
    Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
    [axboe: move flag to uapi io_uring header, change goto to break, init]
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

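As a userspace illustration of the new behaviour, a rough sketch assuming liburing's io_uring_register_files_sparse() and io_uring_prep_files_update() helpers and the IORING_FILE_INDEX_ALLOC constant that this patch moves into the uapi header; the install_into_free_slots() wrapper and the use of that constant as the update offset are assumptions, not taken from the patch:

        #include <liburing.h>

        /* 'ring' is an already-initialized io_uring instance. */
        static int install_into_free_slots(struct io_uring *ring, int fd0, int fd1)
        {
            int fds[2] = { fd0, fd1 };
            struct io_uring_sqe *sqe;
            struct io_uring_cqe *cqe;
            int ret;

            /* A (sparse) fixed file table must be registered first. */
            ret = io_uring_register_files_sparse(ring, 64);
            if (ret < 0)
                return ret;

            sqe = io_uring_get_sqe(ring);
            if (!sqe)
                return -EBUSY;

            /* Ask the kernel to pick free slots instead of naming them. */
            io_uring_prep_files_update(sqe, fds, 2, IORING_FILE_INDEX_ALLOC);
            io_uring_submit(ring);

            ret = io_uring_wait_cqe(ring, &cqe);
            if (ret < 0)
                return ret;

            /* cqe->res: number of slots allocated; fds[] now holds the slots. */
            ret = cqe->res;
            io_uring_cqe_seen(ring, cqe);
            return ret;
        }
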
* io_uring: defer alloc_hint update to io_file_bitmap_set()
    Xiaoguang Wang, 2022-05-31 (1 file changed, -8/+3)

    io_file_bitmap_get() returns a free bitmap slot, but if that slot isn't
    used later (for example because io_queue_rsrc_removal() returns an error),
    alloc_hint should not be updated at all: the slot is still a valid
    candidate for subsequent io_file_bitmap_get() calls. To fix this, only
    update alloc_hint in io_file_bitmap_set().

    Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
    Link: https://lore.kernel.org/r/20220528015109.48039-1-xiaoguang.wang@linux.alibaba.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* io_uring: ensure fput() called correspondingly when direct install fails
    Xiaoguang Wang, 2022-05-31 (1 file changed, -0/+5)

    io_fixed_fd_install() may fail when it is short of free fixed-file bitmap
    slots; in that case we need to call fput() accordingly.

    Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
    Link: https://lore.kernel.org/r/20220527025400.51048-1-xiaoguang.wang@linux.alibaba.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* io_uring: wire up allocated direct descriptors for socket
    Jens Axboe, 2022-05-31 (1 file changed, -2/+2)

    The socket support was merged in an earlier branch that didn't yet have
    support for allocating direct descriptors, hence only open and accept got
    support for that. Do the one-liner to enable it now, so we have consistent
    support for any request that can instantiate a file/direct descriptor.

    Reviewed-by: Hao Xu <howeyxu@tencent.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* io_uring: fix a memory leak of buffer group list on exit
    Jens Axboe, 2022-05-31 (1 file changed, -0/+1)

    If we use a buffer group ID that is large enough to require io_uring to
    allocate it, then we don't correctly free it if the cleanup is deferred to
    the ring exit. The explicit removal paths are fine.

    Fixes: 9cfc7e94e42b ("io_uring: get rid of hashed provided buffer groups")
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* io_uring: move shutdown under the general net section
    Jens Axboe, 2022-05-31 (1 file changed, -36/+29)

    Gets rid of some ifdefs and enables use of the net defines for when
    CONFIG_NET isn't set.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* io_uring: unify calling convention for async prep handling
    Jens Axboe, 2022-05-31 (1 file changed, -2/+12)

    Make them consistent in preparation for defining a req async prep handler.
    The readv/writev requests share a prep handler, move it one level down so
    the initial one is consistent with the others.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* io_uring: add io_op_defs 'def' pointer in req init and issue
    Jens Axboe, 2022-05-31 (1 file changed, -7/+10)

    Define and set it when appropriate, and use it consistently in the
    function rather than using io_op_defs[opcode].

    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* io_uring: make prep and issue side of req handlers named consistently
    Jens Axboe, 2022-05-25 (1 file changed, -6/+6)

    Almost all of them are, the odd ones out are the poll remove and the
    files update request. Name them like the others, which is:

        io_#cmdname_prep   for request preparation
        io_#cmdname        for request issue

    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* io_uring: make timeout prep handlers consistent with other prep handlers
    Jens Axboe, 2022-05-25 (1 file changed, -4/+17)

    All other opcodes take a {req, sqe} set for prep handling, split out a
    timeout prep handler so that timeout and linked timeouts can use the
    same one.

    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* Merge tag 'for-5.19/io_uring-passthrough-2022-05-22' of git://git.kernel.dk/linux-block
    Linus Torvalds, 2022-05-23 (1 file changed, -74/+370)

    Pull io_uring NVMe command passthrough from Jens Axboe:
     "On top of everything else, this adds support for passthrough for
      io_uring.

      The initial feature for this is NVMe passthrough support, which allows
      non-filesystem based IO commands and admin commands.

      To support this, io_uring grows support for SQE and CQE members that
      are twice as big, allowing to pass in a full NVMe command without
      having to copy data around. And to complete with more than just a
      single 32-bit value as the output"

    * tag 'for-5.19/io_uring-passthrough-2022-05-22' of git://git.kernel.dk/linux-block: (22 commits)
      io_uring: cleanup handling of the two task_work lists
      nvme: enable uring-passthrough for admin commands
      nvme: helper for uring-passthrough checks
      blk-mq: fix passthrough plugging
      nvme: add vectored-io support for uring-cmd
      nvme: wire-up uring-cmd support for io-passthru on char-device.
      nvme: refactor nvme_submit_user_cmd()
      block: wire-up support for passthrough plugging
      fs,io_uring: add infrastructure for uring-cmd
      io_uring: support CQE32 for nop operation
      io_uring: enable CQE32
      io_uring: support CQE32 in /proc info
      io_uring: add tracing for additional CQE32 fields
      io_uring: overflow processing for CQE32
      io_uring: flush completions for CQE32
      io_uring: modify io_get_cqe for CQE32
      io_uring: add CQE32 completion processing
      io_uring: add CQE32 setup processing
      io_uring: change ring size calculation for CQE32
      io_uring: store add. return values for CQE32
      ...

| * io_uring: cleanup handling of the two task_work lists
    Jens Axboe, 2022-05-21 (1 file changed, -25/+37)

    Rather than pass in a bool for whether or not this work item needs to go
    into the priority list or not, provide separate helpers for it. For most
    use cases, this also then gets rid of the branch for non-priority task
    work.

    While at it, rename the prior_task_list to prio_task_list. Prior is a
    confusing name for it, as it would seem to indicate that this is the
    previous task_work list. prio makes it clear that this is a priority
    task_work list.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * fs,io_uring: add infrastructure for uring-cmd
    Jens Axboe, 2022-05-11 (1 file changed, -18/+117)

    file_operations->uring_cmd is a file private handler. This is somewhat
    similar to ioctl but hopefully a lot more sane and useful as it can be
    used to enable many io_uring capabilities for the underlying operation.

    IORING_OP_URING_CMD is a file private kind of request. io_uring doesn't
    know what is in this command type, it's for the provider of ->uring_cmd()
    to deal with.

    Co-developed-by: Kanchan Joshi <joshi.k@samsung.com>
    Signed-off-by: Kanchan Joshi <joshi.k@samsung.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Link: https://lore.kernel.org/r/20220511054750.20432-2-joshi.k@samsung.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

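A hypothetical driver-side sketch of the new hook; the driver name and private opcode are invented, the hook signature follows the description above, and the assumption is that a return value other than -EIOCBQUEUED is turned into the CQE result by the io_uring core:

        #define MYDRV_URING_CMD_NOOP    1   /* made-up driver-private opcode */

        static int mydrv_uring_cmd(struct io_uring_cmd *ioucmd, unsigned int issue_flags)
        {
            switch (ioucmd->cmd_op) {       /* provider-defined command number */
            case MYDRV_URING_CMD_NOOP:
                return 0;                   /* completes synchronously, res == 0 */
            default:
                return -ENOTTY;             /* unknown command for this file */
            }
        }

        static const struct file_operations mydrv_fops = {
            .owner     = THIS_MODULE,
            .uring_cmd = mydrv_uring_cmd,   /* handler invoked for IORING_OP_URING_CMD */
        };
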
| * io_uring: support CQE32 for nop operation
    Stefan Roesch, 2022-05-09 (1 file changed, -2/+26)

    This adds support for filling the extra1 and extra2 fields for large
    CQE's.

    Co-developed-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-13-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: enable CQE32
    Stefan Roesch, 2022-05-09 (1 file changed, -1/+1)

    This enables large CQE's in the uring setup.

    Co-developed-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-12-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: support CQE32 in /proc info
    Stefan Roesch, 2022-05-09 (1 file changed, -2/+14)

    This exposes the extra1 and extra2 fields in the /proc output.

    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-11-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: add tracing for additional CQE32 fields
    Stefan Roesch, 2022-05-09 (1 file changed, -5/+6)

    This adds tracing for the extra1 and extra2 fields.

    Co-developed-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-10-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: overflow processing for CQE32
    Stefan Roesch, 2022-05-09 (1 file changed, -9/+23)

    This adds the overflow processing for large CQE's. This adds two
    parameters to the io_cqring_event_overflow function and uses these fields
    to initialize the large CQE fields.

    Allocate enough space for large CQE's in the overflow structure. If no
    large CQE's are used, the size of the allocation is unchanged.

    The cqe field can have a different size depending on whether it is a
    large CQE or not. To be able to allocate different sizes, the two fields
    in the structure are re-ordered.

    Co-developed-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-9-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: flush completions for CQE32
    Stefan Roesch, 2022-05-09 (1 file changed, -2/+6)

    This flushes the completions according to their CQE type: the same
    processing is done for the default CQE size, but for large CQE's the
    extra1 and extra2 fields are filled in.

    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-8-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: modify io_get_cqe for CQE32
    Stefan Roesch, 2022-05-09 (1 file changed, -2/+17)

    Modify accesses to the CQE array to take large CQE's into account. The
    index needs to be shifted by one for large CQE's.

    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-7-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

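Roughly, the indexing rule amounts to the following illustrative helper (the names are approximations for illustration, not the actual io_get_cqe() change):

        /* Map a logical CQ index to an array slot when big CQEs are in use. */
        static inline unsigned int cqe_array_slot(unsigned int tail,
                                                  unsigned int cq_entries,
                                                  bool cqe32)
        {
            unsigned int off = tail & (cq_entries - 1);     /* ring mask */

            /* A 32-byte CQE occupies two struct io_uring_cqe slots. */
            return cqe32 ? off << 1 : off;
        }
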
| * io_uring: add CQE32 completion processing
    Stefan Roesch, 2022-05-09 (1 file changed, -8/+45)

    This adds the completion processing for the large CQE's and makes sure
    that the extra1 and extra2 fields are passed through.

    Co-developed-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-6-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: add CQE32 setup processing
    Stefan Roesch, 2022-05-09 (1 file changed, -0/+58)

    This adds two new functions to set up and fill the CQE32 result structure.

    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-5-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: change ring size calculation for CQE32
    Stefan Roesch, 2022-05-09 (1 file changed, -3/+7)

    This changes the function rings_size to take large CQE's into account.

    Co-developed-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-4-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: store add. return values for CQE32
    Stefan Roesch, 2022-05-09 (1 file changed, -1/+7)

    This reuses the hash list node for the storage we need to hold the two
    64-bit values that must be passed back.

    Co-developed-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-3-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: add support for 128-byte SQEs
    Jens Axboe, 2022-05-09 (1 file changed, -3/+11)

    Normal SQEs are 64 bytes in length, which is fine for all the commands we
    support. However, in preparation for supporting passthrough IO, provide
    an option for setting up a ring with 128-byte SQEs.

    We continue to use the same type for io_uring_sqe, it's marked and
    commented with a zero sized array pad at the end. This provides up to 80
    bytes of data for a passthrough command - 64 bytes for the extra added
    data, and 16 bytes available at the end of the existing SQE.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>

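From userspace the opt-in is a ring setup flag; a minimal sketch assuming liburing, combining IORING_SETUP_SQE128 with the IORING_SETUP_CQE32 flag from the CQE32 commits above for a passthrough-capable ring:

        #include <liburing.h>

        /* Set up a ring whose SQEs are 128 bytes and CQEs are 32 bytes. */
        static int setup_passthrough_ring(struct io_uring *ring)
        {
            return io_uring_queue_init(64, ring,
                                       IORING_SETUP_SQE128 | IORING_SETUP_CQE32);
        }
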
| * Merge branch 'for-5.19/io_uring-socket' into for-5.19/io_uring-passthrough
    Jens Axboe, 2022-05-09 (1 file changed, -0/+471)

    * for-5.19/io_uring-socket:
      io_uring: use the text representation of ops in trace
      io_uring: rename op -> opcode
      io_uring: add io_uring_get_opcode
      io_uring: add type to op enum
      io_uring: add socket(2) support
      net: add __sys_socket_file()
      io_uring: fix trace for reduced sqe padding
      io_uring: add fgetxattr and getxattr support
      io_uring: add fsetxattr and setxattr support
      fs: split off do_getxattr from getxattr
      fs: split off setxattr_copy and do_setxattr function from setxattr

| * Merge branch 'for-5.19/io_uring' into for-5.19/io_uring-passthrough
    Jens Axboe, 2022-05-09 (1 file changed, -936/+1142)

    * for-5.19/io_uring: (85 commits)
      io_uring: don't clear req->kbuf when buffer selection is done
      io_uring: eliminate the need to track provided buffer ID separately
      io_uring: move provided buffer state closer to submit state
      io_uring: move provided and fixed buffers into the same io_kiocb area
      io_uring: abstract out provided buffer list selection
      io_uring: never call io_buffer_select() for a buffer re-select
      io_uring: get rid of hashed provided buffer groups
      io_uring: always use req->buf_index for the provided buffer group
      io_uring: ignore ->buf_index if REQ_F_BUFFER_SELECT isn't set
      io_uring: kill io_rw_buffer_select() wrapper
      io_uring: make io_buffer_select() return the user address directly
      io_uring: kill io_recv_buffer_select() wrapper
      io_uring: use 'sr' vs 'req->sr_msg' consistently
      io_uring: add POLL_FIRST support for send/sendmsg and recv/recvmsg
      io_uring: check IOPOLL/ioprio support upfront
      io_uring: replace smp_mb() with smp_mb__after_atomic() in io_sq_thread()
      io_uring: add IORING_SETUP_TASKRUN_FLAG
      io_uring: use TWA_SIGNAL_NO_IPI if IORING_SETUP_COOP_TASKRUN is used
      io_uring: set task_work notify method at init time
      io-wq: use __set_notify_signal() to wake workers
      ...

* Merge tag 'for-5.19/io_uring-net-2022-05-22' of git://git.kernel.dk/linux-block
    Linus Torvalds, 2022-05-23 (1 file changed, -4/+14)

    Pull io_uring 'more data in socket' support from Jens Axboe:
     "To be able to fully utilize the 'poll first' support in the core
      io_uring branch, it's advantageous knowing if the socket was empty
      after a receive. This adds support for that"

    * tag 'for-5.19/io_uring-net-2022-05-22' of git://git.kernel.dk/linux-block:
      io_uring: return hint on whether more data is available after receive
      tcp: pass back data left in socket after receive

| * io_uring: return hint on whether more data is available after receive
    Jens Axboe, 2022-04-30 (1 file changed, -4/+15)

    For now just use a CQE flag for this, with big CQE support we could
    return the actual number of bytes left.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>

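On the completion side this is a single flag test; a small consumer-side sketch, assuming the hint is exposed as IORING_CQE_F_SOCK_NONEMPTY in the uapi header:

        #include <stdbool.h>
        #include <liburing.h>

        /* After a recv completion: was data left in the socket? If so, the
         * application may want to queue another receive right away. */
        static bool sock_has_more_data(const struct io_uring_cqe *cqe)
        {
            return cqe->flags & IORING_CQE_F_SOCK_NONEMPTY;
        }
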
| * Merge branch 'for-5.19/io_uring-socket' into for-5.19/io_uring-net
    Jens Axboe, 2022-04-30 (1 file changed, -672/+1343)

    * for-5.19/io_uring-socket: (73 commits)
      io_uring: use the text representation of ops in trace
      io_uring: rename op -> opcode
      io_uring: add io_uring_get_opcode
      io_uring: add type to op enum
      io_uring: add socket(2) support
      net: add __sys_socket_file()
      io_uring: fix trace for reduced sqe padding
      io_uring: add fgetxattr and getxattr support
      io_uring: add fsetxattr and setxattr support
      fs: split off do_getxattr from getxattr
      fs: split off setxattr_copy and do_setxattr function from setxattr
      io_uring: return an error when cqe is dropped
      io_uring: use constants for cq_overflow bitfield
      io_uring: rework io_uring_enter to simplify return value
      io_uring: trace cqe overflows
      io_uring: add trace support for CQE overflow
      io_uring: allow re-poll if we made progress
      io_uring: support MSG_WAITALL for IORING_OP_SEND(MSG)
      io_uring: add support for IORING_ASYNC_CANCEL_ANY
      io_uring: allow IORING_OP_ASYNC_CANCEL with 'fd' key
      ...

* Merge tag 'for-5.19/io_uring-socket-2022-05-22' of git://git.kernel.dk/linux-block
    Linus Torvalds, 2022-05-23 (1 file changed, -0/+177)

    Pull io_uring socket() support from Jens Axboe:
     "This adds support for socket(2) for io_uring. This is handy when using
      direct / registered file descriptors with io_uring.

      Outside of those two patches, a small series from Dylan on top that
      improves the tracing by providing a text representation of the opcode
      rather than needing to decode this by reading the header file every
      time. That sits in this branch as it was the last opcode added (until
      it wasn't...)"

    * tag 'for-5.19/io_uring-socket-2022-05-22' of git://git.kernel.dk/linux-block:
      io_uring: use the text representation of ops in trace
      io_uring: rename op -> opcode
      io_uring: add io_uring_get_opcode
      io_uring: add type to op enum
      io_uring: add socket(2) support
      net: add __sys_socket_file()

| * io_uring: add io_uring_get_opcode
    Dylan Yudaken, 2022-04-26 (1 file changed, -0/+101)

    In some debug scenarios it is useful to have the text representation of
    the opcode. Add this function in preparation.

    Signed-off-by: Dylan Yudaken <dylany@fb.com>
    Link: https://lore.kernel.org/r/20220426082907.3600028-3-dylany@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: add socket(2) support
    Jens Axboe, 2022-04-25 (1 file changed, -0/+76)

    Supports both regular socket(2) where a normal file descriptor is
    instantiated when called, or direct descriptors.

    Link: https://lore.kernel.org/r/20220412202240.234207-3-axboe@kernel.dk
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

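A userspace usage sketch, assuming liburing's io_uring_prep_socket() helper for the new opcode; for the regular variant the new file descriptor comes back in cqe->res on completion:

        #include <liburing.h>
        #include <sys/socket.h>

        /* Queue a socket(2) call on an already-initialized ring. */
        static int queue_socket(struct io_uring *ring)
        {
            struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

            if (!sqe)
                return -EBUSY;

            io_uring_prep_socket(sqe, AF_INET, SOCK_STREAM, 0, 0);
            return io_uring_submit(ring);
        }
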
* Merge tag 'for-5.19/io_uring-xattr-2022-05-22' of git://git.kernel.dk/linux-block
    Linus Torvalds, 2022-05-23 (1 file changed, -26/+296)

    Pull io_uring xattr support from Jens Axboe:
     "Support for the xattr variants"

    * tag 'for-5.19/io_uring-xattr-2022-05-22' of git://git.kernel.dk/linux-block:
      io_uring: cleanup error-handling around io_req_complete
      io_uring: fix trace for reduced sqe padding
      io_uring: add fgetxattr and getxattr support
      io_uring: add fsetxattr and setxattr support
      fs: split off do_getxattr from getxattr
      fs: split off setxattr_copy and do_setxattr function from setxattr

| * io_uring: cleanup error-handling around io_req_complete
    Kanchan Joshi, 2022-04-25 (1 file changed, -29/+5)

    Move common error-handling to io_req_complete, so that various callers
    avoid repeating that. A few callers (io_tee, io_splice) require slightly
    different handling. These are changed to use __io_req_complete instead.

    Suggested-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Kanchan Joshi <joshi.k@samsung.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Link: https://lore.kernel.org/r/20220422101048.419942-1-joshi.k@samsung.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: add fgetxattr and getxattr support
    Stefan Roesch, 2022-04-25 (1 file changed, -0/+129)

    This adds support to io_uring for the fgetxattr and getxattr API.

    Signed-off-by: Stefan Roesch <shr@fb.com>
    Acked-by: Christian Brauner <brauner@kernel.org>
    Link: https://lore.kernel.org/r/20220323154420.3301504-5-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: add fsetxattr and setxattr support
    Stefan Roesch, 2022-04-25 (1 file changed, -0/+165)

    This adds support to io_uring for the fsetxattr and setxattr API.

    Signed-off-by: Stefan Roesch <shr@fb.com>
    Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
    Link: https://lore.kernel.org/r/20220323154420.3301504-4-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* Merge tag 'for-5.19/io_uring-2022-05-22' of git://git.kernel.dk/linux-block
    Linus Torvalds, 2022-05-23 (1 file changed, -1015/+1640)

    Pull io_uring updates from Jens Axboe:
     "Here are the main io_uring changes for 5.19. This contains:

       - Fixes for sparse type warnings (Christoph, Vasily)

       - Support for multi-shot accept (Hao)

       - Support for io_uring managed fixed files, rather than always
         needing the application to manage the indices (me)

       - Fix for a spurious poll wakeup (Dylan)

       - CQE overflow fixes (Dylan)

       - Support more types of cancelations (me)

       - Support for co-operative task_work signaling, rather than always
         forcing an IPI (me)

       - Support for doing poll first when appropriate, rather than always
         attempting a transfer first (me)

       - Provided buffer cleanups and support for mapped buffers (me)

       - Improve how io_uring handles inflight SCM files (Pavel)

       - Speedups for registered files (Pavel, me)

       - Organize the completion data in a struct in io_kiocb rather than
         keep it in separate spots (Pavel)

       - task_work improvements (Pavel)

       - Cleanup and optimize the submission path, in general and for
         handling links (Pavel)

       - Speedups for registered resource handling (Pavel)

       - Support sparse buffers and file maps (Pavel, me)

       - Various fixes and cleanups (Almog, Pavel, me)"

    * tag 'for-5.19/io_uring-2022-05-22' of git://git.kernel.dk/linux-block: (111 commits)
      io_uring: fix incorrect __kernel_rwf_t cast
      io_uring: disallow mixed provided buffer group registrations
      io_uring: initialize io_buffer_list head when shared ring is unregistered
      io_uring: add fully sparse buffer registration
      io_uring: use rcu_dereference in io_close
      io_uring: consistently use the EPOLL* defines
      io_uring: make apoll_events a __poll_t
      io_uring: drop a spurious inline on a forward declaration
      io_uring: don't use ERR_PTR for user pointers
      io_uring: use a rwf_t for io_rw.flags
      io_uring: add support for ring mapped supplied buffers
      io_uring: add io_pin_pages() helper
      io_uring: add buffer selection support to IORING_OP_NOP
      io_uring: fix locking state for empty buffer group
      io_uring: implement multishot mode for accept
      io_uring: let fast poll support multishot
      io_uring: add REQ_F_APOLL_MULTISHOT for requests
      io_uring: add IORING_ACCEPT_MULTISHOT for accept
      io_uring: only wake when the correct events are set
      io_uring: avoid io-wq -EAGAIN looping for !IOPOLL
      ...

| * io_uring: disallow mixed provided buffer group registrations
    Jens Axboe, 2022-05-19 (1 file changed, -3/+5)

    It's nonsensical to register a provided buffer ring if a classic provided
    buffer group with the same ID exists. Depending on the order in which we
    decide what type to pick, the other type will never get used. Explicitly
    disallow it and return an error if this is attempted.

    Fixes: c7fb19428d67 ("io_uring: add support for ring mapped supplied buffers")
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: initialize io_buffer_list head when shared ring is unregistered
    Jens Axboe, 2022-05-19 (1 file changed, -0/+3)

    We use ->buf_pages != 0 to tell if this is a shared buffer ring or a
    classic provided buffer group. If we unregister the shared ring and then
    attempt to use it, buf_pages is zero yet the classic list head isn't
    properly initialized. This causes io_buffer_select() to think that we
    have classic buffers available, but then we crash when we try and get
    one from the list.

    Just initialize the list if we unregister a shared buffer ring, leaving
    it in a sane state for either re-registration or for attempting to use
    it. And do the same for the initial setup from the classic path.

    Fixes: c7fb19428d67 ("io_uring: add support for ring mapped supplied buffers")
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: add fully sparse buffer registration
    Pavel Begunkov, 2022-05-18 (1 file changed, -7/+15)

    Honour IORING_RSRC_REGISTER_SPARSE not only for direct files but fixed
    buffers as well. It makes the rsrc API more consistent.

    Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
    Link: https://lore.kernel.org/r/66f429e4912fe39fb3318217ff33a2853d4544be.1652879898.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: use rcu_dereference in io_close
    Christoph Hellwig, 2022-05-18 (1 file changed, -1/+2)

    Accessing the file table needs a rcu_dereference_protected().

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Link: https://lore.kernel.org/r/20220518084005.3255380-7-hch@lst.de
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

| * io_uring: consistently use the EPOLL* defines
    Christoph Hellwig, 2022-05-18 (1 file changed, -4/+4)

    POLL* are unannotated values for the userspace ABI, while everything
    in-kernel should use EPOLL* and the __poll_t type.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Link: https://lore.kernel.org/r/20220518084005.3255380-6-hch@lst.de
    Signed-off-by: Jens Axboe <axboe@kernel.dk>