path: root/fs/io_uring.c
* Merge tag 'vfs-5.10-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux  (Linus Torvalds, 2020-11-14; 1 file, -2/+1)

      Pull fs freeze fix and cleanups from Darrick Wong:
      "A single vfs fix for 5.10, along with two subsequent cleanups.

      A very long time ago, a hack was added to the vfs fs freeze
      protection code to work around lockdep complaints about XFS, which
      would try to run a transaction (which requires intwrite protection)
      to finalize an xfs freeze (by which time the vfs had already taken
      intwrite). Fast forward a few years, and XFS fixed the recursive
      intwrite problem on its own, and the hack became unnecessary. Fast
      forward almost a decade, and latent bugs in the code converting
      this hack from freeze flags to freeze locks combine with lockdep
      bugs to make this reproduce frequently enough to notice page faults
      racing with freeze.

      Since the hack is unnecessary and causes thread race errors, just
      get rid of it completely. Making this kind of vfs change midway
      through a cycle makes me nervous, but a large enough number of the
      usual VFS/ext4/XFS/btrfs suspects have said this looks good and
      solves a real problem vector.

      And once that removal is done, __sb_start_write is now simple
      enough that it becomes possible to refactor the function into
      smaller, simpler static inline helpers in linux/fs.h. The cleanup
      is straightforward.

      Summary:

      - Finally remove the "convert to trylock" weirdness in the fs
        freezer code. It was necessary 10 years ago to deal with nested
        transactions in XFS, but we've long since removed that; and now
        this is causing subtle race conditions when lockdep goes offline
        and sb_start_* aren't prepared to retry a trylock failure.

      - Minor cleanups of the sb_start_* fs freeze helpers"

      * tag 'vfs-5.10-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
        vfs: move __sb_{start,end}_write* to fs.h
        vfs: separate __sb_start_write into blocking and non-blocking helpers
        vfs: remove lockdep bogosity in __sb_start_write
| * vfs: separate __sb_start_write into blocking and non-blocking helpers  (Darrick J. Wong, 2020-11-11; 1 file, -2/+1)

      Break this function into two helpers so that it's obvious that the
      trylock versions return a value that must be checked, and the
      blocking versions don't require that. While we're at it, clean up
      the return type mismatch.

      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
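      The shape of that split, as a minimal kernel-style sketch (helper
      names follow the commit subject; the percpu rwsem calls stand in
      for the real implementation's lockdep annotations and error
      handling):

```c
/* Sketch only, not the exact kernel code: blocking vs. trylock
 * freeze-protection helpers. sb->s_writers.rw_sem is the per-level
 * percpu rwsem array. */
static inline void __sb_start_write(struct super_block *sb, int level)
{
	percpu_down_read(sb->s_writers.rw_sem + level - 1);
}

/* The trylock variant returns a result that __must_check forces
 * callers to inspect, which is the whole point of the split. */
static inline bool __must_check __sb_start_write_trylock(struct super_block *sb,
							  int level)
{
	return percpu_down_read_trylock(sb->s_writers.rw_sem + level - 1);
}
```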
* | io_uring: round-up cq size before comparing with rounded sq size  (Jens Axboe, 2020-11-11; 1 file, -1/+1)

      If an application specifies IORING_SETUP_CQSIZE to set the CQ ring
      size to a specific size, we ensure that the CQ size is at least
      that of the SQ ring size. But in doing so, we compare the SQ size,
      already rounded up to a power of two, with the as-yet-unrounded CQ
      size. This means that if an application passes in non-power-of-two
      sizes, we can return -EINVAL when the final value would've been
      fine. As an example, an application passing in 100/100 for sq/cq
      size should end up with 128 for both. But since we round the SQ
      size first, we compare the CQ size of 100 to 128, and return
      -EINVAL as that is too small.

      Cc: stable@vger.kernel.org
      Fixes: 33a107f0a1b8 ("io_uring: allow application controlled CQ ring size")
      Reported-by: Dan Melnic <dmm@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
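      The arithmetic is easy to check standalone. A runnable userspace
      sketch (the roundup helper here is an illustrative stand-in for the
      kernel's roundup_pow_of_two()):

```c
#include <stdio.h>

/* Illustrative stand-in for the kernel's roundup_pow_of_two(). */
static unsigned roundup_pow_of_two(unsigned n)
{
	unsigned p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

int main(void)
{
	unsigned sq_entries = roundup_pow_of_two(100);	/* 128 */
	unsigned cq_entries = 100;	/* from IORING_SETUP_CQSIZE */

	/* Buggy order: 100 < 128, so a valid setup gets -EINVAL. */
	printf("compare unrounded: %s\n",
	       cq_entries < sq_entries ? "-EINVAL" : "ok");

	/* Fixed order: round the CQ size first; 128 >= 128 is fine. */
	cq_entries = roundup_pow_of_two(cq_entries);
	printf("compare rounded:   %s\n",
	       cq_entries < sq_entries ? "-EINVAL" : "ok");
	return 0;
}
```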
* | io_uring: fix link lookup racing with link timeout  (Pavel Begunkov, 2020-11-05; 1 file, -1/+15)

      We can't just go over linked requests because it may race with
      linked timeouts. Take ctx->completion_lock in that case.

      Cc: stable@vger.kernel.org # v5.7+
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
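      A hedged sketch of the idea, with simplified types and list fields
      (not the exact io_uring code): walk the link chain only while
      holding ctx->completion_lock, so that a firing linked timeout can't
      mutate the chain mid-walk.

```c
/* Illustrative only: the fields and match condition are stand-ins. */
static bool io_link_matches(struct io_ring_ctx *ctx, struct io_kiocb *head,
			    struct task_struct *task)
{
	struct io_kiocb *link;
	bool matched = false;

	spin_lock_irq(&ctx->completion_lock);
	list_for_each_entry(link, &head->link_list, link_list) {
		if (link->task == task) {	/* whatever "matches" means here */
			matched = true;
			break;
		}
	}
	spin_unlock_irq(&ctx->completion_lock);
	return matched;
}
```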
* | io_uring: use correct pointer for io_uring_show_cred()  (Jens Axboe, 2020-11-05; 1 file, -1/+2)

      Previous commit changed how we index the registered credentials,
      but neglected to update one spot that is used when the
      personalities are iterated through ->show_fdinfo(). Ensure we use
      the right struct type for the iteration.

      Reported-by: syzbot+a6d494688cdb797bdfce@syzkaller.appspotmail.com
      Fixes: 1e6fa5216a0e ("io_uring: COW io_identity on mismatch")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | io_uring: don't forget to task-cancel drained reqs  (Pavel Begunkov, 2020-11-05; 1 file, -2/+8)

      If a long-standing request of one task is locking up execution of
      deferred requests, and the defer list contains requests of another
      task (all files-less), then a potential execution of
      __io_uring_task_cancel() by that other task will sleep until the
      long-standing request completes, and that may take a long time.

      E.g.:

      tsk1: req1/read(empty_pipe) -> tsk2: req(DRAIN)

      Then __io_uring_task_cancel(tsk2) waits for req1 completion. It
      seems we can even manufacture a complicated case with many tasks
      sharing many rings that can lock them forever.

      Cancel deferred requests for __io_uring_task_cancel() as well.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | io_uring: fix overflowed cancel w/ linked ->files  (Pavel Begunkov, 2020-11-04; 1 file, -22/+21)

      The current io_match_files() check in io_cqring_overflow_flush() is
      useless, because requests drop ->files before going to the overflow
      list; requests linked to them, however, do not, and we don't check
      those.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | io_uring: drop req/tctx io_identity separately  (Jens Axboe, 2020-11-04; 1 file, -3/+6)

      We can't bundle this into one operation, as the identity may not
      have originated from the tctx to begin with. Drop one ref for each
      of them separately, if they don't match the static assignment. If
      we don't, then if the identity is a lookup from registered
      credentials, we could be freeing that identity as we're dropping a
      reference assuming it came from the tctx. syzbot reports this as a
      use-after-free, as the identity is still referencable from idr
      lookup:

      ==================================================================
      BUG: KASAN: use-after-free in instrument_atomic_read_write include/linux/instrumented.h:101 [inline]
      BUG: KASAN: use-after-free in atomic_fetch_add_relaxed include/asm-generic/atomic-instrumented.h:142 [inline]
      BUG: KASAN: use-after-free in __refcount_add include/linux/refcount.h:193 [inline]
      BUG: KASAN: use-after-free in __refcount_inc include/linux/refcount.h:250 [inline]
      BUG: KASAN: use-after-free in refcount_inc include/linux/refcount.h:267 [inline]
      BUG: KASAN: use-after-free in io_init_req fs/io_uring.c:6700 [inline]
      BUG: KASAN: use-after-free in io_submit_sqes+0x15a9/0x25f0 fs/io_uring.c:6774
      Write of size 4 at addr ffff888011e08e48 by task syz-executor165/8487

      CPU: 1 PID: 8487 Comm: syz-executor165 Not tainted 5.10.0-rc1-next-20201102-syzkaller #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      Call Trace:
       __dump_stack lib/dump_stack.c:77 [inline]
       dump_stack+0x107/0x163 lib/dump_stack.c:118
       print_address_description.constprop.0.cold+0xae/0x4c8 mm/kasan/report.c:385
       __kasan_report mm/kasan/report.c:545 [inline]
       kasan_report.cold+0x1f/0x37 mm/kasan/report.c:562
       check_memory_region_inline mm/kasan/generic.c:186 [inline]
       check_memory_region+0x13d/0x180 mm/kasan/generic.c:192
       instrument_atomic_read_write include/linux/instrumented.h:101 [inline]
       atomic_fetch_add_relaxed include/asm-generic/atomic-instrumented.h:142 [inline]
       __refcount_add include/linux/refcount.h:193 [inline]
       __refcount_inc include/linux/refcount.h:250 [inline]
       refcount_inc include/linux/refcount.h:267 [inline]
       io_init_req fs/io_uring.c:6700 [inline]
       io_submit_sqes+0x15a9/0x25f0 fs/io_uring.c:6774
       __do_sys_io_uring_enter+0xc8e/0x1b50 fs/io_uring.c:9159
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      RIP: 0033:0x440e19
      Code: 18 89 d0 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 eb 0f fc ff c3 66 2e 0f 1f 84 00 00 00 00
      RSP: 002b:00007fff644ff178 EFLAGS: 00000246 ORIG_RAX: 00000000000001aa
      RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 0000000000440e19
      RDX: 0000000000000000 RSI: 000000000000450c RDI: 0000000000000003
      RBP: 0000000000000004 R08: 0000000000000000 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000246 R12: 00000000022b4850
      R13: 0000000000000010 R14: 0000000000000000 R15: 0000000000000000

      Allocated by task 8487:
       kasan_save_stack+0x1b/0x40 mm/kasan/common.c:48
       kasan_set_track mm/kasan/common.c:56 [inline]
       __kasan_kmalloc.constprop.0+0xc2/0xd0 mm/kasan/common.c:461
       kmalloc include/linux/slab.h:552 [inline]
       io_register_personality fs/io_uring.c:9638 [inline]
       __io_uring_register fs/io_uring.c:9874 [inline]
       __do_sys_io_uring_register+0x10f0/0x40a0 fs/io_uring.c:9924
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xa9

      Freed by task 8487:
       kasan_save_stack+0x1b/0x40 mm/kasan/common.c:48
       kasan_set_track+0x1c/0x30 mm/kasan/common.c:56
       kasan_set_free_info+0x1b/0x30 mm/kasan/generic.c:355
       __kasan_slab_free+0x102/0x140 mm/kasan/common.c:422
       slab_free_hook mm/slub.c:1544 [inline]
       slab_free_freelist_hook+0x5d/0x150 mm/slub.c:1577
       slab_free mm/slub.c:3140 [inline]
       kfree+0xdb/0x360 mm/slub.c:4122
       io_identity_cow fs/io_uring.c:1380 [inline]
       io_prep_async_work+0x903/0xbc0 fs/io_uring.c:1492
       io_prep_async_link fs/io_uring.c:1505 [inline]
       io_req_defer fs/io_uring.c:5999 [inline]
       io_queue_sqe+0x212/0xed0 fs/io_uring.c:6448
       io_submit_sqe fs/io_uring.c:6542 [inline]
       io_submit_sqes+0x14f6/0x25f0 fs/io_uring.c:6784
       __do_sys_io_uring_enter+0xc8e/0x1b50 fs/io_uring.c:9159
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xa9

      The buggy address belongs to the object at ffff888011e08e00
       which belongs to the cache kmalloc-96 of size 96
      The buggy address is located 72 bytes inside of
       96-byte region [ffff888011e08e00, ffff888011e08e60)
      The buggy address belongs to the page:
      page:00000000a7104751 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x11e08
      flags: 0xfff00000000200(slab)
      raw: 00fff00000000200 ffffea00004f8540 0000001f00000002 ffff888010041780
      raw: 0000000000000000 0000000080200020 00000001ffffffff 0000000000000000
      page dumped because: kasan: bad access detected

      Memory state around the buggy address:
       ffff888011e08d00: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
       ffff888011e08d80: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
      >ffff888011e08e00: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
                                                    ^
       ffff888011e08e80: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
       ffff888011e08f00: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
      ==================================================================

      Reported-by: syzbot+625ce3bb7835b63f7f3d@syzkaller.appspotmail.com
      Fixes: 1e6fa5216a0e ("io_uring: COW io_identity on mismatch")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | io_uring: ensure consistent view of original task ->mm from SQPOLL  (Jens Axboe, 2020-11-04; 1 file, -7/+20)

      Ensure we get a valid view of the task mm by using task_lock() when
      attempting to grab the original task mm.

      Reported-by: syzbot+b57abf7ee60829090495@syzkaller.appspotmail.com
      Fixes: 2aede0e417db ("io_uring: stash ctx task reference for SQPOLL")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | io_uring: properly handle SQPOLL request cancelations  (Jens Axboe, 2020-11-04; 1 file, -12/+65)

      Track if a given task io_uring context contains SQPOLL instances,
      so we can iterate those for cancelation (and request counts). This
      ensures that we properly wait on SQPOLL contexts, and find
      everything that needs canceling.

      Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | Merge tag 'io_uring-5.10-2020-10-30' of git://git.kernel.dk/linux-block  (Linus Torvalds, 2020-10-30; 1 file, -70/+38)

      Pull io_uring fixes from Jens Axboe:

      - Fixes for linked timeouts (Pavel)

      - Set IO_WQ_WORK_CONCURRENT early for async offload (Pavel)

      - Two minor simplifications that make the code easier to read and
        follow (Pavel)

      * tag 'io_uring-5.10-2020-10-30' of git://git.kernel.dk/linux-block:
        io_uring: use type appropriate io_kiocb handler for double poll
        io_uring: simplify __io_queue_sqe()
        io_uring: simplify nxt propagation in io_queue_sqe
        io_uring: don't miss setting IO_WQ_WORK_CONCURRENT
        io_uring: don't defer put of cancelled ltimeout
        io_uring: always clear LINK_TIMEOUT after cancel
        io_uring: don't adjust LINK_HEAD in cancel ltimeout
        io_uring: remove opcode check on ltimeout kill
| * io_uring: use type appropriate io_kiocb handler for double poll  (Jens Axboe, 2020-10-25; 1 file, -2/+4)

      io_poll_double_wake() is called for both request types: pure poll
      requests, and internal polls. This means that we should be using
      the right handler based on the request type. Use the one that the
      original caller already assigned for the waitqueue handling; that
      will always match the correct type.

      Cc: stable@vger.kernel.org # v5.8+
      Reported-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: simplify __io_queue_sqe()  (Pavel Begunkov, 2020-10-23; 1 file, -17/+11)

      Restructure __io_queue_sqe() so it follows simple if/else if/else
      control flow. It's more readable and removes extra goto/labels.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: simplify nxt propagation in io_queue_sqe  (Pavel Begunkov, 2020-10-23; 1 file, -7/+3)

      Don't overuse gotos; complex control flow doesn't make compilers
      happy and makes code harder to read.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: don't miss setting IO_WQ_WORK_CONCURRENT  (Pavel Begunkov, 2020-10-23; 1 file, -7/+3)

      Set IO_WQ_WORK_CONCURRENT for all REQ_F_FORCE_ASYNC requests;
      doing that in one place also looks better.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: don't defer put of cancelled ltimeout  (Pavel Begunkov, 2020-10-23; 1 file, -38/+20)

      Inline io_link_cancel_timeout() and __io_kill_linked_timeout() into
      io_kill_linked_timeout(). That allows us to easily move the put of
      a cancelled linked timeout out of completion_lock and to not defer
      it. It is also much more readable when not scattered across three
      different functions.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: always clear LINK_TIMEOUT after cancel  (Pavel Begunkov, 2020-10-23; 1 file, -1/+1)

      Move REQ_F_LINK_TIMEOUT clearing out of __io_kill_linked_timeout()
      because it might return early and leave the flag set. It's not a
      problem, but may be confusing.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: don't adjust LINK_HEAD in cancel ltimeout  (Pavel Begunkov, 2020-10-23; 1 file, -1/+0)

      An armed linked timeout can never be a head of a link, so we don't
      need to clear REQ_F_LINK_HEAD for it.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: remove opcode check on ltimeout kill  (Pavel Begunkov, 2020-10-23; 1 file, -2/+1)

      __io_kill_linked_timeout() already checks for
      REQ_F_LTIMEOUT_ACTIVE, and it's set only for linked timeouts. No
      need to verify the next request's opcode.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | Merge tag 'io_uring-5.10-2020-10-24' of git://git.kernel.dk/linux-block  (Linus Torvalds, 2020-10-24; 1 file, -97/+76)

      Pull io_uring fixes from Jens Axboe:

      - fsize was missed in the previous unification of work flags

      - A few fixes cleaning up the creds cases from the flags
        unification (Pavel)

      - Fix NUMA affinities for a completely unplugged/replugged node
        for io-wq

      - Two fallout fixes from the set_fs changes. One local to io_uring,
        one for the splice entry point that io_uring uses.

      - Linked timeout fixes (Pavel)

      - Removal of the ->flush() ->files work-around that we don't need
        anymore with referenced files (Pavel)

      - Various cleanups (Pavel)

      * tag 'io_uring-5.10-2020-10-24' of git://git.kernel.dk/linux-block:
        splice: change exported internal do_splice() helper to take kernel offset
        io_uring: make loop_rw_iter() use original user supplied pointers
        io_uring: remove req cancel in ->flush()
        io-wq: re-set NUMA node affinities if CPUs come online
        io_uring: don't reuse linked_timeout
        io_uring: unify fsize with def->work_flags
        io_uring: fix racy REQ_F_LINK_TIMEOUT clearing
        io_uring: do poll's hash_node init in common code
        io_uring: inline io_poll_task_handler()
        io_uring: remove extra ->file check in poll prep
        io_uring: make cached_cq_overflow non atomic_t
        io_uring: inline io_fail_links()
        io_uring: kill ref get/drop in personality init
        io_uring: flags-based creds init in queue
| * io_uring: make loop_rw_iter() use original user supplied pointers  (Jens Axboe, 2020-10-22; 1 file, -14/+12)

      We jump through a hoop for fixed buffers, where we first map these
      to a bvec(), then kmap() the bvec to obtain the pointer we copy
      to/from. This was always a bit ugly, and with the set_fs changes,
      it ends up being practically problematic as well. There's no need
      to jump through these hoops, just use the original user pointers
      and length for the non iter based read/write.

      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: remove req cancel in ->flush()  (Pavel Begunkov, 2020-10-22; 1 file, -23/+5)

      Every close(io_uring) causes cancellation of all inflight requests
      carrying ->files. That's not nice but was necessary up until
      recently. Now task->files removal is handled in the core code, so
      that part of flush can be removed.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: don't reuse linked_timeout  (Pavel Begunkov, 2020-10-22; 1 file, -1/+3)

      Clear linked_timeout for next requests in __io_queue_sqe() so we
      won't queue it up unnecessarily when it's going to be punted.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Cc: stable@vger.kernel.org # v5.9
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: unify fsize with def->work_flags  (Jens Axboe, 2020-10-21; 1 file, -12/+11)

      This one was missed in the earlier conversion; it should be
      included like any of the other IO identity flags. Make sure we
      restore to RLIM_INFINITY when dropping the personality again.

      Fixes: 98447d65b4a7 ("io_uring: move io identity items into separate struct")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: fix racy REQ_F_LINK_TIMEOUT clearing  (Pavel Begunkov, 2020-10-19; 1 file, -4/+13)

      io_link_timeout_fn() removes REQ_F_LINK_TIMEOUT from the link
      head's flags; it's not atomic and may race with what the head is
      doing.

      If io_link_timeout_fn() doesn't clear the flag, as forced by this
      patch, then it may happen that for "req -> link_timeout1 ->
      link_timeout2", __io_kill_linked_timeout() would find link_timeout2
      and try to cancel it, miscounting references. Teach it to ignore
      such double timeouts by marking the active one with a new flag in
      io_prep_linked_timeout().

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: do poll's hash_node init in common code  (Pavel Begunkov, 2020-10-19; 1 file, -2/+1)

      Move INIT_HLIST_NODE(&req->hash_node) into __io_arm_poll_handler(),
      so that it isn't duplicated and the common poll code is responsible
      for it.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: inline io_poll_task_handler()  (Pavel Begunkov, 2020-10-19; 1 file, -19/+12)

      io_poll_task_handler() doesn't add clarity; inline it into its only
      user.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: remove extra ->file check in poll prep  (Pavel Begunkov, 2020-10-19; 1 file, -2/+0)

      io_poll_add_prep() doesn't need to verify ->file because it's
      already done in io_init_req().

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: make cached_cq_overflow non atomic_t  (Pavel Begunkov, 2020-10-19; 1 file, -5/+6)

      ctx->cached_cq_overflow is changed only under completion_lock.
      Convert it from atomic_t to a plain int, and mark all places where
      it's read without the lock with READ_ONCE, which guarantees
      atomicity (relaxed ordering).

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
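      The pattern in userspace terms, as a runnable sketch (the
      READ_ONCE macro is spelled out here; the kernel provides its own):

```c
#include <pthread.h>
#include <stdio.h>

#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

static pthread_mutex_t completion_lock = PTHREAD_MUTEX_INITIALIZER;
static int cached_cq_overflow;	/* plain int, not atomic */

static void overflow_one(void)
{
	pthread_mutex_lock(&completion_lock);
	cached_cq_overflow++;	/* only ever modified under the lock */
	pthread_mutex_unlock(&completion_lock);
}

static int read_overflow(void)
{
	/* Lockless reader: READ_ONCE gives a single, tear-free load. */
	return READ_ONCE(cached_cq_overflow);
}

int main(void)
{
	overflow_one();
	printf("%d\n", read_overflow());
	return 0;
}
```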
| * io_uring: inline io_fail_links()  (Pavel Begunkov, 2020-10-19; 1 file, -10/+3)

      Inline io_fail_links() and kill the extra io_cqring_ev_posted().

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: kill ref get/drop in personality init  (Pavel Begunkov, 2020-10-19; 1 file, -4/+9)

      Don't take an identity on personality/creds init only to drop it a
      few lines after. Extract a function which prepares req->work but
      leaves it without an identity.

      Note: it's safe to not check REQ_F_WORK_INITIALIZED there because
      nobody has had a chance to init it before io_init_req().

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: flags-based creds init in queue  (Pavel Begunkov, 2020-10-19; 1 file, -2/+2)

      Use IO_WQ_WORK_CREDS to figure out if the req has creds to be used.
      As of recently, this should rely only on flags, not on the value of
      work.creds.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | Merge tag 'arch-cleanup-2020-10-22' of git://git.kernel.dk/linux-block  (Linus Torvalds, 2020-10-23; 1 file, -6/+7)

      Pull arch task_work cleanups from Jens Axboe:
      "Two cleanups that don't fit other categories:

      - Finally get the task_work_add() cleanup done properly, so we
        don't have random 0/1/false/true/TWA_SIGNAL confusing use cases.
        Updates all callers, and also fixes up the documentation for
        task_work_add().

      - While working on some TIF related changes for 5.11, this
        TIF_NOTIFY_RESUME cleanup fell out of that. Remove some arch
        duplication for how that is handled"

      * tag 'arch-cleanup-2020-10-22' of git://git.kernel.dk/linux-block:
        task_work: cleanup notification modes
        tracehook: clear TIF_NOTIFY_RESUME in tracehook_notify_resume()
| * | task_work: cleanup notification modes  (Jens Axboe, 2020-10-17; 1 file, -6/+7)

      A previous commit changed the notification mode from true/false to
      an int, allowing notify-no, notify-yes, or signal-notify. This was
      backwards compatible in the sense that any existing true/false user
      would translate to either 0 (no notification sent) or 1, the latter
      of which mapped to TWA_RESUME. TWA_SIGNAL was assigned a value of
      2.

      Clean this up properly, and define a proper enum for the
      notification mode. Now we have:

      - TWA_NONE. This is 0, same as before the original change, meaning
        no notification requested.
      - TWA_RESUME. This is 1, same as before the original change,
        meaning that we use TIF_NOTIFY_RESUME.
      - TWA_SIGNAL. This uses TIF_SIGPENDING/JOBCTL_TASK_WORK for the
        notification.

      Clean up all the callers, switching their 0/1/false/true to using
      the appropriate TWA_* mode for notifications.

      Fixes: e91b48162332 ("task_work: teach task_work_add() to do signal_wake_up()")
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
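      The resulting modes as an enum, matching the list above (the
      kernel's definition lives in include/linux/task_work.h):

```c
enum task_work_notify_mode {
	TWA_NONE = 0,	/* no notification requested */
	TWA_RESUME,	/* 1: notify via TIF_NOTIFY_RESUME */
	TWA_SIGNAL,	/* 2: notify via TIF_SIGPENDING/JOBCTL_TASK_WORK */
};
```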
* | | Merge tag 'io_uring-5.10-2020-10-20' of git://git.kernel.dk/linux-block  (Linus Torvalds, 2020-10-20; 1 file, -260/+388)

      Pull io_uring updates from Jens Axboe:
      "A mix of fixes and a few stragglers. In detail:

      - Revert the bogus __read_mostly that we discussed for the initial
        pull request.

      - Fix a merge window regression with fixed file registration error
        path handling.

      - Fix io-wq numa node affinities.

      - Series abstracting out an io_identity struct, making it both
        easier to see what the personality items are, and also easier to
        adopt more. Use this to cover audit logging.

      - Fix for read-ahead disabled block condition in async buffered
        reads, and using single page read-ahead to unify which
        generic_file_buffer_read() path is used.

      - Series for REQ_F_COMP_LOCKED fix and removal of it (Pavel)

      - Poll fix (Pavel)"

      * tag 'io_uring-5.10-2020-10-20' of git://git.kernel.dk/linux-block: (21 commits)
        io_uring: use blk_queue_nowait() to check if NOWAIT supported
        mm: use limited read-ahead to satisfy read
        mm: mark async iocb read as NOWAIT once some data has been copied
        io_uring: fix double poll mask init
        io-wq: inherit audit loginuid and sessionid
        io_uring: use percpu counters to track inflight requests
        io_uring: assign new io_identity for task if members have changed
        io_uring: store io_identity in io_uring_task
        io_uring: COW io_identity on mismatch
        io_uring: move io identity items into separate struct
        io_uring: rely solely on work flags to determine personality.
        io_uring: pass required context in as flags
        io-wq: assign NUMA node locality if appropriate
        io_uring: fix error path cleanup in io_sqe_files_register()
        Revert "io_uring: mark io_uring_fops/io_op_defs as __read_mostly"
        io_uring: fix REQ_F_COMP_LOCKED by killing it
        io_uring: dig out COMP_LOCK from deep call chain
        io_uring: don't put a poll req under spinlock
        io_uring: don't unnecessarily clear F_LINK_TIMEOUT
        io_uring: don't set COMP_LOCKED if won't put
        ...
| * | io_uring: use blk_queue_nowait() to check if NOWAIT supported  (Jeffle Xu, 2020-10-19; 1 file, -1/+1)

      commit 021a24460dc2 ("block: add QUEUE_FLAG_NOWAIT") adds a new
      helper function blk_queue_nowait() to check if the bdev supports
      handling of REQ_NOWAIT or not. Since then bio-based dm devices can
      also support REQ_NOWAIT, and currently only dm-linear supports
      that, since commit 6abc49468eea ("dm: add support for REQ_NOWAIT
      and enable it for linear target").

      Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
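      A kernel-style sketch of the check the commit switches to (the
      wrapper function here is hypothetical; blk_queue_nowait() and
      bdev_get_queue() are the real helpers named above):

```c
/* Sketch: ask the request queue whether it handles REQ_NOWAIT,
 * instead of assuming only mq devices do. */
static bool io_bdev_supports_nowait(struct block_device *bdev)
{
	return blk_queue_nowait(bdev_get_queue(bdev));
}
```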
| * io_uring: fix double poll mask init  (Pavel Begunkov, 2020-10-17; 1 file, -1/+3)

      __io_queue_proc() is used by both poll reqs and apoll. Don't use
      req->poll.events to copy the poll mask, because for apoll it
      aliases with private data of the request.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io-wq: inherit audit loginuid and sessionid  (Jens Axboe, 2020-10-17; 1 file, -1/+23)

      Make sure the async io-wq workers inherit the loginuid and
      sessionid from the original task, and restore them to unset once
      we're done with the async work item.

      While at it, disable the ability for kernel threads to write to
      their own loginuid.

      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: use percpu counters to track inflight requests  (Jens Axboe, 2020-10-17; 1 file, -22/+28)

      Even though we place the req_issued and req_complete in separate
      cachelines, there's considerable overhead in doing the atomics,
      particularly on the completion side. Get rid of having the two
      counters, and just use a percpu_counter for this. That's what it
      was made for, after all. This considerably reduces the overhead in
      __io_free_req().

      Signed-off-by: Jens Axboe <axboe@kernel.dk>
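      Why per-cpu counters help, in a small runnable userspace sketch:
      each CPU (thread) updates a private slot, so the hot inc/dec paths
      never contend on a shared cacheline, and only the rare reader pays
      the cost of summing the slots. The kernel's percpu_counter does
      this properly (including batching); this shows only the shape of
      the idea.

```c
#include <stdio.h>

#define NR_SLOTS 4

/* One padded slot per "cpu"; the padding avoids false sharing. */
static long inflight[NR_SLOTS][16];

static void inc_inflight(int cpu)
{
	__atomic_fetch_add(&inflight[cpu][0], 1, __ATOMIC_RELAXED);
}

static void dec_inflight(int cpu)
{
	__atomic_fetch_sub(&inflight[cpu][0], 1, __ATOMIC_RELAXED);
}

/* Slow path: sum all slots for an (approximate) global view. */
static long sum_inflight(void)
{
	long sum = 0;

	for (int i = 0; i < NR_SLOTS; i++)
		sum += __atomic_load_n(&inflight[i][0], __ATOMIC_RELAXED);
	return sum;
}

int main(void)
{
	inc_inflight(0);
	inc_inflight(2);
	dec_inflight(0);
	printf("inflight: %ld\n", sum_inflight());	/* prints 1 */
	return 0;
}
```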
| * io_uring: assign new io_identity for task if members have changed  (Jens Axboe, 2020-10-17; 1 file, -4/+15)

      This avoids doing a copy for each new async IO, if some parts of
      the io_identity have changed. We avoid reference counting for the
      normal fast path of nothing ever changing.

      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: store io_identity in io_uring_task  (Jens Axboe, 2020-10-17; 1 file, -10/+11)

      This is, by definition, a per-task structure. So store it in the
      task context, instead of carrying it in each io_kiocb. We're being
      a bit inefficient if members have changed, as that requires an
      alloc and copy of a new io_identity struct. The next patch will fix
      that up.

      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: COW io_identity on mismatch  (Jens Axboe, 2020-10-17; 1 file, -52/+170)

      If the io_identity doesn't completely match the task, then create a
      copy of it and use that. The existing copy remains valid until the
      last user of it has gone away.

      This also changes the personality lookup to be indexed by
      io_identity, instead of creds directly.

      Signed-off-by: Jens Axboe <axboe@kernel.dk>
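      A hedged sketch of the copy-on-write step (heavily simplified from
      the real io_identity_cow(): field fixups, the static per-task
      identity, and most failure handling are omitted):

```c
/* Sketch: if the cached identity no longer matches, clone it. The old
 * copy stays alive until its last reference is dropped. */
static struct io_identity *identity_cow(struct io_uring_task *tctx)
{
	struct io_identity *old = tctx->identity;
	struct io_identity *id;

	id = kmemdup(old, sizeof(*id), GFP_KERNEL);
	if (!id)
		return NULL;
	refcount_set(&id->count, 1);

	if (refcount_dec_and_test(&old->count))
		kfree(old);	/* we held the last reference */
	tctx->identity = id;
	return id;
}
```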
| * io_uring: move io identity items into separate struct  (Jens Axboe, 2020-10-17; 1 file, -31/+31)

      io-wq contains a pointer to the identity, which we just hold in
      io_kiocb for now. This is in preparation for putting this outside
      io_kiocb. The only exception is struct files_struct, which we'll
      need different rules for to avoid a circular dependency.

      No functional changes in this patch.

      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: rely solely on work flags to determine personality.  (Jens Axboe, 2020-10-17; 1 file, -18/+33)

      We solely rely on work->work_flags now, so use that for proper
      checking and clearing/dropping of various identity items.

      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: pass required context in as flags  (Jens Axboe, 2020-10-17; 1 file, -60/+40)

      We have a number of bits that decide what context to inherit. Set
      up io-wq flags for these instead. This is in preparation for always
      having the various members set, but not always needing them for all
      requests.

      No intended functional changes in this patch.

      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: fix error path cleanup in io_sqe_files_register()  (Jens Axboe, 2020-10-17; 1 file, -1/+2)

      syzbot reports the following crash:

      general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN
      KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
      CPU: 1 PID: 8927 Comm: syz-executor.3 Not tainted 5.9.0-syzkaller #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      RIP: 0010:io_file_from_index fs/io_uring.c:5963 [inline]
      RIP: 0010:io_sqe_files_register fs/io_uring.c:7369 [inline]
      RIP: 0010:__io_uring_register fs/io_uring.c:9463 [inline]
      RIP: 0010:__do_sys_io_uring_register+0x2fd2/0x3ee0 fs/io_uring.c:9553
      Code: ec 03 49 c1 ee 03 49 01 ec 49 01 ee e8 57 61 9c ff 41 80 3c 24 00 0f 85 9b 09 00 00 4d 8b af b8 01 00 00 4c 89 e8 48 c1 e8 03 <80> 3c 28 00 0f 85 76 09 00 00 49 8b 55 00 89 d8 c1 f8 09 48 98 4c
      RSP: 0018:ffffc90009137d68 EFLAGS: 00010246
      RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffc9000ef2a000
      RDX: 0000000000040000 RSI: ffffffff81d81dd9 RDI: 0000000000000005
      RBP: dffffc0000000000 R08: 0000000000000001 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000000 R12: ffffed1012882a37
      R13: 0000000000000000 R14: ffffed1012882a38 R15: ffff888094415000
      FS:  00007f4266f3c700(0000) GS:ffff8880ae500000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 000000000118c000 CR3: 000000008e57d000 CR4: 00000000001506e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Call Trace:
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      RIP: 0033:0x45de59
      Code: 0d b4 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 db b3 fb ff c3 66 2e 0f 1f 84 00 00 00 00
      RSP: 002b:00007f4266f3bc78 EFLAGS: 00000246 ORIG_RAX: 00000000000001ab
      RAX: ffffffffffffffda RBX: 00000000000083c0 RCX: 000000000045de59
      RDX: 0000000020000280 RSI: 0000000000000002 RDI: 0000000000000005
      RBP: 000000000118bf68 R08: 0000000000000000 R09: 0000000000000000
      R10: 40000000000000a1 R11: 0000000000000246 R12: 000000000118bf2c
      R13: 00007fff2fa4f12f R14: 00007f4266f3c9c0 R15: 000000000118bf2c
      Modules linked in:
      ---[ end trace 2a40a195e2d5e6e6 ]---
      RIP: 0010:io_file_from_index fs/io_uring.c:5963 [inline]
      RIP: 0010:io_sqe_files_register fs/io_uring.c:7369 [inline]
      RIP: 0010:__io_uring_register fs/io_uring.c:9463 [inline]
      RIP: 0010:__do_sys_io_uring_register+0x2fd2/0x3ee0 fs/io_uring.c:9553
      Code: ec 03 49 c1 ee 03 49 01 ec 49 01 ee e8 57 61 9c ff 41 80 3c 24 00 0f 85 9b 09 00 00 4d 8b af b8 01 00 00 4c 89 e8 48 c1 e8 03 <80> 3c 28 00 0f 85 76 09 00 00 49 8b 55 00 89 d8 c1 f8 09 48 98 4c
      RSP: 0018:ffffc90009137d68 EFLAGS: 00010246
      RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffc9000ef2a000
      RDX: 0000000000040000 RSI: ffffffff81d81dd9 RDI: 0000000000000005
      RBP: dffffc0000000000 R08: 0000000000000001 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000000 R12: ffffed1012882a37
      R13: 0000000000000000 R14: ffffed1012882a38 R15: ffff888094415000
      FS:  00007f4266f3c700(0000) GS:ffff8880ae400000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 000000000074a918 CR3: 000000008e57d000 CR4: 00000000001506f0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400

      which is a copy of fget failure condition jumping to cleanup, but
      the cleanup requires ctx->file_data to be assigned. Assign it when
      setup, and ensure that we clear it again for the error path exit.

      Fixes: 5398ae698525 ("io_uring: clean file_data access in files_register")
      Reported-by: syzbot+f4ebcc98223dafd8991e@syzkaller.appspotmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * Revert "io_uring: mark io_uring_fops/io_op_defs as __read_mostly"Jens Axboe2020-10-171-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit 738277adc81929b3e7c9b63fec6693868cc5f931. This change didn't make a lot of sense, and as Linus reports, it actually fails on clang: /tmp/io_uring-dd40c4.s:26476: Warning: ignoring changed section attributes for .data..read_mostly The arrays are already marked const so, by definition, they are not just read-mostly, they are read-only. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: fix REQ_F_COMP_LOCKED by killing it  (Pavel Begunkov, 2020-10-17; 1 file, -80/+69)

      REQ_F_COMP_LOCKED is used and implemented in a buggy way. The
      problem is that the flag is set before io_put_req() but not cleared
      after, and if that wasn't the final reference, the request will be
      freed with the flag set from some other context, which may not hold
      a spinlock. That means possible races with removing linked timeouts
      and unsynchronised completion (e.g. access to CQ).

      Instead of fixing REQ_F_COMP_LOCKED, kill the flag and use
      task_work_add() to move such requests to a fresh context to free
      from it, as was done with __io_free_req_finish().

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
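      A sketch of the replacement pattern: rather than completing and
      freeing with the completion lock held, hand the final put to the
      request's task via task_work. The function names here are
      illustrative; the real code goes through io_uring's own task-work
      helpers, with a workqueue fallback for exiting tasks.

```c
/* Illustrative only: punt the final reference drop to task context. */
static void io_put_req_deferred_cb(struct callback_head *cb)
{
	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);

	io_put_req(req);	/* runs unlocked, in the task's context */
}

static void io_put_req_deferred(struct io_kiocb *req)
{
	init_task_work(&req->task_work, io_put_req_deferred_cb);
	if (task_work_add(req->task, &req->task_work, TWA_RESUME))
		io_put_req(req);	/* task is exiting; simplistic fallback */
}
```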
| * io_uring: dig out COMP_LOCK from deep call chain  (Pavel Begunkov, 2020-10-17; 1 file, -24/+9)

      io_req_clean_work() checks REQ_F_COMP_LOCKED to pass this two
      layers up. Move the check up into __io_free_req(), so at least it
      doesn't look so ugly and it facilitates further changes.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: don't put a poll req under spinlock  (Pavel Begunkov, 2020-10-17; 1 file, -2/+1)

      Move io_put_req() in io_poll_task_handler() from under the
      spinlock. This eliminates the need to use REQ_F_COMP_LOCKED, at the
      expense of potentially having to grab the lock again. That's still
      a better trade-off than relying on the locked flag.

      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>