Commit message  [Author, Date, Files changed, Lines -/+]
* Merge branch 'signal-for-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace  [Linus Torvalds, 2020-12-16, 1 file, -2/+0]

    Pull signal cleanup from Eric Biederman:
     "Remove a never-used HP-UX compatibility definition from the parisc
      headers and consolidate the SA_* flag definitions into a generic
      header as much as possible. We only have 32 SA_* flag bits in total,
      so we need to be careful. But as this is the first addition in a
      decade or so, I think we are fine for the foreseeable future."

    * 'signal-for-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
      signal/parisc: Remove parisc specific definition of __ARCH_UAPI_SA_FLAGS
| * signal/parisc: Remove parisc specific definition of __ARCH_UAPI_SA_FLAGS  [Eric W. Biederman, 2020-11-30, 1 file, -2/+0]

    Randy Dunlap wrote:
    > On 11/27/20 10:43 AM, Randy Dunlap wrote:
    > > on parisc, _SA_SIGGFAULT is undefined and causing build errors.
    > >
    > > commit 23acdc76f1798b090bb9dcc90671cd29d929834e
    > > Author: Peter Collingbourne <pcc@google.com>
    > > Date:   Thu Nov 12 18:53:34 2020 -0800
    > >
    > >     signal: clear non-uapi flag bits when passing/returning sa_flags
    > >
    > > _SA_SIGGFAULT is not used or defined anywhere else in the kernel
    > > source tree.
    >
    > Here is the build error (although it should be obvious):
    >
    > ../kernel/signal.c: In function 'do_sigaction':
    > ../arch/parisc/include/asm/signal.h:24:30: error: '_SA_SIGGFAULT' undeclared (first use in this function)
    >    24 | #define __ARCH_UAPI_SA_FLAGS _SA_SIGGFAULT
    >       |                              ^~~~~~~~~~~~~

    Stephen Rothwell pointed out:
    > _SA_SIGGFAULT was removed by commit
    >
    >   41f5a81c07cd ("parisc: Drop HP-UX specific fcntl and signal flags")
    >
    > which was added to Linus' tree in v5.10-rc1.

    Solve this by removing the parisc specific definition of
    __ARCH_UAPI_SA_FLAGS that was just added.

    Reported-by: Randy Dunlap <rdunlap@infradead.org>
    Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
    Fixes: 23acdc76f179 ("signal: clear non-uapi flag bits when passing/returning sa_flags")
    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* | Merge tag 'close-range-openat2-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux  [Linus Torvalds, 2020-12-16, 5 files, -11/+122]

    Pull close_range/openat2 updates from Christian Brauner:
     "This contains a fix for openat2() to make RESOLVE_BENEATH and
      RESOLVE_IN_ROOT mutually exclusive. It doesn't make sense to specify
      both at the same time. The openat2() selftests have been extended to
      verify that these two flags can't be specified together.

      This also adds the CLOSE_RANGE_CLOEXEC flag to close_range(), which
      allows a range of file descriptors to be marked close-on-exec
      without actually closing them. This is useful in general, but the
      use-case that triggered the patch is installing a seccomp profile in
      the calling task before exec. If the seccomp profile wants to block
      the close_range() syscall, it obviously can't use it to close all
      fds before exec. If it calls close_range() before installing the
      seccomp profile, it needs to take care not to close fds that it will
      still need before the exec, meaning it would have to call
      close_range() multiple times on different ranges and then still fall
      back to closing fds one by one right before the exec.
      CLOSE_RANGE_CLOEXEC solves this problem by relying on the exec
      codepath to get rid of the unwanted fds. The close_range() tests
      have been expanded to verify that CLOSE_RANGE_CLOEXEC works."

    * tag 'close-range-openat2-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
      selftests: core: add tests for CLOSE_RANGE_CLOEXEC
      fs, close_range: add flag CLOSE_RANGE_CLOEXEC
      selftests: openat2: add RESOLVE_ conflict test
      openat2: reject RESOLVE_BENEATH|RESOLVE_IN_ROOT
| * | selftests: core: add tests for CLOSE_RANGE_CLOEXEC  [Giuseppe Scrivano, 2020-12-04, 1 file, -0/+74]

    Check that close_range(initial_fd, last_fd, CLOSE_RANGE_CLOEXEC)
    correctly sets the close-on-exec bit for the specified file
    descriptors. Open 100 file descriptors and set the close-on-exec flag
    for a subset of them first, then set it for every file descriptor
    above 2. Make sure RLIMIT_NOFILE doesn't affect the result.

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
    Link: https://lore.kernel.org/r/20201118104746.873084-3-gscrivan@redhat.com
    Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
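    A minimal userspace sketch of what the selftest verifies (not the
    selftest itself): mark fds close-on-exec with close_range() and read
    the flag back with fcntl(F_GETFD). It assumes kernel headers that
    provide __NR_close_range and CLOSE_RANGE_CLOEXEC (v5.11+); the
    fallback value for the flag mirrors include/uapi/linux/close_range.h.

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        #ifndef CLOSE_RANGE_CLOEXEC
        #define CLOSE_RANGE_CLOEXEC (1U << 2)  /* assumed value from the uapi header */
        #endif

        int main(void)
        {
                int fd = open("/dev/null", O_RDONLY);

                if (fd < 0)
                        return 1;

                /* Mark every fd from 3 upwards close-on-exec without closing it. */
                if (syscall(__NR_close_range, 3, ~0U, CLOSE_RANGE_CLOEXEC) < 0)
                        return 1;

                /* fd is still open and usable, but now carries FD_CLOEXEC. */
                printf("fd %d cloexec: %d\n", fd, !!(fcntl(fd, F_GETFD) & FD_CLOEXEC));
                return 0;
        }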
| * | fs, close_range: add flag CLOSE_RANGE_CLOEXEC  [Giuseppe Scrivano, 2020-12-04, 2 files, -10/+37]

    When the flag CLOSE_RANGE_CLOEXEC is set, close_range() doesn't
    immediately close the files but sets the close-on-exec bit instead.

    It is useful for e.g. container runtimes that usually install a
    seccomp profile "as late as possible" before execve'ing the container
    process itself. The container runtime could either do:

      Variant 1:                            Variant 2:
      - install_seccomp_profile();          - close_range(MIN_FD, MAX_INT, 0);
      - close_range(MIN_FD, MAX_INT, 0);    - install_seccomp_profile();
      - execve(...);                        - execve(...);

    Both alternatives have some disadvantages. In the first variant, the
    seccomp profile cannot block the close_range() syscall, nor
    opendir/read/close/... for the fallback on older kernels. In the
    second variant, close_range() can be used only on the fds that are
    not going to be needed by the runtime anymore, and it must potentially
    be called multiple times to account for the different ranges that
    must be closed.

    Using close_range(..., ..., CLOSE_RANGE_CLOEXEC) solves these issues.
    The runtime is able to keep using the existing open fds, and the
    seccomp profile can block close_range() and the syscalls used for its
    fallback.

    Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
    Link: https://lore.kernel.org/r/20201118104746.873084-2-gscrivan@redhat.com
    Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
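    A short sketch of the ordering described above, assuming the
    CLOSE_RANGE_CLOEXEC flag from v5.11+ headers; install_seccomp_profile()
    is a hypothetical placeholder for whatever filter a runtime would
    load. Nothing is closed before the filter is installed, and the marked
    fds disappear automatically at execve() time.

        #define _GNU_SOURCE
        #include <sys/syscall.h>
        #include <unistd.h>

        #ifndef CLOSE_RANGE_CLOEXEC
        #define CLOSE_RANGE_CLOEXEC (1U << 2)  /* assumed value from the uapi header */
        #endif

        /* Hypothetical placeholder: a real runtime would load its seccomp
         * filter here; the filter is free to deny close_range() itself. */
        static void install_seccomp_profile(void)
        {
        }

        int main(void)
        {
                char *argv[] = { "/bin/true", NULL };
                char *envp[] = { NULL };

                /* Mark every fd >= 3 close-on-exec; all of them stay usable until exec. */
                if (syscall(__NR_close_range, 3, ~0U, CLOSE_RANGE_CLOEXEC) < 0)
                        return 1;

                install_seccomp_profile();

                /* The close-on-exec fds are torn down here, on the exec codepath. */
                execve(argv[0], argv, envp);
                return 1;  /* only reached if execve() failed */
        }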
| * | selftests: openat2: add RESOLVE_ conflict test  [Aleksa Sarai, 2020-12-03, 1 file, -1/+7]

    Now that we reject conflicting RESOLVE_ flags, add a selftest to avoid
    regressions.

    Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
    Link: https://lore.kernel.org/r/20201027235044.5240-3-cyphar@cyphar.com
    Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
| * | openat2: reject RESOLVE_BENEATH|RESOLVE_IN_ROOT  [Aleksa Sarai, 2020-12-03, 1 file, -0/+4]

    This was an oversight in the original implementation, as it makes no
    sense to specify both scoping flags in the same openat2(2) invocation
    (before this patch, the result of such an invocation was equivalent to
    RESOLVE_IN_ROOT being ignored).

    This is a userspace-visible ABI change, but the only user of
    openat2(2) at the moment is LXC, which doesn't specify both flags, so
    no userspace programs will break as a result.

    Fixes: fddb5d430ad9 ("open: introduce openat2(2) syscall")
    Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
    Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
    Cc: <stable@vger.kernel.org> # v5.6+
    Link: https://lore.kernel.org/r/20201027235044.5240-2-cyphar@cyphar.com
    Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
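    A small sketch of the now-rejected combination, assuming headers that
    provide <linux/openat2.h> and __NR_openat2 (v5.6+). On v5.11+ kernels
    the call below fails with EINVAL; on earlier kernels RESOLVE_IN_ROOT
    was effectively ignored when combined with RESOLVE_BENEATH.

        #define _GNU_SOURCE
        #include <errno.h>
        #include <fcntl.h>
        #include <linux/openat2.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        int main(void)
        {
                struct open_how how = {
                        .flags   = O_RDONLY,
                        .resolve = RESOLVE_BENEATH | RESOLVE_IN_ROOT,  /* conflicting scoping flags */
                };
                int fd = syscall(__NR_openat2, AT_FDCWD, "etc/hostname", &how, sizeof(how));

                if (fd < 0)
                        printf("openat2: %s\n", strerror(errno));  /* expect EINVAL on >= 5.11 */
                else
                        close(fd);
                return 0;
        }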
* | | Merge branch 'regset.followup' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  [Linus Torvalds, 2020-12-16, 8 files, -44/+28]

    Pull regset updates from Al Viro:
     "Dead code removal, mostly. The only exception is a bit of cleanup on
      itanic (getting rid of redundant stack unwinds - each access_uarea()
      call does it, and we call that 7 times in a row in
      ptrace_[sg]etregs(), *after* having done it ourselves in the caller;
      the location where the user registers have been spilled won't change
      under us, and we can bloody well just call access_elf_reg() directly,
      giving it the unw_frame_info we'd calculated for our own purposes)."

    * 'regset.followup' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
      c6x: kill ELF_CORE_COPY_FPREGS
      whack-a-mole: USE_ELF_CORE_DUMP
      [ia64] ptrace_[sg]etregs(): use access_elf_reg() instead of access_uarea()
      [ia64] missed cleanups from switch to regset coredumps
      arm: kill dump_task_regs()
| * | | c6x: kill ELF_CORE_COPY_FPREGS  [Al Viro, 2020-10-26, 1 file, -2/+0]

    Not used anymore, now that the fdpic coredump got switched to regsets.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | whack-a-mole: USE_ELF_CORE_DUMP  [Al Viro, 2020-10-26, 4 files, -4/+0]

    It was killed off back in 2009. Not a damn thing checks it. Just die,
    already...

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | [ia64] ptrace_[sg]etregs(): use access_elf_reg() instead of access_uarea()  [Al Viro, 2020-10-26, 1 file, -20/+23]

    For the positions passed by ptrace_[sg]etregs() to access_uarea(), the
    latter sets up the stack unwind, walks all the way up the stack and
    proceeds to pass the resulting info to access_elf_reg(). The thing is,
    we'd *already* obtained that info, so we can bloody well call
    access_elf_reg() directly.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | [ia64] missed cleanups from switch to regset coredumps  [Al Viro, 2020-10-26, 2 files, -5/+5]

    A bunch of functions could have been made static back in 2008, when
    ia64 switched to regset-based coredumps.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | arm: kill dump_task_regs()  [Al Viro, 2020-10-26, 2 files, -13/+0]

    The last user had been fdpic.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | | | Merge branch 'work.epoll' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  [Linus Torvalds, 2020-12-16, 4 files, -429/+305]

    Pull epoll updates from Al Viro:
     "Deal with epoll loop check/removal races sanely (among other things).

      The solution merged last cycle (pinning a bunch of struct file
      instances) had been forced by the wrong data structures; untangling
      that takes a bunch of preparations, but it's worth doing - control
      flow in there is ridiculously overcomplicated. Memory footprint has
      also gone down, while we are at it.

      This is not all I want to do in the area, but since I didn't get
      around to posting the followups they'll have to wait for the next
      cycle."

    * 'work.epoll' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (27 commits)
      epoll: take epitem list out of struct file
      epoll: massage the check list insertion
      lift rcu_read_lock() into reverse_path_check()
      convert ->f_ep_links/->fllink to hlist
      ep_insert(): move creation of wakeup source past the fl_ep_links insertion
      fold ep_read_events_proc() into the only caller
      take the common part of ep_eventpoll_poll() and ep_item_poll() into helper
      ep_insert(): we only need tep->mtx around the insertion itself
      ep_insert(): don't open-code ep_remove() on failure exits
      lift locking/unlocking ep->mtx out of ep_{start,done}_scan()
      ep_send_events_proc(): fold into the caller
      lift the calls of ep_send_events_proc() into the callers
      lift the calls of ep_read_events_proc() into the callers
      ep_scan_ready_list(): prepare to splitup
      ep_loop_check_proc(): saner calling conventions
      get rid of ep_push_nested()
      ep_loop_check_proc(): lift pushing the cookie into callers
      clean reverse_path_check_proc() a bit
      reverse_path_check_proc(): don't bother with cookies
      reverse_path_check_proc(): sane arguments
      ...
| * | | | epoll: take epitem list out of struct file  [Al Viro, 2020-10-26, 4 files, -56/+129]

    Move the head of the epitem list out of struct file; for epoll files
    it's moved into struct eventpoll (->refs there), for non-epoll files -
    into a new object (struct epitem_head). In place of ->f_ep_links we
    leave a pointer to the list head (->f_ep).

    ->f_ep is protected by ->f_lock and it's zeroed as soon as the list of
    epitems becomes empty (that can happen only in ep_remove() by now).

    The list of files for the reverse path check is *not* going through
    struct file now - it's a singly-linked list going through epitem_head
    instances. It's terminated by ERR_PTR(-1) (== EP_UNACTIVE_POINTER), so
    the elements of the list can be distinguished by head->next != NULL.

    epitem_head instances are allocated at ep_insert() time (by
    attach_epitem()) and freed either by ep_remove() (if it empties the
    set of epitems *and* the epitem_head does not belong to the reverse
    path check list) or by clear_tfile_check_list() when the list is
    emptied (if the set of epitems is empty by that point). Allocations
    are done from a separate slab - the minimal kmalloc() size is too
    large on some architectures.

    As a result, we trim struct file _and_ get rid of the games with
    temporary file references.

    Locking and barriers are interesting (aren't they always); see
    unlist_file() and ep_remove() for details. The non-obvious part is
    that ep_remove() needs to decide if it will be the one to free the
    damn thing *before* actually storing NULL to head->epitems.first -
    that's what smp_load_acquire is for in there. The unlist_file()
    lockless path is safe, since we hit it only if we observe NULL in
    head->epitems.first and whoever had done that store is guaranteed to
    have observed non-NULL in head->next. IOW, their last access had been
    the store of NULL into ->epitems.first and we can safely free the
    sucker. OTOH, we are under rcu_read_lock() and both epitem and
    epitem->file have their freeing RCU-delayed. So if we see non-NULL
    ->epitems.first, we can grab ->f_lock (all epitems in there share the
    same struct file) and safely recheck the emptiness of ->epitems;
    again, ->next is still non-NULL, so ep_remove() couldn't have freed
    head yet. ->f_lock serializes us wrt ep_remove(); the rest is trivial.

    Note that once head->epitems becomes NULL, nothing can get inserted
    into it - the only remaining reference to head after that point is
    from the reverse path check list.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | epoll: massage the check list insertion  [Al Viro, 2020-10-26, 1 file, -4/+4]

    In the "non-epoll target" cases, do it in ep_insert() rather than in
    do_epoll_ctl(), so that we do it only once some epitem is already
    guaranteed to exist.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | lift rcu_read_lock() into reverse_path_check()  [Al Viro, 2020-10-26, 1 file, -2/+2]

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | convert ->f_ep_links/->fllink to hlist  [Al Viro, 2020-10-26, 3 files, -12/+12]

    We don't care about the order of elements there.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | ep_insert(): move creation of wakeup source past the fl_ep_links insertion  [Al Viro, 2020-10-26, 1 file, -11/+9]

    That's the beginning of preparations for taking f_ep_links out of
    struct file. If insertion might fail, we will need a new failure exit.
    Having wakeup source creation done after that point will simplify life
    there; ep_remove() can (and commonly does) live with NULL epi->ws, so
    it can be used for cleanup after ep_create_wakeup_source() failure.
    It can't be used before the rbtree insertion, though, so if we are to
    unify all old failure exits, we need to move that thing down. Then we
    would be free to do a simple kmem_cache_free() on failure to insert
    into f_ep_links - no wakeup source to leak on that failure exit.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | fold ep_read_events_proc() into the only caller  [Al Viro, 2020-10-26, 1 file, -29/+20]

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | take the common part of ep_eventpoll_poll() and ep_item_poll() into helper  [Al Viro, 2020-10-26, 1 file, -30/+27]

    The only reason why ep_item_poll() can't simply call
    ep_eventpoll_poll() (or, better yet, call vfs_poll() in all cases) is
    that we need to tell lockdep how deep into the hierarchy of ->mtx we
    are. So let's add a variant of ep_eventpoll_poll() that takes the
    depth explicitly, and turn ep_eventpoll_poll() into a wrapper for it.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | ep_insert(): we only need tep->mtx around the insertion itself  [Al Viro, 2020-10-26, 1 file, -18/+10]

    We do need ep->mtx (and we are holding it all along), but that's the
    lock on the epoll we are inserting into; locking of the epoll being
    inserted is not needed for most of that work - as a matter of fact, we
    only need it to provide barriers for the fastpath check (for now).

    Move taking and releasing it into ep_insert(). The caller
    (do_epoll_ctl()) doesn't need to bother with that at all. Moreover,
    that way we kill the kludge in ep_item_poll() - now it's always called
    with tep unlocked.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | ep_insert(): don't open-code ep_remove() on failure exits  [Al Viro, 2020-10-26, 1 file, -37/+14]

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | lift locking/unlocking ep->mtx out of ep_{start,done}_scan()  [Al Viro, 2020-10-26, 1 file, -31/+26]

    Get rid of the depth/ep_locked arguments there and document the kludge
    in ep_item_poll() that has led to ep_locked's existence in the first
    place.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | ep_send_events_proc(): fold into the caller  [Al Viro, 2020-10-26, 1 file, -40/+20]

    ... and get rid of struct ep_send_events_data - not needed anymore.
    The weird way of passing the arguments in (and the real return value
    out - the nominal return value of ep_send_events_proc() is ignored)
    was due to the signature forced on ep_scan_ready_list() callbacks.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | lift the calls of ep_send_events_proc() into the callers  [Al Viro, 2020-10-26, 1 file, -28/+5]

    ... and kill ep_scan_ready_list().

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | lift the calls of ep_read_events_proc() into the callers  [Al Viro, 2020-10-26, 1 file, -10/+14]

    Expand the calls of ep_scan_ready_list() that get
    ep_read_events_proc(). As a side benefit we can pass depth to
    ep_read_events_proc() by value and not by address - the latter used to
    be forced by the signature expected from an ep_scan_ready_list()
    callback.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | ep_scan_ready_list(): prepare to splitup  [Al Viro, 2020-10-26, 1 file, -27/+36]

    Take the stuff done before and after the callback into separate
    helpers.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | ep_loop_check_proc(): saner calling conventions  [Al Viro, 2020-10-26, 1 file, -22/+16]

    1) The 'cookie' argument is unused; kill it.
    2) The 'priv' one is always an epoll struct file, and we only care
       about its associated struct eventpoll; pass that instead.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | get rid of ep_push_nested()  [Al Viro, 2020-10-26, 1 file, -25/+4]

    The only remaining user is loop checking. But there we only need to
    check that we have not walked into the epoll we are inserting into -
    we are adding an edge to an acyclic graph, so any loop being created
    will have to pass through the source of that edge.

    So we don't need that array of cookies - we have only one eventpoll to
    watch out for. RIP ep_push_nested(), along with the cookies array.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
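    For context, this is what the loop check guards against as seen from
    userspace; a minimal sketch (error handling trimmed) relying only on
    the documented epoll_ctl() ELOOP behaviour:

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/epoll.h>
        #include <unistd.h>

        int main(void)
        {
                int a = epoll_create1(0);
                int b = epoll_create1(0);
                struct epoll_event ev = { .events = EPOLLIN };

                /* a watches b: adds an edge to the acyclic graph of epoll instances */
                if (epoll_ctl(a, EPOLL_CTL_ADD, b, &ev) == 0)
                        puts("a -> b: ok");

                /* b watching a would close a cycle; the loop check rejects it */
                if (epoll_ctl(b, EPOLL_CTL_ADD, a, &ev) < 0)
                        printf("b -> a: %s\n", strerror(errno));  /* expect ELOOP */

                close(a);
                close(b);
                return 0;
        }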
| * | | | ep_loop_check_proc(): lift pushing the cookie into callers  [Al Viro, 2020-10-26, 1 file, -6/+12]

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | clean reverse_path_check_proc() a bit  [Al Viro, 2020-10-26, 1 file, -17/+9]

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | reverse_path_check_proc(): don't bother with cookies  [Al Viro, 2020-10-26, 1 file, -2/+1]

    We know there are no loops by the time we call it; the only thing we
    care about is overly deep reverse paths.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | reverse_path_check_proc(): sane arguments  [Al Viro, 2020-10-26, 1 file, -7/+5]

    No need to force its calling conventions to match the callback for the
    late, unlamented ep_call_nested()...

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | untangling ep_call_nested(): and there was much rejoicing  [Al Viro, 2020-10-26, 1 file, -32/+11]

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | untangling ep_call_nested(): move push/pop of cookie into the callbacks  [Al Viro, 2020-10-26, 1 file, -9/+9]

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | untangling ep_call_nested(): take pushing cookie into a helper  [Al Viro, 2020-10-26, 1 file, -9/+17]

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | untangling ep_call_nested(): it's all serialized on epmutex.  [Al Viro, 2020-10-26, 1 file, -69/+11]

    IOW:
      * no locking is needed to protect the list
      * the list is actually a stack
      * no need to check ->ctx
      * it can bloody well be a static 5-element array - nobody is going
        to be accessing it in parallel

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | untangling ep_call_nested(): get rid of useless arguments  [Al Viro, 2020-10-26, 1 file, -19/+12]

    ctx is always equal to current, ncalls - to &poll_loop_ncalls.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | epoll: get rid of epitem->nwait  [Al Viro, 2020-10-26, 1 file, -26/+20]

    We use it only to report allocation failures from within the queueing
    callback back to ep_insert(). Might as well use epq.epi for that
    reporting...

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | epoll: switch epitem->pwqlist to single-linked list  [Al Viro, 2020-10-26, 1 file, -26/+25]

    We only traverse it once, to destroy all associated eppoll_entry
    instances at epitem destruction time. The order of traversal is
    irrelevant there.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | | | | Merge tag 'erofs-for-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs  [Linus Torvalds, 2020-12-16, 6 files, -111/+149]

    Pull erofs updates from Gao Xiang:
     "This cycle we got rid of the magical page->mapping type marks for
      temporary pages, which had raised some concern before; such usage is
      now replaced with a specific page->private value. Also switch to
      in-place I/O instead of allocating extra cached pages, to avoid
      direct reclaim under low-memory scenarios. There is a bmap bugfix
      and some minor cleanups as well."

    * tag 'erofs-for-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs:
      erofs: avoid using generic_block_bmap
      erofs: force inplace I/O under low memory scenario
      erofs: simplify try_to_claim_pcluster()
      erofs: insert to managed cache after adding to pcl
      erofs: get rid of magical Z_EROFS_MAPPING_STAGING
      erofs: remove a void EROFS_VERSION macro set in Makefile
| * | | | erofs: avoid using generic_block_bmap  [Huang Jianan, 2020-12-10, 1 file, -19/+7]

    Surprisingly, the `block' in sector_t indicates the number of
    i_blkbits-sized blocks rather than sectors for bmap. In addition,
    considering that buffer_head limits the mapped size to 32 bits, we
    should avoid using generic_block_bmap.

    Link: https://lore.kernel.org/r/20201209115740.18802-1-huangjianan@oppo.com
    Fixes: 9da681e017a3 ("staging: erofs: support bmap")
    Reviewed-by: Chao Yu <yuchao0@huawei.com>
    Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
    Signed-off-by: Huang Jianan <huangjianan@oppo.com>
    Signed-off-by: Guo Weichao <guoweichao@oppo.com>
    [ Gao Xiang: slightly update the commit message description. ]
    Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
| * | | | erofs: force inplace I/O under low memory scenario  [Gao Xiang, 2020-12-09, 2 files, -8/+43]

    Try to forcibly switch to in-place I/O under low-memory scenarios in
    order to avoid direct memory reclaim due to cached page allocation.

    Link: https://lore.kernel.org/r/20201209123717.12430-1-hsiangkao@aol.com
    Reviewed-by: Chao Yu <yuchao0@huawei.com>
    Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
| * | | | erofs: simplify try_to_claim_pcluster()  [Gao Xiang, 2020-12-08, 1 file, -27/+24]

    Simplify try_to_claim_pcluster() by directly using cmpxchg() here (the
    retry loop caused more overhead). Also, move the chain loop detection
    in and rename it to z_erofs_try_to_claim_pcluster().

    Link: https://lore.kernel.org/r/20201208095834.3133565-3-hsiangkao@redhat.com
    Reviewed-by: Chao Yu <yuchao0@huawei.com>
    Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
| * | | | erofs: insert to managed cache after adding to pcl  [Gao Xiang, 2020-12-08, 1 file, -17/+9]

    Previously, there was some concern about calling
    add_to_page_cache_lru() with page->mapping == Z_EROFS_MAPPING_STAGING
    (!= NULL). In contrast, page->private is used instead now, so
    partially revert commit 5ddcee1f3a1c ("erofs: get rid of
    __stagingpage_alloc helper") with some adaptation for simplicity.

    Link: https://lore.kernel.org/r/20201208095834.3133565-2-hsiangkao@redhat.com
    Reviewed-by: Chao Yu <yuchao0@huawei.com>
    Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
| * | | | erofs: get rid of magical Z_EROFS_MAPPING_STAGING  [Gao Xiang, 2020-12-08, 4 files, -40/+71]

    Previously, we played around with a magical page->mapping value for
    short-lived temporary pages, since we needed to identify different
    types of pages in the same pcluster, but both invalidated and
    short-lived temporary pages can have page->mapping == NULL. It was
    considered safe because those temporary pages are all non-LRU /
    non-movable pages.

    This patch uses a specific page->private value to identify short-lived
    pages instead, so it won't rely on page->mapping anymore. Details are
    described in "compress.h" as well.

    Link: https://lore.kernel.org/r/20201208095834.3133565-1-hsiangkao@redhat.com
    Reviewed-by: Chao Yu <yuchao0@huawei.com>
    Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
| * | | | erofs: remove a void EROFS_VERSION macro set in Makefile  [Vladimir Zapolskiy, 2020-12-08, 1 file, -5/+0]

    Since commit 4f761fa253b4 ("erofs: rename errln/infoln/debugln to
    erofs_{err, info, dbg}") the defined macro EROFS_VERSION has had no
    effect, therefore removing it from the Makefile is a non-functional
    change.

    Link: https://lore.kernel.org/r/20201030122839.25431-1-vladimir@tuxera.com
    Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
    Reviewed-by: Chao Yu <yuchao0@huawei.com>
    Signed-off-by: Vladimir Zapolskiy <vladimir@tuxera.com>
    Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
* | | | | Merge tag 'nfsd-5.11' of git://git.linux-nfs.org/projects/cel/cel-2.6  [Linus Torvalds, 2020-12-16, 58 files, -2187/+3462]

    Pull nfsd updates from Chuck Lever:
     "Several substantial changes this time around:

      - Previously, exporting an NFS mount via NFSD was considered to be
        an unsupported feature. With v5.11, the community has attempted to
        make re-exporting a first-class feature of NFSD. This would enable
        the Linux in-kernel NFS server to be used as an intermediate cache
        for a remotely-located primary NFS server, for example, even with
        other NFS server implementations, like a NetApp filer, as the
        primary.

      - A short series of patches brings support for multiple RPC/RDMA
        data chunks per RPC transaction to the Linux NFS server's RPC/RDMA
        transport implementation. This is a part of the RPC/RDMA spec that
        the other premiere NFS/RDMA implementation (Solaris) has had for a
        very long time, and completes the implementation of RPC/RDMA
        version 1 in the Linux kernel's NFS server.

      - Long ago, NFSv4 support was introduced to NFSD using a series of C
        macros that hid dprintk's and goto's. Over time, the kernel's XDR
        implementation has been greatly improved, but these C macros have
        remained and become fallow. A series of patches in this pull
        request completely replaces those macros with the use of current
        kernel XDR infrastructure. Benefits include:

          - More robust input sanitization in NFSD's NFSv4 XDR decoders.

          - Make it easier to use common kernel library functions that use
            XDR stream APIs (for example, GSS-API).

          - Align the structure of the source code with the RFCs so it is
            easier to learn, verify, and maintain our XDR implementation.

          - Removal of more than a hundred hidden dprintk() call sites.

          - Removal of some explicit manipulation of pages to help make
            the eventual transition to xdr->bvec smoother.

      - On top of several related fixes in 5.10-rc, there are a few more
        fixes to get the Linux NFSD implementation of NFSv4.2 inter-server
        copy up to speed.

      And as usual, there is a pinch of seasoning in the form of a
      collection of unrelated minor bug fixes and clean-ups. Many thanks
      to all who contributed this time around!"

    * tag 'nfsd-5.11' of git://git.linux-nfs.org/projects/cel/cel-2.6: (131 commits)
      nfsd: Record NFSv4 pre/post-op attributes as non-atomic
      nfsd: Set PF_LOCAL_THROTTLE on local filesystems only
      nfsd: Fix up nfsd to ensure that timeout errors don't result in ESTALE
      exportfs: Add a function to return the raw output from fh_to_dentry()
      nfsd: close cached files prior to a REMOVE or RENAME that would replace target
      nfsd: allow filesystems to opt out of subtree checking
      nfsd: add a new EXPORT_OP_NOWCC flag to struct export_operations
      Revert "nfsd4: support change_attr_type attribute"
      nfsd4: don't query change attribute in v2/v3 case
      nfsd: minor nfsd4_change_attribute cleanup
      nfsd: simplify nfsd4_change_info
      nfsd: only call inode_query_iversion in the I_VERSION case
      nfs_common: need lock during iterate through the list
      NFSD: Fix 5 seconds delay when doing inter server copy
      NFSD: Fix sparse warning in nfs4proc.c
      SUNRPC: Remove XDRBUF_SPARSE_PAGES flag in gss_proxy upcall
      sunrpc: clean-up cache downcall
      nfsd: Fix message level for normal termination
      NFSD: Remove macros that are no longer used
      NFSD: Replace READ* macros in nfsd4_decode_compound()
      ...
| * | | | | nfsd: Record NFSv4 pre/post-op attributes as non-atomic  [Trond Myklebust, 2020-12-09, 5 files, -2/+15]

    For the case of NFSv4, specify to the client that the pre/post-op
    attributes were not recorded atomically with the main operation.

    Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
    Signed-off-by: Chuck Lever <chuck.lever@oracle.com>