path: root/arch/um
Each entry lists: commit message (author, date, files changed, lines -removed/+added)
* minmax: make generic MIN() and MAX() macros available everywhere (Linus Torvalds, 2024-07-29, 1 file, -0/+2)

  This just standardizes the use of MIN() and MAX() macros, with the very
  traditional semantics. The goal is to use these for C constant expressions
  and for top-level / static initializers, and so be able to simplify the
  min()/max() macros.

  These macro names were used by various kernel code - they are very
  traditional, after all - and all such users have been fixed up, with a few
  different approaches:

   - trivial duplicated macro definitions have been removed

     Note that 'trivial' here means that it's obviously kernel code that
     already included all the major kernel headers, and thus gets the new
     generic MIN/MAX macros automatically.

   - non-trivial duplicated macro definitions are guarded with #ifndef

     This is the "yes, they define their own versions, but no, the include
     situation is not entirely obvious, and maybe they don't get the generic
     version automatically" case.

   - strange use case #1

     A couple of drivers decided that the way they want to describe their
     versioning is with

         #define MAJ 1
         #define MIN 2
         #define DRV_VERSION __stringify(MAJ) "." __stringify(MIN)

     which adds zero value and I just did my Alexander the Great
     impersonation, and rewrote that pointless Gordian knot as

         #define DRV_VERSION "1.2"

     instead.

   - strange use case #2

     A couple of drivers thought that it's a good idea to have a random 'MIN'
     or 'MAX' define for a value or index into a table, rather than the
     traditional macro that takes arguments.

     These values were re-written as C enum's instead. The new function-like
     macros only expand when followed by an open parenthesis, and thus don't
     clash with enum use.

  Happily, there weren't really all that many of these cases, and a lot of
  users already had the pattern of using '#ifndef' guarding (or in one case
  just using '#undef MIN') before defining their own private version that
  does the same thing. I left such cases alone.

  Cc: David Laight <David.Laight@aculab.com>
  Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
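As a rough illustration of the two patterns described in the commit above (these are simplified sketches, not the actual kernel header contents):

    /* Simplified sketch of the #ifndef-guarded pattern: only define a
     * private MIN()/MAX() if the generic one isn't already visible. */
    #ifndef MIN
    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #endif
    #ifndef MAX
    #define MAX(a, b) ((a) > (b) ? (a) : (b))
    #endif

    /* A driver that used to '#define MAX 16' can use an enum constant
     * instead; a function-like MAX() macro only expands when the name is
     * followed by '(', so the identifier below is left untouched. */
    enum { MAX = 16 };

    static inline int clamp_to_table(int x)
    {
            /* Outer MAX( expands as the macro, inner MAX stays the enum. */
            return MAX(0, MIN(x, MAX - 1));
    }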
* Merge tag 'uml-for-linus-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/uml/linux (Linus Torvalds, 2024-07-25, 50 files, -1319/+1043)

  Pull UML updates from Richard Weinberger:

   - Support for preemption
   - i386 Rust support
   - Huge cleanup by Benjamin Berg
   - UBSAN support
   - Removal of dead code

  * tag 'uml-for-linus-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/uml/linux: (41 commits)
    um: vector: always reset vp->opened
    um: vector: remove vp->lock
    um: register power-off handler
    um: line: always fill *error_out in setup_one_line()
    um: remove pcap driver from documentation
    um: Enable preemption in UML
    um: refactor TLB update handling
    um: simplify and consolidate TLB updates
    um: remove force_flush_all from fork_handler
    um: Do not flush MM in flush_thread
    um: Delay flushing syscalls until the thread is restarted
    um: remove copy_context_skas0
    um: remove LDT support
    um: compress memory related stub syscalls while adding them
    um: Rework syscall handling
    um: Add generic stub_syscall6 function
    um: Create signal stack memory assignment in stub_data
    um: Remove stub-data.h include from common-offsets.h
    um: time-travel: fix signal blocking race/hang
    um: time-travel: remove time_exit()
    ...
| * um: vector: always reset vp->opened (Johannes Berg, 2024-07-04, 1 file, -1/+2)

  If open fails, we have already set vp->opened, but it's not reset, so any
  further attempts will just return -ENXIO. Reset vp->opened even if close
  has nothing to do. This helps e.g. with slirp4netns handling only a single
  connection: you can then restart it and open the device again.

  Link: https://patch.msgid.link/20240703184622.df40c5c38461.Id4e20b48938c6019d99e6133227a34ac059db466@changeid
  Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: vector: remove vp->lock (Johannes Berg, 2024-07-04, 2 files, -16/+1)

  This lock is useless; all the places that are using it for some locking
  will already hold the RTNL. Just remove it.

  Link: https://patch.msgid.link/20240703184606.19aa35b14959.I9cf5f2c4e35abd06cc89bf2e990fa755eb8e5f0f@changeid
  Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: register power-off handler (Johannes Berg, 2024-07-04, 1 file, -0/+15)

  Otherwise we always get

      reboot: Power off not available: System halted

  instead, which is really quite pointless.

  Link: https://patch.msgid.link/20240703173839.fcbb538c6686.I3d333f4773cff93c4337c4d128ee0b1b501b3dfa@changeid
  Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
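For context, a minimal sketch of what registering a power-off handler can look like with the kernel's generic sys-off API; this assumes that API is the mechanism used here, and the callback name and body are purely illustrative:

    #include <linux/reboot.h>
    #include <linux/notifier.h>
    #include <linux/init.h>

    /* Hypothetical callback: on UML this would ask the host-side process
     * to exit rather than touching any real power-management hardware. */
    static int uml_power_off_example(struct sys_off_data *data)
    {
            /* ... tell the UML host process to shut down ... */
            return NOTIFY_DONE;
    }

    static int __init uml_power_off_register_example(void)
    {
            register_sys_off_handler(SYS_OFF_MODE_POWER_OFF, SYS_OFF_PRIO_DEFAULT,
                                     uml_power_off_example, NULL);
            return 0;
    }
    __initcall(uml_power_off_register_example);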
| * um: line: always fill *error_out in setup_one_line() (Johannes Berg, 2024-07-04, 1 file, -0/+2)

  The pointer isn't initialized by callers, but I have encountered cases
  where it's still printed; initialize it in all possible cases in
  setup_one_line().

  Link: https://patch.msgid.link/20240703172235.ad863568b55f.Iaa1eba4db8265d7715ba71d5f6bb8c7ff63d27e9@changeid
  Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: Enable preemption in UML (Anton Ivanov, 2024-07-03, 2 files, -1/+3)

  Since userspace state is saved in the MM process, the kernel using the FPU
  still doesn't really need to do anything, so this really is as simple as
  enabling preemption. The irq critical section in sigio_handler() needs
  preempt_disable()/preempt_enable().

  Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Link: https://patch.msgid.link/20240702102549.d2fcea450854.I12f5a53d80ec1e425e66ef272b1e95cb523b608e@changeid
  [rebase, remove FPU save/restore, fix x86/um Makefile, rewrite commit message]
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
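A minimal sketch of the kind of change the last sentence describes; the helper below is a hypothetical stand-in for the existing SIGIO dispatch logic, not the actual UML function:

    #include <linux/preempt.h>

    /* Hypothetical stand-in for the existing SIGIO dispatch code. */
    static void dispatch_pending_sigio(void) { }

    static void sigio_handler_example(void)
    {
            /* Once preemption is allowed on UML, the IRQ critical
             * section must not be preempted in the middle. */
            preempt_disable();
            dispatch_pending_sigio();
            preempt_enable();
    }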
| * um: refactor TLB update handling (Benjamin Berg, 2024-07-03, 9 files, -132/+110)

  Conceptually, we want the memory mappings to always be up to date and
  represent whatever is in the TLB. To ensure that, we need to sync them over
  in the userspace case and for the kernel we need to process the mappings.

  The kernel will call flush_tlb_* if page table entries that were valid
  before become invalid. Unfortunately, this is not the case if entries are
  added.

  As such, change both flush_tlb_* and set_ptes to track the memory range
  that has to be synchronized. For the kernel, we need to execute a
  flush_tlb_kern_* immediately but we can wait for the first page fault in
  case of set_ptes. For userspace in contrast we only store that a range of
  memory needs to be synced and do so whenever we switch to that process.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20240703134536.1161108-13-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
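A purely hypothetical illustration of the range tracking this describes (the struct and function names are invented for the example, not taken from the patch): each flush_tlb_*/set_ptes call only widens the span that still has to be synchronized later.

    #include <linux/minmax.h>

    struct sync_range_example {
            unsigned long start;
            unsigned long end;      /* start == end means "nothing pending" */
    };

    static void track_range(struct sync_range_example *r,
                            unsigned long start, unsigned long end)
    {
            if (r->start == r->end) {
                    /* First range since the last sync. */
                    r->start = start;
                    r->end = end;
                    return;
            }
            /* Otherwise just grow the pending span to cover the new range. */
            r->start = min(r->start, start);
            r->end = max(r->end, end);
    }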
| * um: simplify and consolidate TLB updates (Benjamin Berg, 2024-07-03, 3 files, -317/+99)

  The HVC update was mostly used to compress consecutive calls into one. This
  is mostly relevant for userspace where it is already handled by the syscall
  stub code.

  Simplify the whole logic and consolidate it for both kernel and userspace.
  This does remove the sequential syscall compression for the kernel;
  however, that shouldn't be the main factor in most runs.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20240703134536.1161108-12-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: remove force_flush_all from fork_handler (Benjamin Berg, 2024-07-03, 3 files, -33/+15)

  There should be no need for this. It may be that this used to work around
  another issue where after a clone the MM was in a bad state.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20240703134536.1161108-11-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: Do not flush MM in flush_thread (Benjamin Berg, 2024-07-03, 2 files, -5/+24)

  There should be no need to flush the memory in flush_thread. Doing this
  likely worked around some issue where memory was still incorrectly mapped
  when creating or cloning an MM.

  With the removal of the special clone path, that isn't relevant anymore.
  However, add the flush into MM initialization so that any new userspace MM
  is guaranteed to be clean.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20240703134536.1161108-10-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: Delay flushing syscalls until the thread is restarted (Benjamin Berg, 2024-07-03, 7 files, -42/+49)

  As running the syscalls is expensive due to context switches, we should do
  so as late as possible in case more syscalls need to be queued later on.
  This will also benefit a later move to a SECCOMP enabled userspace as in
  that case the need for extra context switches is removed entirely.

  Signed-off-by: Benjamin Berg <benjamin@sipsolutions.net>
  Link: https://patch.msgid.link/20240703134536.1161108-9-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: remove copy_context_skas0 (Benjamin Berg, 2024-07-03, 6 files, -178/+10)

  The kernel flushes the memory ranges anyway for CoW and does not assume
  that the userspace process has anything set up already. So, start with a
  fresh process for the new mm context.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20240703134536.1161108-8-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: remove LDT support (Benjamin Berg, 2024-07-03, 5 files, -39/+8)

  The current LDT code has a few issues that mean it should be redone in a
  different way once we always start with a fresh MM even when cloning.

  In a new and better world, the kernel would just ensure its own LDT is
  clear at startup. At that point, all that is needed is a simple function to
  populate the LDT from another MM in arch_dup_mmap combined with some
  tracking of the installed LDT entries for each MM.

  Note that the old implementation was even incorrect with regard to reading,
  as it copied out the LDT entries in the internal format rather than
  converting them to the userspace structure.

  Removal should be fine as the LDT is not used for thread-local storage
  anymore.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20240703134536.1161108-7-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: compress memory related stub syscalls while adding them (Benjamin Berg, 2024-07-03, 1 file, -0/+39)

  To keep the number of syscalls that the stub has to do lower, compress two
  consecutive syscalls of the same type if the second is just a continuation
  of the first.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20240703134536.1161108-6-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
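A hypothetical sketch of what such compression can look like (the struct and helper are invented for illustration, not the patch's actual data structures): a new request is folded into the previously queued one when it is the same operation and simply extends the same address range.

    #include <linux/types.h>

    struct queued_stub_call_example {
            int op;                 /* e.g. a map or unmap request */
            unsigned long addr;
            unsigned long len;
    };

    static bool try_merge(struct queued_stub_call_example *prev, int op,
                          unsigned long addr, unsigned long len)
    {
            if (!prev || prev->op != op)
                    return false;
            if (prev->addr + prev->len != addr)
                    return false;           /* not a continuation */
            prev->len += len;               /* extend the earlier call */
            return true;
    }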
| * um: Rework syscall handling (Benjamin Berg, 2024-07-03, 11 files, -163/+258)

  Rework syscall handling to be platform independent. Also create a clean
  split between queueing of syscalls and flushing them out, removing the need
  to keep state in the code that triggers the syscalls.

  The code adds syscall_data_len to the global mm_id structure. This will be
  used later to allow surrounding code to track whether syscalls still need
  to run and if errors occurred.

  Signed-off-by: Benjamin Berg <benjamin@sipsolutions.net>
  Link: https://patch.msgid.link/20240703134536.1161108-5-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: Create signal stack memory assignment in stub_data (Benjamin Berg, 2024-07-03, 5 files, -8/+24)

  When we switch to use seccomp, we need both the signal stack and other data
  (i.e. syscall information) to co-exist in the stub data. To facilitate
  this, start by defining separate memory areas for the stack and syscall
  data.

  This moves the signal stack onto a new page as the memory area is not
  sufficient to hold both signal stack and syscall information.

  Only change the signal stack setup for now, as the syscall code will be
  reworked later.

  Signed-off-by: Benjamin Berg <benjamin@sipsolutions.net>
  Link: https://patch.msgid.link/20240703134536.1161108-3-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: Remove stub-data.h include from common-offsets.h (Benjamin Berg, 2024-07-03, 1 file, -5/+0)

  Further commits will require values from common-offsets.h inside
  stub-data.h. Resolve the possible circular dependency and simply use
  offsetof() inside stub_32.h and stub_64.h.

  Signed-off-by: Benjamin Berg <benjamin@sipsolutions.net>
  Link: https://patch.msgid.link/20240703134536.1161108-2-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: time-travel: fix signal blocking race/hang (Johannes Berg, 2024-07-03, 1 file, -20/+98)

  When signals are hard-blocked in order to do time-travel socket processing,
  we set signals_blocked and then handle SIGIO signals by setting the SIGIO
  bit in signals_pending. When unblocking, we first set signals_blocked to 0,
  and then handle all pending signals. We have to set it first, so that we
  can again properly block/unblock inside the unblock, if the time-travel
  handlers need to be processed.

  Unfortunately, this is racy. We can get into this situation:

      // signals_pending = SIGIO_MASK
      unblock_signals_hard()
        signals_blocked = 0;
        if (signals_pending && signals_enabled) {
          block_signals();
          unblock_signals()
          ...
            sig_handler_common(SIGIO, NULL, NULL);
              sigio_handler()
              ...
                sigio_reg_handler()
                  irq_do_timetravel_handler()
                    reg->timetravel_handler() == vu_req_interrupt_comm_handler()
                      vu_req_read_message()
                        vhost_user_recv_req()
                          vhost_user_recv()
                            vhost_user_recv_header()
                            // reads 12 bytes header of 20 bytes message
      <-- receive SIGIO here
      <-- sig_handler()
            int enabled = signals_enabled; // 1
            if ((signals_blocked || !enabled) && (sig == SIGIO)) {
              if (!signals_blocked && time_travel_mode == TT_MODE_EXTERNAL)
                sigio_run_timetravel_handlers()
                  _sigio_handler()
                    sigio_reg_handler()
                      ... as above ...
                      vhost_user_recv_header()
                      // reads 8 bytes that were message payload
                      // as if it were header - but aborts since
                      // it then gets -EAGAIN
            ...
      --> end signal handler -->
                            // continue in vhost_user_recv()
                            // full_read() for 8 bytes payload busy loops
                            // entire process hangs here

  Conceptually, to fix this, we need to ensure that the signal handler cannot
  run while we hard-unblock signals. The thing that makes this more complex
  is that we can be doing hard-block/unblock while unblocking.

  Introduce a new signals_blocked_pending variable that we can keep at
  non-zero as long as pending signals are being processed, then we only need
  to ensure it's decremented safely and the signal handler will only
  increment it if it's already non-zero (or signals_blocked is set, of
  course.)

  Note also that only the outermost call to hard-unblock is allowed to
  decrement signals_blocked_pending, since it could otherwise reach zero in
  an inner call, and leave the same race happening if the timetravel_handler
  loops, but that's basically required of it.

  Fixes: d6b399a0e02a ("um: time-travel/signals: fix ndelay() in interrupt")
  Link: https://patch.msgid.link/20240703110144.28034-2-johannes@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: time-travel: remove time_exit() (Johannes Berg, 2024-07-03, 1 file, -6/+0)

  This function is unused and unneeded, remove it.

  Link: https://patch.msgid.link/20240703130105.02b3a974acb7.I7264821f7cfa17ea713b7a3e4787aa41a3107d01@changeid
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: harddog: add missing MODULE_DESCRIPTION() macro (Jeff Johnson, 2024-07-03, 1 file, -0/+1)

  With ARCH=um, make allmodconfig && make W=1 C=1 reports:

      WARNING: modpost: missing MODULE_DESCRIPTION() in arch/um/drivers/harddog.o

  Add the missing invocation of the MODULE_DESCRIPTION() macro.

  Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
  Link: https://patch.msgid.link/20240702-md-um-arch-um-drivers-v1-1-79e4f50b5bab@quicinc.com
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
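The fix is a single line of this form; the description string below is illustrative, not necessarily the exact wording that was committed:

    #include <linux/module.h>

    /* Silences the W=1 modpost warning for the module object. */
    MODULE_DESCRIPTION("UML hardware watchdog driver");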
| * um: add shared memory optimisation for time-travel=ext (Johannes Berg, 2024-07-03, 1 file, -12/+118)

  With external time travel, a LOT of messages can end up being exchanged on
  the socket, taking a significant amount of time just to do that. Add a new
  shared memory optimisation to that, where a number of changes are made:

   - the controller sends a client ID and a shared memory FD (and a logging
     FD we don't use) in the ACK message to the initial START

   - the shared memory holds the current time and the free_until value, so
     that there's no need to exchange messages for that

   - if the client that's running has shared memory support, any client (the
     running one included) can request the next time it wants to run inside
     the shared memory, rather than sending a message, by also updating the
     free_until value

   - when shared memory is enabled, RUN/WAIT messages no longer have an ACK,
     further cutting down on messages

  Together, this can reduce the number of messages very significantly, and
  reduce overall test/simulation run time.

  Co-developed-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
  Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
  Link: https://patch.msgid.link/20240702192118.6ad0a083f574.Ie41206c8ce4507fe26b991937f47e86c24ca7a31@changeid
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
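Purely as an illustration of the idea (this is not the actual um_timetravel shared-memory ABI; the struct below is invented), the shared page conceptually carries the two values that would otherwise need message round-trips:

    #include <linux/types.h>

    /* Hypothetical layout: the controller and all clients map the same page,
     * so reading "now" or publishing the next requested run time is just a
     * memory access instead of a RUN/WAIT message exchange. */
    struct tt_sched_shm_example {
            __u64 current_time;     /* simulation time, advanced by the controller */
            __u64 free_until;       /* earliest point any participant wants to run */
    };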
| * um: add mmap/mremap OS calls (Johannes Berg, 2024-07-03, 2 files, -0/+25)

  For the upcoming shared-memory time-travel external optimisations, we need
  to be able to mmap/mremap. Add the necessary OS calls.

  Link: https://patch.msgid.link/20240702192118.ca4472963638.Ic2da1d3a983fe57340c1b693badfa9c5bd2d8c61@changeid
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: generalize os_rcv_fd (Johannes Berg, 2024-07-03, 6 files, -41/+54)

  Change os_rcv_fd() to os_rcv_fd_msg() that can more generally receive any
  number of FDs in any kind of message.

  Link: https://patch.msgid.link/20240702192118.40b78b2bfe4e.Ic6ec12d72630e5bcae1e597d6bd5c6f29f441563@changeid
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
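For readers unfamiliar with FD passing, a self-contained sketch (standard POSIX, not the UML helper itself) of receiving a message plus several file descriptors carried in one SCM_RIGHTS control message, which is the kind of parsing a generalized os_rcv_fd_msg()-style helper has to do:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>

    #define MAX_SCM_FDS_EXAMPLE 4

    static ssize_t recv_fds_example(int sock, void *buf, size_t len,
                                    int *fds, int *nfds)
    {
            char cbuf[CMSG_SPACE(MAX_SCM_FDS_EXAMPLE * sizeof(int))];
            struct iovec iov = { .iov_base = buf, .iov_len = len };
            struct msghdr msg = {
                    .msg_iov = &iov, .msg_iovlen = 1,
                    .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
            };
            struct cmsghdr *cmsg;
            ssize_t n = recvmsg(sock, &msg, 0);

            *nfds = 0;
            if (n < 0)
                    return -1;

            for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
                    if (cmsg->cmsg_level != SOL_SOCKET ||
                        cmsg->cmsg_type != SCM_RIGHTS)
                            continue;
                    /* Number of FDs = control payload size / sizeof(int). */
                    *nfds = (cmsg->cmsg_len - CMSG_LEN(0)) / sizeof(int);
                    memcpy(fds, CMSG_DATA(cmsg), *nfds * sizeof(int));
            }
            return n;
    }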
| * um: time-travel: support time-travel protocol broadcast messages (Mordechay Goodstein, 2024-07-03, 3 files, -0/+71)

  Add a message type to the time-travel protocol to broadcast a small
  (64-bit) value to all participants in a simulation. The main use case is to
  have an identical message come to all participants in a simulation, e.g. to
  separate out logs for different tests running in a single simulation.

  Down in the guts of time_travel_handle_message() we can't use printk() and
  not even printk_deferred(), so just store the message and print it at the
  start of the userspace() function. Unfortunately this means that other
  prints in the kernel can actually bypass the message, but in most cases
  where this is used, for example to separate test logs, userspace will be
  involved. Also, even if we could use printk_deferred(), we'd still need to
  flush it out in the userspace() function since otherwise userspace messages
  might cross it. As a result, this is a reasonable compromise, there's no
  need to have any core changes and it solves the main use case we have for
  it.

  Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
  Link: https://patch.msgid.link/20240702192118.c4093bc5b15e.I2ca8d006b67feeb866ac2017af7b741c9e06445a@changeid
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: enable UBSAN (Johannes Berg, 2024-07-03, 1 file, -0/+1)

  We can select ARCH_HAS_UBSAN, it works just fine. It had been enabled and
  we even used it, but then commit 890a64810d59 ("ubsan: Restore dependency
  on ARCH_HAS_UBSAN") (correctly) disabled it again, enable ARCH_HAS_UBSAN to
  get it.

  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
  Link: https://patch.msgid.link/20240701220034.995eb04d656d.Ia29fe091b207fe66b5e26298c1e427ebcf131642@changeid
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um/mm: remove redundant assignment of max_low_pfn (Wei Yang, 2024-07-03, 1 file, -1/+0)

  The current calculation of max_low_pfn was introduced in commit
  af84eab20891 ("[PATCH] uml: fix LVM crash"). It is intended to set
  max_low_pfn to the same value as max_pfn. But I am not sure why max_pfn is
  set to totalram_pages, which represents the number of usable pages in the
  system instead of an absolute page frame number. (The change history stops
  there.)

  We have already calculated it in setup_physmem(), so it is not necessary to
  do it again.

  Also this would help changing totalram_pages accounting, since we plan to
  move the accounting into __free_pages_core(). With this change,
  totalram_pages may not represent the total usable pages at this point,
  since some pages would be deferred initialized.

  Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
  CC: Jeff Dike <jdike@linux.intel.com>
  Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
  Cc: Alasdair G Kergon <agk@redhat.com>
  CC: Andrew Morton <akpm@linux-foundation.org>
  CC: Mike Rapoport (IBM) <rppt@kernel.org>
  CC: David Hildenbrand <david@redhat.com>
  Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
  Link: https://patch.msgid.link/20240615034150.2958-1-richard.weiyang@gmail.com
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * arch: um: rust: Add i386 support for Rust (David Gow, 2024-07-03, 1 file, -1/+1)

  At present, Rust in the kernel only supports 64-bit x86, so UML has
  followed suit. However, it's significantly easier to support 32-bit i386 on
  UML than on bare metal, as UML does not use the -mregparm option (which
  alters the ABI), which is not yet supported by rustc[1].

  Add support for CONFIG_RUST on um/i386, by adding a new target config to
  generate_rust_target, and replacing various checks on CONFIG_X86_64 to also
  support CONFIG_X86_32.

  We still use generate_rust_target, rather than a built-in rustc target, in
  order to match x86_64, provide a future place for -mregparm, and more
  easily disable floating point instructions.

  With these changes, the KUnit tests pass with:

      kunit.py run --make_options LLVM=1 --kconfig_add CONFIG_RUST=y --kconfig_add CONFIG_64BIT=n --kconfig_add CONFIG_FORTIFY_SOURCE=n

  An earlier version of these changes was proposed on the Rust-for-Linux
  github[2].

  [1]: https://github.com/rust-lang/rust/issues/116972
  [2]: https://github.com/Rust-for-Linux/linux/pull/966

  Signed-off-by: David Gow <davidgow@google.com>
  Link: https://patch.msgid.link/20240604224052.3138504-1-davidgow@google.com
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: Remove /proc/sysemu support code (Tiwei Bie, 2024-07-03, 1 file, -67/+0)

  Currently /proc/sysemu will never be registered, as sysemu_supported is
  initialized to zero implicitly and no code updates it. And there is also
  nothing to configure via sysemu in UML anymore.

  Signed-off-by: Tiwei Bie <tiwei.btw@antgroup.com>
  Link: https://patch.msgid.link/20240527134024.1539848-3-tiwei.btw@antgroup.com
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: Remove unused ncpus variable (Tiwei Bie, 2024-07-03, 2 files, -4/+0)

  It's no longer used. And uml_ncpus_setup doesn't exist anymore.

  Signed-off-by: Tiwei Bie <tiwei.btw@antgroup.com>
  Link: https://patch.msgid.link/20240527134024.1539848-2-tiwei.btw@antgroup.com
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * ubd: Remove unused mutex 'ubd_mutex' (Dr. David Alan Gilbert, 2024-07-03, 1 file, -1/+0)

  Commit fb5d1d389c9e ("ubd: open the backing files in ubd_add") removed the
  last use of ubd_mutex. Remove it.

  Build and kernel startup test only.

  Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
  Link: https://patch.msgid.link/20240505001508.255096-1-linux@treblig.org
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: time-travel: fix time-travel-start option (Johannes Berg, 2024-07-03, 1 file, -2/+2)

  We need to have the = as part of the option so that the value can be parsed
  properly. Also document that it must be given in nanoseconds, not seconds.

  Fixes: 065038706f77 ("um: Support time travel mode")
  Link: https://patch.msgid.link/20240417102744.14b9a9d4eba0.Ib22e9136513126b2099d932650f55f193120cd97@changeid
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: Select HAS_IOPORT for UML_IOMEM_EMULATION (Niklas Schnelle, 2024-07-03, 1 file, -1/+2)

  In a future patch HAS_IOPORT=n will disable inb()/outb() and friends at
  compile time. UML supports these via its UML_IOMEM_EMULATION so let that
  select HAS_IOPORT and also reflect this in NO_IOPORT_MAP.

  Co-developed-by: Arnd Bergmann <arnd@kernel.org>
  Signed-off-by: Arnd Bergmann <arnd@kernel.org>
  Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
  Link: https://patch.msgid.link/20240403124300.65379-2-schnelle@linux.ibm.com
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: Remove obsolete pcap driver (Anton Ivanov, 2024-07-03, 5 files, -299/+2)

  Remove the pcap driver in UML. It is obsolete. It does not build on recent
  systems due to changes in libpcap and its dependencies. The vector driver's
  raw transport in UML provides identical functionality.

  Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Link: https://patch.msgid.link/20240328132424.376456-1-anton.ivanov@cambridgegreys.com
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: chan: use blocking IO for console output for time-travel (Benjamin Berg, 2024-07-03, 4 files, -21/+74)

  When in time-travel mode (infinite-cpu or external) time should not pass
  for writing to the console. As such, it makes sense to put the FD for the
  output side into blocking mode and simply let any write to it hang. If we
  did not do this, then time could pass waiting for the console to become
  writable again. This is not desirable as it has random effects on the
  clock between runs.

  Implement this by duplicating the FD if output is active in a relevant mode
  and setting the duplicate to be blocking. This avoids changing the input
  channel to be blocking should it exist.

  After this, use the blocking FD for all write operations and do not
  allocate an IRQ if it is set. Without time-travel mode fd_out will always
  match fd_in and IRQs are registered.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20231018123643.1255813-4-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
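As background, a generic helper sketch (standard POSIX, not the UML channel code) for switching an FD between blocking and non-blocking mode, which is the primitive the change above relies on:

    #include <fcntl.h>

    /* Set or clear O_NONBLOCK on an FD. (The flag belongs to the open file
     * description, so duplicates created from the same open() share it.) */
    static int set_fd_blocking_example(int fd, int blocking)
    {
            int flags = fcntl(fd, F_GETFL);

            if (flags < 0)
                    return -1;
            if (blocking)
                    flags &= ~O_NONBLOCK;
            else
                    flags |= O_NONBLOCK;
            return fcntl(fd, F_SETFL, flags);
    }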
| * um: chan_user: retry partial writes (Benjamin Berg, 2024-07-03, 1 file, -3/+15)

  In the next commit, we are going to set the output FD to be blocking. Once
  that is done, the write() may be short if an interrupt happens while it is
  writing out data. As such, to properly catch an EINTR error, we need to
  retry the write.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20231018123643.1255813-3-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
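The classic userspace pattern being described, as a self-contained sketch (not the actual chan_user code): keep writing until the whole buffer is out, restarting on EINTR and accounting for short writes.

    #include <unistd.h>
    #include <errno.h>

    static ssize_t write_all_example(int fd, const char *buf, size_t len)
    {
            size_t done = 0;

            while (done < len) {
                    ssize_t n = write(fd, buf + done, len - done);

                    if (n < 0) {
                            if (errno == EINTR)
                                    continue;       /* interrupted: retry */
                            return -errno;          /* real error */
                    }
                    done += n;                      /* may be a short write */
            }
            return done;
    }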
| * um: chan_user: catch EINTR when reading and writing (Benjamin Berg, 2024-07-03, 1 file, -2/+2)

  If the read/write function returns an error then we expect to see an
  event/IRQ later on. However, this will only happen after an EAGAIN as we
  are using edge based event triggering. As such, EINTR needs to be caught
  should it happen.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20231018123643.1255813-2-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| * um: irqs: process outstanding IRQs when unblocking signals (Benjamin Berg, 2024-07-03, 1 file, -29/+49)

  When in time-travel mode, the eventfd events are read even when signals are
  blocked as SIGIO still needs to be processed. In this case, the event is
  cleared on the eventfd but the IRQ still needs to be fired later.

  We did already ensure that the SIGIO handler is run again. However, the FDs
  are configured to be level triggered, so that eventfd will not notify
  again. As such, add some logic to mark the IRQ as pending and process it at
  the next opportunity.

  To avoid duplication, reuse the logic used for the suspend/resume case.
  This does not really change anything except for delaying running the IRQs
  with timetravel_handler at a slightly later point in time (and possibly
  running non-timetravel IRQs that shouldn't happen earlier).

  While at it, move marking as pending into irq_event_handler as that is the
  more logical place for it to happen.

  Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
  Link: https://patch.msgid.link/20231018123643.1255813-1-benjamin@sipsolutions.net
  Signed-off-by: Johannes Berg <johannes.berg@intel.com>
* | Merge tag 'irq-core-2024-07-15' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2024-07-22, 1 file, -6/+10)

  Pull interrupt subsystem updates from Thomas Gleixner:

  "Core:

   - Provide a new mechanism to create interrupt domains. The existing
     interfaces have already too many parameters and it's a pain to expand
     any of this for new required functionality.

     The new function takes a pointer to a data structure as argument. The
     data structure combines all existing parameters and allows for easy
     extension.

     The first extension for this is to handle the instantiation of generic
     interrupt chips at the core level and to allow drivers to provide extra
     init/exit callbacks.

     This is necessary to do the full interrupt chip initialization before
     the new domain is published, so that concurrent usage sites won't see a
     half initialized interrupt domain. Similar problems exist on teardown.

     This has turned out to be a real problem due to the deferred and
     parallel probing which was added in recent years.

     Handling this at the core level allows to remove quite some accrued
     boilerplate code in existing drivers and avoids horrible workarounds at
     the driver level.

   - The usual small improvements all over the place

  Drivers:

   - Add support for LAN966x OIC and RZ/Five SoC

   - Split the STM ExtI driver into a microcontroller and a SMP version to
     allow building the latter as a module for multi-platform kernels

   - Enable MSI support for Armada 370XP on platforms which do not support
     IPIs

   - The usual small fixes and enhancements all over the place"

  * tag 'irq-core-2024-07-15' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (59 commits)
    irqdomain: Fix the kernel-doc and plug it into Documentation
    genirq: Set IRQF_COND_ONESHOT in request_irq()
    irqchip/imx-irqsteer: Handle runtime power management correctly
    irqchip/gic-v3: Pass #redistributor-regions to gic_of_setup_kvm_info()
    irqchip/bcm2835: Enable SKIP_SET_WAKE and MASK_ON_SUSPEND
    irqchip/gic-v4: Make sure a VPE is locked when VMAPP is issued
    irqchip/gic-v4: Substitute vmovp_lock for a per-VM lock
    irqchip/gic-v4: Always configure affinity on VPE activation
    Revert "irqchip/dw-apb-ictl: Support building as module"
    Revert "Loongarch: Support loongarch avec"
    arm64: Kconfig: Allow build irq-stm32mp-exti driver as module
    ARM: stm32: Allow build irq-stm32mp-exti driver as module
    irqchip/stm32mp-exti: Allow building as module
    irqchip/stm32mp-exti: Rename internal symbols
    irqchip/stm32-exti: Split MCU and MPU code
    arm64: Kconfig: Select STM32MP_EXTI on STM32 platforms
    ARM: stm32: Use different EXTI driver on ARMv7m and ARMv7a
    irqchip/stm32-exti: Add CONFIG_STM32MP_EXTI
    irqchip/dw-apb-ictl: Support building as module
    irqchip/riscv-aplic: Simplify the initialization code
    ...
| * um/virt-pci: Use irq_domain_instantiate() (Herve Codina, 2024-06-17, 1 file, -6/+10)

  um_pci_init() uses __irq_domain_add(). With the introduction of
  irq_domain_instantiate(), __irq_domain_add() becomes obsolete.

  In order to fully remove __irq_domain_add(), use directly
  irq_domain_instantiate().

  [ tglx: Fixup struct initializer ]

  Signed-off-by: Herve Codina <herve.codina@bootlin.com>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Link: https://lore.kernel.org/r/20240614173232.1184015-20-herve.codina@bootlin.com
* | Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost (Linus Torvalds, 2024-07-19, 2 files, -8/+12)

  Pull virtio updates from Michael Tsirkin:
  "Several new features here:

   - Virtio find vqs API has been reworked (required to fix the scalability
     issue we have with adminq, which I hope to merge later in the cycle)

   - vDPA driver for Marvell OCTEON

   - virtio fs performance improvement

   - mlx5 migration speedups

   Fixes, cleanups all over the place"

  * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: (56 commits)
    virtio: rename virtio_find_vqs_info() to virtio_find_vqs()
    virtio: remove unused virtio_find_vqs() and virtio_find_vqs_ctx() helpers
    virtio: convert the rest virtio_find_vqs() users to virtio_find_vqs_info()
    virtio_balloon: convert to use virtio_find_vqs_info()
    virtiofs: convert to use virtio_find_vqs_info()
    scsi: virtio_scsi: convert to use virtio_find_vqs_info()
    virtio_net: convert to use virtio_find_vqs_info()
    virtio_crypto: convert to use virtio_find_vqs_info()
    virtio_console: convert to use virtio_find_vqs_info()
    virtio_blk: convert to use virtio_find_vqs_info()
    virtio: rename find_vqs_info() op to find_vqs()
    virtio: remove the original find_vqs() op
    virtio: call virtio_find_vqs_info() from virtio_find_single_vq() directly
    virtio: convert find_vqs() op implementations to find_vqs_info()
    virtio_pci: convert vp_*find_vqs() ops to find_vqs_info()
    virtio: introduce virtio_queue_info struct and find_vqs_info() config op
    virtio: make virtio_find_single_vq() call virtio_find_vqs()
    virtio: make virtio_find_vqs() call virtio_find_vqs_ctx()
    caif_virtio: use virtio_find_single_vq() for single virtqueue finding
    vdpa/mlx5: Don't enable non-active VQs in .set_vq_ready()
    ...
| * | virtio: rename virtio_find_vqs_info() to virtio_find_vqs() (Jiri Pirko, 2024-07-17, 1 file, -1/+1)

  Since the original virtio_find_vqs() is no longer present, rename
  virtio_find_vqs_info() back to virtio_find_vqs().

  Signed-off-by: Jiri Pirko <jiri@nvidia.com>
  Message-Id: <20240708074814.1739223-20-jiri@resnulli.us>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * | virtio: convert the rest virtio_find_vqs() users to virtio_find_vqs_info() (Jiri Pirko, 2024-07-17, 1 file, -3/+5)

  Instead of passing separate names and callbacks arrays to
  virtio_find_vqs(), have one array of virtio_queue_info structs and pass it
  to virtio_find_vqs_info().

  Suggested-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
  Signed-off-by: Jiri Pirko <jiri@nvidia.com>
  Message-Id: <20240708074814.1739223-18-jiri@resnulli.us>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * | virtio: rename find_vqs_info() op to find_vqs() (Jiri Pirko, 2024-07-17, 1 file, -1/+1)

  Since the original find_vqs() is no longer present, rename find_vqs_info()
  back to find_vqs().

  Signed-off-by: Jiri Pirko <jiri@nvidia.com>
  Message-Id: <20240708074814.1739223-10-jiri@resnulli.us>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * | virtio: convert find_vqs() op implementations to find_vqs_info() (Jiri Pirko, 2024-07-17, 1 file, -6/+8)

  Convert existing find_vqs() transport implementations to use
  find_vqs_info() config op.

  Signed-off-by: Jiri Pirko <jiri@nvidia.com>
  Message-Id: <20240708074814.1739223-7-jiri@resnulli.us>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* | um: Use generic runtime constant implementation (Linus Torvalds, 2024-07-17, 1 file, -0/+1)

  UML should not be using the architecture native runtime constants, since
  that requires also having the appropriate instruction fixups (and all the
  linker script details).

  Not that using that code would be impossible, but it's not worth it. Just
  point UML at the generic version.

  Reported-by: Nathan Chancellor <nathan@kernel.org>
  Fixes: e3c92e81711d ("runtime constants: add x86 architecture support")
  Link: https://lore.kernel.org/all/20240716143644.GA1827132@thelio-3990X/
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Merge tag 'asm-generic-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic (Linus Torvalds, 2024-07-16, 2 files, -1/+9)

  Pull asm-generic updates from Arnd Bergmann:
  "Most of this is part of my ongoing work to clean up the system call
   tables. In this bit, all of the newer architectures are converted to use
   the machine readable syscall.tbl format instead in place of complex
   macros in include/uapi/asm-generic/unistd.h. This follows an earlier
   series that fixed various API mismatches and in turn is used as the base
   for planned simplifications.

   The other two patches are dead code removal and a warning fix"

  * tag 'asm-generic-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
    vmlinux.lds.h: catch .bss..L* sections into BSS")
    fixmap: Remove unused set_fixmap_offset_io()
    riscv: convert to generic syscall table
    openrisc: convert to generic syscall table
    nios2: convert to generic syscall table
    loongarch: convert to generic syscall table
    hexagon: use new system call table
    csky: convert to generic syscall table
    arm64: rework compat syscall macros
    arm64: generate 64-bit syscall.tbl
    arm64: convert unistd_32.h to syscall.tbl format
    arc: convert to generic syscall table
    clone3: drop __ARCH_WANT_SYS_CLONE3 macro
    kbuild: add syscall table generation to scripts/Makefile.asm-headers
    kbuild: verify asm-generic header list
    loongarch: avoid generating extra header files
    um: don't generate asm/bpf_perf_event.h
    csky: drop asm/gpio.h wrapper
    syscalls: add generic scripts/syscall.tbl
| * | um: don't generate asm/bpf_perf_event.h (Arnd Bergmann, 2024-07-10, 2 files, -1/+9)

  If we start validating the existence of the asm-generic side of generated
  headers, this one causes a warning:

      make[3]: *** No rule to make target 'arch/um/include/generated/asm/bpf_perf_event.h', needed by 'all'. Stop.

  The problem is that the asm-generic header only exists for the uapi
  variant, but arch/um has no uapi headers and instead uses the x86 userspace
  API. Add a custom file with an explicit redirect to avoid this.

  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
* | block: move the nonrot flag to queue_limits (Christoph Hellwig, 2024-06-19, 1 file, -1/+0)

  Move the nonrot flag into the queue_limits feature field so that it can be
  set atomically with the queue frozen.

  Use the chance to switch to defaulting to non-rotational and require the
  driver to opt into rotational, which matches the polarity of the sysfs
  interface.

  For the z2ram, ps3vram, 2x memstick, ubiblock and dcssblk the new
  rotational flag is not set as they clearly are not rotational despite this
  being a behavior change. There are some other drivers that unconditionally
  set the rotational flag to keep the existing behavior as they arguably can
  be used on rotational devices even if that is probably not their main use
  today (e.g. virtio_blk and drbd).

  The flag is automatically inherited in blk_stack_limits matching the
  existing behavior in dm and md.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
  Reviewed-by: Hannes Reinecke <hare@suse.de>
  Link: https://lore.kernel.org/r/20240617060532.127975-15-hch@lst.de
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
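As a hedged sketch of what the opt-in looks like under the new scheme (the BLK_FEAT_ROTATIONAL flag name and the blk_alloc_disk() signature are stated from memory of the 6.11 block work, so treat them as assumptions): a driver for a genuinely rotational device sets the feature in its queue_limits before allocating the disk.

    #include <linux/blkdev.h>

    static struct gendisk *alloc_rotational_disk_example(void)
    {
            struct queue_limits lim = {
                    /* Opt in: the default is now non-rotational. */
                    .features = BLK_FEAT_ROTATIONAL,
            };

            return blk_alloc_disk(&lim, NUMA_NO_NODE);
    }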
* | block: move cache control settings out of queue->flags (Christoph Hellwig, 2024-06-19, 1 file, -1/+1)

  Move the cache control settings into the queue_limits so that the flags can
  be set atomically with the device queue frozen.

  Add new features and flags field for the driver set flags, and internal
  (usually sysfs-controlled) flags in the block layer. Note that we'll
  eventually remove enough fields from queue_limits to bring it back to the
  previous size.

  The disable flag is inverted compared to the previous meaning, which means
  it now survives a rescan, similar to the max_sectors and
  max_discard_sectors user limits.

  The FLUSH and FUA flags are now inherited by blk_stack_limits, which
  simplified the code in dm a lot, but also causes a slight behavior change
  in that dm-switch and dm-unstripe now advertise a write cache despite
  setting num_flush_bios to 0. The I/O path will handle this gracefully, but
  as far as I can tell the lack of num_flush_bios and thus flush support is a
  pre-existing data integrity bug in those targets that really needs fixing,
  after which a non-zero num_flush_bios should be required in dm for targets
  that map to underlying devices.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
  Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
  Reviewed-by: Hannes Reinecke <hare@suse.de>
  Link: https://lore.kernel.org/r/20240617060532.127975-14-hch@lst.de
  Signed-off-by: Jens Axboe <axboe@kernel.dk>