author | Benjamin Berg <benjamin.berg@intel.com> | 2024-07-03 15:45:33 +0200
---|---|---
committer | Johannes Berg <johannes.berg@intel.com> | 2024-07-03 17:09:49 +0200
commit | 5168f6b4a4d8fb5f731f2107924f72dffeae84fc |
tree | 5b3f6caa14658ba1956d57eae0a92b4606a3950f | /arch/um/kernel/skas/mmu.c
parent | um: Delay flushing syscalls until the thread is restarted |
um: Do not flush MM in flush_thread
There should be no need to flush the memory in flush_thread. Doing this
likely worked around some issue where memory was still incorrectly
mapped when creating or cloning an MM.
With the removal of the special clone path, that isn't relevant anymore.
However, add the flush into MM initialization so that any new userspace
MM is guaranteed to be clean.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
Link: https://patch.msgid.link/20240703134536.1161108-10-benjamin@sipsolutions.net
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
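
The flush_thread() side of this change is not visible below, since the diffstat is limited to arch/um/kernel/skas/mmu.c. As a rough sketch, assuming flush_thread() previously issued the address-space unmap itself (the pre-patch exec.c code is not shown in this view), the idea is roughly:

/*
 * Illustrative sketch only, not the actual exec.c hunk of this patch.
 * flush_thread() no longer has to clear the address space itself,
 * because init_new_context() (see the mmu.c diff below) now guarantees
 * that a freshly created userspace MM starts out empty.
 */
void flush_thread(void)
{
	arch_flush_thread(&current->thread.arch);

	/*
	 * An unmap of the whole user range, roughly
	 * unmap(&current->mm->context.id, 0, TASK_SIZE), is assumed to
	 * have lived here before this series; it is not needed anymore.
	 */

	get_safe_registers(current_pt_regs()->regs.gp,
			   current_pt_regs()->regs.fp);

	__switch_mm(&current->mm->context.id);
}

Moving the flush into MM creation means the "clean address space" guarantee holds for every new userspace MM, not only the exec path.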
Diffstat (limited to 'arch/um/kernel/skas/mmu.c')
-rw-r--r-- | arch/um/kernel/skas/mmu.c | 24 |
1 file changed, 24 insertions, 0 deletions
diff --git a/arch/um/kernel/skas/mmu.c b/arch/um/kernel/skas/mmu.c
index 697dad49c36b..47f98d87ea3c 100644
--- a/arch/um/kernel/skas/mmu.c
+++ b/arch/um/kernel/skas/mmu.c
@@ -40,6 +40,30 @@ int init_new_context(struct task_struct *task, struct mm_struct *mm)
 		goto out_free;
 	}
 
+	/*
+	 * Ensure the new MM is clean and nothing unwanted is mapped.
+	 *
+	 * TODO: We should clear the memory up to STUB_START to ensure there is
+	 * nothing mapped there, i.e. we (currently) have:
+	 *
+	 * |- user memory -|- unused -|- stub -|- unused -|
+	 *                 ^ TASK_SIZE        ^ STUB_START
+	 *
+	 * Meaning we have two unused areas where we may still have valid
+	 * mappings from our internal clone(). That isn't really a problem as
+	 * userspace is not going to access them, but it is definitely not
+	 * correct.
+	 *
+	 * However, we are "lucky" and if rseq is configured, then on 32 bit
+	 * it will fall into the first empty range while on 64 bit it is going
+	 * to use an anonymous mapping in the second range. As such, things
+	 * continue to work for now as long as we don't start unmapping these
+	 * areas.
+	 *
+	 * Change this to STUB_START once we have a clean userspace.
+	 */
+	unmap(new_id, 0, TASK_SIZE);
+
 	return 0;
 
  out_free:
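
The TODO in the new comment also spells out the intended follow-up. A minimal sketch of that future state, assuming the unused ranges around the stub can no longer carry leftover mappings from the internal clone(), would simply widen the flushed range:

/*
 * Sketch of the follow-up described by the TODO above; not part of this
 * commit. With a clean userspace layout:
 *
 *   |- user memory -|- unused -|- stub -|- unused -|
 *                   ^ TASK_SIZE        ^ STUB_START
 *
 * flushing up to STUB_START also clears the first unused range instead
 * of stopping at TASK_SIZE.
 */
unmap(new_id, 0, STUB_START);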