path: arch/x86/include/asm/uaccess.h
* x86/kasan: instrument user memory access API
  Andrey Ryabinin, 2016-05-21 (1 file, +5/-0)

  Exchange between user and kernel memory is coded in assembly language, which means that such accesses won't be spotted by KASAN, since the compiler instruments only C code. Add explicit KASAN checks to the user memory access API to ensure that copies from userspace write to (and copies to userspace read from) valid kernel memory.

  Note: unlike the others, strncpy_from_user() is written mostly in C, so KASAN already sees its memory accesses. However, it still makes sense to add an explicit check for all @count bytes that could *potentially* be written to the kernel.

  [aryabinin@virtuozzo.com: move kasan check under the condition]

  Link: http://lkml.kernel.org/r/1462869209-21096-1-git-send-email-aryabinin@virtuozzo.com Link: http://lkml.kernel.org/r/1462538722-1574-4-git-send-email-aryabinin@virtuozzo.com Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
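  A rough sketch of the idea (not the literal committed hunk, and assuming the kasan_check_read()/kasan_check_write() helpers from <linux/kasan-checks.h>): a copy primitive gains a check on the kernel-side buffer before the uninstrumented asm routine touches it.

      /* Sketch only: the copy body itself stays in assembly. */
      static __always_inline unsigned long
      copy_from_user_sketch(void *to, const void __user *from, unsigned long n)
      {
              kasan_check_write(to, n);       /* reports if [to, to+n) is invalid kernel memory */
              return __copy_from_user(to, from, n);   /* asm-backed copy */
      }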
* Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
  Linus Torvalds, 2016-05-17 (1 file, +1/-1)

  Pull x86 asm updates from Ingo Molnar:
   "The main changes in this cycle were:

    - MSR access API fixes and enhancements (Andy Lutomirski)
    - early exception handling improvements (Andy Lutomirski)
    - user-space FS/GS prctl usage fixes and improvements (Andy Lutomirski)
    - Remove the cpu_has_*() APIs and replace them with equivalents (Borislav Petkov)
    - task switch micro-optimization (Brian Gerst)
    - 32-bit entry code simplification (Denys Vlasenko)
    - enhance PAT handling on emulated CPUs (Toshi Kani)

    ... and lots of other cleanups/fixlets"

  * 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
    x86/arch_prctl/64: Restore accidentally removed put_cpu() in ARCH_SET_GS
    x86/entry/32: Remove asmlinkage_protect()
    x86/entry/32: Remove GET_THREAD_INFO() from entry code
    x86/entry, sched/x86: Don't save/restore EFLAGS on task switch
    x86/asm/entry/32: Simplify pushes of zeroed pt_regs->REGs
    selftests/x86/ldt_gdt: Test set_thread_area() deletion of an active segment
    x86/tls: Synchronize segment registers in set_thread_area()
    x86/asm/64: Rename thread_struct's fs and gs to fsbase and gsbase
    x86/arch_prctl/64: Remove FSBASE/GSBASE < 4G optimization
    x86/segments/64: When load_gs_index fails, clear the base
    x86/segments/64: When loadsegment(fs, ...) fails, clear the base
    x86/asm: Make asm/alternative.h safe from assembly
    x86/asm: Stop depending on ptrace.h in alternative.h
    x86/entry: Rename is_{ia32,x32}_task() to in_{ia32,x32}_syscall()
    x86/asm: Make sure verify_cpu() has a good stack
    x86/extable: Add a comment about early exception handlers
    x86/msr: Set the return value to zero when native_rdmsr_safe() fails
    x86/paravirt: Make "unsafe" MSR accesses unsafe even if PARAVIRT=y
    x86/paravirt: Add paravirt_{read,write}_msr()
    x86/msr: Carry on after a non-"safe" MSR access fails
    ...
* x86/head: Move early exception panic code into early_fixup_exception()
  Andy Lutomirski, 2016-04-13 (1 file, +1/-1)

  This removes a bunch of assembly and adds some C code instead. It changes the actual printouts on both 32-bit and 64-bit kernels, but they still seem okay.

  Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Andy Lutomirski <luto@kernel.org> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: KVM list <kvm@vger.kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: xen-devel <Xen-devel@lists.xen.org> Link: http://lkml.kernel.org/r/4085070316fc3ab29538d3fcfe282648d1d4ee2e.1459605520.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/head: Pass a real pt_regs and trapnr to early_fixup_exception()
  Andy Lutomirski, 2016-04-13 (1 file, +1/-1)

  early_fixup_exception() is limited by the fact that it doesn't have a real struct pt_regs. Change both the 32-bit and 64-bit asm and the C code to pass and accept a real pt_regs.

  Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Andy Lutomirski <luto@kernel.org> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: KVM list <kvm@vger.kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: xen-devel <Xen-devel@lists.xen.org> Link: http://lkml.kernel.org/r/e3fb680fcfd5e23e38237e8328b64a25cc121d37.1459605520.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/extable: ensure entries are swapped completely when sorting
  Mathias Krause, 2016-05-11 (1 file, +8/-0)

  The x86 exception table sorting was changed in commit 29934b0fb8ff ("x86/extable: use generic search and sort routines") to use the arch independent code in lib/extable.c. However, the patch was mangled somehow on its way into the kernel from the last version posted at [1]. The committed version kind of attempted to incorporate the changes of commit 548acf19234d ("x86/mm: Expand the exception table logic to allow new handling options") as in _completely_ _ignoring_ the x86 specific 'handler' member of struct exception_table_entry. This effectively broke the sorting, as entries will only partly be swapped now.

  Fortunately, the x86 Kconfig selects BUILDTIME_EXTABLE_SORT, so the exception table doesn't need to be sorted at runtime. However, in case that ever changes, we better not break the exception table sorting just because of that.

  [ Ard Biesheuvel points out that BUILDTIME_EXTABLE_SORT applies to the core image only, but we still rely on the sorting routines for modules in that case - Linus ]

  Fix this by providing a swap_ex_entry_fixup() macro that takes care of the 'handler' member.

  [1] https://lkml.org/lkml/2016/1/27/232

  Signed-off-by: Mathias Krause <minipli@googlemail.com> Fixes: 29934b0fb8ff ("x86/extable: use generic search and sort routines") Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@suse.de> Cc: H. Peter Anvin <hpa@linux.intel.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86/extable: use generic search and sort routines
  Ard Biesheuvel, 2016-03-22 (1 file, +2/-3)

  Replace the arch specific versions of search_extable() and sort_extable() with calls to the generic ones, which now support relative exception tables as well.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'ras/core' into core/objtool, to pick up the new exception table format
  Ingo Molnar, 2016-02-25 (1 file, +8/-8)

  Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/mm: Expand the exception table logic to allow new handling options
  Tony Luck, 2016-02-18 (1 file, +8/-8)

  Huge amounts of help from Andy Lutomirski and Borislav Petkov to produce this. Andy provided the inspiration to add classes to the exception table with a clever bit-squeezing trick, Boris pointed out how much cleaner it would all be if we just had a new field.

  Linus Torvalds blessed the expansion with:

    "I'd rather not be clever in order to save just a tiny amount of space in the exception table, which isn't really critical for anybody."

  The third field is another relative function pointer, this one to a handler that executes the actions. We start out with three handlers:

   1: Legacy - just jumps to the fixup IP
   2: Fault - provides the trap number in %ax to the fixup code
   3: Cleaned-up legacy for the uaccess error hack

  Signed-off-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/f6af78fcbd348cf4939875cfda9c19689b5e50b8.1455732970.git.tony.luck@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
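  For reference, the expanded layout this describes (a sketch matching the commit message; each field is a 32-bit offset relative to its own address):

      struct exception_table_entry {
              int insn;       /* offset to the faulting instruction */
              int fixup;      /* offset to the fixup code */
              int handler;    /* offset to the function applying the fixup policy */
      };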
* x86/uaccess: Add stack frame output operand in get_user() inline asm
  Chris J Arges, 2016-02-24 (1 file, +3/-2)

  Numerous 'call without frame pointer save/setup' warnings are introduced by stacktool because of functions using the get_user() macro. Bad stack traces could occur due to lack of or misplacement of stack frame setup code.

  This patch forces a stack frame to be created before the inline asm code if CONFIG_FRAME_POINTER is enabled, by listing the stack pointer as an output operand for the get_user() inline assembly statement.

  Signed-off-by: Chris J Arges <chris.j.arges@canonical.com> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Bernd Petrovitsch <bernd@petrovitsch.priv.at> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michal Marek <mmarek@suse.cz> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Pedro Alves <palves@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: live-patching@vger.kernel.org Link: http://lkml.kernel.org/r/bc85501f221ee512670797c7f110022e64b12c81.1453405861.git.jpoimboe@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
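  The mechanism, in simplified form (a sketch, not the full macro: the real code dispatches on sizeof(*(ptr)), and _ASM_SP expands to the right stack-pointer name per build):

      /* Naming the stack pointer as an in/out operand forces gcc to
       * finish frame setup before emitting the call. */
      register void *__sp asm(_ASM_SP);
      asm volatile("call __get_user_4"
                   : "=a" (ret), "=d" (val), "+r" (__sp)
                   : "0" (ptr));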
* Merge branch 'uaccess' (batched user access infrastructure)
  Linus Torvalds, 2016-01-21 (1 file, +60/-18)

  Expose an interface to allow users to mark several accesses together as being user space accesses, allowing batching of the surrounding user space access markers (SMAP on x86, PAN on arm64, domain register switching on arm).

  This is currently only used for the user string length and copying functions, where the SMAP overhead on x86 drowned the actual user accesses (only noticeable on newer microarchitectures that support SMAP in the first place, of course).

  * user access batching branch:
    Use the new batched user accesses in generic user string handling
    Add 'unsafe' user access functions for batched accesses
    x86: reorganize SMAP handling in user space accesses
* Add 'unsafe' user access functions for batched accesses
  Linus Torvalds, 2015-12-17 (1 file, +25/-0)

  The naming is meant to discourage random use: the helper functions are not really any more "unsafe" than the traditional double-underscore functions (which need the address range checking), but they do need even more infrastructure around them, and should not be used willy-nilly.

  In addition to checking the access range, these user access functions require that you wrap the user access with a user_access_{begin,end}() pair around it. That allows architectures that implement kernel user access control (x86: SMAP, arm64: PAN) to do the user access control in the wrapping user_access_begin/end part, and then batch up the actual user space accesses using the new interfaces.

  The main (and hopefully only) use for these are for core generic access helpers, initially just the generic user string functions (strnlen_user() and strncpy_from_user()).

  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
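  A hedged usage sketch of the caller-side shape (assuming the original 2015 form, in which user_access_begin() takes no arguments and unsafe_get_user() returns an error value; later kernels changed both to an access_ok()-taking begin and a jump-to-label accessor):

      long read_word_sketch(unsigned int *dst, const unsigned int __user *src)
      {
              long ret = -EFAULT;

              if (!access_ok(VERIFY_READ, src, sizeof(*src)))  /* range check is still the caller's job */
                      return ret;
              user_access_begin();            /* one stac (x86) for the whole run */
              if (unsafe_get_user(*dst, src))
                      goto out;
              ret = 0;
      out:
              user_access_end();              /* matching clac */
              return ret;
      }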
* x86: reorganize SMAP handling in user space accesses
  Linus Torvalds, 2015-12-17 (1 file, +35/-18)

  This reorganizes how we do the stac/clac instructions in the user access code. Instead of adding the instructions directly to the same inline asm that does the actual user level access and exception handling, add them at a higher level.

  This is mainly preparation for the next step, where we will expose an interface to allow users to mark several accesses together as being user space accesses, but it does already clean up some code:

   - the inlined trivial cases of copy_in_user() now do stac/clac just once over the accesses: they used to do one pair around the user space read, and another pair around the write-back.

   - the {get,put}_user_ex() macros that are used with the catch/try handling don't do any stac/clac at all, because that happens in the try/catch surrounding them.

  Other than those two cleanups that happened naturally from the re-organization, this should not make any difference. Yet.

  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86: Add an inlined __copy_from_user_nmi() variant
  Andi Kleen, 2015-11-23 (1 file, +9/-0)

  Add an inlined __ variant of copy_from_user_nmi(). The inlined variant allows the user to:

   - batch the access_ok() check for multiple accesses
   - avoid having a pagefault_disable/enable() on every access if the caller already ensures disabled page faults due to its context
   - get all the optimizations in copy_*_user() for small constant sized transfers

  It is just a define to __copy_from_user_inatomic().

  Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1445551641-13379-1-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
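  As the message says, the patch essentially boils down to an alias; something like:

      /* Caller must have done access_ok() and must run with page
       * faults disabled (e.g. NMI context). */
      #define __copy_from_user_nmi    __copy_from_user_inatomic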
* x86/uaccess: Add unlikely() to __chk_range_not_ok() failure paths
  Andy Lutomirski, 2015-10-07 (1 file, +3/-3)

  This should improve code quality a bit. It also shrinks the kernel text:

      Before: text      data     bss      dec       filename
              21828379  5194760  1277952  28301091  vmlinux
      After:  text      data     bss      dec       filename
              21827997  5194760  1277952  28300709  vmlinux

  ... by 382 bytes.

  Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/f427b8002d932e5deab9055e0074bb4e7e80ee39.1444091584.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/uaccess: Tell the compiler that uaccess is unlikely to fault
  Andy Lutomirski, 2015-10-07 (1 file, +4/-4)

  GCC doesn't realize that get_user(), put_user(), and their __ variants are unlikely to fail. Tell it. I noticed this while playing with the C entry code.

      Before: text      data     bss      dec       filename
              21828763  5194760  1277952  28301475  vmlinux.baseline
      After:  text      data     bss      dec       filename
              21828379  5194760  1277952  28301091  vmlinux.new

  The generated code shrank by 384 bytes.

  Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/dc37bed7024319c3004d950d57151fca6aeacf97.1444091584.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
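  The change amounts to wrapping each accessor's error return so gcc treats nonzero as cold; a sketch of the pattern rather than the full macro:

      #define get_user(x, ptr)                                        \
      ({                                                              \
              int __ret_gu;                                           \
              /* ... register setup and the call to __get_user_N ... */ \
              __builtin_expect(__ret_gu, 0);  /* faults are rare */   \
      })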
* mm/uaccess, mm/fault: Clarify that uaccess may only sleep if pagefaults are enabled
  David Hildenbrand, 2015-05-19 (1 file, +10/-5)

  In general, non-atomic variants of user access functions must not sleep if pagefaults are disabled. Let's update all relevant comments in uaccess code. This also reflects the might_sleep() checks in might_fault().

  Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: David.Laight@ACULAB.COM Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: airlied@linux.ie Cc: akpm@linux-foundation.org Cc: benh@kernel.crashing.org Cc: bigeasy@linutronix.de Cc: borntraeger@de.ibm.com Cc: daniel.vetter@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: hocko@suse.cz Cc: hughd@google.com Cc: mst@redhat.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: schwidefsky@de.ibm.com Cc: yang.shi@windriver.com Link: http://lkml.kernel.org/r/1431359540-32227-4-git-send-email-dahi@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/uaccess: fix sparse errors
  Michael S. Tsirkin, 2015-01-13 (1 file, +1/-1)

  virtio wants to read bitwise types from userspace using get_user(). At the moment this triggers sparse errors, since the value is passed through an integer. Fix that up using __force.

  Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Thomas Gleixner <tglx@linutronix.de>
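  The fix is a cast annotation: sparse accepts pushing a bitwise type (e.g. __le32) through the integer temporary once the conversion is marked __force. Roughly, the relevant line inside get_user() becomes:

      (x) = (__force __typeof__(*(ptr))) __val_gu;    /* __force: deliberate bitwise cast */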
* Merge branch 'x86/mpx' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
  Linus Torvalds, 2014-01-20 (1 file, +92/-0)

  Pull x86 cpufeature and mpx updates from Peter Anvin:
   "This includes the basic infrastructure for MPX (Memory Protection Extensions) support, but does not include MPX support itself. It is, however, a prerequisite for KVM support for MPX, which I believe will be pushed later this merge window by the KVM team.

    This includes moving the functionality in futex_atomic_cmpxchg_inatomic() into a new function in uaccess.h so it can be reused - this will be used by the final MPX patches.

    The actual MPX functionality (map management and so on) will be pushed in a future merge window, when ready"

  * 'x86/mpx' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86/intel/mpx: Remove unused LWP structure
    x86, mpx: Add MPX related opcodes to the x86 opcode map
    x86: replace futex_atomic_cmpxchg_inatomic() with user_atomic_cmpxchg_inatomic
    x86: add user_atomic_cmpxchg_inatomic at uaccess.h
    x86, xsave: Support eager-only xsave features, add MPX support
    x86, cpufeature: Define the Intel MPX feature flag
* x86: add user_atomic_cmpxchg_inatomic at uaccess.h
  Qiaowei Ren, 2013-12-16 (1 file, +92/-0)

  This patch adds user_atomic_cmpxchg_inatomic() to use the CMPXCHG instruction against a user space address. This generalizes the already existing futex_atomic_cmpxchg_inatomic() so it can be used in other contexts. This will be used in the upcoming support for Intel MPX (Memory Protection Extensions).

  [ hpa: replaced #ifdef inside a macro with IS_ENABLED() ]

  Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com> Link: http://lkml.kernel.org/r/1387002303-6620-1-git-send-email-qiaowei.ren@intel.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
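  A hedged usage sketch, following the futex-style interface the commit generalizes (0 on a successful access with the old value stored through the first argument, -EFAULT on a bad user address):

      u32 prev;
      int err;

      /* Atomically: if (*uaddr == old) *uaddr = new; prev = old *uaddr */
      err = user_atomic_cmpxchg_inatomic(&prev, uaddr, old, new);
      if (err)
              return err;             /* -EFAULT */
      if (prev != old)
              return -EAGAIN;         /* raced: somebody changed *uaddr first */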
* x86: Slightly tweak the access_ok() C variant for better code
  H. Peter Anvin, 2013-12-28 (1 file, +5/-3)

  gcc can, under very specific circumstances, realize that the code sequence:

      foo += bar;
      if (foo < bar)
              ...

  ... is equivalent to a carry out from the addition. Tweak the implementation of access_ok() (specifically __chk_range_not_ok()) to make it more likely that gcc will make that connection. It isn't fool-proof (sometimes gcc seems to think it can make better code with lea, and ends up with a second comparison), but it does seem able to connect the two more frequently this way.

  Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/CA%2B55aFzPBdbfKovMT8Edr4SmE2_=%2BOKJFac9XW2awegogTkVTA@mail.gmail.com Signed-off-by: H. Peter Anvin <hpa@zytor.com>
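  Putting this tweak together with the C conversion below, the range check looks roughly like this (a sketch consistent with both commit messages, with the unlikely() annotations from the later 2015 change folded in; not a verbatim copy of the header):

      static inline bool __chk_range_not_ok(unsigned long addr,
                                            unsigned long size,
                                            unsigned long limit)
      {
              /* A sizeof()-derived size can't overflow the limit, so
               * subtract it from the limit rather than add it to addr. */
              if (__builtin_constant_p(size))
                      return unlikely(addr > limit - size);

              /* Arbitrary sizes: this is the foo += bar; if (foo < bar)
               * sequence gcc can fold into a carry-out test. */
              addr += size;
              if (unlikely(addr < size))
                      return true;    /* wrapped around */
              return unlikely(addr > limit);
      }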
* x86: Replace assembly access_ok() with a C variant
  Linus Torvalds, 2013-12-28 (1 file, +17/-11)

  It turns out that the assembly variant doesn't actually produce that good code, presumably partly because it creates a long dependency chain with no scheduling, and partly because we cannot get a flags result out of gcc (which could be fixed with asm goto, but it turns out not to be worth it).

  The C code allows gcc to schedule and generate multiple (easily predictable) branches, and as a side benefit we can really optimize the case where the size is constant.

  Link: http://lkml.kernel.org/r/CA%2B55aFzPBdbfKovMT8Edr4SmE2_=%2BOKJFac9XW2awegogTkVTA@mail.gmail.com Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86: Unify copy_to_user() and add size checking to it
  Jan Beulich, 2013-10-26 (1 file, +30/-0)

  Similarly to copy_from_user(), where the range check is to protect against kernel memory corruption, copy_to_user() can benefit from such checking too: here it protects against kernel information leaks.

  Signed-off-by: Jan Beulich <jbeulich@suse.com> Cc: <arjan@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/5265059502000078000FC4F6@nat28.tlf.novell.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Arjan van de Ven <arjan@linux.intel.com>
* x86: Unify copy_from_user() size checking
  Jan Beulich, 2013-10-26 (1 file, +68/-0)

  Commits 4a3127693001c61a21d1ce680db6340623f52e93 ("x86: Turn the copy_from_user check into an (optional) compile time warning") and 63312b6a6faae3f2e5577f2b001e3b504f10a2aa ("x86: Add a Kconfig option to turn the copy_from_user warnings into errors") touched only the 32-bit variant of copy_from_user(), whereas the original commit 9f0cf4adb6aa0bfccf675c938124e68f7f06349d ("x86: Use __builtin_object_size() to validate the buffer size for copy_from_user()") also added the same code to the 64-bit one.

  Further, the earlier conversion from an inline WARN() to the call to copy_from_user_overflow() went a little too far: when the number of bytes to be copied is not a constant (e.g. [looking at 3.11] in drivers/net/tun.c:__tun_chr_ioctl() or drivers/pci/pcie/aer/aer_inject.c:aer_inject_write()), the compiler will always have to keep the function call, and hence there will always be a warning. By using __builtin_constant_p() we can avoid this.

  And then this slightly extends the effect of CONFIG_DEBUG_STRICT_USER_COPY_CHECKS in that, apart from converting warnings to errors in the constant size case, it retains the (possibly wrong) warnings in the non-constant size case, such that if someone is prepared to get a few false positives, (s)he'll be able to recover the current behavior (except that these diagnostics now will never be converted to errors).

  Since the 32-bit variant (intentionally) didn't call might_fault(), the unification results in this being called twice now. Adding a suitable #ifdef would be the alternative if that's a problem.

  I'd like to point out though that with __compiletime_object_size() being restricted to gcc before 4.6, the whole construct is going to become more and more pointless going forward. I would question however that commit 2fb0815c9ee6b9ac50e15dd8360ec76d9fa46a2 ("gcc4: disable __compiletime_object_size for GCC 4.6+") was really necessary, and instead this should have been dealt with as is done here from the beginning.

  Signed-off-by: Jan Beulich <jbeulich@suse.com> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Guenter Roeck <linux@roeck-us.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/5265056D02000078000FC4F3@nat28.tlf.novell.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86, doc: Update uaccess.h comment to reflect clang changes
  H. Peter Anvin, 2013-08-29 (1 file, +4/-1)

  Update the comment in uaccess.h to reflect the changes for clang support: gcc only cares about the base register (most architectures don't encode the size of the operation in the operands like x86 does, and so it is treated effectively like a register number), whereas clang tries to enforce the size -- but not for register pairs.

  Link: http://lkml.kernel.org/r/1377803585-5913-3-git-send-email-dl9pf@gmx.de Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Jan-Simon Möller <dl9pf@gmx.de>
* x86, asm: Fix a compilation issue with clang
  Jan-Simon Möller, 2013-08-29 (1 file, +1/-1)

  Clang does not support the "shortcut" we're taking here for gcc (see below). The patch uses the macro _ASM_DX to do the job.

  From arch/x86/include/asm/uaccess.h:

      /*
       * Careful: we have to cast the result to the type of the pointer
       * for sign reasons.
       *
       * The use of %edx as the register specifier is a bit of a
       * simplification, as gcc only cares about it as the starting point
       * and not size: for a 64-bit value it will use %ecx:%edx on 32 bits
       * (%ecx being the next register in gcc's x86 register sequence), and
       * %rdx on 64 bits.
       */

  [ hpa: I consider this a compatibility bug in clang as this reflects a bit of a misunderstanding about how register strings are used by gcc, but the workaround is straightforward and there is no particular reason to not do it. ]

  Signed-off-by: Jan-Simon Möller <dl9pf@gmx.de> Link: http://lkml.kernel.org/r/1377803585-5913-3-git-send-email-dl9pf@gmx.de Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
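  The change itself is tiny; roughly (hedged against the exact macro text):

      /* Before: gcc-only shortcut that clang rejects for 8-byte values */
      register __inttype(*(ptr)) __val_gu asm("%edx");
      /* After: spell the register via _ASM_DX, which expands to the
       * right name on both 32- and 64-bit builds */
      register __inttype(*(ptr)) __val_gu asm("%" _ASM_DX);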
* x86, doc: Clarify the use of asm("%edx") in uaccess.h
  H. Peter Anvin, 2013-02-13 (1 file, +8/-1)

  Put in a comment that explains that the use of asm("%edx") in uaccess.h doesn't actually necessarily mean %edx alone.

  Cc: Jamie Lokier <jamie@shareable.org> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Russell King <linux@arm.linux.org.uk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: H. J. Lu <hjl.tools@gmail.com> Link: http://lkml.kernel.org/r/511ACDFB.1050707@zytor.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86, mm: Redesign get_user with a __builtin_choose_expr hack
  H. Peter Anvin, 2013-02-12 (1 file, +14/-43)

  Instead of using a bitfield, use an odd little trick using typeof, __builtin_choose_expr, and sizeof. __builtin_choose_expr is explicitly defined not to convert its type (its argument is required to be a constant expression), so this should be well-defined.

  The code is still not 100% perturbation-free versus the baseline before 64-bit get_user(), but the differences seem to be very small, mostly related to padding and to gcc deciding when to spill registers.

  Cc: Jamie Lokier <jamie@shareable.org> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Russell King <linux@arm.linux.org.uk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: H. J. Lu <hjl.tools@gmail.com> Link: http://lkml.kernel.org/r/511A8922.6050908@zytor.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
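  The odd little trick, sketched (give or take the exact spelling in the header): __builtin_choose_expr keeps the chosen arm's exact type, so a sizeof-based selection yields an integer type wide enough for the access without any conversion.

      /* Integer type wide enough to hold *(ptr): unsigned long for
       * sizes up to the word size, unsigned long long above it. */
      #define __inttype(x) \
              __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))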
* x86, mm: Use a bitfield to mask nuisance get_user() warnings
  H. Peter Anvin, 2013-02-12 (1 file, +9/-11)

  Even though it is never executed, gcc wants to warn for casting from a large integer to a pointer. Furthermore, using a variable with __typeof__() doesn't work, because __typeof__ retains storage specifiers (const, restrict, volatile).

  However, we can declare a bitfield using sizeof(), which is legal because sizeof() is a constant expression. This quiets the warning, although the code generated isn't 100% identical to the baseline before commit 96477b4 ("x86-32: Add support for 64bit get_user()"):

      [x86-mb is baseline, x86-mm is this commit]

      text       data      bss       filename
      113716147  15858380  35037184  tip.x86-mb/o.i386-allconfig/vmlinux
      113716145  15858380  35037184  tip.x86-mm/o.i386-allconfig/vmlinux
      12989837   3597944   12255232  tip.x86-mb/o.i386-modconfig/vmlinux
      12989831   3597944   12255232  tip.x86-mm/o.i386-modconfig/vmlinux
      1462784    237608    1401988   tip.x86-mb/o.i386-noconfig/vmlinux
      1462837    237608    1401964   tip.x86-mm/o.i386-noconfig/vmlinux
      7938994    553688    7639040   tip.x86-mb/o.i386-pae/vmlinux
      7943136    557784    7639040   tip.x86-mm/o.i386-pae/vmlinux
      7186126    510572    6574080   tip.x86-mb/o.i386/vmlinux
      7186124    510572    6574080   tip.x86-mm/o.i386/vmlinux
      103747269  33578856  65888256  tip.x86-mb/o.x86_64-allconfig/vmlinux
      103746949  33578856  65888256  tip.x86-mm/o.x86_64-allconfig/vmlinux
      12116695   11035832  20160512  tip.x86-mb/o.x86_64-modconfig/vmlinux
      12116567   11035832  20160512  tip.x86-mm/o.x86_64-modconfig/vmlinux
      1700790    380524    511808    tip.x86-mb/o.x86_64-noconfig/vmlinux
      1700790    380524    511808    tip.x86-mm/o.x86_64-noconfig/vmlinux
      12413612   1133376   1101824   tip.x86-mb/o.x86_64/vmlinux
      12413484   1133376   1101824   tip.x86-mm/o.x86_64/vmlinux

  Cc: Jamie Lokier <jamie@shareable.org> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Russell King <linux@arm.linux.org.uk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/20130209110031.GA17833@n2100.arm.linux.org.uk Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86-32: Add support for 64bit get_user()
  Ville Syrjälä, 2013-02-08 (1 file, +15/-4)

  Implement __get_user_8() for x86-32. It will return the 64-bit result in the edx:eax register pair, and ecx is used to pass in the address and return the error value.

  For consistency, change the register assignment for all other __get_user_x() variants, so that the address is passed in ecx/rcx, the error value is returned in ecx/rcx, and eax/rax contains the actual value.

  [ hpa: I modified the patch so that it does NOT change the calling conventions for the existing callsites; this also means that the code is completely unchanged for 64 bits. Instead, continue to use eax for address input/error output and use the ecx:edx register pair for the output. ]

  This is a partial refresh of a patch [1] by Jamie Lokier from 2004. Only the minimal changes to implement 64bit get_user() were picked from the original patch.

  [1] http://article.gmane.org/gmane.linux.kernel/198823

  Originally-by: Jamie Lokier <jamie@shareable.org> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: http://lkml.kernel.org/r/1355312043-11467-1-git-send-email-ville.syrjala@linux.intel.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
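  From the caller's side the change is invisible; 64-bit get_user() simply starts working on 32-bit kernels. A hedged example (per hpa's note, the value travels in the ecx:edx pair on 32-bit, with eax carrying address in and error out):

      u64 val;

      if (get_user(val, (u64 __user *)uptr))  /* now legal on x86-32 */
              return -EFAULT;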
* x86, 386 removal: Remove CONFIG_X86_WP_WORKS_OK
  H. Peter Anvin, 2012-11-29 (1 file, +0/-42)

  All 486+ CPUs support WP in supervisor mode, so remove the fallback 386 support code.

  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Link: http://lkml.kernel.org/r/1354132230-21854-7-git-send-email-hpa@linux.intel.com
* UAPI: (Scripted) Convert #include "..." to #include <path/...> in kernel system headers
  David Howells, 2012-10-02 (1 file, +2/-2)

  Convert #include "..." to #include <path/...> in kernel system headers.

  Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Dave Jones <davej@redhat.com>
* x86, smap: Reduce the SMAP overhead for signal handling
  H. Peter Anvin, 2012-09-21 (1 file, +6/-8)

  Signal handling contains a bunch of accesses to individual user space items, which causes an excessive number of STAC and CLAC instructions. Instead, let get/put_user_try ... get/put_user_catch() contain the STAC and CLAC instructions.

  This means that get/put_user_try no longer nests, and furthermore that it is no longer legal to use user space access functions other than __get/put_user_ex() inside those blocks. However, these macros are x86-specific anyway and are only used in the signal-handling paths; a simple reordering of moving the larger subroutine calls out of the try...catch blocks resolves that problem.

  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Link: http://lkml.kernel.org/r/1348256595-29119-12-git-send-email-hpa@linux.intel.com
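  The resulting signal-path pattern, sketched (the macro names are the real x86 ones of that era; the field accesses are illustrative):

      get_user_try {                  /* one STAC for the whole block */
              get_user_ex(ax, &sc->ax);
              get_user_ex(bx, &sc->bx);
              get_user_ex(ip, &sc->ip);
      } get_user_catch(err);          /* CLAC, plus collect any fault */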
* x86, smap: Add STAC and CLAC instructions to control user space access
  H. Peter Anvin, 2012-09-21 (1 file, +19/-12)

  When Supervisor Mode Access Prevention (SMAP) is enabled, access to userspace from the kernel is controlled by the AC flag. To make the performance of manipulating that flag acceptable, there are two new instructions, STAC and CLAC, to set and clear it.

  This patch adds those instructions, via alternative(), when the SMAP feature is enabled. It also adds X86_EFLAGS_AC unconditionally to the SYSCALL entry mask; there is simply no reason to make that one conditional.

  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Link: http://lkml.kernel.org/r/1348256595-29119-9-git-send-email-hpa@linux.intel.com
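  The control points are tiny wrappers that patch to no-ops on non-SMAP hardware; roughly (assuming the alternatives machinery, not the literal header text, which encodes the instructions as bytes for old assemblers):

      static __always_inline void stac(void)
      {
              /* "stac" sets EFLAGS.AC, permitting user accesses */
              alternative("", "stac", X86_FEATURE_SMAP);
      }

      static __always_inline void clac(void)
      {
              /* "clac" clears EFLAGS.AC again */
              alternative("", "clac", X86_FEATURE_SMAP);
      }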
* x86, uaccess: Merge prototypes for clear_user/__clear_user
  H. Peter Anvin, 2012-09-21 (1 file, +3/-0)

  The prototypes for clear_user() and __clear_user() are identical in the 32- and 64-bit headers. No functionality change.

  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Link: http://lkml.kernel.org/r/1348256595-29119-8-git-send-email-hpa@linux.intel.com
* perf/x86: Check if user fp is valid
  Arun Sharma, 2012-06-06 (1 file, +6/-6)

  Signed-off-by: Arun Sharma <asharma@fb.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1334961696-19580-4-git-send-email-asharma@fb.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86: use the new generic strnlen_user() function
  Linus Torvalds, 2012-05-26 (1 file, +3/-0)

  This throws away the old x86-specific functions in favor of the generic optimized version.

  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86: use generic strncpy_from_user routine
  Linus Torvalds, 2012-05-26 (1 file, +1/-0)

  The generic strncpy_from_user() is not really optimal, since it is designed to work on both little-endian and big-endian. And on little-endian you can simplify much of the logic to find the first zero byte, since little-endian arithmetic doesn't have to worry about the carry bit propagating into earlier bytes (only later bytes, which we don't care about).

  But I have patches to make the generic routines use the architecture-specific <asm/word-at-a-time.h> infrastructure, so that we can regain the little-endian optimizations. But before we do that, switch over to the generic routines to make the patches each do just one well-defined thing.

  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86, extable: Switch to relative exception table entries
  H. Peter Anvin, 2012-04-21 (1 file, +11/-6)

  Switch to using relative exception table entries on x86. On i386, this has the advantage that the exception table entries don't need to be relocated; on x86-64 this means the exception table entries take up only half the space.

  In either case, a 32-bit delta is sufficient, as the range of kernel code addresses is limited.

  Since part of the goal is to avoid needing to adjust the entries when the kernel is relocated, the old trick of using addresses in the NULL pointer range to indicate uaccess_err no longer works (and unlike RISC architectures we can't use a flag bit); instead use a delta just below +2G to indicate these special entries. The reach is still limited to a single instruction.

  Signed-off-by: H. Peter Anvin <hpa@zytor.com> Cc: David Daney <david.daney@cavium.com> Link: http://lkml.kernel.org/r/CA%2B55aFyijf43qSu3N9nWHEBwaGbb7T2Oq9A=9EyR=Jtyqfq_cQ@mail.gmail.com
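  The two-field relative layout this introduces, with the address recovery that replaces absolute pointers (a sketch consistent with the commit message):

      struct exception_table_entry {
              int insn;       /* 32-bit offset, relative to &insn itself */
              int fixup;      /* 32-bit offset, relative to &fixup itself */
      };

      static inline unsigned long
      ex_insn_addr(const struct exception_table_entry *x)
      {
              return (unsigned long)&x->insn + x->insn;
      }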
* x86, extable: Add _ASM_EXTABLE_EX() macro
  H. Peter Anvin, 2012-04-21 (1 file, +4/-4)

  Add _ASM_EXTABLE_EX() to generate the special extable entries that are associated with uaccess_err. This allows us to change the protocol associated with these special entries.

  Signed-off-by: H. Peter Anvin <hpa@zytor.com> Cc: David Daney <david.daney@cavium.com> Link: http://lkml.kernel.org/r/CA%2B55aFyijf43qSu3N9nWHEBwaGbb7T2Oq9A=9EyR=Jtyqfq_cQ@mail.gmail.com
* x86: merge 32/64-bit versions of 'strncpy_from_user()' and speed it up
  Linus Torvalds, 2012-04-11 (1 file, +2/-0)

  This merges the 32- and 64-bit versions of the x86 strncpy_from_user() by just rewriting it in C, rather than the ancient inline asm versions that used lodsb/stosb and had been duplicated for (trivial) differences between the 32-bit and 64-bit versions.

  While doing that, it also speeds them up by doing the accesses a word at a time.

  Finally, the new routines also properly handle the case of hitting the end of the address space, which we have never done correctly before (fs/namei.c has a hack around it for that reason). Despite all these improvements, it actually removes more lines than it adds, due to the de-duplication.

  Also, we no longer export (or define) the legacy __strncpy_from_user() function (that was defined to not do the user permission checks), since it's not actually used anywhere, and the user address space checks are built in to the new code.

  Other architecture maintainers have been notified that the old hack in fs/namei.c will be going away in the 3.5 merge window, in case they copied the x86 approach of being a bit cavalier about the end of the address space.

  Cc: linux-arch@vger.kernel.org Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Anvin <hpa@zytor.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
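  The word-at-a-time speedup rests on the classic branch-free zero-byte test; a self-contained illustration for 64-bit little-endian (the constants and helper name here are mine, <asm/word-at-a-time.h> provides the kernel's equivalents):

      #include <stdint.h>

      #define ONE_BYTES  0x0101010101010101ull
      #define HIGH_BITS  0x8080808080808080ull

      /* Nonzero iff some byte of v is 0x00: subtracting 1 from a zero
       * byte borrows into its high bit, while ~v masks out bytes whose
       * high bit was already set. */
      static inline uint64_t has_zero_byte(uint64_t v)
      {
              return (v - ONE_BYTES) & ~v & HIGH_BITS;
      }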
* x86-64: Set siginfo and context on vsyscall emulation faults
  Andy Lutomirski, 2011-12-05 (1 file, +1/-1)

  To make this work, we teach the page fault handler how to send signals on failed uaccess. This only works for user addresses (kernel addresses will never hit the page fault handler in the first place), so we need to generate signals for those separately.

  This gets the tricky case right: if the user buffer spans multiple pages and only the second page is invalid, we set cr2 and si_addr correctly. UML relies on this behavior to "fault in" pages as needed.

  We steal a bit from thread_info.uaccess_err to enable this. Before this change, uaccess_err was a 32-bit boolean value.

  This fixes issues with UML when vsyscall=emulate.

  Reported-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Cc: richard -rw- weinberger <richard.weinberger@gmail.com> Cc: H. Peter Anvin <hpa@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/4c8f91de7ec5cd2ef0f59521a04e1015f11e42b4.1320712291.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, perf: Make copy_from_user_nmi() a library function
  Robert Richter, 2011-07-21 (1 file, +3/-0)

  copy_from_user_nmi() is used in oprofile and perf. Move it next to other library functions like copy_from_user(). As this is x86 code for 32 and 64 bits, create a new file usercopy.c for the unified code.

  Signed-off-by: Robert Richter <robert.richter@amd.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110607172413.GJ20052@erda.amd.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sanitize <linux/prefetch.h> usage
  Linus Torvalds, 2011-05-20 (1 file, +0/-1)

  Commit e66eed651fd1 ("list: remove prefetching from regular list iterators") removed the include of prefetch.h from list.h, which uncovered several cases that had apparently relied on that rather obscure header file dependency.

  So this fixes things up a bit, using

      grep -L linux/prefetch.h $(git grep -l '[^a-z_]prefetchw*(' -- '*.[ch]')
      grep -L 'prefetchw*(' $(git grep -l 'linux/prefetch.h' -- '*.[ch]')

  to guide us in finding files that either need <linux/prefetch.h> inclusion, or have it despite not needing it. There are more of them around (mostly network drivers), but this gets many core ones.

  Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86, 64-bit: Fix copy_[to/from]_user() checks for the userspace address limit
  Jiri Olsa, 2011-05-18 (1 file, +1/-1)

  As reported in BZ #30352:

      https://bugzilla.kernel.org/show_bug.cgi?id=30352

  there's a kernel bug related to reading the last allowed page on x86_64.

  The _copy_to_user() and _copy_from_user() functions use the following check for the address limit:

      if (buf + size >= limit)
              fail();

  while it should be more permissive:

      if (buf + size > limit)
              fail();

  That's because the size represents the number of bytes being read/written from/to the buf address, including the buf address itself. So the copy function will actually never touch the limit address, even if "buf + size == limit".

  The following program fails to use the last page as a buffer due to the wrong limit check:

      #include <sys/mman.h>
      #include <sys/socket.h>
      #include <assert.h>

      #define PAGE_SIZE  (4096)
      #define LAST_PAGE  ((void *)(0x7fffffffe000))

      int main()
      {
              int fds[2], err;
              void *ptr = mmap(LAST_PAGE, PAGE_SIZE, PROT_READ | PROT_WRITE,
                               MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
              assert(ptr == LAST_PAGE);
              err = socketpair(AF_LOCAL, SOCK_STREAM, 0, fds);
              assert(err == 0);
              err = send(fds[0], ptr, PAGE_SIZE, 0);
              perror("send");
              assert(err == PAGE_SIZE);
              err = recv(fds[1], ptr, PAGE_SIZE, MSG_WAITALL);
              perror("recv");
              assert(err == PAGE_SIZE);
              return 0;
      }

  The other place checking the addr limit is the access_ok() function, which works properly. There's just a misleading comment for the __range_not_ok() macro - which this patch fixes as well.

  The last page of the user-space address range is a guard page, and Brian Gerst observed that the guard page itself exists due to an erratum on K8 CPUs (#121: "Sequential Execution Across Non-Canonical Boundary Causes Processor Hang"). However, the test code is using the last valid page before the guard page. The bug is that the last byte before the guard page can't be read because of the off-by-one error. The guard page is left in place.

  This bug would normally not show up, because the last page is part of the process stack and never accessed via syscalls.

  Signed-off-by: Jiri Olsa <jolsa@redhat.com> Acked-by: Brian Gerst <brgerst@gmail.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: <stable@kernel.org> Link: http://lkml.kernel.org/r/1305210630-7136-1-git-send-email-jolsa@redhat.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, 64-bit: Move K8 B step iret fixup to fault entry asm
  Brian Gerst, 2009-10-12 (1 file, +0/-1)

  Move the handling of truncated %rip from an iret fault to the fault entry path. This allows x86-64 to use the standard search_extable() function.

  Signed-off-by: Brian Gerst <brgerst@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Jan Beulich <jbeulich@novell.com> LKML-Reference: <1255357103-5418-1-git-send-email-brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: Fix movq immediate operand constraints in uaccess.h
  H. Peter Anvin, 2009-07-21 (1 file, +2/-2)

  The movq instruction, generated by __put_user_asm() when used for 64-bit data, takes a sign-extended immediate ("e"), not a zero-extended immediate ("Z").

  Signed-off-by: H. Peter Anvin <hpa@zytor.com> Cc: Uros Bizjak <ubizjak@gmail.com> Cc: stable@kernel.org
* x86, 64-bit: Clean up user address masking
  Linus Torvalds, 2009-06-21 (1 file, +1/-6)

  The discussion about using "access_ok()" in get_user_pages_fast() (see commit 7f8189068726492950bf1a2dcfd9b51314560abf, "x86: don't use 'access_ok()' as a range check in get_user_pages_fast()", for details and end result) made us notice that x86-64 was really being very sloppy about virtual address checking.

  So be way more careful and straightforward about masking x86-64 virtual addresses:

   - All the VIRTUAL_MASK* variants now cover half of the address space; it's not like we can use the full mask on a signed integer, and the larger mask just invites mistakes when applying it to either half of the 48-bit address space.

   - /proc/kcore's kc_offset_to_vaddr() becomes a lot more obvious when it transforms a file offset into a (kernel-half) virtual address.

   - Unify/simplify the 32-bit and 64-bit USER_DS definition to be based on TASK_SIZE_MAX.

  This cleanup and more careful/obvious user virtual address checking also uncovered a buglet in the x86-64 implementation of strnlen_user(): it would do an "access_ok()" check on the whole potential area, even if the string itself was much shorter, and thus return an error even for valid strings. Our sloppy checking had hidden this.

  So this fixes 'strnlen_user()' to do this properly, the same way we already handled user strings in 'strncpy_from_user()': namely by just checking the first byte, and then relying on fault handling for the rest. That always works, since we impose a guard page that cannot be mapped at the end of the user space address space (and even if we didn't, we'd have the address space hole).

  Acked-by: Ingo Molnar <mingo@elte.hu> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Nick Piggin <npiggin@suse.de> Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* perf_counter, x86: Improve interactions with fast-gup
  Ingo Molnar, 2009-06-19 (1 file, +6/-1)

  Improve a few details in perfcounter call-chain recording that makes use of fast-GUP:

   - Use ACCESS_ONCE() to observe the pte value. ptes are fundamentally racy and can be changed on another CPU, so we have to be careful about how we access them. The PAE branch is already careful with read-barriers - but the non-PAE and 64-bit side needs an ACCESS_ONCE() to make sure the pte value is observed only once.

   - Make the checks a bit stricter, so that we can feed it any kind of cra^H^H^H user-space input ;-)

  Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: uaccess: use errret as error value in __put_user_size()
  Hiroshi Shimamoto, 2009-02-05 (1 file, +6/-5)

  Impact: cleanup

  In the __put_user_size() macro, errret is used for the error value. But if size is 8, errret isn't passed to __put_user_asm_u64(). This behavior is inconsistent.

  Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86: uaccess: fix compilation error on CONFIG_M386
  Hiroshi Shimamoto, 2009-01-29 (1 file, +20/-2)

  In case of !CONFIG_X86_WP_WORKS_OK, __put_user_size_ex() is not defined. Add macros for the !CONFIG_X86_WP_WORKS_OK case.

  Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>