author     Andi Kleen <ak@linux.intel.com>    2015-10-23 00:07:20 +0200
committer  Ingo Molnar <mingo@kernel.org>     2015-11-23 09:58:24 +0100
commit     10013ebb5d7856c243541870f4e62fed68253e88 (patch)
tree       cdbb1182419f3dc05b3414a2abcca9f59d7c1268 /arch/x86/include/asm/uaccess.h
parent     Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/... (diff)
x86: Add an inlined __copy_from_user_nmi() variant
Add an inlined __ variant of copy_from_user_nmi(). The inlined variant allows
the caller to:
- batch the access_ok() check for multiple accesses
- avoid a pagefault_disable()/pagefault_enable() pair on every access when
  the caller already runs with page faults disabled due to its context
- get all the optimizations in copy_*_user() for small, constant-sized
  transfers
It is just a #define to __copy_from_user_inatomic().
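For illustration, here is a minimal sketch of the intended usage pattern; it
is not taken from this patch. An NMI-context user-stack reader checks
access_ok() once for the whole range and then does small constant-sized
copies in a loop. The function dump_user_stack() is a hypothetical name; it
assumes <linux/uaccess.h> and the access_ok(VERIFY_READ, ...) form of this
kernel version.

/*
 * Hypothetical example (not part of this patch): an NMI-context
 * reader of the user stack.  Page faults are implicitly disabled in
 * NMI context, and access_ok() is batched over the whole range, so
 * each copy below avoids the per-call overhead of copy_from_user_nmi().
 */
static int dump_user_stack(const unsigned long __user *sp,
			   unsigned long *buf, int nr_words)
{
	int i;

	if (!access_ok(VERIFY_READ, sp, nr_words * sizeof(long)))
		return 0;

	for (i = 0; i < nr_words; i++) {
		/* small constant size: the copy is fully inlined */
		if (__copy_from_user_nmi(&buf[i], sp + i, sizeof(long)))
			break;
	}
	return i;	/* number of words successfully copied */
}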
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1445551641-13379-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/x86/include/asm/uaccess.h')
 arch/x86/include/asm/uaccess.h | 9 +++++++++
 1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 09b1b0ab94b7..660458af425d 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -745,5 +745,14 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 #undef __copy_from_user_overflow
 #undef __copy_to_user_overflow
 
+/*
+ * We rely on the nested NMI work to allow atomic faults from the NMI path; the
+ * nested NMI paths are careful to preserve CR2.
+ *
+ * Caller must use pagefault_enable/disable, or run in interrupt context,
+ * and also do a uaccess_ok() check
+ */
+#define __copy_from_user_nmi __copy_from_user_inatomic
+
 #endif /* _ASM_X86_UACCESS_H */
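The comment in the patch also names a second caller mode: code outside
NMI/interrupt context must disable page faults itself. A hedged sketch of
that mode follows; read_user_word() is an illustrative name, not from this
commit.

/*
 * Hypothetical example: a caller outside NMI/interrupt context must
 * disable page faults around the copy, as the comment above requires.
 */
static int read_user_word(const unsigned long __user *addr,
			  unsigned long *val)
{
	int ret;

	if (!access_ok(VERIFY_READ, addr, sizeof(*val)))
		return -EFAULT;

	pagefault_disable();
	ret = __copy_from_user_nmi(val, addr, sizeof(*val));
	pagefault_enable();

	return ret ? -EFAULT : 0;
}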