author    Rob Gardner <rob.gardner@oracle.com>    2015-12-23 05:16:06 +0100
committer David S. Miller <davem@davemloft.net>   2015-12-24 18:10:29 +0100
commit    3f74306ac84cf7f2da2fdc87014fc455f5e67bad (patch)
tree      84ca393a2bf06e0d8fdc085689c663697dc01e30 /arch
parent    sparc64: Don't set %pil in rtrap_nmi too early (diff)
sparc64: Ensure perf can access user stacks
When an interrupt (such as a perf counter interrupt) is delivered while executing in user space, the trap entry code puts ASI_AIUS in %asi so that copy_from_user() and copy_to_user() will access the correct memory.

But if a perf counter interrupt is delivered while the cpu is already executing in kernel space, then the trap entry code will put ASI_P in %asi, and this will prevent copy_from_user() from reading any useful stack data in either of the perf_callchain_user_X functions, and thus no user callgraph data will be collected for this sample period. An additional problem is that a fault is guaranteed to occur, and though it will be silently covered up, it wastes time and could perturb state.

In perf_callchain_user(), we ensure that %asi contains ASI_AIUS because we know for a fact that the subsequent calls to copy_from_user() are intended to read the user's stack.

[ Use get_fs()/set_fs() -DaveM ]

Signed-off-by: Rob Gardner <rob.gardner@oracle.com>
Signed-off-by: Dave Aldridge <david.j.aldridge@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
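For readers unfamiliar with the mechanism, here is a minimal sketch (not part of the patch) of the get_fs()/set_fs() address-limit override that the fix wraps around its user-stack reads. The helper name read_user_word() and its standalone use of copy_from_user() are illustrative assumptions; the actual change simply brackets the existing callchain code, as the diff below shows.

#include <linux/errno.h>
#include <linux/uaccess.h>

/*
 * Illustrative only: save the current address limit, force USER_DS so
 * that copy_from_user() treats the source address as user memory
 * (ASI_AIUS on sparc64) even if the interrupt arrived in kernel mode,
 * then restore the saved limit.
 */
static int read_user_word(const unsigned long __user *uaddr,
			  unsigned long *val)
{
	mm_segment_t old_fs = get_fs();		/* remember current limit   */
	int err;

	set_fs(USER_DS);			/* force user-space access  */
	err = copy_from_user(val, uaddr, sizeof(*val)) ? -EFAULT : 0;
	set_fs(old_fs);				/* always restore old limit */

	return err;
}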
Diffstat (limited to 'arch')
-rw-r--r--  arch/sparc/kernel/perf_event.c  7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c
index 3091267c5cc3..b1144d6acffe 100644
--- a/arch/sparc/kernel/perf_event.c
+++ b/arch/sparc/kernel/perf_event.c
@@ -1828,11 +1828,16 @@ static void perf_callchain_user_32(struct perf_callchain_entry *entry,
 void
 perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
 {
+	mm_segment_t old_fs;
+
 	perf_callchain_store(entry, regs->tpc);
 
 	if (!current->mm)
 		return;
 
+	old_fs = get_fs();
+	set_fs(USER_DS);
+
 	flushw_user();
 
 	pagefault_disable();
@@ -1843,4 +1848,6 @@ perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
 		perf_callchain_user_64(entry, regs);
 
 	pagefault_enable();
+
+	set_fs(old_fs);
 }
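For context, a rough sketch of the kind of user-stack walk that depends on this override. It is an approximation, not the actual perf_callchain_user_64() source: the struct user_frame layout and the walker name are assumptions (the real code reads struct sparc_stackf frames and applies STACK_BIAS), but it shows why the copy must run with a user address limit.

#include <linux/perf_event.h>
#include <linux/uaccess.h>

/* Hypothetical, simplified user stack frame. */
struct user_frame {
	unsigned long fp;		/* caller's frame pointer  */
	unsigned long callers_pc;	/* caller's return address */
};

static void walk_user_stack(struct perf_callchain_entry *entry,
			    unsigned long ufp)
{
	do {
		struct user_frame frame;

		/*
		 * With ASI_P/KERNEL_DS in effect this copy faults for
		 * every sample taken in kernel mode; with USER_DS it
		 * reads the user's stack as intended.
		 */
		if (__copy_from_user_inatomic(&frame,
					      (void __user *)ufp,
					      sizeof(frame)))
			break;

		perf_callchain_store(entry, frame.callers_pc);
		ufp = frame.fp;
	} while (entry->nr < PERF_MAX_STACK_DEPTH);
}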