Commit message | Author | Age | Files | Lines
* ipc: Directly call the security hook in ipc_ops.associateEric W. Biederman2018-03-273-27/+3
| | | | | | | | | After the last round of cleanups the shm, sem, and msg associate operations just became trivial wrappers around the appropriate security method. Simplify things further by just calling the security method directly. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
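A minimal sketch of what the resulting wiring can look like for the shm case, assuming (as the hook-signature changes further down in this log arrange) that security_shm_associate() now takes a struct kern_ipc_perm *, matching ipc_ops.associate. The initializer below is abbreviated and illustrative, not the literal diff:

    /* ipc/shm.c (sketch): the trivial wrapper goes away and the ops
     * table points straight at the LSM hook. */
    static const struct ipc_ops shm_ops = {
            .getnew      = newseg,
            .associate   = security_shm_associate, /* was: shm_security() wrapper */
            .more_checks = shm_more_checks,
    };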
* ipc/sem: Fix semctl(..., GETPID, ...) between pid namespacesEric W. Biederman2018-03-271-10/+12
| Today the last process to update a semaphore is remembered and reported in the pid namespace of that process. If processes in any other pid namespace query that process id with GETPID, the result will be unusable nonsense as it does not make any sense in their own pid namespace. Due to ipc_update_pid I don't think you will be able to get System V ipc semaphores into a troublesome cache line ping-pong. Using struct pids from separate processes is not a problem because they do not share a cache line. Using struct pid from different threads of the same process is unlikely to be a problem as the reference count update can be avoided. Further, Linux futexes are a much better tool for the job of mutual exclusion between processes than System V semaphores, so I expect programs that are performance limited by their interprocess mutual exclusion primitive will be using futexes. So while it is possible that enhancing the storage of the last process of a System V semaphore from an integer to a struct pid will cause a performance regression because of the effect of frequently updating the pid reference count, I don't expect that to happen in practice. This change updates semctl(..., GETPID, ...) to return the process id of the last process to update a semaphore in the pid namespace of the calling process. Fixes: b488893a390e ("pid namespaces: changes to show virtual ids to user") Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
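A small illustrative sketch of the namespace-aware reporting described above: the semaphore now records a struct pid, and the number handed back by GETPID is derived in the caller's pid namespace. pid_vnr() is the real kernel helper; the field and function names below (sempid, sem_last_pid) are placeholders, not the exact code:

    #include <linux/pid.h>

    /* Return the last updater's pid as seen from the calling task's pid
     * namespace, or 0 if that pid is not visible there. */
    static pid_t sem_last_pid(struct pid *sempid)
    {
            /* pid_vnr(p) == pid_nr_ns(p, task_active_pid_ns(current)) */
            return pid_vnr(sempid);
    }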
* ipc/msg: Fix msgctl(..., IPC_STAT, ...) between pid namespacesEric W. Biederman2018-03-271-10/+13
| Today msg_lspid and msg_lrpid are remembered in the pid namespace of the creator and the processes that last sent or received a sysvipc message. If you have processes in multiple pid namespaces, that is just wrong: the process ids reported will not make the least bit of sense. This fix is slightly more susceptible to a performance problem than the related fix for System V shared memory. By definition the pids are updated by msgsnd and msgrcv, the fast path of System V message queues. The only concern over the previous implementation is the incrementing and decrementing of the pid reference count, as that is the only difference, and multiple updates of the task_tgid by threads in the same process have been shown in af_unix sockets to create a cache line ping-pong between cpus of the same processor. In this case I don't expect cache lines holding pid reference counts to ping pong between cpus. As senders and receivers update different pids there is a natural separation there. Further, if multiple threads of the same process either send or receive messages the pid will be updated to the same value and ipc_update_pid will avoid the reference count update. This means that in the common case I expect msg_lspid and msg_lrpid to remain constant, and reference counts not to be updated when messages are sent. In rare cases it may be possible to trigger the issue which was observed for af_unix sockets, but it will require multiple processes with multiple threads to be either sending or receiving messages. It just does not feel likely that anyone would do that in practice. This change updates msgctl(..., IPC_STAT, ...) to return msg_lspid and msg_lrpid in the pid namespace of the process calling stat. This change also updates cat /proc/sysvipc/msg to print msg_lspid and msg_lrpid in the pid namespace of the process that opened the proc file. Fixes: b488893a390e ("pid namespaces: changes to show virtual ids to user") Reviewed-by: Nagarathnam Muthusamy <nagarathnam.muthusamy@oracle.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* ipc/shm: Fix shmctl(..., IPC_STAT, ...) between pid namespaces.Eric W. Biederman2018-03-271-10/+15
| Today shm_cpid and shm_lpid are remembered in the pid namespace of the creator and the processes that last touched a sysvipc shared memory segment. If you have processes in multiple pid namespaces that is just wrong, and I don't know how this has been overlooked for so long. As only creation, shared memory attach, and shared memory detach update the pids I do not expect there to be a repeat of the issues when struct pid was attached to each af_unix skb, which in some notable cases cut the performance in half. The problem was threads of the same process updating the same struct pid from different cpus, causing the cache line to be highly contended and bounce between cpus. As creation, attach, and detach are expected to be rare operations for sysvipc shared memory segments I do not expect that kind of cache line ping pong to cause problems. In addition, because the pid is at a fixed location in the structure instead of being dynamic on a skb, the reference count of the pid does not need to be updated on each operation if the pid is the same. This ability to simply skip the pid reference count changes if the pid is unchanging further reduces the likelihood of a cache line holding a pid reference count ping-ponging between cpus. Fixes: b488893a390e ("pid namespaces: changes to show virtual ids to user") Reviewed-by: Nagarathnam Muthusamy <nagarathnam.muthusamy@oracle.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* ipc/util: Helpers for making the sysvipc operations pid namespace awareEric W. Biederman2018-03-242-0/+20
| Capture the pid namespace when /proc/sysvipc/msg, /proc/sysvipc/shm, and /proc/sysvipc/sem are opened, and make it available through the new helper ipc_seq_pid_ns. This makes it possible to report the pids in these files in the pid namespace of the opener of the files. Implement ipc_update_pid, a simple inline helper that will only update a struct pid pointer if the new value does not equal the old value. This removes the need for wordy code sequences like: old = object->pid; object->pid = new; put_pid(old); and old = object->pid; if (old != new) { object->pid = new; put_pid(old); } allowing the following to be written instead: ipc_update_pid(&object->pid, new); which is easier to read and ensures that the pid reference count is not touched when the old and the new values are the same. Not touching the reference count in this case is important to help avoid issues like af_unix experienced, where multiple threads of the same process managed to bounce the struct pid between cpu cache lines by updating the pid's reference count. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
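A sketch of the two helpers this patch describes. ipc_update_pid() follows the commit text above, though whether it also takes a reference on the new value is an assumption here; ipc_report_pid() is a made-up wrapper showing how the namespace captured by ipc_seq_pid_ns() would be used when printing /proc/sysvipc/* entries:

    #include <linux/pid.h>
    #include <linux/pid_namespace.h>
    #include <linux/seq_file.h>

    /* Only touch the reference counts when the value actually changes, so
     * the common "same pid again" case stays read-only. */
    static inline void ipc_update_pid(struct pid **pos, struct pid *pid)
    {
            struct pid *old = *pos;

            if (pid != old) {
                    *pos = get_pid(pid);    /* assumed: reference the new pid */
                    put_pid(old);
            }
    }

    /* Print a stored pid in the namespace captured when the proc file
     * was opened (hypothetical wrapper for illustration). */
    static pid_t ipc_report_pid(struct seq_file *s, struct pid *pid)
    {
            return pid_nr_ns(pid, ipc_seq_pid_ns(s));
    }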
* ipc: Move IPCMNI from include/ipc.h into ipc/util.hEric W. Biederman2018-03-242-2/+1
| | | | | | | | The definition IPCMNI is only used in ipc/util.h and ipc/util.c. So there is no reason to keep it in a header file that the whole kernel can see. Move it into util.h to simplify future maintenance. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* msg: Move struct msg_queue into ipc/msg.cEric W. Biederman2018-03-242-18/+17
| | | | | | | | All of the users are now in ipc/msg.c so make the definition local to that file to make code maintenance easier. AKA to prevent rebuilding the entire kernel when struct msg_queue changes. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* shm: Move struct shmid_kernel into ipc/shm.cEric W. Biederman2018-03-242-22/+22
| | | | | | | | All of the users are now in ipc/shm.c so make the definition local to that file to make code maintenance easier. AKA to prevent rebuilding the entire kernel when struct shmid_kernel changes. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* sem: Move struct sem and struct sem_array into ipc/sem.cEric W. Biederman2018-03-232-39/+35
| | | | | | | | | All of the users are now in ipc/sem.c so make the definitions local to that file to make code maintenance easier. AKA to prevent rebuilding the entire kernel when one of these files is changed. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* msg/security: Pass kern_ipc_perm not msg_queue into the msg_queue security hooksEric W. Biederman2018-03-236-65/+62
| All of the implementations of security hooks that take msg_queue only access q_perm, the struct kern_ipc_perm member. This means the dependencies of the msg_queue security hooks can be simplified by passing the kern_ipc_perm member of msg_queue. Making this change will allow struct msg_queue to become private to ipc/msg.c. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* shm/security: Pass kern_ipc_perm not shmid_kernel into the shm security hooksEric W. Biederman2018-03-236-56/+52
| All of the implementations of security hooks that take shmid_kernel only access shm_perm, the struct kern_ipc_perm member. This means the dependencies of the shm security hooks can be simplified by passing the kern_ipc_perm member of shmid_kernel. Making this change will allow struct shmid_kernel to become private to ipc/shm.c. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* sem/security: Pass kern_ipc_perm not sem_array into the sem security hooksEric W. Biederman2018-03-236-57/+53
| All of the implementations of security hooks that take sem_array only access sem_perm, the struct kern_ipc_perm member. This means the dependencies of the sem security hooks can be simplified by passing the kern_ipc_perm member of sem_array. Making this change will allow struct sem and struct sem_array to become private to ipc/sem.c. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
* pidns: simpler allocation of pid_* cachesAlexey Dobriyan2018-03-211-43/+24
| | | | | | | | | | | | Those pid_* caches are created on demand when a process advances to the new level of pid namespace. Which means pointers are stable, write only and thus can be packed into an array instead of spreading them over and using lists(!) to find them. Both first and subsequent clone/unshare(CLONE_NEWPID) become faster. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
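A condensed sketch of the scheme described above: one kmem_cache pointer per pid-namespace nesting level, held in a fixed array so lookup is an index rather than a list walk. Locking, error handling, and constants are simplified; treat this as an illustration of the idea, not the upstream code:

    #include <linux/pid.h>
    #include <linux/pid_namespace.h>
    #include <linux/slab.h>

    static struct kmem_cache *pid_cache[MAX_PID_NS_LEVEL];
    static DEFINE_MUTEX(pid_caches_mutex);

    static struct kmem_cache *create_pid_cachep(unsigned int level)
    {
            /* level 0 is served by the statically created init_pid_ns cache */
            struct kmem_cache **pkc = &pid_cache[level - 1];
            struct kmem_cache *kc = READ_ONCE(*pkc);
            char name[4 + 10 + 1];

            if (kc)                 /* pointers are stable and write-once */
                    return kc;

            /* struct pid ends in a flexible array of one upid per level */
            snprintf(name, sizeof(name), "pid_%u", level + 1);
            mutex_lock(&pid_caches_mutex);
            if (!*pkc)
                    *pkc = kmem_cache_create(name,
                                             sizeof(struct pid) + level * sizeof(struct upid),
                                             0, SLAB_HWCACHE_ALIGN, NULL);
            kc = *pkc;
            mutex_unlock(&pid_caches_mutex);
            return kc;              /* may be NULL if allocation failed */
    }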
* Linux 4.16-rc2v4.16-rc2Linus Torvalds2018-02-191-1/+1
|
* Merge branch 'x86-urgent-for-linus' of ↵Linus Torvalds2018-02-182-3/+3
|\ | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 Kconfig fixes from Thomas Gleixner: "Three patchlets to correct HIGHMEM64G and CMPXCHG64 dependencies in Kconfig when CPU selections are explicitely set to M586 or M686" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/Kconfig: Explicitly enumerate i686-class CPUs in Kconfig x86/Kconfig: Exclude i586-class CPUs lacking PAE support from the HIGHMEM64G Kconfig group x86/Kconfig: Add missing i586-class CPUs to the X86_CMPXCHG64 Kconfig group
| * x86/Kconfig: Explicitly enumerate i686-class CPUs in KconfigMatthew Whitehead2018-02-161-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The X86_P6_NOP config class leaves out many i686-class CPUs. Instead, explicitly enumerate all these CPUs. Using a configuration with M686 currently sets X86_MINIMUM_CPU_FAMILY=5 instead of the correct value of 6. Booting on an i586 it will fail to generate the "This kernel requires an i686 CPU, but only detected an i586 CPU" message and intentional halt as expected. It will instead just silently hang when it hits i686-specific instructions. Signed-off-by: Matthew Whitehead <tedheadster@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1518713696-11360-3-git-send-email-tedheadster@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * x86/Kconfig: Exclude i586-class CPUs lacking PAE support from the HIGHMEM64G ↵Matthew Whitehead2018-02-161-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Kconfig group i586-class machines also lack support for Physical Address Extension (PAE), so add them to the exclusion list. Signed-off-by: Matthew Whitehead <tedheadster@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1518713696-11360-2-git-send-email-tedheadster@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * x86/Kconfig: Add missing i586-class CPUs to the X86_CMPXCHG64 Kconfig groupMatthew Whitehead2018-02-161-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Several i586-class CPUs supporting this instruction are missing from the X86_CMPXCHG64 config group. Using a configuration with either M586TSC or M586MMX currently sets X86_MINIMUM_CPU_FAMILY=4 instead of the correct value of 5. Booting on an i486 it will fail to generate the "This kernel requires an i586 CPU, but only detected an i486 CPU" message and intentional halt as expected. It will instead just silently hang when it hits i586-specific instructions. The M586 CPU is not in this list because at least the Cyrix 5x86 lacks this instruction, and perhaps others. Signed-off-by: Matthew Whitehead <tedheadster@gmail.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1518713696-11360-1-git-send-email-tedheadster@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
* | Merge branch 'perf-urgent-for-linus' of ↵Linus Torvalds2018-02-1834-628/+1195
|\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull perf updates from Thomas Gleixner: "Perf tool updates and kprobe fixes: - perf_mmap overwrite mode fixes/overhaul, prep work to get 'perf top' using it, making it bearable to use it in large core count systems such as Knights Landing/Mill Intel systems (Kan Liang) - s/390 now uses syscall.tbl, just like x86-64 to generate the syscall table id -> string tables used by 'perf trace' (Hendrik Brueckner) - Use strtoull() instead of home grown function (Andy Shevchenko) - Synchronize kernel ABI headers, v4.16-rc1 (Ingo Molnar) - Document missing 'perf data --force' option (Sangwon Hong) - Add perf vendor JSON metrics for ARM Cortex-A53 Processor (William Cohen) - Improve error handling and error propagation of ftrace based kprobes so failures when installing kprobes are not silently ignored and create disfunctional tracepoints" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (27 commits) kprobes: Propagate error from disarm_kprobe_ftrace() kprobes: Propagate error from arm_kprobe_ftrace() Revert "tools include s390: Grab a copy of arch/s390/include/uapi/asm/unistd.h" perf s390: Rework system call table creation by using syscall.tbl perf s390: Grab a copy of arch/s390/kernel/syscall/syscall.tbl tools/headers: Synchronize kernel ABI headers, v4.16-rc1 perf test: Fix test trace+probe_libc_inet_pton.sh for s390x perf data: Document missing --force option perf tools: Substitute yet another strtoull() perf top: Check the latency of perf_top__mmap_read() perf top: Switch default mode to overwrite mode perf top: Remove lost events checking perf hists browser: Add parameter to disable lost event warning perf top: Add overwrite fall back perf evsel: Expose the perf_missing_features struct perf top: Check per-event overwrite term perf mmap: Discard legacy interface for mmap read perf test: Update mmap read functions for backward-ring-buffer test perf mmap: Introduce perf_mmap__read_event() perf mmap: Introduce perf_mmap__read_done() ...
| * | kprobes: Propagate error from disarm_kprobe_ftrace()Jessica Yu2018-02-161-25/+53
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Improve error handling when disarming ftrace-based kprobes. Like with arm_kprobe_ftrace(), propagate any errors from disarm_kprobe_ftrace() so that we do not disable/unregister kprobes that are still armed. In other words, unregister_kprobe() and disable_kprobe() should not report success if the kprobe could not be disarmed. disarm_all_kprobes() keeps its current behavior and attempts to disarm all kprobes. It returns the last encountered error and gives a warning if not all probes could be disarmed. This patch is based on Petr Mladek's original patchset (patches 2 and 3) back in 2015, which improved kprobes error handling, found here: https://lkml.org/lkml/2015/2/26/452 However, further work on this had been paused since then and the patches were not upstreamed. Based-on-patches-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Jessica Yu <jeyu@kernel.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: David S . Miller <davem@davemloft.net> Cc: Jiri Kosina <jikos@kernel.org> Cc: Joe Lawrence <joe.lawrence@redhat.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Miroslav Benes <mbenes@suse.cz> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: live-patching@vger.kernel.org Link: http://lkml.kernel.org/r/20180109235124.30886-3-jeyu@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | kprobes: Propagate error from arm_kprobe_ftrace()Jessica Yu2018-02-161-25/+75
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Improve error handling when arming ftrace-based kprobes. Specifically, if we fail to arm a ftrace-based kprobe, register_kprobe()/enable_kprobe() should report an error instead of success. Previously, this has lead to confusing situations where register_kprobe() would return 0 indicating success, but the kprobe would not be functional if ftrace registration during the kprobe arming process had failed. We should therefore take any errors returned by ftrace into account and propagate this error so that we do not register/enable kprobes that cannot be armed. This can happen if, for example, register_ftrace_function() finds an IPMODIFY conflict (since kprobe_ftrace_ops has this flag set) and returns an error. Such a conflict is possible since livepatches also set the IPMODIFY flag for their ftrace_ops. arm_all_kprobes() keeps its current behavior and attempts to arm all kprobes. It returns the last encountered error and gives a warning if not all probes could be armed. This patch is based on Petr Mladek's original patchset (patches 2 and 3) back in 2015, which improved kprobes error handling, found here: https://lkml.org/lkml/2015/2/26/452 However, further work on this had been paused since then and the patches were not upstreamed. Based-on-patches-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Jessica Yu <jeyu@kernel.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: David S . Miller <davem@davemloft.net> Cc: Jiri Kosina <jikos@kernel.org> Cc: Joe Lawrence <joe.lawrence@redhat.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Miroslav Benes <mbenes@suse.cz> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: live-patching@vger.kernel.org Link: http://lkml.kernel.org/r/20180109235124.30886-2-jeyu@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
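A hedged sketch of the error-propagation pattern the two kprobes patches above introduce, using the real ftrace entry points ftrace_set_filter_ip() and register_ftrace_function(); the bookkeeping around kprobe_ftrace_enabled is simplified and the exact upstream function bodies may differ. The disarm path follows the same shape in reverse:

    #include <linux/ftrace.h>
    #include <linux/kprobes.h>

    static struct ftrace_ops kprobe_ftrace_ops;     /* .func set to the kprobe handler */
    static int kprobe_ftrace_enabled;

    static int arm_kprobe_ftrace(struct kprobe *p)
    {
            int ret;

            ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
                                       (unsigned long)p->addr, 0, 0);
            if (ret)
                    return ret;     /* propagate instead of silently ignoring */

            if (kprobe_ftrace_enabled == 0) {
                    ret = register_ftrace_function(&kprobe_ftrace_ops);
                    if (ret) {
                            /* e.g. an IPMODIFY conflict with a livepatch:
                             * undo the filter and report the failure. */
                            ftrace_set_filter_ip(&kprobe_ftrace_ops,
                                                 (unsigned long)p->addr, 1, 0);
                            return ret;
                    }
            }

            kprobe_ftrace_enabled++;
            return 0;
    }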
| * | Merge tag 'perf-core-for-mingo-4.17-20180215' of ↵Ingo Molnar2018-02-1633-578/+1067
| |\ \ | | |/ | |/| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent Pull perf/core fixes from Arnaldo Carvalho de Melo: - perf_mmap overwrite mode fixes/overhaul, prep work to get 'perf top' using it, making it bearable to use it in large core count systems such as Knights Landing/Mill Intel systems (Kan Liang) - s/390 now uses syscall.tbl, just like x86-64 to generate the syscall table id -> string tables used by 'perf trace' (Hendrik Brueckner) - Use strtoull() instead of home grown function (Andy Shevchenko) - Synchronize kernel ABI headers, v4.16-rc1 (Ingo Molnar) - Document missing 'perf data --force' option (Sangwon Hong) - Add perf vendor JSON metrics for ARM Cortex-A53 Processor (William Cohen) Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
| | * Revert "tools include s390: Grab a copy of arch/s390/include/uapi/asm/unistd.h"Hendrik Brueckner2018-02-152-413/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit f120c7b187e6c418238710b48723ce141f467543 which is no longer required with the introduction of a syscall.tbl on s390. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Thomas Richter <tmricht@linux.vnet.ibm.com> Cc: linux-s390@vger.kernel.org LPU-Reference: 1518090470-2899-2-git-send-email-brueckner@linux.vnet.ibm.com Link: https://lkml.kernel.org/n/tip-q1lg0nvhha1tk39ri9aqalcb@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf s390: Rework system call table creation by using syscall.tblHendrik Brueckner2018-02-152-14/+14
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Recently, s390 uses a syscall.tbl input file to generate its system call table and unistd uapi header files. Hence, update mksyscalltbl to use it as input to create the system table for perf. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Thomas Richter <tmricht@linux.vnet.ibm.com> Cc: linux-s390@vger.kernel.org LPU-Reference: 1518090470-2899-4-git-send-email-brueckner@linux.vnet.ibm.com Link: https://lkml.kernel.org/n/tip-bdyhllhsq1zgxv2qx4m377y6@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf s390: Grab a copy of arch/s390/kernel/syscall/syscall.tblHendrik Brueckner2018-02-151-0/+390
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Grab a copy of the s390 system call table file introduced with commit 857f46bfb07f53dc112d69bdfb137cc5ec3da7c5 "s390/syscalls: add system call table". Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Thomas Richter <tmricht@linux.vnet.ibm.com> Cc: linux-s390@vger.kernel.org LPU-Reference: 1518090470-2899-3-git-send-email-brueckner@linux.vnet.ibm.com Link: https://lkml.kernel.org/n/tip-hpw7vdjp7g92ivgpddrp5ydq@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * tools/headers: Synchronize kernel ABI headers, v4.16-rc1Ingo Molnar2018-02-155-0/+171
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Sync the following tooling headers with the latest kernel version: tools/arch/powerpc/include/uapi/asm/kvm.h tools/arch/x86/include/asm/cpufeatures.h tools/include/uapi/drm/i915_drm.h tools/include/uapi/linux/if_link.h tools/include/uapi/linux/kvm.h All the changes are new ABI additions which don't impact their use in existing tooling. Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
| | * perf test: Fix test trace+probe_libc_inet_pton.sh for s390xThomas Richter2018-02-151-5/+18
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | On Intel test case trace+probe_libc_inet_pton.sh succeeds and the output is: [root@f27 perf]# ./perf trace --no-syscalls -e probe_libc:inet_pton/max-stack=3/ ping -6 -c 1 ::1 PING ::1(::1) 56 data bytes 64 bytes from ::1: icmp_seq=1 ttl=64 time=0.037 ms --- ::1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 0.000 probe_libc:inet_pton:(7fa40ac618a0)) __GI___inet_pton (/usr/lib64/libc-2.26.so) getaddrinfo (/usr/lib64/libc-2.26.so) main (/usr/bin/ping) The kernel stack unwinder is used, it is specified implicitly as call-graph=fp (frame pointer). On s390x only dwarf is available for stack unwinding. It is also done in user space. This requires different parameter setup and result checking for s390x and Intel. This patch adds separate perf trace setup and result checking for Intel and s390x. On s390x specify this command line to get a call-graph and handle the different call graph result checking: [root@s35lp76 perf]# ./perf trace --no-syscalls -e probe_libc:inet_pton/call-graph=dwarf/ ping -6 -c 1 ::1 PING ::1(::1) 56 data bytes 64 bytes from ::1: icmp_seq=1 ttl=64 time=0.041 ms --- ::1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 0.000 probe_libc:inet_pton:(3ffb9942060)) __GI___inet_pton (/usr/lib64/libc-2.26.so) gaih_inet (inlined) __GI_getaddrinfo (inlined) main (/usr/bin/ping) __libc_start_main (/usr/lib64/libc-2.26.so) _start (/usr/bin/ping) [root@s35lp76 perf]# Before: [root@s8360047 perf]# ./perf test -vv 58 58: probe libc's inet_pton & backtrace it with ping : --- start --- test child forked, pid 26349 PING ::1(::1) 56 data bytes 64 bytes from ::1: icmp_seq=1 ttl=64 time=0.079 ms --- ::1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 0.000 probe_libc:inet_pton:(3ff925c2060)) test child finished with -1 ---- end ---- probe libc's inet_pton & backtrace it with ping: FAILED! [root@s8360047 perf]# After: [root@s35lp76 perf]# ./perf test -vv 57 57: probe libc's inet_pton & backtrace it with ping : --- start --- test child forked, pid 38708 PING ::1(::1) 56 data bytes 64 bytes from ::1: icmp_seq=1 ttl=64 time=0.038 ms --- ::1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 0.000 probe_libc:inet_pton:(3ff87342060)) __GI___inet_pton (/usr/lib64/libc-2.26.so) gaih_inet (inlined) __GI_getaddrinfo (inlined) main (/usr/bin/ping) __libc_start_main (/usr/lib64/libc-2.26.so) _start (/usr/bin/ping) test child finished with 0 ---- end ---- probe libc's inet_pton & backtrace it with ping: Ok [root@s35lp76 perf]# On Intel the test case runs unchanged and succeeds. 
| | | Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Link: http://lkml.kernel.org/r/20180117083831.101001-1-tmricht@linux.vnet.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf data: Document missing --force optionSangwon Hong2018-02-151-0/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add the --force option to the man page. Signed-off-by: Sangwon Hong <qpakzk@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Taeung Song <treeze.taeung@gmail.com> Link: http://lkml.kernel.org/r/1517831315-31490-1-git-send-email-qpakzk@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf tools: Substitute yet another strtoull()Andy Shevchenko2018-02-151-22/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | Instead of home grown function let's use what library provides us. Signed-off-by: Andriy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: http://lkml.kernel.org/r/20180129130359.1490-1-andriy.shevchenko@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf top: Check the latency of perf_top__mmap_read()Kan Liang2018-02-151-0/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The latency of perf_top__mmap_read() should be lower than refresh time. If not, give some hints to reduce the latency. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-18-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf top: Switch default mode to overwrite modeKan Liang2018-02-151-9/+25
| | | perf_top__mmap_read() has a severe performance issue on the Knights Landing/Mill platform when monitoring heavily loaded systems. It costs several minutes to finish, which is unacceptable. Currently, 'perf top' uses non-overwrite mode. For non-overwrite mode, it tries to read everything in the ringbuffer and doesn't pause it. Once there are lots of samples delivered persistently, the processing time could be very long. Also, the latest samples could be lost when the ringbuffer is full. For overwrite mode, it takes a snapshot of the system by pausing the ringbuffer, which could significantly reduce the processing time. Also, overwrite mode always keeps the latest samples. Considering the real time requirement for 'perf top', overwrite mode is more suitable for it. Actually, 'perf top' used to run in overwrite mode; it was changed to non-overwrite mode by commit 93fc64f14472 ("perf top: Switch to non overwrite mode"). It's better to change it back to overwrite mode by default. For kernels which don't support overwrite mode, it will fall back to non-overwrite mode. There would be some records lost in overwrite mode because of pausing the ringbuffer. It has little impact on the accuracy of the snapshot and can be tolerated. For overwrite mode, unconditionally wait 100 ms before each snapshot. This also reduces the overhead caused by pausing the ringbuffer, especially on lightly loaded systems. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-17-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf top: Remove lost events checkingKan Liang2018-02-151-3/+5
| | | There would be some records lost in overwrite mode because of pausing the ringbuffer. It has little impact on the accuracy of the snapshot and can be tolerated by 'perf top'. Remove the lost events checking. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-16-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf hists browser: Add parameter to disable lost event warningKan Liang2018-02-156-20/+36
| | | For overwrite mode, the ringbuffer will be paused, so lost events are expected. A way is needed to tell the browser not to print the warning. It will be used later for 'perf top' to disable the lost event warning in overwrite mode. There is no behavior change for now. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-15-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf top: Add overwrite fall backKan Liang2018-02-151-0/+36
| | | Switch to non-overwrite mode if the kernel does not support the overwrite ringbuffer. This only takes effect when overwrite mode is supported; there is no change to current behavior. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-14-git-send-email-kan.liang@intel.com [ Use perf_missing_features.write_backward instead of the non merged is_write_backward_fail() ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf evsel: Expose the perf_missing_features structArnaldo Carvalho de Melo2018-02-152-11/+15
| | | Tools may need to adjust to missing features, as 'perf top' will in the next csets to cope with a missing 'write_backward' feature. Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-jelngl9q1ooaizvkcput9tic@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf top: Check per-event overwrite termKan Liang2018-02-151-0/+73
| | | The per-event overwrite term is not forbidden in 'perf top', which can bring problems, because 'perf top' only supports non-overwrite mode now. Add new rules and checks regarding the overwrite term for 'perf top': - All events either have the same per-event term or don't have a per-event mode setting. Otherwise, it will error out. - The per-event overwrite term should be consistent with opts->overwrite. If not, update opts->overwrite according to the per-event term. This makes it possible to support either non-overwrite or overwrite mode. Overwrite mode is forbidden for now; the restriction will be removed when overwrite mode is supported later. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-12-git-send-email-kan.liang@intel.com [ Renamed perf_top_overwrite_check to perf_top__overwrite_check, to follow existing convention ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf mmap: Discard legacy interface for mmap readKan Liang2018-02-152-49/+4
| | | Discard perf_mmap__read_backward() and perf_mmap__read_catchup(); no tools use them. There are tools that still use perf_mmap__read_forward(). Keep it, but add comments pointing to the new interface for future use. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-11-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf test: Update mmap read functions for backward-ring-buffer testKan Liang2018-02-151-2/+5
| | | Use the new perf_mmap__read_* interfaces for the overwrite ringbuffer test. Committer notes: Testing: [root@seventh ~]# perf test -v backward 48: Read backward ring buffer : --- start --- test child forked, pid 8309 Using CPUID GenuineIntel-6-9E mmap size 1052672B mmap size 8192B Finished reading overwrite ring buffer: rewind test child finished with 0 ---- end ---- Read backward ring buffer: Ok [root@seventh ~]# Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-10-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf mmap: Introduce perf_mmap__read_event()Kan Liang2018-02-152-0/+43
| | | Except for 'perf record', the other perf tools read events one by one from the ring buffer using perf_mmap__read_forward(). But it only supports non-overwrite mode. Introduce perf_mmap__read_event() to support both non-overwrite and overwrite modes. Usage: perf_mmap__read_init() while(event = perf_mmap__read_event()) { //process the event perf_mmap__consume() } perf_mmap__read_done() It cannot use perf_mmap__read_backward(), because that always reads the stale buffer which has already been processed. Furthermore, the forward and backward concepts have been removed; perf_mmap__read_backward() will be replaced and discarded later. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-9-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
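The usage skeleton quoted in the message above, written out as C. The argument lists are simplified here; the prototypes of this era also carry an overwrite flag and start/end positions, which are elided for clarity, and the function name drain_one_mmap is made up:

    static void drain_one_mmap(struct perf_mmap *md)
    {
            union perf_event *event;

            if (perf_mmap__read_init(md) < 0)       /* e.g. -EAGAIN: nothing new */
                    return;

            while ((event = perf_mmap__read_event(md)) != NULL) {
                    /* ... process the event ... */
                    perf_mmap__consume(md);
            }

            perf_mmap__read_done(md);               /* fix up map->prev (overwrite mode) */
    }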
| | * perf mmap: Introduce perf_mmap__read_done()Kan Liang2018-02-152-0/+12
| | | The direction of overwrite mode is backward. The last perf_mmap__read() sets the tail to map->prev. map->prev needs to be corrected to 'head', which is the end of the next read. It will be used later. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-8-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf mmap: Discard 'prev' in perf_mmap__read()Kan Liang2018-02-151-18/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The 'start' and 'prev' variables are duplicates in perf_mmap__read(). Use 'map->prev' to replace 'start' in perf_mmap__read_*(). Suggested-by: Wang Nan <wangnan0@huawei.com> Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-7-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf mmap: Add new return value logic for perf_mmap__read_init()Kan Liang2018-02-151-6/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Improve the readability by using meaningful enum (-EAGAIN, -EINVAL and 0) to replace the three returning states (0, -1 and 1). Suggested-by: Wang Nan <wangnan0@huawei.com> Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-6-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf mmap: Introduce perf_mmap__read_init()Kan Liang2018-02-152-10/+29
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The new function perf_mmap__read_init() is factored out from perf_mmap__push(). It is to calculate the 'start' and 'end' of the available data in ringbuffer. No functional change. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-5-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf mmap: Cleanup perf_mmap__push()Kan Liang2018-02-151-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The first assignment for 'start' and 'end' is redundant. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/1516310792-208685-4-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf mmap: Recalculate size for overwrite modeKan Liang2018-02-151-0/+2
| | | In perf_mmap__push(), the 'size' needs to be recalculated, otherwise invalid data might be pushed to the record in overwrite mode. The issue was introduced by commit 7fb4b407a124 ("perf mmap: Don't discard prev in backward mode"). When the ring buffer is full in overwrite mode, backward_rb_find_range() will be called to recalculate the 'start' and 'end'. The 'size' needs to be recalculated accordingly. Unconditionally recalculate the 'size', not just for a full ring buffer in overwrite mode, because: - there is no harm in recalculating the 'size' for other cases; - the code calculating 'start' and 'end' will be factored out later, and the new function does not need to return 'size'. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Fixes: 7fb4b407a124 ("perf mmap: Don't discard prev in backward mode") Link: http://lkml.kernel.org/r/1516310792-208685-3-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf evlist: Remove stale mmap read for backwardKan Liang2018-02-152-21/+0
| | | perf_evlist__mmap_read_catchup() and perf_evlist__mmap_read_backward() are only for overwrite mode, but they read the evlist->mmap buffer, which is for non-overwrite mode. This has not caused any serious problem yet, because nothing uses them. Remove the unused interfaces. Signed-off-by: Kan Liang <kan.liang@intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Wang Nan <wangnan0@huawei.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1516310792-208685-2-git-send-email-kan.liang@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf vendor events aarch64: Add JSON metrics for ARM Cortex-A53 ProcessorWilliam Cohen2018-02-157-0/+183
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add JSON metrics for ARM Cortex-A53 Processor. Unlike the Intel processors there isn't a script that automatically generated these files. The patch was manually generated from the documentation and the previous oprofile ARM Cortex ac53 event file patch I made. The relevant documentation is in the "12.9 Events" section of the ARM Cortex A53 MPCore Processor Revision: r0p4 Technical Reference Manual. The ARM Cortex A53 manual is available at: http://infocenter.arm.com/help/topic/com.arm.doc.ddi0500g/DDI0500G_cortex_a53_trm.pdf Use that to look for additional information about the events. Signed-off-by: William Cohen <wcohen@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20180131032813.9564-1-wcohen@redhat.com [ Added references provided by William Cohen ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* | | Merge branch 'irq-urgent-for-linus' of ↵Linus Torvalds2018-02-1811-51/+36
|\ \ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull irq updates from Thomas Gleixner: "A small set of updates mostly for irq chip drivers: - MIPS GIC fix for spurious, masked interrupts - fix for a subtle IPI bug in GICv3 - do not probe GICv3 ITSs that are marked as disabled - multi-MSI support for GICv2m - various small cleanups" * 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: irqdomain: Re-use DEFINE_SHOW_ATTRIBUTE() macro irqchip/bcm: Remove hashed address printing irqchip/gic-v2m: Add PCI Multi-MSI support irqchip/gic-v3: Ignore disabled ITS nodes irqchip/gic-v3: Use wmb() instead of smb_wmb() in gic_raise_softirq() irqchip/gic-v3: Change pr_debug message to pr_devel irqchip/mips-gic: Avoid spuriously handling masked interrupts
| * \ \ Merge tag 'irqchip-4.16-2' of ↵Thomas Gleixner2018-02-1611344-304076/+490270
| |\ \ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/urgent Pull irqchip updates for 4.16-rc2 from Marc Zyngier - A MIPS GIC fix for spurious, masked interrupts - A fix for a subtle IPI bug in GICv3 - Do not probe GICv3 ITSs that are marked as disabled - Multi-MSI support for GICv2m - Various cleanups
| | * | | irqdomain: Re-use DEFINE_SHOW_ATTRIBUTE() macroAndy Shevchenko2018-02-161-14/+4
| | | | | ...instead of open coding file operations followed by custom ->open() callbacks for each attribute. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
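For reference, a minimal made-up example of the pattern being adopted: given one *_show() function, DEFINE_SHOW_ATTRIBUTE() from <linux/seq_file.h> generates the single-open ->open() callback and the example_fops structure, replacing the boilerplate this patch removes. The attribute name and debugfs path are illustrative:

    #include <linux/debugfs.h>
    #include <linux/seq_file.h>

    static int example_show(struct seq_file *s, void *unused)
    {
            seq_puts(s, "hello from the example attribute\n");
            return 0;
    }
    DEFINE_SHOW_ATTRIBUTE(example);  /* emits example_open() and example_fops */

    /* e.g. in an init path:
     *      debugfs_create_file("example", 0444, parent, NULL, &example_fops);
     */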