author | Linus Torvalds <torvalds@linux-foundation.org> | 2023-09-10 05:06:17 +0200 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2023-09-10 05:06:17 +0200 |
commit | 535a265d7f0dd50d8c3a4f8b4f3a452d56bd160f (patch) | |
tree | a42e088342dac365cfa13f30d987118f2a5d259e /tools | |
parent | Merge tag '6.6-rc-smb3-client-fixes-part2' of git://git.samba.org/sfrench/cif... (diff) | |
parent | perf parse-events: Fix driver config term (diff) | |
Merge tag 'perf-tools-for-v6.6-1-2023-09-05' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools
Pull perf tools updates from Arnaldo Carvalho de Melo:
"perf tools maintainership:
- Add git information for the perf-tools and perf-tools-next trees
and branches to the MAINTAINERS file. That is where development now
takes place; Namhyung Kim and I have write access, with more people
to come as we emulate other maintainer groups.
perf record:
- Record kernel data maps when 'perf record --data' is used, so that
global variables can be resolved and used in tools that do data
profiling.
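A sketch of how this might be exercised; whether the sampled data
addresses end up being useful depends on the events and the CPU, and
the report side shown is just one possible consumer:
# perf record --data -a -- sleep 5
# perf report --mem-mode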
perf trace:
- Remove the old, experimental support for BPF events, in which a .c
file was passed as an event ("perf trace -e hello.c") to be
compiled and loaded on the fly.
The only known user of that feature, which shipped with the kernel
as an example of such events, augmented the raw_syscalls
tracepoints and was converted to a libbpf skeleton, reusing all the
user space components and the BPF code connected to the syscalls.
In the end only the way the BPF part is glued to the user space
type beautifiers changed, now being done by libbpf skeletons.
The next step is to use BTF to do pretty printing of all syscall
types, as discussed with Alan Maguire and others.
Now, on a perf built with BUILD_BPF_SKEL=1 we get most if not all
paths/filenames/strings, some of the networking data structures,
perf_event_attr, etc. For example, system-wide tracing of nanosleep
calls and perf_event_open syscalls while 'perf stat' runs 'sleep'
for 5 seconds:
# perf trace -a -e *nanosleep,perf* perf stat -e cycles,instructions sleep 5
0.000 ( 9.034 ms): perf/327641 perf_event_open(attr_uptr: { type: 0 (PERF_TYPE_HARDWARE), size: 136, config: 0 (PERF_COUNT_HW_CPU_CYCLES), sample_type: IDENTIFIER, read_format: TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING, disabled: 1, inherit: 1, enable_on_exec: 1, exclude_guest: 1 }, pid: 327642 (perf), cpu: -1, group_fd: -1, flags: FD_CLOEXEC) = 3
9.039 ( 0.006 ms): perf/327641 perf_event_open(attr_uptr: { type: 0 (PERF_TYPE_HARDWARE), size: 136, config: 0x1 (PERF_COUNT_HW_INSTRUCTIONS), sample_type: IDENTIFIER, read_format: TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING, disabled: 1, inherit: 1, enable_on_exec: 1, exclude_guest: 1 }, pid: 327642 (perf-exec), cpu: -1, group_fd: -1, flags: FD_CLOEXEC) = 4
? ( ): gpm/991 ... [continued]: clock_nanosleep()) = 0
10.133 ( ): sleep/327642 clock_nanosleep(rqtp: { .tv_sec: 5, .tv_nsec: 0 }, rmtp: 0x7ffd36f83ed0) ...
? ( ): pool-gsd-smart/3051 ... [continued]: clock_nanosleep()) = 0
30.276 ( ): gpm/991 clock_nanosleep(rqtp: { .tv_sec: 2, .tv_nsec: 0 }, rmtp: 0x7ffcc6f73710) ...
223.215 (1000.430 ms): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) = 0
30.276 (2000.394 ms): gpm/991 ... [continued]: clock_nanosleep()) = 0
1230.814 ( ): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) ...
1230.814 (1000.404 ms): pool-gsd-smart/3051 ... [continued]: clock_nanosleep()) = 0
2030.886 ( ): gpm/991 clock_nanosleep(rqtp: { .tv_sec: 2, .tv_nsec: 0 }, rmtp: 0x7ffcc6f73710) ...
2237.709 (1000.153 ms): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) = 0
? ( ): crond/1172 ... [continued]: clock_nanosleep()) = 0
3242.699 ( ): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) ...
2030.886 (2000.385 ms): gpm/991 ... [continued]: clock_nanosleep()) = 0
3728.078 ( ): crond/1172 clock_nanosleep(rqtp: { .tv_sec: 60, .tv_nsec: 0 }, rmtp: 0x7ffe0971dcf0) ...
3242.699 (1000.158 ms): pool-gsd-smart/3051 ... [continued]: clock_nanosleep()) = 0
4031.409 ( ): gpm/991 clock_nanosleep(rqtp: { .tv_sec: 2, .tv_nsec: 0 }, rmtp: 0x7ffcc6f73710) ...
10.133 (5000.375 ms): sleep/327642 ... [continued]: clock_nanosleep()) = 0
Performance counter stats for 'sleep 5':
2,617,347 cycles
1,855,997 instructions # 0.71 insn per cycle
5.002282128 seconds time elapsed
0.000855000 seconds user
0.000852000 seconds sys
perf annotate:
- Building with binutils' libopcodes is now opt-in (BUILD_NONDISTRO=1)
for licensing reasons, and we missed a build test in the
tools/perf/tests makefile.
Since we now default to NDEBUG=1, we ended up segfaulting when
building with BUILD_NONDISTRO=1 because a needed initialization
routine was being "error checked" via an assert.
Fix it by explicitly checking the result and aborting instead if it
fails.
We should better propagate the error back, but at least 'perf
annotate' on samples collected for a BPF program is working again
when perf is built with BUILD_NONDISTRO=1.
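The underlying issue is the classic NDEBUG pitfall, where the whole
call vanishes; a minimal C sketch of the pattern, with a hypothetical
init_routine(), not the actual perf code:
/* Before: the call disappears entirely when built with -DNDEBUG */
assert(init_routine() == 0);
/* After: check the result explicitly so the initialization always runs */
if (init_routine() != 0)
	abort();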
perf report/top:
- Add back the TUI hierarchy mode header, which is seen when using
'perf report/top --hierarchy'.
- Fix the number of entries for the 'e' key in the TUI, which was
preventing navigation of lines when expanding an entry.
perf report/script:
- Support cross-platform register handling, allowing the registers
sampled into a perf.data file collected on one architecture to be
displayed correctly when analysis tools such as 'perf report' and
'perf script' are used on a different architecture.
- Fix handling of event attributes in pipe mode, i.e. when no
perf.data file is used, as in:
perf record -o - | perf report -i -
- Handle data generated via pipe mode with one version of perf and
then read, also via pipe mode, with a different version of perf,
where the event attr record may have changed: use the record size
field to properly support this version mismatch.
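The key to tolerating the mismatch is to stop assuming a fixed attr
size. The libperf change in this series (see the diff further below)
replaces the flexible id[] array with a helper that locates the ids
right after the attr as it was actually written; 'event' here is a
hypothetical struct perf_record_header_attr pointer read from the
stream:
/* from tools/lib/perf/include/perf/event.h in this series */
#define perf_record_header_attr_id(evt) \
	((void *)&(evt)->attr.attr + (evt)->attr.attr.size)
/* works regardless of which perf version wrote the record */
__u64 *ids = perf_record_header_attr_id(event);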
perf probe:
- Accessing global variables from uprobes isn't supported; make the
error message state that instead of claiming that some minimal
kernel version is needed to have that feature. This seems to be
just a tool limitation; the kernel probably has all that is needed.
perf tests:
- Fix a reference count related leak in the dlfilter v0 API where the
result of a thread__find_symbol_fb() call was not matched with an
addr_location__exit() to drop the reference counts of the resolved
components (machine, thread, map, symbol, etc). Add a dlfilter test
to make sure that doesn't regress.
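The pairing that the fix restores looks roughly like this inside the
tool (a sketch of the idiom, not the exact dlfilter code):
struct addr_location al;
addr_location__init(&al);
if (thread__find_symbol_fb(thread, sample->cpumode, sample->addr, &al)) {
	/* ... use al.map, al.sym, al.thread ... */
}
/* drops the refcounts (machine, thread, map, symbol) taken above */
addr_location__exit(&al);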
- Lots of fixes for the 'perf test' entries written in shell script,
addressing problems found with the shellcheck utility.
- Fixes for 'perf test' shell scripts testing features enabled when
perf is built with BUILD_BPF_SKEL=1, such as 'perf stat' BPF
counters.
- Add a perf record sample filtering test, covering things like the
following example, which gets implemented as a BPF filter attached
to the event:
# perf record -e task-clock -c 10000 --filter 'ip < 0xffffffff00000000'
- Improve the way the task_analyzer test checks if libtraceevent is
linked, using 'perf version --build-options' instead of the more
expensive 'perf record -e "sched:sched_switch"'.
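The cheaper check amounts to grepping the build options, along these
lines (a sketch; the exact --build-options output format is not
spelled out here):
perf version --build-options | grep -qi 'libtraceevent.*on'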
- Add support for RISC-V in the mmap-basic test (this also went in
via the RISC-V tree, same contents).
libperf:
- Implement RISC-V mmap support (this also went in via the RISC-V
tree, same contents).
perf script:
- New tool that converts perf.data files to the Firefox Profiler
format so that one can use the visualizer at
https://profiler.firefox.com/. Done by Anup Sharma as part of this
year's Google Summer of Code.
One can generate the output and upload it to the web interface, but
Anup also automated everything:
perf script gecko -F 99 -a sleep 60
- Support syscall name parsing on arm64.
- Print "cgroup" field on the same line as "comm".
perf bench:
- Add a new 'uprobe' benchmark to measure the overhead of uprobes
with/without BPF programs attached to them.
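Invocation follows the other 'perf bench' collections, e.g. (sketch):
# perf bench uprobe all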
- Breakpoints are not available on power9, so skip that test.
perf stat:
- Add #num_cpus_online literal to be used in 'perf stat' metrics, and
add this extra 'perf test' check that exemplifies its purpose:
TEST_ASSERT_VAL("#num_cpus_online",
expr__parse(&num_cpus_online, ctx, "#num_cpus_online") == 0);
TEST_ASSERT_VAL("#num_cpus", expr__parse(&num_cpus, ctx, "#num_cpus") == 0);
TEST_ASSERT_VAL("#num_cpus >= #num_cpus_online", num_cpus >= num_cpus_online);
Miscellaneous:
- Improve tool startup time by lazily reading PMU, JSON, sysfs data.
- Improve error reporting in the parsing of events, passing YYLTYPE
to error routines, so that the output can show where the parsing
error was found.
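In bison terms this is the usual location-tracking idiom; a generic
sketch, not perf's actual grammar:
/* in the .y prologue: enable locations and a pure parser so the
   error routine receives the location of the offending token */
%locations
%define api.pure full
/* in the .y epilogue: */
void yyerror(YYLTYPE *loc, const char *msg)
{
	fprintf(stderr, "%s at columns %d-%d\n",
		msg, loc->first_column, loc->last_column);
}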
- Add 'perf test' entries to check the parsing of events
improvements.
- Fix various leaks detected by -fsanitize=address, mostly things
that would be freed at tool exit, including:
- Free evsel->filter in the destructor.
- Allow tools to register a thread->priv destructor and use it in
'perf trace'.
- Free evsel->priv in 'perf trace'.
- Free the string returned by synthesize_perf_probe_point() when the
caller fails to do all it needs.
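These were found by building perf with the address sanitizer and then
running the tools; a sketch of such a build (matching link flags may
also be needed depending on the setup):
$ make -C tools/perf DEBUG=1 EXTRA_CFLAGS='-fsanitize=address'
$ ./tools/perf/perf record -- sleep 1   # leaks are reported at exit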
- Adjust various compiler options to not treat some warnings as
errors when building with broken headers found in things like
python, flex and bison, as we otherwise build with -Werror. Some
are for gcc, some for clang, some for specific versions of those,
some for specific versions of flex or bison, or for specific
combinations of these components, bah.
- Allow customization of the clang options for the BPF target; this
helps building on Gentoo, where there are other oddities in which
the BPF target gets passed some compiler options intended for the
native build, so building with WERROR=0 helps while these oddities
are fixed.
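With this, the BPF skeleton compilation honours WERROR just like the
rest of the build, so on such a system one can do e.g.:
$ make -C tools/perf BUILD_BPF_SKEL=1 WERROR=0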
- Don't pass ERR_PTR() values to perf_session__delete() in 'perf top'
and 'perf lock', fixing some segfaults when handling some odd
failures.
- Add LTO build option.
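As wired up in Makefile.config in this series, setting it just
appends -flto to the C and C++ flags, e.g.:
$ make -C tools/perf LTO=1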
- Fix the format of unordered lists in the perf docs
(tools/perf/Documentation).
- Overhaul the bison files, using constructs such as YYNOMEM.
- Remove unused tokens from the bison .y files.
- Add more comments to various structs.
- A few LoongArch enablement patches.
Vendor events (JSON):
- Add JSON metrics for Yitian 710 DDR (aarch64). Things like:
EventName, BriefDescription
visible_window_limit_reached_rd, "At least one entry in read queue reaches the visible window limit.",
visible_window_limit_reached_wr, "At least one entry in write queue reaches the visible window limit.",
op_is_dqsosc_mpc, "A DQS Oscillator MPC command to DRAM.",
op_is_dqsosc_mrr, "A DQS Oscillator MRR command to DRAM.",
op_is_tcr_mrr, "A Temperature Compensated Refresh (TCR) MRR command to DRAM.",
- Add AmpereOne metrics (aarch64).
- Update N2 and V2 metrics (aarch64) and events using the Arm
telemetry repo.
- Update scale units and descriptions of common topdown metrics on
aarch64. Things like:
- "MetricExpr": "stall_slot_frontend / (#slots * cpu_cycles)",
- "BriefDescription": "Frontend bound L1 topdown metric",
+ "MetricExpr": "100 * (stall_slot_frontend / (#slots * cpu_cycles))",
+ "BriefDescription": "This metric is the percentage of total slots that were stalled due to resource constraints in the frontend of the processor.",
- Update events for Intel: meteorlake to 1.04, sapphirerapids to
1.15, and Icelake+ metric constraints.
- Update files for the power10 platform"
* tag 'perf-tools-for-v6.6-1-2023-09-05' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools: (217 commits)
perf parse-events: Fix driver config term
perf parse-events: Fixes relating to no_value terms
perf parse-events: Fix propagation of term's no_value when cloning
perf parse-events: Name the two term enums
perf list: Don't print Unit for "default_core"
perf vendor events intel: Fix modifier in tma_info_system_mem_parallel_reads for skylake
perf dlfilter: Avoid leak in v0 API test use of resolve_address()
perf metric: Add #num_cpus_online literal
perf pmu: Remove str from perf_pmu_alias
perf parse-events: Make common term list to strbuf helper
perf parse-events: Minor help message improvements
perf pmu: Avoid uninitialized use of alias->str
perf jevents: Use "default_core" for events with no Unit
perf test stat_bpf_counters_cgrp: Enhance perf stat cgroup BPF counter test
perf test shell stat_bpf_counters: Fix test on Intel
perf test shell record_bpf_filter: Skip 6.2 kernel
libperf: Get rid of attr.id field
perf tools: Convert to perf_record_header_attr_id()
libperf: Add perf_record_header_attr_id()
perf tools: Handle old data in PERF_RECORD_ATTR
...
Diffstat (limited to 'tools')
282 files changed, 8004 insertions, 9119 deletions
diff --git a/tools/build/Makefile.build b/tools/build/Makefile.build index 89430338a3d9..fac42486a8cf 100644 --- a/tools/build/Makefile.build +++ b/tools/build/Makefile.build @@ -117,6 +117,16 @@ $(OUTPUT)%.s: %.c FORCE $(call rule_mkdir) $(call if_changed_dep,cc_s_c) +# bison and flex files are generated in the OUTPUT directory +# so it needs a separate rule to depend on them properly +$(OUTPUT)%-bison.o: $(OUTPUT)%-bison.c FORCE + $(call rule_mkdir) + $(call if_changed_dep,$(host)cc_o_c) + +$(OUTPUT)%-flex.o: $(OUTPUT)%-flex.c FORCE + $(call rule_mkdir) + $(call if_changed_dep,$(host)cc_o_c) + # Gather build data: # obj-y - list of build objects # subdir-y - list of directories to nest diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile index f0c5de018a95..dad79ede4e0a 100644 --- a/tools/build/feature/Makefile +++ b/tools/build/feature/Makefile @@ -340,7 +340,7 @@ $(OUTPUT)test-jvmti-cmlr.bin: $(BUILD) $(OUTPUT)test-llvm.bin: - $(BUILDXX) -std=gnu++14 \ + $(BUILDXX) -std=gnu++17 \ -I$(shell $(LLVM_CONFIG) --includedir) \ -L$(shell $(LLVM_CONFIG) --libdir) \ $(shell $(LLVM_CONFIG) --libs Core BPF) \ @@ -348,17 +348,15 @@ $(OUTPUT)test-llvm.bin: > $(@:.bin=.make.output) 2>&1 $(OUTPUT)test-llvm-version.bin: - $(BUILDXX) -std=gnu++14 \ + $(BUILDXX) -std=gnu++17 \ -I$(shell $(LLVM_CONFIG) --includedir) \ > $(@:.bin=.make.output) 2>&1 $(OUTPUT)test-clang.bin: - $(BUILDXX) -std=gnu++14 \ + $(BUILDXX) -std=gnu++17 \ -I$(shell $(LLVM_CONFIG) --includedir) \ -L$(shell $(LLVM_CONFIG) --libdir) \ - -Wl,--start-group -lclangBasic -lclangDriver \ - -lclangFrontend -lclangEdit -lclangLex \ - -lclangAST -Wl,--end-group \ + -Wl,--start-group -lclang-cpp -Wl,--end-group \ $(shell $(LLVM_CONFIG) --libs Core option) \ $(shell $(LLVM_CONFIG) --system-libs) \ > $(@:.bin=.make.output) 2>&1 diff --git a/tools/build/feature/test-clang.cpp b/tools/build/feature/test-clang.cpp deleted file mode 100644 index 7d87075cd1c5..000000000000 --- a/tools/build/feature/test-clang.cpp +++ /dev/null @@ -1,28 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include "clang/Basic/Version.h" -#if CLANG_VERSION_MAJOR < 8 -#include "clang/Basic/VirtualFileSystem.h" -#endif -#include "clang/Driver/Driver.h" -#include "clang/Frontend/TextDiagnosticPrinter.h" -#include "llvm/ADT/IntrusiveRefCntPtr.h" -#include "llvm/Support/ManagedStatic.h" -#if CLANG_VERSION_MAJOR >= 8 -#include "llvm/Support/VirtualFileSystem.h" -#endif -#include "llvm/Support/raw_ostream.h" - -using namespace clang; -using namespace clang::driver; - -int main() -{ - IntrusiveRefCntPtr<DiagnosticIDs> DiagID(new DiagnosticIDs()); - IntrusiveRefCntPtr<DiagnosticOptions> DiagOpts = new DiagnosticOptions(); - - DiagnosticsEngine Diags(DiagID, &*DiagOpts); - Driver TheDriver("test", "bpf-pc-linux", Diags); - - llvm::llvm_shutdown(); - return 0; -} diff --git a/tools/build/feature/test-cxx.cpp b/tools/build/feature/test-cxx.cpp deleted file mode 100644 index 396aaedd2418..000000000000 --- a/tools/build/feature/test-cxx.cpp +++ /dev/null @@ -1,16 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include <iostream> -#include <memory> - -static void print_str(std::string s) -{ - std::cout << s << std::endl; -} - -int main() -{ - std::string s("Hello World!"); - print_str(std::move(s)); - std::cout << "|" << s << "|" << std::endl; - return 0; -} diff --git a/tools/build/feature/test-llvm-version.cpp b/tools/build/feature/test-llvm-version.cpp deleted file mode 100644 index 8a091625446a..000000000000 --- a/tools/build/feature/test-llvm-version.cpp +++ /dev/null 
@@ -1,12 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include <cstdio> -#include "llvm/Config/llvm-config.h" - -#define NUM_VERSION (((LLVM_VERSION_MAJOR) << 16) + (LLVM_VERSION_MINOR << 8) + LLVM_VERSION_PATCH) -#define pass int main() {printf("%x\n", NUM_VERSION); return 0;} - -#if NUM_VERSION >= 0x030900 -pass -#else -# error This LLVM is not tested yet. -#endif diff --git a/tools/build/feature/test-llvm.cpp b/tools/build/feature/test-llvm.cpp deleted file mode 100644 index 88a3d1bdd9f6..000000000000 --- a/tools/build/feature/test-llvm.cpp +++ /dev/null @@ -1,14 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include "llvm/Support/ManagedStatic.h" -#include "llvm/Support/raw_ostream.h" -#define NUM_VERSION (((LLVM_VERSION_MAJOR) << 16) + (LLVM_VERSION_MINOR << 8) + LLVM_VERSION_PATCH) - -#if NUM_VERSION < 0x030900 -# error "LLVM version too low" -#endif -int main() -{ - llvm::errs() << "Hello World!\n"; - llvm::llvm_shutdown(); - return 0; -} diff --git a/tools/lib/perf/include/perf/event.h b/tools/lib/perf/include/perf/event.h index ba2dcf64f4e6..ae64090184d3 100644 --- a/tools/lib/perf/include/perf/event.h +++ b/tools/lib/perf/include/perf/event.h @@ -148,8 +148,18 @@ struct perf_record_switch { struct perf_record_header_attr { struct perf_event_header header; struct perf_event_attr attr; - __u64 id[]; -}; + /* + * Array of u64 id follows here but we cannot use a flexible array + * because size of attr in the data can be different then current + * version. Please use perf_record_header_attr_id() below. + * + * __u64 id[]; // do not use this + */ +}; + +/* Returns the pointer to id array based on the actual attr size. */ +#define perf_record_header_attr_id(evt) \ + ((void *)&(evt)->attr.attr + (evt)->attr.attr.size) enum { PERF_CPU_MAP__CPUS = 0, diff --git a/tools/perf/Documentation/perf-bench.txt b/tools/perf/Documentation/perf-bench.txt index f04f0eaded98..ca5789625cd2 100644 --- a/tools/perf/Documentation/perf-bench.txt +++ b/tools/perf/Documentation/perf-bench.txt @@ -67,6 +67,9 @@ SUBSYSTEM 'internals':: Benchmark internal perf functionality. +'uprobe':: + Benchmark overhead of uprobe + BPF. + 'all':: All benchmark subsystems. diff --git a/tools/perf/Documentation/perf-config.txt b/tools/perf/Documentation/perf-config.txt index 1478068ad5dd..0b4e79dbd3f6 100644 --- a/tools/perf/Documentation/perf-config.txt +++ b/tools/perf/Documentation/perf-config.txt @@ -125,9 +125,6 @@ Given a $HOME/.perfconfig like this: group = true skip-empty = true - [llvm] - dump-obj = true - clang-opt = -g You can hide source code of annotate feature setting the config to false with @@ -657,36 +654,6 @@ ftrace.*:: -F option is not specified. Possible values are 'function' and 'function_graph'. -llvm.*:: - llvm.clang-path:: - Path to clang. If omit, search it from $PATH. - - llvm.clang-bpf-cmd-template:: - Cmdline template. Below lines show its default value. Environment - variable is used to pass options. - "$CLANG_EXEC -D__KERNEL__ -D__NR_CPUS__=$NR_CPUS "\ - "-DLINUX_VERSION_CODE=$LINUX_VERSION_CODE " \ - "$CLANG_OPTIONS $PERF_BPF_INC_OPTIONS $KERNEL_INC_OPTIONS " \ - "-Wno-unused-value -Wno-pointer-sign " \ - "-working-directory $WORKING_DIR " \ - "-c \"$CLANG_SOURCE\" --target=bpf $CLANG_EMIT_LLVM -O2 -o - $LLVM_OPTIONS_PIPE" - - llvm.clang-opt:: - Options passed to clang. - - llvm.kbuild-dir:: - kbuild directory. If not set, use /lib/modules/`uname -r`/build. - If set to "" deliberately, skip kernel header auto-detector. 
- - llvm.kbuild-opts:: - Options passed to 'make' when detecting kernel header options. - - llvm.dump-obj:: - Enable perf dump BPF object files compiled by LLVM. - - llvm.opts:: - Options passed to llc. - samples.*:: samples.context:: diff --git a/tools/perf/Documentation/perf-dlfilter.txt b/tools/perf/Documentation/perf-dlfilter.txt index fb22e3b31dc5..8887cc20a809 100644 --- a/tools/perf/Documentation/perf-dlfilter.txt +++ b/tools/perf/Documentation/perf-dlfilter.txt @@ -64,6 +64,12 @@ internal filtering. If implemented, 'filter_description' should return a one-line description of the filter, and optionally a longer description. +Do not assume the 'sample' argument is valid (dereferenceable) +after 'filter_event' and 'filter_event_early' return. + +Do not assume data referenced by pointers in struct perf_dlfilter_sample +is valid (dereferenceable) after 'filter_event' and 'filter_event_early' return. + The perf_dlfilter_sample structure ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -150,7 +156,8 @@ struct perf_dlfilter_fns { const char *(*srcline)(void *ctx, __u32 *line_number); struct perf_event_attr *(*attr)(void *ctx); __s32 (*object_code)(void *ctx, __u64 ip, void *buf, __u32 len); - void *(*reserved[120])(void *); + void (*al_cleanup)(void *ctx, struct perf_dlfilter_al *al); + void *(*reserved[119])(void *); }; ---- @@ -161,7 +168,8 @@ struct perf_dlfilter_fns { 'args' returns arguments from --dlarg options. 'resolve_address' provides information about 'address'. al->size must be set -before calling. Returns 0 on success, -1 otherwise. +before calling. Returns 0 on success, -1 otherwise. Call al_cleanup() (if present, +see below) when 'al' data is no longer needed. 'insn' returns instruction bytes and length. @@ -171,6 +179,12 @@ before calling. Returns 0 on success, -1 otherwise. 'object_code' reads object code and returns the number of bytes read. +'al_cleanup' must be called (if present, so check perf_dlfilter_fns.al_cleanup != NULL) +after resolve_address() to free any associated resources. + +Do not assume pointers obtained via perf_dlfilter_fns are valid (dereferenceable) +after 'filter_event' and 'filter_event_early' return. + The perf_dlfilter_al structure ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -197,9 +211,13 @@ struct perf_dlfilter_al { /* Below members are only populated by resolve_ip() */ __u8 filtered; /* true if this sample event will be filtered out */ const char *comm; + void *priv; /* Private data. Do not change */ }; ---- +Do not assume data referenced by pointers in struct perf_dlfilter_al +is valid (dereferenceable) after 'filter_event' and 'filter_event_early' return. + perf_dlfilter_sample flags ~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/tools/perf/Documentation/perf-ftrace.txt b/tools/perf/Documentation/perf-ftrace.txt index df4595563801..d780b93fcf87 100644 --- a/tools/perf/Documentation/perf-ftrace.txt +++ b/tools/perf/Documentation/perf-ftrace.txt @@ -96,8 +96,9 @@ OPTIONS for 'perf ftrace trace' --func-opts:: List of options allowed to set: - call-graph - Display kernel stack trace for function tracer. - irq-info - Display irq context info for function tracer. + + - call-graph - Display kernel stack trace for function tracer. + - irq-info - Display irq context info for function tracer. -G:: --graph-funcs=:: @@ -118,11 +119,12 @@ OPTIONS for 'perf ftrace trace' --graph-opts:: List of options allowed to set: - nosleep-time - Measure on-CPU time only for function_graph tracer. - noirqs - Ignore functions that happen inside interrupt. 
- verbose - Show process names, PIDs, timestamps, etc. - thresh=<n> - Setup trace duration threshold in microseconds. - depth=<n> - Set max depth for function graph tracer to follow. + + - nosleep-time - Measure on-CPU time only for function_graph tracer. + - noirqs - Ignore functions that happen inside interrupt. + - verbose - Show process names, PIDs, timestamps, etc. + - thresh=<n> - Setup trace duration threshold in microseconds. + - depth=<n> - Set max depth for function graph tracer to follow. OPTIONS for 'perf ftrace latency' diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt index 680396c56bd1..d5217be012d7 100644 --- a/tools/perf/Documentation/perf-record.txt +++ b/tools/perf/Documentation/perf-record.txt @@ -99,20 +99,6 @@ OPTIONS If you want to profile write accesses in [0x1000~1008), just set 'mem:0x1000/8:w'. - - a BPF source file (ending in .c) or a precompiled object file (ending - in .o) selects one or more BPF events. - The BPF program can attach to various perf events based on the ELF section - names. - - When processing a '.c' file, perf searches an installed LLVM to compile it - into an object file first. Optional clang options can be passed via the - '--clang-opt' command line option, e.g.: - - perf record --clang-opt "-DLINUX_VERSION_CODE=0x50000" \ - -e tests/bpf-script-example.c - - Note: '--clang-opt' must be placed before '--event/-e'. - - a group of events surrounded by a pair of brace ("{event1,event2,...}"). Each event is separated by commas and the group should be quoted to prevent the shell interpretation. You also need to use --group on @@ -523,9 +509,10 @@ CLOCK_BOOTTIME, CLOCK_REALTIME and CLOCK_TAI. Select AUX area tracing Snapshot Mode. This option is valid only with an AUX area tracing event. Optionally, certain snapshot capturing parameters can be specified in a string that follows this option: - 'e': take one last snapshot on exit; guarantees that there is at least one + + - 'e': take one last snapshot on exit; guarantees that there is at least one snapshot in the output file; - <size>: if the PMU supports this, specify the desired snapshot size. + - <size>: if the PMU supports this, specify the desired snapshot size. In Snapshot Mode trace data is captured only when signal SIGUSR2 is received and on exit if the above 'e' option is given. @@ -547,14 +534,6 @@ PERF_RECORD_SWITCH_CPU_WIDE. In some cases (e.g. Intel PT, CoreSight or Arm SPE) switch events will be enabled automatically, which can be suppressed by by the option --no-switch-events. ---clang-path=PATH:: -Path to clang binary to use for compiling BPF scriptlets. -(enabled when BPF support is on) - ---clang-opt=OPTIONS:: -Options passed to clang when compiling BPF scriptlets. -(enabled when BPF support is on) - --vmlinux=PATH:: Specify vmlinux path which has debuginfo. (enabled when BPF prologue is on) @@ -572,8 +551,9 @@ providing implementation for Posix AIO API. --affinity=mode:: Set affinity mask of trace reading thread according to the policy defined by 'mode' value: - node - thread affinity mask is set to NUMA node cpu mask of the processed mmap buffer - cpu - thread affinity mask is set to cpu of the processed mmap buffer + + - node - thread affinity mask is set to NUMA node cpu mask of the processed mmap buffer + - cpu - thread affinity mask is set to cpu of the processed mmap buffer --mmap-flush=number:: @@ -625,16 +605,17 @@ Record timestamp boundary (time of first/last samples). 
--switch-output[=mode]:: Generate multiple perf.data files, timestamp prefixed, switching to a new one based on 'mode' value: - "signal" - when receiving a SIGUSR2 (default value) or - <size> - when reaching the size threshold, size is expected to - be a number with appended unit character - B/K/M/G - <time> - when reaching the time threshold, size is expected to - be a number with appended unit character - s/m/h/d - Note: the precision of the size threshold hugely depends - on your configuration - the number and size of your ring - buffers (-m). It is generally more precise for higher sizes - (like >5M), for lower values expect different sizes. + - "signal" - when receiving a SIGUSR2 (default value) or + - <size> - when reaching the size threshold, size is expected to + be a number with appended unit character - B/K/M/G + - <time> - when reaching the time threshold, size is expected to + be a number with appended unit character - s/m/h/d + + Note: the precision of the size threshold hugely depends + on your configuration - the number and size of your ring + buffers (-m). It is generally more precise for higher sizes + (like >5M), for lower values expect different sizes. A possible use case is to, given an external event, slice the perf.data file that gets then processed, possibly via a perf script, to decide if that @@ -680,11 +661,12 @@ choice in this option. For example, --synth=no would have MMAP events for kernel and modules. Available types are: - 'task' - synthesize FORK and COMM events for each task - 'mmap' - synthesize MMAP events for each process (implies 'task') - 'cgroup' - synthesize CGROUP events for each cgroup - 'all' - synthesize all events (default) - 'no' - do not synthesize any of the above events + + - 'task' - synthesize FORK and COMM events for each task + - 'mmap' - synthesize MMAP events for each process (implies 'task') + - 'cgroup' - synthesize CGROUP events for each cgroup + - 'all' - synthesize all events (default) + - 'no' - do not synthesize any of the above events --tail-synthesize:: Instead of collecting non-sample events (for example, fork, comm, mmap) at @@ -736,18 +718,19 @@ ctl-fifo / ack-fifo are opened and used as ctl-fd / ack-fd as follows. Listen on ctl-fd descriptor for command to control measurement. Available commands: - 'enable' : enable events - 'disable' : disable events - 'enable name' : enable event 'name' - 'disable name' : disable event 'name' - 'snapshot' : AUX area tracing snapshot). - 'stop' : stop perf record - 'ping' : ping - - 'evlist [-v|-g|-F] : display all events - -F Show just the sample frequency used for each event. - -v Show all fields. - -g Show event group information. + + - 'enable' : enable events + - 'disable' : disable events + - 'enable name' : enable event 'name' + - 'disable name' : disable event 'name' + - 'snapshot' : AUX area tracing snapshot). + - 'stop' : stop perf record + - 'ping' : ping + - 'evlist [-v|-g|-F] : display all events + + -F Show just the sample frequency used for each event. + -v Show all fields. + -g Show event group information. Measurements can be started with events disabled using --delay=-1 option. Optionally send control command completion ('ack\n') to ack-fd descriptor to synchronize with the @@ -808,10 +791,10 @@ the second monitors CPUs 1 and 5-7 with the affinity mask 5-7. 
<spec> value can also be a string meaning predefined parallel threads layout: - cpu - create new data streaming thread for every monitored cpu - core - create new thread to monitor CPUs grouped by a core - package - create new thread to monitor CPUs grouped by a package - numa - create new threed to monitor CPUs grouped by a NUMA domain + - cpu - create new data streaming thread for every monitored cpu + - core - create new thread to monitor CPUs grouped by a core + - package - create new thread to monitor CPUs grouped by a package + - numa - create new threed to monitor CPUs grouped by a NUMA domain Predefined layouts can be used on systems with large number of CPUs in order not to spawn multiple per-cpu streaming threads but still avoid LOST diff --git a/tools/perf/Documentation/perf.data-file-format.txt b/tools/perf/Documentation/perf.data-file-format.txt index 635ba043fd7d..010a4edcd384 100644 --- a/tools/perf/Documentation/perf.data-file-format.txt +++ b/tools/perf/Documentation/perf.data-file-format.txt @@ -43,7 +43,7 @@ struct perf_file_section { Flags section: -For each of the optional features a perf_file_section it placed after the data +For each of the optional features a perf_file_section is placed after the data section if the feature bit is set in the perf_header flags bitset. The respective perf_file_section points to the data of the additional header and defines its size. diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config index c5db0de49868..d66b52407e19 100644 --- a/tools/perf/Makefile.config +++ b/tools/perf/Makefile.config @@ -246,6 +246,9 @@ ifeq ($(CC_NO_CLANG), 0) else CORE_CFLAGS += -O6 endif +else + CORE_CFLAGS += -g + CXXFLAGS += -g endif ifdef PARSER_DEBUG @@ -256,6 +259,11 @@ ifdef PARSER_DEBUG $(call detected_var,PARSER_DEBUG_FLEX) endif +ifdef LTO + CORE_CFLAGS += -flto + CXXFLAGS += -flto +endif + # Try different combinations to accommodate systems that only have # python[2][3]-config in weird combinations in the following order of # priority from lowest to highest: @@ -319,18 +327,14 @@ FEATURE_CHECK_LDFLAGS-disassembler-four-args = -lbfd -lopcodes -ldl FEATURE_CHECK_LDFLAGS-disassembler-init-styled = -lbfd -lopcodes -ldl CORE_CFLAGS += -fno-omit-frame-pointer -CORE_CFLAGS += -ggdb3 -CORE_CFLAGS += -funwind-tables CORE_CFLAGS += -Wall CORE_CFLAGS += -Wextra CORE_CFLAGS += -std=gnu11 -CXXFLAGS += -std=gnu++14 -fno-exceptions -fno-rtti +CXXFLAGS += -std=gnu++17 -fno-exceptions -fno-rtti CXXFLAGS += -Wall +CXXFLAGS += -Wextra CXXFLAGS += -fno-omit-frame-pointer -CXXFLAGS += -ggdb3 -CXXFLAGS += -funwind-tables -CXXFLAGS += -Wno-strict-aliasing HOSTCFLAGS += -Wall HOSTCFLAGS += -Wextra @@ -585,18 +589,6 @@ ifndef NO_LIBELF LIBBPF_STATIC := 1 endif endif - - ifndef NO_DWARF - ifdef PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET - CFLAGS += -DHAVE_BPF_PROLOGUE - $(call detected,CONFIG_BPF_PROLOGUE) - else - msg := $(warning BPF prologue is not supported by architecture $(SRCARCH), missing regs_query_register_offset()); - endif - else - msg := $(warning DWARF support is off, BPF prologue is disabled); - endif - endif # NO_LIBBPF endif # NO_LIBELF @@ -1123,37 +1115,6 @@ ifndef NO_JVMTI endif endif -USE_CXX = 0 -USE_CLANGLLVM = 0 -ifdef LIBCLANGLLVM - $(call feature_check,cxx) - ifneq ($(feature-cxx), 1) - msg := $(warning No g++ found, disable clang and llvm support. 
Please install g++) - else - $(call feature_check,llvm) - $(call feature_check,llvm-version) - ifneq ($(feature-llvm), 1) - msg := $(warning No suitable libLLVM found, disabling builtin clang and LLVM support. Please install llvm-dev(el) (>= 3.9.0)) - else - $(call feature_check,clang) - ifneq ($(feature-clang), 1) - msg := $(warning No suitable libclang found, disabling builtin clang and LLVM support. Please install libclang-dev(el) (>= 3.9.0)) - else - CFLAGS += -DHAVE_LIBCLANGLLVM_SUPPORT - CXXFLAGS += -DHAVE_LIBCLANGLLVM_SUPPORT -I$(shell $(LLVM_CONFIG) --includedir) - $(call detected,CONFIG_CXX) - $(call detected,CONFIG_CLANGLLVM) - USE_CXX = 1 - USE_LLVM = 1 - USE_CLANG = 1 - ifneq ($(feature-llvm-version),1) - msg := $(warning This version of LLVM is not tested. May cause build errors) - endif - endif - endif - endif -endif - ifndef NO_LIBPFM4 $(call feature_check,libpfm4) ifeq ($(feature-libpfm4), 1) diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf index 097316ef38e6..37af6df7b978 100644 --- a/tools/perf/Makefile.perf +++ b/tools/perf/Makefile.perf @@ -99,10 +99,6 @@ include ../scripts/utilities.mak # Define NO_JVMTI_CMLR (debug only) if you do not want to process CMLR # data for java source lines. # -# Define LIBCLANGLLVM if you DO want builtin clang and llvm support. -# When selected, pass LLVM_CONFIG=/path/to/llvm-config to `make' if -# llvm-config is not in $PATH. -# # Define CORESIGHT if you DO WANT support for CoreSight trace decoding. # # Define NO_AIO if you do not want support of Posix AIO based trace @@ -381,7 +377,7 @@ ifndef NO_JVMTI PROGRAMS += $(OUTPUT)$(LIBJVMTI) endif -DLFILTERS := dlfilter-test-api-v0.so dlfilter-show-cycles.so +DLFILTERS := dlfilter-test-api-v0.so dlfilter-test-api-v2.so dlfilter-show-cycles.so DLFILTERS := $(patsubst %,$(OUTPUT)dlfilters/%,$(DLFILTERS)) # what 'all' will build and 'install' will install, in perfexecdir @@ -425,22 +421,6 @@ endif EXTLIBS := $(call filter-out,$(EXCLUDE_EXTLIBS),$(EXTLIBS)) LIBS = -Wl,--whole-archive $(PERFLIBS) $(EXTRA_PERFLIBS) -Wl,--no-whole-archive -Wl,--start-group $(EXTLIBS) -Wl,--end-group -ifeq ($(USE_CLANG), 1) - CLANGLIBS_LIST = AST Basic CodeGen Driver Frontend Lex Tooling Edit Sema Analysis Parse Serialization - CLANGLIBS_NOEXT_LIST = $(foreach l,$(CLANGLIBS_LIST),$(shell $(LLVM_CONFIG) --libdir)/libclang$(l)) - LIBCLANG = $(foreach l,$(CLANGLIBS_NOEXT_LIST),$(wildcard $(l).a $(l).so)) - LIBS += -Wl,--start-group $(LIBCLANG) -Wl,--end-group -endif - -ifeq ($(USE_LLVM), 1) - LIBLLVM = $(shell $(LLVM_CONFIG) --libs all) $(shell $(LLVM_CONFIG) --system-libs) - LIBS += -L$(shell $(LLVM_CONFIG) --libdir) $(LIBLLVM) -endif - -ifeq ($(USE_CXX), 1) - LIBS += -lstdc++ -endif - export INSTALL SHELL_PATH ### Build rules @@ -978,11 +958,6 @@ ifndef NO_JVMTI endif $(call QUIET_INSTALL, libexec) \ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)' -ifndef NO_LIBBPF - $(call QUIET_INSTALL, bpf-examples) \ - $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf'; \ - $(INSTALL) examples/bpf/*.c -m 644 -t '$(DESTDIR_SQ)$(perf_examples_instdir_SQ)/bpf' -endif $(call QUIET_INSTALL, perf-archive) \ $(INSTALL) $(OUTPUT)perf-archive -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)' $(call QUIET_INSTALL, perf-iostat) \ @@ -1057,6 +1032,8 @@ SKELETONS += $(SKEL_OUT)/bperf_leader.skel.h $(SKEL_OUT)/bperf_follower.skel.h SKELETONS += $(SKEL_OUT)/bperf_cgroup.skel.h $(SKEL_OUT)/func_latency.skel.h SKELETONS += $(SKEL_OUT)/off_cpu.skel.h $(SKEL_OUT)/lock_contention.skel.h SKELETONS += 
$(SKEL_OUT)/kwork_trace.skel.h $(SKEL_OUT)/sample_filter.skel.h +SKELETONS += $(SKEL_OUT)/bench_uprobe.skel.h +SKELETONS += $(SKEL_OUT)/augmented_raw_syscalls.skel.h $(SKEL_TMP_OUT) $(LIBAPI_OUTPUT) $(LIBBPF_OUTPUT) $(LIBPERF_OUTPUT) $(LIBSUBCMD_OUTPUT) $(LIBSYMBOL_OUTPUT): $(Q)$(MKDIR) -p $@ @@ -1079,10 +1056,15 @@ ifneq ($(CROSS_COMPILE),) CLANG_TARGET_ARCH = --target=$(notdir $(CROSS_COMPILE:%-=%)) endif +CLANG_OPTIONS = -Wall CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG),$(CLANG_TARGET_ARCH)) BPF_INCLUDE := -I$(SKEL_TMP_OUT)/.. -I$(LIBBPF_INCLUDE) $(CLANG_SYS_INCLUDES) TOOLS_UAPI_INCLUDE := -I$(srctree)/tools/include/uapi +ifneq ($(WERROR),0) + CLANG_OPTIONS += -Werror +endif + $(BPFTOOL): | $(SKEL_TMP_OUT) $(Q)CFLAGS= $(MAKE) -C ../bpf/bpftool \ OUTPUT=$(SKEL_TMP_OUT)/ bootstrap @@ -1124,7 +1106,7 @@ else endif $(SKEL_TMP_OUT)/%.bpf.o: util/bpf_skel/%.bpf.c $(LIBBPF) $(SKEL_OUT)/vmlinux.h | $(SKEL_TMP_OUT) - $(QUIET_CLANG)$(CLANG) -g -O2 --target=bpf -Wall -Werror $(BPF_INCLUDE) $(TOOLS_UAPI_INCLUDE) \ + $(QUIET_CLANG)$(CLANG) -g -O2 --target=bpf $(CLANG_OPTIONS) $(BPF_INCLUDE) $(TOOLS_UAPI_INCLUDE) \ -c $(filter util/bpf_skel/%.bpf.c,$^) -o $@ $(SKEL_OUT)/%.skel.h: $(SKEL_TMP_OUT)/%.bpf.o | $(BPFTOOL) diff --git a/tools/perf/arch/arm/include/perf_regs.h b/tools/perf/arch/arm/include/perf_regs.h index 99a06550e25d..75ce1c370114 100644 --- a/tools/perf/arch/arm/include/perf_regs.h +++ b/tools/perf/arch/arm/include/perf_regs.h @@ -12,7 +12,4 @@ void perf_regs_load(u64 *regs); #define PERF_REGS_MAX PERF_REG_ARM_MAX #define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_32 -#define PERF_REG_IP PERF_REG_ARM_PC -#define PERF_REG_SP PERF_REG_ARM_SP - #endif /* ARCH_PERF_REGS_H */ diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c index 7c51fa182b51..b8d6a953fd74 100644 --- a/tools/perf/arch/arm/util/cs-etm.c +++ b/tools/perf/arch/arm/util/cs-etm.c @@ -79,9 +79,9 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr, int err; u32 val; u64 contextid = evsel->core.attr.config & - (perf_pmu__format_bits(&cs_etm_pmu->format, "contextid") | - perf_pmu__format_bits(&cs_etm_pmu->format, "contextid1") | - perf_pmu__format_bits(&cs_etm_pmu->format, "contextid2")); + (perf_pmu__format_bits(cs_etm_pmu, "contextid") | + perf_pmu__format_bits(cs_etm_pmu, "contextid1") | + perf_pmu__format_bits(cs_etm_pmu, "contextid2")); if (!contextid) return 0; @@ -106,7 +106,7 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr, } if (contextid & - perf_pmu__format_bits(&cs_etm_pmu->format, "contextid1")) { + perf_pmu__format_bits(cs_etm_pmu, "contextid1")) { /* * TRCIDR2.CIDSIZE, bit [9-5], indicates whether contextID * tracing is supported: @@ -122,7 +122,7 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr, } if (contextid & - perf_pmu__format_bits(&cs_etm_pmu->format, "contextid2")) { + perf_pmu__format_bits(cs_etm_pmu, "contextid2")) { /* * TRCIDR2.VMIDOPT[30:29] != 0 and * TRCIDR2.VMIDSIZE[14:10] == 0b00100 (32bit virtual contextid) @@ -151,7 +151,7 @@ static int cs_etm_validate_timestamp(struct auxtrace_record *itr, u32 val; if (!(evsel->core.attr.config & - perf_pmu__format_bits(&cs_etm_pmu->format, "timestamp"))) + perf_pmu__format_bits(cs_etm_pmu, "timestamp"))) return 0; if (!cs_etm_is_etmv4(itr, cpu)) { diff --git a/tools/perf/arch/arm/util/perf_regs.c b/tools/perf/arch/arm/util/perf_regs.c index 2833e101a7c6..2c56e8b56ddf 100644 --- a/tools/perf/arch/arm/util/perf_regs.c +++ b/tools/perf/arch/arm/util/perf_regs.c @@ -1,6 
+1,17 @@ // SPDX-License-Identifier: GPL-2.0 +#include "perf_regs.h" #include "../../../util/perf_regs.h" const struct sample_reg sample_reg_masks[] = { SMPL_REG_END }; + +uint64_t arch__intr_reg_mask(void) +{ + return PERF_REGS_MASK; +} + +uint64_t arch__user_reg_mask(void) +{ + return PERF_REGS_MASK; +} diff --git a/tools/perf/arch/arm/util/unwind-libdw.c b/tools/perf/arch/arm/util/unwind-libdw.c index 1834a0cd9ce3..4e02cef461e3 100644 --- a/tools/perf/arch/arm/util/unwind-libdw.c +++ b/tools/perf/arch/arm/util/unwind-libdw.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 #include <elfutils/libdwfl.h> +#include "perf_regs.h" #include "../../../util/unwind-libdw.h" #include "../../../util/perf_regs.h" #include "../../../util/sample.h" diff --git a/tools/perf/arch/arm64/include/arch-tests.h b/tools/perf/arch/arm64/include/arch-tests.h index 452b3d904521..474d7cf5afbd 100644 --- a/tools/perf/arch/arm64/include/arch-tests.h +++ b/tools/perf/arch/arm64/include/arch-tests.h @@ -2,6 +2,9 @@ #ifndef ARCH_TESTS_H #define ARCH_TESTS_H +struct test_suite; + +int test__cpuid_match(struct test_suite *test, int subtest); extern struct test_suite *arch_tests[]; #endif diff --git a/tools/perf/arch/arm64/include/perf_regs.h b/tools/perf/arch/arm64/include/perf_regs.h index 35a3cc775b39..58639ee9f7ea 100644 --- a/tools/perf/arch/arm64/include/perf_regs.h +++ b/tools/perf/arch/arm64/include/perf_regs.h @@ -14,7 +14,4 @@ void perf_regs_load(u64 *regs); #define PERF_REGS_MAX PERF_REG_ARM64_MAX #define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_64 -#define PERF_REG_IP PERF_REG_ARM64_PC -#define PERF_REG_SP PERF_REG_ARM64_SP - #endif /* ARCH_PERF_REGS_H */ diff --git a/tools/perf/arch/arm64/tests/Build b/tools/perf/arch/arm64/tests/Build index a61c06bdb757..e337c09e7f56 100644 --- a/tools/perf/arch/arm64/tests/Build +++ b/tools/perf/arch/arm64/tests/Build @@ -2,3 +2,4 @@ perf-y += regs_load.o perf-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o perf-y += arch-tests.o +perf-y += cpuid-match.o diff --git a/tools/perf/arch/arm64/tests/arch-tests.c b/tools/perf/arch/arm64/tests/arch-tests.c index ad16b4f8f63e..74932e72c727 100644 --- a/tools/perf/arch/arm64/tests/arch-tests.c +++ b/tools/perf/arch/arm64/tests/arch-tests.c @@ -3,9 +3,13 @@ #include "tests/tests.h" #include "arch-tests.h" + +DEFINE_SUITE("arm64 CPUID matching", cpuid_match); + struct test_suite *arch_tests[] = { #ifdef HAVE_DWARF_UNWIND_SUPPORT &suite__dwarf_unwind, #endif + &suite__cpuid_match, NULL, }; diff --git a/tools/perf/arch/arm64/tests/cpuid-match.c b/tools/perf/arch/arm64/tests/cpuid-match.c new file mode 100644 index 000000000000..e8e3947cca18 --- /dev/null +++ b/tools/perf/arch/arm64/tests/cpuid-match.c @@ -0,0 +1,37 @@ +// SPDX-License-Identifier: GPL-2.0 +#include <linux/compiler.h> + +#include "arch-tests.h" +#include "tests/tests.h" +#include "util/header.h" + +int test__cpuid_match(struct test_suite *test __maybe_unused, + int subtest __maybe_unused) +{ + /* midr with no leading zeros matches */ + if (strcmp_cpuid_str("0x410fd0c0", "0x00000000410fd0c0")) + return -1; + /* Upper case matches */ + if (strcmp_cpuid_str("0x410fd0c0", "0x00000000410FD0C0")) + return -1; + /* r0p0 = r0p0 matches */ + if (strcmp_cpuid_str("0x00000000410fd480", "0x00000000410fd480")) + return -1; + /* r0p1 > r0p0 matches */ + if (strcmp_cpuid_str("0x00000000410fd480", "0x00000000410fd481")) + return -1; + /* r1p0 > r0p0 matches*/ + if (strcmp_cpuid_str("0x00000000410fd480", "0x00000000411fd480")) + return -1; + /* r0p0 < r0p1 doesn't match */ + if 
(!strcmp_cpuid_str("0x00000000410fd481", "0x00000000410fd480")) + return -1; + /* r0p0 < r1p0 doesn't match */ + if (!strcmp_cpuid_str("0x00000000411fd480", "0x00000000410fd480")) + return -1; + /* Different CPU doesn't match */ + if (!strcmp_cpuid_str("0x00000000410fd4c0", "0x00000000430f0af0")) + return -1; + + return 0; +} diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c index 3b1676ff03f9..9cc3d6dcb849 100644 --- a/tools/perf/arch/arm64/util/arm-spe.c +++ b/tools/perf/arch/arm64/util/arm-spe.c @@ -230,7 +230,7 @@ static int arm_spe_recording_options(struct auxtrace_record *itr, * inform that the resulting output's SPE samples contain physical addresses * where applicable. */ - bit = perf_pmu__format_bits(&arm_spe_pmu->format, "pa_enable"); + bit = perf_pmu__format_bits(arm_spe_pmu, "pa_enable"); if (arm_spe_evsel->core.attr.config & bit) evsel__set_sample_bit(arm_spe_evsel, PHYS_ADDR); diff --git a/tools/perf/arch/arm64/util/header.c b/tools/perf/arch/arm64/util/header.c index 80b9f6287fe2..a2eef9ec5491 100644 --- a/tools/perf/arch/arm64/util/header.c +++ b/tools/perf/arch/arm64/util/header.c @@ -1,3 +1,6 @@ +#include <linux/kernel.h> +#include <linux/bits.h> +#include <linux/bitfield.h> #include <stdio.h> #include <stdlib.h> #include <perf/cpumap.h> @@ -10,15 +13,14 @@ #define MIDR "/regs/identification/midr_el1" #define MIDR_SIZE 19 -#define MIDR_REVISION_MASK 0xf -#define MIDR_VARIANT_SHIFT 20 -#define MIDR_VARIANT_MASK (0xf << MIDR_VARIANT_SHIFT) +#define MIDR_REVISION_MASK GENMASK(3, 0) +#define MIDR_VARIANT_MASK GENMASK(23, 20) static int _get_cpuid(char *buf, size_t sz, struct perf_cpu_map *cpus) { const char *sysfs = sysfs__mountpoint(); - u64 midr = 0; int cpu; + int ret = EINVAL; if (!sysfs || sz < MIDR_SIZE) return EINVAL; @@ -44,22 +46,13 @@ static int _get_cpuid(char *buf, size_t sz, struct perf_cpu_map *cpus) } fclose(file); - /* Ignore/clear Variant[23:20] and - * Revision[3:0] of MIDR - */ - midr = strtoul(buf, NULL, 16); - midr &= (~(MIDR_VARIANT_MASK | MIDR_REVISION_MASK)); - scnprintf(buf, MIDR_SIZE, "0x%016lx", midr); /* got midr break loop */ + ret = 0; break; } perf_cpu_map__put(cpus); - - if (!midr) - return EINVAL; - - return 0; + return ret; } int get_cpuid(char *buf, size_t sz) @@ -99,3 +92,47 @@ char *get_cpuid_str(struct perf_pmu *pmu) return buf; } + +/* + * Return 0 if idstr is a higher or equal to version of the same part as + * mapcpuid. Therefore, if mapcpuid has 0 for revision and variant then any + * version of idstr will match as long as it's the same CPU type. + * + * Return 1 if the CPU type is different or the version of idstr is lower. + */ +int strcmp_cpuid_str(const char *mapcpuid, const char *idstr) +{ + u64 map_id = strtoull(mapcpuid, NULL, 16); + char map_id_variant = FIELD_GET(MIDR_VARIANT_MASK, map_id); + char map_id_revision = FIELD_GET(MIDR_REVISION_MASK, map_id); + u64 id = strtoull(idstr, NULL, 16); + char id_variant = FIELD_GET(MIDR_VARIANT_MASK, id); + char id_revision = FIELD_GET(MIDR_REVISION_MASK, id); + u64 id_fields = ~(MIDR_VARIANT_MASK | MIDR_REVISION_MASK); + + /* Compare without version first */ + if ((map_id & id_fields) != (id & id_fields)) + return 1; + + /* + * ID matches, now compare version. + * + * Arm revisions (like r0p0) are compared here like two digit semver + * values eg. 1.3 < 2.0 < 2.1 < 2.2. 
+ * + * r = high value = 'Variant' field in MIDR + * p = low value = 'Revision' field in MIDR + * + */ + if (id_variant > map_id_variant) + return 0; + + if (id_variant == map_id_variant && id_revision >= map_id_revision) + return 0; + + /* + * variant is less than mapfile variant or variants are the same but + * the revision doesn't match. Return no match. + */ + return 1; +} diff --git a/tools/perf/arch/arm64/util/machine.c b/tools/perf/arch/arm64/util/machine.c index 235a0a1e1ec7..ba1144366e85 100644 --- a/tools/perf/arch/arm64/util/machine.c +++ b/tools/perf/arch/arm64/util/machine.c @@ -6,6 +6,7 @@ #include "debug.h" #include "symbol.h" #include "callchain.h" +#include "perf_regs.h" #include "record.h" #include "util/perf_regs.h" diff --git a/tools/perf/arch/arm64/util/mem-events.c b/tools/perf/arch/arm64/util/mem-events.c index df817d1f9f3e..3bcc5c7035c2 100644 --- a/tools/perf/arch/arm64/util/mem-events.c +++ b/tools/perf/arch/arm64/util/mem-events.c @@ -20,7 +20,7 @@ struct perf_mem_event *perf_mem_events__ptr(int i) return &perf_mem_events[i]; } -char *perf_mem_events__name(int i, char *pmu_name __maybe_unused) +const char *perf_mem_events__name(int i, const char *pmu_name __maybe_unused) { struct perf_mem_event *e = perf_mem_events__ptr(i); diff --git a/tools/perf/arch/arm64/util/perf_regs.c b/tools/perf/arch/arm64/util/perf_regs.c index 006692c9b040..1b79d8eab22f 100644 --- a/tools/perf/arch/arm64/util/perf_regs.c +++ b/tools/perf/arch/arm64/util/perf_regs.c @@ -6,6 +6,7 @@ #include <linux/kernel.h> #include <linux/zalloc.h> +#include "perf_regs.h" #include "../../../perf-sys.h" #include "../../../util/debug.h" #include "../../../util/event.h" @@ -139,6 +140,11 @@ int arch_sdt_arg_parse_op(char *old_op, char **new_op) return SDT_ARG_VALID; } +uint64_t arch__intr_reg_mask(void) +{ + return PERF_REGS_MASK; +} + uint64_t arch__user_reg_mask(void) { struct perf_event_attr attr = { diff --git a/tools/perf/arch/arm64/util/pmu.c b/tools/perf/arch/arm64/util/pmu.c index 512a8f13c4de..615084eb88d8 100644 --- a/tools/perf/arch/arm64/util/pmu.c +++ b/tools/perf/arch/arm64/util/pmu.c @@ -2,28 +2,12 @@ #include <internal/cpumap.h> #include "../../../util/cpumap.h" +#include "../../../util/header.h" #include "../../../util/pmu.h" #include "../../../util/pmus.h" #include <api/fs/fs.h> #include <math.h> -static struct perf_pmu *pmu__find_core_pmu(void) -{ - struct perf_pmu *pmu = NULL; - - while ((pmu = perf_pmus__scan_core(pmu))) { - /* - * The cpumap should cover all CPUs. Otherwise, some CPUs may - * not support some events or have different event IDs. 
- */ - if (RC_CHK_ACCESS(pmu->cpus)->nr != cpu__max_cpu().cpu) - return NULL; - - return pmu; - } - return NULL; -} - const struct pmu_metrics_table *pmu_metrics_table__find(void) { struct perf_pmu *pmu = pmu__find_core_pmu(); diff --git a/tools/perf/arch/arm64/util/unwind-libdw.c b/tools/perf/arch/arm64/util/unwind-libdw.c index 09385081bb03..e056d50ab42e 100644 --- a/tools/perf/arch/arm64/util/unwind-libdw.c +++ b/tools/perf/arch/arm64/util/unwind-libdw.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 #include <elfutils/libdwfl.h> +#include "perf_regs.h" #include "../../../util/unwind-libdw.h" #include "../../../util/perf_regs.h" #include "../../../util/sample.h" diff --git a/tools/perf/arch/csky/include/perf_regs.h b/tools/perf/arch/csky/include/perf_regs.h index 1afcc0e916c2..076c7746c8a2 100644 --- a/tools/perf/arch/csky/include/perf_regs.h +++ b/tools/perf/arch/csky/include/perf_regs.h @@ -12,7 +12,4 @@ #define PERF_REGS_MAX PERF_REG_CSKY_MAX #define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_32 -#define PERF_REG_IP PERF_REG_CSKY_PC -#define PERF_REG_SP PERF_REG_CSKY_SP - #endif /* ARCH_PERF_REGS_H */ diff --git a/tools/perf/arch/csky/util/perf_regs.c b/tools/perf/arch/csky/util/perf_regs.c index 2864e2e3776d..c0877c264d49 100644 --- a/tools/perf/arch/csky/util/perf_regs.c +++ b/tools/perf/arch/csky/util/perf_regs.c @@ -1,6 +1,17 @@ // SPDX-License-Identifier: GPL-2.0 +#include "perf_regs.h" #include "../../util/perf_regs.h" const struct sample_reg sample_reg_masks[] = { SMPL_REG_END }; + +uint64_t arch__intr_reg_mask(void) +{ + return PERF_REGS_MASK; +} + +uint64_t arch__user_reg_mask(void) +{ + return PERF_REGS_MASK; +} diff --git a/tools/perf/arch/csky/util/unwind-libdw.c b/tools/perf/arch/csky/util/unwind-libdw.c index 4bb4a06776e4..79df4374ab18 100644 --- a/tools/perf/arch/csky/util/unwind-libdw.c +++ b/tools/perf/arch/csky/util/unwind-libdw.c @@ -2,6 +2,7 @@ // Copyright (C) 2019 Hangzhou C-SKY Microsystems co.,ltd. 
#include <elfutils/libdwfl.h> +#include "perf_regs.h" #include "../../util/unwind-libdw.h" #include "../../util/perf_regs.h" #include "../../util/event.h" diff --git a/tools/perf/arch/loongarch/include/perf_regs.h b/tools/perf/arch/loongarch/include/perf_regs.h index 7833c7dbd38d..45c799fa5330 100644 --- a/tools/perf/arch/loongarch/include/perf_regs.h +++ b/tools/perf/arch/loongarch/include/perf_regs.h @@ -7,8 +7,6 @@ #include <asm/perf_regs.h> #define PERF_REGS_MAX PERF_REG_LOONGARCH_MAX -#define PERF_REG_IP PERF_REG_LOONGARCH_PC -#define PERF_REG_SP PERF_REG_LOONGARCH_R3 #define PERF_REGS_MASK ((1ULL << PERF_REG_LOONGARCH_MAX) - 1) diff --git a/tools/perf/arch/loongarch/util/perf_regs.c b/tools/perf/arch/loongarch/util/perf_regs.c index 2833e101a7c6..2c56e8b56ddf 100644 --- a/tools/perf/arch/loongarch/util/perf_regs.c +++ b/tools/perf/arch/loongarch/util/perf_regs.c @@ -1,6 +1,17 @@ // SPDX-License-Identifier: GPL-2.0 +#include "perf_regs.h" #include "../../../util/perf_regs.h" const struct sample_reg sample_reg_masks[] = { SMPL_REG_END }; + +uint64_t arch__intr_reg_mask(void) +{ + return PERF_REGS_MASK; +} + +uint64_t arch__user_reg_mask(void) +{ + return PERF_REGS_MASK; +} diff --git a/tools/perf/arch/loongarch/util/unwind-libdw.c b/tools/perf/arch/loongarch/util/unwind-libdw.c index a9415385230a..7b3b9a4b21f8 100644 --- a/tools/perf/arch/loongarch/util/unwind-libdw.c +++ b/tools/perf/arch/loongarch/util/unwind-libdw.c @@ -2,6 +2,7 @@ /* Copyright (C) 2020-2023 Loongson Technology Corporation Limited */ #include <elfutils/libdwfl.h> +#include "perf_regs.h" #include "../../util/unwind-libdw.h" #include "../../util/perf_regs.h" #include "../../util/sample.h" diff --git a/tools/perf/arch/mips/include/perf_regs.h b/tools/perf/arch/mips/include/perf_regs.h index b8cd8bbb37ba..7082e91e0ed1 100644 --- a/tools/perf/arch/mips/include/perf_regs.h +++ b/tools/perf/arch/mips/include/perf_regs.h @@ -7,8 +7,6 @@ #include <asm/perf_regs.h> #define PERF_REGS_MAX PERF_REG_MIPS_MAX -#define PERF_REG_IP PERF_REG_MIPS_PC -#define PERF_REG_SP PERF_REG_MIPS_R29 #define PERF_REGS_MASK ((1ULL << PERF_REG_MIPS_MAX) - 1) diff --git a/tools/perf/arch/mips/util/perf_regs.c b/tools/perf/arch/mips/util/perf_regs.c index 2864e2e3776d..c0877c264d49 100644 --- a/tools/perf/arch/mips/util/perf_regs.c +++ b/tools/perf/arch/mips/util/perf_regs.c @@ -1,6 +1,17 @@ // SPDX-License-Identifier: GPL-2.0 +#include "perf_regs.h" #include "../../util/perf_regs.h" const struct sample_reg sample_reg_masks[] = { SMPL_REG_END }; + +uint64_t arch__intr_reg_mask(void) +{ + return PERF_REGS_MASK; +} + +uint64_t arch__user_reg_mask(void) +{ + return PERF_REGS_MASK; +} diff --git a/tools/perf/arch/powerpc/include/perf_regs.h b/tools/perf/arch/powerpc/include/perf_regs.h index 9bb17c3f370b..1c66f6ba6773 100644 --- a/tools/perf/arch/powerpc/include/perf_regs.h +++ b/tools/perf/arch/powerpc/include/perf_regs.h @@ -16,7 +16,4 @@ void perf_regs_load(u64 *regs); #define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_32 #endif -#define PERF_REG_IP PERF_REG_POWERPC_NIP -#define PERF_REG_SP PERF_REG_POWERPC_R1 - #endif /* ARCH_PERF_REGS_H */ diff --git a/tools/perf/arch/powerpc/util/mem-events.c b/tools/perf/arch/powerpc/util/mem-events.c index 4120fafe0be4..78b986e5268d 100644 --- a/tools/perf/arch/powerpc/util/mem-events.c +++ b/tools/perf/arch/powerpc/util/mem-events.c @@ -3,10 +3,10 @@ #include "mem-events.h" /* PowerPC does not support 'ldlat' parameter. 
*/ -char *perf_mem_events__name(int i, char *pmu_name __maybe_unused) +const char *perf_mem_events__name(int i, const char *pmu_name __maybe_unused) { if (i == PERF_MEM_EVENTS__LOAD) - return (char *) "cpu/mem-loads/"; + return "cpu/mem-loads/"; - return (char *) "cpu/mem-stores/"; + return "cpu/mem-stores/"; } diff --git a/tools/perf/arch/powerpc/util/perf_regs.c b/tools/perf/arch/powerpc/util/perf_regs.c index 8d07a78e742a..b38aa056eea0 100644 --- a/tools/perf/arch/powerpc/util/perf_regs.c +++ b/tools/perf/arch/powerpc/util/perf_regs.c @@ -4,6 +4,7 @@ #include <regex.h> #include <linux/zalloc.h> +#include "perf_regs.h" #include "../../../util/perf_regs.h" #include "../../../util/debug.h" #include "../../../util/event.h" @@ -226,3 +227,8 @@ uint64_t arch__intr_reg_mask(void) } return mask; } + +uint64_t arch__user_reg_mask(void) +{ + return PERF_REGS_MASK; +} diff --git a/tools/perf/arch/powerpc/util/unwind-libdw.c b/tools/perf/arch/powerpc/util/unwind-libdw.c index e616642c754c..e9a5a8bb67d9 100644 --- a/tools/perf/arch/powerpc/util/unwind-libdw.c +++ b/tools/perf/arch/powerpc/util/unwind-libdw.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 #include <elfutils/libdwfl.h> #include <linux/kernel.h> +#include "perf_regs.h" #include "../../../util/unwind-libdw.h" #include "../../../util/perf_regs.h" #include "../../../util/sample.h" diff --git a/tools/perf/arch/riscv/include/perf_regs.h b/tools/perf/arch/riscv/include/perf_regs.h index 6944bf0de53e..d482edb413e5 100644 --- a/tools/perf/arch/riscv/include/perf_regs.h +++ b/tools/perf/arch/riscv/include/perf_regs.h @@ -16,7 +16,4 @@ #define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_32 #endif -#define PERF_REG_IP PERF_REG_RISCV_PC -#define PERF_REG_SP PERF_REG_RISCV_SP - #endif /* ARCH_PERF_REGS_H */ diff --git a/tools/perf/arch/riscv/util/perf_regs.c b/tools/perf/arch/riscv/util/perf_regs.c index 2864e2e3776d..c0877c264d49 100644 --- a/tools/perf/arch/riscv/util/perf_regs.c +++ b/tools/perf/arch/riscv/util/perf_regs.c @@ -1,6 +1,17 @@ // SPDX-License-Identifier: GPL-2.0 +#include "perf_regs.h" #include "../../util/perf_regs.h" const struct sample_reg sample_reg_masks[] = { SMPL_REG_END }; + +uint64_t arch__intr_reg_mask(void) +{ + return PERF_REGS_MASK; +} + +uint64_t arch__user_reg_mask(void) +{ + return PERF_REGS_MASK; +} diff --git a/tools/perf/arch/riscv/util/unwind-libdw.c b/tools/perf/arch/riscv/util/unwind-libdw.c index 54a198714eb8..5c98010d8b59 100644 --- a/tools/perf/arch/riscv/util/unwind-libdw.c +++ b/tools/perf/arch/riscv/util/unwind-libdw.c @@ -2,6 +2,7 @@ /* Copyright (C) 2019 Hangzhou C-SKY Microsystems co.,ltd. 
*/ #include <elfutils/libdwfl.h> +#include "perf_regs.h" #include "../../util/unwind-libdw.h" #include "../../util/perf_regs.h" #include "../../util/sample.h" diff --git a/tools/perf/arch/s390/include/perf_regs.h b/tools/perf/arch/s390/include/perf_regs.h index 52fcc0891da6..130dfad2b96a 100644 --- a/tools/perf/arch/s390/include/perf_regs.h +++ b/tools/perf/arch/s390/include/perf_regs.h @@ -11,7 +11,4 @@ void perf_regs_load(u64 *regs); #define PERF_REGS_MAX PERF_REG_S390_MAX #define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_64 -#define PERF_REG_IP PERF_REG_S390_PC -#define PERF_REG_SP PERF_REG_S390_R15 - #endif /* ARCH_PERF_REGS_H */ diff --git a/tools/perf/arch/s390/util/perf_regs.c b/tools/perf/arch/s390/util/perf_regs.c index 2864e2e3776d..c0877c264d49 100644 --- a/tools/perf/arch/s390/util/perf_regs.c +++ b/tools/perf/arch/s390/util/perf_regs.c @@ -1,6 +1,17 @@ // SPDX-License-Identifier: GPL-2.0 +#include "perf_regs.h" #include "../../util/perf_regs.h" const struct sample_reg sample_reg_masks[] = { SMPL_REG_END }; + +uint64_t arch__intr_reg_mask(void) +{ + return PERF_REGS_MASK; +} + +uint64_t arch__user_reg_mask(void) +{ + return PERF_REGS_MASK; +} diff --git a/tools/perf/arch/s390/util/unwind-libdw.c b/tools/perf/arch/s390/util/unwind-libdw.c index 7d92452d5287..f50fb6dbb35c 100644 --- a/tools/perf/arch/s390/util/unwind-libdw.c +++ b/tools/perf/arch/s390/util/unwind-libdw.c @@ -5,6 +5,7 @@ #include "../../util/event.h" #include "../../util/sample.h" #include "dwarf-regs-table.h" +#include "perf_regs.h" bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg) diff --git a/tools/perf/arch/x86/entry/syscalls/syscalltbl.sh b/tools/perf/arch/x86/entry/syscalls/syscalltbl.sh index fa526a993845..59d7914ed6bb 100755 --- a/tools/perf/arch/x86/entry/syscalls/syscalltbl.sh +++ b/tools/perf/arch/x86/entry/syscalls/syscalltbl.sh @@ -24,7 +24,7 @@ sorted_table=$(mktemp /tmp/syscalltbl.XXXXXX) grep '^[0-9]' "$in" | sort -n > $sorted_table max_nr=0 -while read nr abi name entry compat; do +while read nr _abi name entry _compat; do if [ $nr -ge 512 ] ; then # discard compat sycalls break fi diff --git a/tools/perf/arch/x86/include/perf_regs.h b/tools/perf/arch/x86/include/perf_regs.h index 16e23b722042..f209ce2c1dd9 100644 --- a/tools/perf/arch/x86/include/perf_regs.h +++ b/tools/perf/arch/x86/include/perf_regs.h @@ -20,7 +20,5 @@ void perf_regs_load(u64 *regs); #define PERF_REGS_MASK (((1ULL << PERF_REG_X86_64_MAX) - 1) & ~REG_NOSUPPORT) #define PERF_SAMPLE_REGS_ABI PERF_SAMPLE_REGS_ABI_64 #endif -#define PERF_REG_IP PERF_REG_X86_IP -#define PERF_REG_SP PERF_REG_X86_SP #endif /* ARCH_PERF_REGS_H */ diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c index cbd582182932..b1ce0c52d88d 100644 --- a/tools/perf/arch/x86/util/evlist.c +++ b/tools/perf/arch/x86/util/evlist.c @@ -75,11 +75,12 @@ int arch_evlist__add_default_attrs(struct evlist *evlist, int arch_evlist__cmp(const struct evsel *lhs, const struct evsel *rhs) { - if (topdown_sys_has_perf_metrics() && evsel__sys_has_perf_metrics(lhs)) { + if (topdown_sys_has_perf_metrics() && + (arch_evsel__must_be_in_group(lhs) || arch_evsel__must_be_in_group(rhs))) { /* Ensure the topdown slots comes first. */ - if (strcasestr(lhs->name, "slots")) + if (strcasestr(lhs->name, "slots") && !strcasestr(lhs->name, "uops_retired.slots")) return -1; - if (strcasestr(rhs->name, "slots")) + if (strcasestr(rhs->name, "slots") && !strcasestr(rhs->name, "uops_retired.slots")) return 1; /* Followed by topdown events. 
*/ if (strcasestr(lhs->name, "topdown") && !strcasestr(rhs->name, "topdown")) diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c index 81d22657922a..090d0f371891 100644 --- a/tools/perf/arch/x86/util/evsel.c +++ b/tools/perf/arch/x86/util/evsel.c @@ -40,12 +40,11 @@ bool evsel__sys_has_perf_metrics(const struct evsel *evsel) bool arch_evsel__must_be_in_group(const struct evsel *evsel) { - if (!evsel__sys_has_perf_metrics(evsel)) + if (!evsel__sys_has_perf_metrics(evsel) || !evsel->name || + strcasestr(evsel->name, "uops_retired.slots")) return false; - return evsel->name && - (strcasestr(evsel->name, "slots") || - strcasestr(evsel->name, "topdown")); + return strcasestr(evsel->name, "topdown") || strcasestr(evsel->name, "slots"); } int arch_evsel__hw_name(struct evsel *evsel, char *bf, size_t size) diff --git a/tools/perf/arch/x86/util/intel-pt.c b/tools/perf/arch/x86/util/intel-pt.c index 74b70fd379df..31807791589e 100644 --- a/tools/perf/arch/x86/util/intel-pt.c +++ b/tools/perf/arch/x86/util/intel-pt.c @@ -60,8 +60,7 @@ struct intel_pt_recording { size_t priv_size; }; -static int intel_pt_parse_terms_with_default(const char *pmu_name, - struct list_head *formats, +static int intel_pt_parse_terms_with_default(struct perf_pmu *pmu, const char *str, u64 *config) { @@ -75,13 +74,12 @@ static int intel_pt_parse_terms_with_default(const char *pmu_name, INIT_LIST_HEAD(terms); - err = parse_events_terms(terms, str); + err = parse_events_terms(terms, str, /*input=*/ NULL); if (err) goto out_free; attr.config = *config; - err = perf_pmu__config_terms(pmu_name, formats, &attr, terms, true, - NULL); + err = perf_pmu__config_terms(pmu, &attr, terms, /*zero=*/true, /*err=*/NULL); if (err) goto out_free; @@ -91,12 +89,10 @@ out_free: return err; } -static int intel_pt_parse_terms(const char *pmu_name, struct list_head *formats, - const char *str, u64 *config) +static int intel_pt_parse_terms(struct perf_pmu *pmu, const char *str, u64 *config) { *config = 0; - return intel_pt_parse_terms_with_default(pmu_name, formats, str, - config); + return intel_pt_parse_terms_with_default(pmu, str, config); } static u64 intel_pt_masked_bits(u64 mask, u64 bits) @@ -126,7 +122,7 @@ static int intel_pt_read_config(struct perf_pmu *intel_pt_pmu, const char *str, *res = 0; - mask = perf_pmu__format_bits(&intel_pt_pmu->format, str); + mask = perf_pmu__format_bits(intel_pt_pmu, str); if (!mask) return -EINVAL; @@ -236,8 +232,7 @@ static u64 intel_pt_default_config(struct perf_pmu *intel_pt_pmu) pr_debug2("%s default config: %s\n", intel_pt_pmu->name, buf); - intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format, buf, - &config); + intel_pt_parse_terms(intel_pt_pmu, buf, &config); close(dirfd); return config; @@ -348,16 +343,11 @@ static int intel_pt_info_fill(struct auxtrace_record *itr, if (priv_size != ptr->priv_size) return -EINVAL; - intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format, - "tsc", &tsc_bit); - intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format, - "noretcomp", &noretcomp_bit); - intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format, - "mtc", &mtc_bit); - mtc_freq_bits = perf_pmu__format_bits(&intel_pt_pmu->format, - "mtc_period"); - intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format, - "cyc", &cyc_bit); + intel_pt_parse_terms(intel_pt_pmu, "tsc", &tsc_bit); + intel_pt_parse_terms(intel_pt_pmu, "noretcomp", &noretcomp_bit); + intel_pt_parse_terms(intel_pt_pmu, "mtc", &mtc_bit); + mtc_freq_bits = 
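The evlist.c and evsel.c hunks above make the x86 sorting and grouping logic treat "uops_retired.slots" as an ordinary event, while the topdown "slots" pseudo event still has to sort first within its group. A stand-alone sketch of that substring test, with hard-coded event names for illustration:

	#define _GNU_SOURCE
	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	/* Is this the topdown "slots" event, and not uops_retired.slots? */
	static bool is_topdown_slots(const char *name)
	{
		return strcasestr(name, "slots") &&
		       !strcasestr(name, "uops_retired.slots");
	}

	int main(void)
	{
		const char *names[] = { "slots", "topdown-retiring", "uops_retired.slots" };

		for (int i = 0; i < (int)(sizeof(names) / sizeof(names[0])); i++)
			printf("%-20s -> %s\n", names[i],
			       is_topdown_slots(names[i]) ? "sort first" : "normal");

		return 0;
	}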
perf_pmu__format_bits(intel_pt_pmu, "mtc_period"); + intel_pt_parse_terms(intel_pt_pmu, "cyc", &cyc_bit); intel_pt_tsc_ctc_ratio(&tsc_ctc_ratio_n, &tsc_ctc_ratio_d); @@ -511,7 +501,7 @@ static int intel_pt_val_config_term(struct perf_pmu *intel_pt_pmu, int dirfd, valid |= 1; - bits = perf_pmu__format_bits(&intel_pt_pmu->format, name); + bits = perf_pmu__format_bits(intel_pt_pmu, name); config &= bits; @@ -781,8 +771,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr, intel_pt_evsel->core.attr.aux_watermark = aux_watermark; } - intel_pt_parse_terms(intel_pt_pmu->name, &intel_pt_pmu->format, - "tsc", &tsc_bit); + intel_pt_parse_terms(intel_pt_pmu, "tsc", &tsc_bit); if (opts->full_auxtrace && (intel_pt_evsel->core.attr.config & tsc_bit)) have_timing_info = true; diff --git a/tools/perf/arch/x86/util/mem-events.c b/tools/perf/arch/x86/util/mem-events.c index a8a782bcb121..191b372f9a2d 100644 --- a/tools/perf/arch/x86/util/mem-events.c +++ b/tools/perf/arch/x86/util/mem-events.c @@ -52,7 +52,7 @@ bool is_mem_loads_aux_event(struct evsel *leader) return leader->core.attr.config == MEM_LOADS_AUX; } -char *perf_mem_events__name(int i, char *pmu_name) +const char *perf_mem_events__name(int i, const char *pmu_name) { struct perf_mem_event *e = perf_mem_events__ptr(i); @@ -65,7 +65,7 @@ char *perf_mem_events__name(int i, char *pmu_name) if (!pmu_name) { mem_loads_name__init = true; - pmu_name = (char *)"cpu"; + pmu_name = "cpu"; } if (perf_pmus__have_event(pmu_name, "mem-loads-aux")) { @@ -82,12 +82,12 @@ char *perf_mem_events__name(int i, char *pmu_name) if (i == PERF_MEM_EVENTS__STORE) { if (!pmu_name) - pmu_name = (char *)"cpu"; + pmu_name = "cpu"; scnprintf(mem_stores_name, sizeof(mem_stores_name), e->name, pmu_name); return mem_stores_name; } - return (char *)e->name; + return e->name; } diff --git a/tools/perf/arch/x86/util/perf_regs.c b/tools/perf/arch/x86/util/perf_regs.c index 8ad4112ad10c..b813502a2727 100644 --- a/tools/perf/arch/x86/util/perf_regs.c +++ b/tools/perf/arch/x86/util/perf_regs.c @@ -5,6 +5,7 @@ #include <linux/kernel.h> #include <linux/zalloc.h> +#include "perf_regs.h" #include "../../../perf-sys.h" #include "../../../util/perf_regs.h" #include "../../../util/debug.h" @@ -317,3 +318,8 @@ uint64_t arch__intr_reg_mask(void) return PERF_REGS_MASK; } + +uint64_t arch__user_reg_mask(void) +{ + return PERF_REGS_MASK; +} diff --git a/tools/perf/arch/x86/util/pmu.c b/tools/perf/arch/x86/util/pmu.c index 65d8cdff4d5f..f428cffb0378 100644 --- a/tools/perf/arch/x86/util/pmu.c +++ b/tools/perf/arch/x86/util/pmu.c @@ -126,7 +126,7 @@ close_dir: return ret; } -static char *__pmu_find_real_name(const char *name) +static const char *__pmu_find_real_name(const char *name) { struct pmu_alias *pmu_alias; @@ -135,10 +135,10 @@ static char *__pmu_find_real_name(const char *name) return pmu_alias->name; } - return (char *)name; + return name; } -char *pmu_find_real_name(const char *name) +const char *pmu_find_real_name(const char *name) { if (cached_list) return __pmu_find_real_name(name); @@ -149,7 +149,7 @@ char *pmu_find_real_name(const char *name) return __pmu_find_real_name(name); } -static char *__pmu_find_alias_name(const char *name) +static const char *__pmu_find_alias_name(const char *name) { struct pmu_alias *pmu_alias; @@ -160,7 +160,7 @@ static char *__pmu_find_alias_name(const char *name) return NULL; } -char *pmu_find_alias_name(const char *name) +const char *pmu_find_alias_name(const char *name) { if (cached_list) return __pmu_find_alias_name(name); diff --git 
a/tools/perf/arch/x86/util/unwind-libdw.c b/tools/perf/arch/x86/util/unwind-libdw.c index ef71e8bf80bf..edb77e20e083 100644 --- a/tools/perf/arch/x86/util/unwind-libdw.c +++ b/tools/perf/arch/x86/util/unwind-libdw.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 #include <elfutils/libdwfl.h> +#include "perf_regs.h" #include "../../../util/unwind-libdw.h" #include "../../../util/perf_regs.h" #include "util/sample.h" diff --git a/tools/perf/bench/Build b/tools/perf/bench/Build index 07bbc449329e..c2ab30907ae7 100644 --- a/tools/perf/bench/Build +++ b/tools/perf/bench/Build @@ -17,6 +17,7 @@ perf-y += inject-buildid.o perf-y += evlist-open-close.o perf-y += breakpoint.o perf-y += pmu-scan.o +perf-y += uprobe.o perf-$(CONFIG_X86_64) += mem-memcpy-x86-64-asm.o perf-$(CONFIG_X86_64) += mem-memset-x86-64-asm.o diff --git a/tools/perf/bench/bench.h b/tools/perf/bench/bench.h index a0625c77bea3..faa18e6d2467 100644 --- a/tools/perf/bench/bench.h +++ b/tools/perf/bench/bench.h @@ -43,6 +43,9 @@ int bench_inject_build_id(int argc, const char **argv); int bench_evlist_open_close(int argc, const char **argv); int bench_breakpoint_thread(int argc, const char **argv); int bench_breakpoint_enable(int argc, const char **argv); +int bench_uprobe_baseline(int argc, const char **argv); +int bench_uprobe_empty(int argc, const char **argv); +int bench_uprobe_trace_printk(int argc, const char **argv); int bench_pmu_scan(int argc, const char **argv); #define BENCH_FORMAT_DEFAULT_STR "default" diff --git a/tools/perf/bench/breakpoint.c b/tools/perf/bench/breakpoint.c index 41385f89ffc7..dfd18f5db97d 100644 --- a/tools/perf/bench/breakpoint.c +++ b/tools/perf/bench/breakpoint.c @@ -47,6 +47,7 @@ struct breakpoint { static int breakpoint_setup(void *addr) { struct perf_event_attr attr = { .size = 0, }; + int fd; attr.type = PERF_TYPE_BREAKPOINT; attr.size = sizeof(attr); @@ -56,7 +57,12 @@ static int breakpoint_setup(void *addr) attr.bp_addr = (unsigned long)addr; attr.bp_type = HW_BREAKPOINT_RW; attr.bp_len = HW_BREAKPOINT_LEN_1; - return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0); + fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0); + + if (fd < 0) + fd = -errno; + + return fd; } static void *passive_thread(void *arg) @@ -122,8 +128,14 @@ int bench_breakpoint_thread(int argc, const char **argv) for (i = 0; i < thread_params.nbreakpoints; i++) { breakpoints[i].fd = breakpoint_setup(&breakpoints[i].watched); - if (breakpoints[i].fd == -1) + + if (breakpoints[i].fd < 0) { + if (breakpoints[i].fd == -ENODEV) { + printf("Skipping perf bench breakpoint thread: No hardware support\n"); + return 0; + } exit((perror("perf_event_open"), EXIT_FAILURE)); + } } gettimeofday(&start, NULL); for (i = 0; i < thread_params.nparallel; i++) { @@ -196,8 +208,14 @@ int bench_breakpoint_enable(int argc, const char **argv) exit(EXIT_FAILURE); } fd = breakpoint_setup(&watched); - if (fd == -1) + + if (fd < 0) { + if (fd == -ENODEV) { + printf("Skipping perf bench breakpoint enable: No hardware support\n"); + return 0; + } exit((perror("perf_event_open"), EXIT_FAILURE)); + } nthreads = enable_params.npassive + enable_params.nactive; threads = calloc(nthreads, sizeof(threads[0])); if (!threads) diff --git a/tools/perf/bench/pmu-scan.c b/tools/perf/bench/pmu-scan.c index c7d207f8e13c..9e4d36486f62 100644 --- a/tools/perf/bench/pmu-scan.c +++ b/tools/perf/bench/pmu-scan.c @@ -57,9 +57,7 @@ static int save_result(void) r->is_core = pmu->is_core; r->nr_caps = pmu->nr_caps; - r->nr_aliases = 0; - list_for_each(list, 
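The bench/breakpoint.c hunks above change breakpoint_setup() to return -errno on failure, so callers can tell "no hardware breakpoint support" (-ENODEV) apart from other errors and skip the benchmark instead of exiting. A stand-alone sketch of that error-encoding pattern, with a hypothetical open_counter() standing in for the real perf_event_open() setup:

	#include <errno.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	/* Hypothetical setup helper: returns an fd on success, -errno on failure. */
	static int open_counter(const char *path)
	{
		int fd = open(path, O_RDONLY);

		if (fd < 0)
			return -errno;	/* preserve the reason, not just "-1" */

		return fd;
	}

	int main(void)
	{
		int fd = open_counter("/dev/does-not-exist");

		if (fd < 0) {
			if (fd == -ENODEV || fd == -ENOENT) {
				printf("Skipping: no such device/file\n");
				return 0;
			}
			perror("open_counter");
			return EXIT_FAILURE;
		}

		close(fd);
		return 0;
	}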
&pmu->aliases) - r->nr_aliases++; + r->nr_aliases = perf_pmu__num_events(pmu); r->nr_formats = 0; list_for_each(list, &pmu->format) @@ -98,9 +96,7 @@ static int check_result(bool core_only) return -1; } - nr = 0; - list_for_each(list, &pmu->aliases) - nr++; + nr = perf_pmu__num_events(pmu); if (nr != r->nr_aliases) { pr_err("Unmatched number of event aliases in %s: expect %d vs got %d\n", pmu->name, r->nr_aliases, nr); diff --git a/tools/perf/bench/uprobe.c b/tools/perf/bench/uprobe.c new file mode 100644 index 000000000000..914c0817fe8a --- /dev/null +++ b/tools/perf/bench/uprobe.c @@ -0,0 +1,198 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* + * uprobe.c + * + * uprobe benchmarks + * + * Copyright (C) 2023, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com> + */ +#include "../perf.h" +#include "../util/util.h" +#include <subcmd/parse-options.h> +#include "../builtin.h" +#include "bench.h" +#include <linux/compiler.h> +#include <linux/time64.h> + +#include <inttypes.h> +#include <stdio.h> +#include <sys/time.h> +#include <sys/types.h> +#include <time.h> +#include <unistd.h> +#include <stdlib.h> + +#define LOOPS_DEFAULT 1000 +static int loops = LOOPS_DEFAULT; + +enum bench_uprobe { + BENCH_UPROBE__BASELINE, + BENCH_UPROBE__EMPTY, + BENCH_UPROBE__TRACE_PRINTK, +}; + +static const struct option options[] = { + OPT_INTEGER('l', "loop", &loops, "Specify number of loops"), + OPT_END() +}; + +static const char * const bench_uprobe_usage[] = { + "perf bench uprobe <options>", + NULL +}; + +#ifdef HAVE_BPF_SKEL +#include "bpf_skel/bench_uprobe.skel.h" + +#define bench_uprobe__attach_uprobe(prog) \ + skel->links.prog = bpf_program__attach_uprobe_opts(/*prog=*/skel->progs.prog, \ + /*pid=*/-1, \ + /*binary_path=*/"/lib64/libc.so.6", \ + /*func_offset=*/0, \ + /*opts=*/&uprobe_opts); \ + if (!skel->links.prog) { \ + err = -errno; \ + fprintf(stderr, "Failed to attach bench uprobe \"%s\": %s\n", #prog, strerror(errno)); \ + goto cleanup; \ + } + +struct bench_uprobe_bpf *skel; + +static int bench_uprobe__setup_bpf_skel(enum bench_uprobe bench) +{ + DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts); + int err; + + /* Load and verify BPF application */ + skel = bench_uprobe_bpf__open(); + if (!skel) { + fprintf(stderr, "Failed to open and load uprobes bench BPF skeleton\n"); + return -1; + } + + err = bench_uprobe_bpf__load(skel); + if (err) { + fprintf(stderr, "Failed to load and verify BPF skeleton\n"); + goto cleanup; + } + + uprobe_opts.func_name = "usleep"; + switch (bench) { + case BENCH_UPROBE__BASELINE: break; + case BENCH_UPROBE__EMPTY: bench_uprobe__attach_uprobe(empty); break; + case BENCH_UPROBE__TRACE_PRINTK: bench_uprobe__attach_uprobe(trace_printk); break; + default: + fprintf(stderr, "Invalid bench: %d\n", bench); + goto cleanup; + } + + return err; +cleanup: + bench_uprobe_bpf__destroy(skel); + return err; +} + +static void bench_uprobe__teardown_bpf_skel(void) +{ + if (skel) { + bench_uprobe_bpf__destroy(skel); + skel = NULL; + } +} +#else +static int bench_uprobe__setup_bpf_skel(enum bench_uprobe bench __maybe_unused) { return 0; } +static void bench_uprobe__teardown_bpf_skel(void) {}; +#endif + +static int bench_uprobe_format__default_fprintf(const char *name, const char *unit, u64 diff, FILE *fp) +{ + static u64 baseline, previous; + s64 diff_to_baseline = diff - baseline, + diff_to_previous = diff - previous; + int printed = fprintf(fp, "# Executed %'d %s calls\n", loops, name); + + printed += fprintf(fp, " %14s: %'" PRIu64 " %ss", "Total time", diff, 
unit); + + if (baseline) { + printed += fprintf(fp, " %s%'" PRId64 " to baseline", diff_to_baseline > 0 ? "+" : "", diff_to_baseline); + + if (previous != baseline) + fprintf(stdout, " %s%'" PRId64 " to previous", diff_to_previous > 0 ? "+" : "", diff_to_previous); + } + + printed += fprintf(fp, "\n\n %'.3f %ss/op", (double)diff / (double)loops, unit); + + if (baseline) { + printed += fprintf(fp, " %'.3f %ss/op to baseline", (double)diff_to_baseline / (double)loops, unit); + + if (previous != baseline) + printed += fprintf(fp, " %'.3f %ss/op to previous", (double)diff_to_previous / (double)loops, unit); + } else { + baseline = diff; + } + + fputc('\n', fp); + + previous = diff; + + return printed + 1; +} + +static int bench_uprobe(int argc, const char **argv, enum bench_uprobe bench) +{ + const char *name = "usleep(1000)", *unit = "usec"; + struct timespec start, end; + u64 diff; + int i; + + argc = parse_options(argc, argv, options, bench_uprobe_usage, 0); + + if (bench != BENCH_UPROBE__BASELINE && bench_uprobe__setup_bpf_skel(bench) < 0) + return 0; + + clock_gettime(CLOCK_REALTIME, &start); + + for (i = 0; i < loops; i++) { + usleep(USEC_PER_MSEC); + } + + clock_gettime(CLOCK_REALTIME, &end); + + diff = end.tv_sec * NSEC_PER_SEC + end.tv_nsec - (start.tv_sec * NSEC_PER_SEC + start.tv_nsec); + diff /= NSEC_PER_USEC; + + switch (bench_format) { + case BENCH_FORMAT_DEFAULT: + bench_uprobe_format__default_fprintf(name, unit, diff, stdout); + break; + + case BENCH_FORMAT_SIMPLE: + printf("%" PRIu64 "\n", diff); + break; + + default: + /* reaching here is something of a disaster */ + fprintf(stderr, "Unknown format:%d\n", bench_format); + exit(1); + } + + if (bench != BENCH_UPROBE__BASELINE) + bench_uprobe__teardown_bpf_skel(); + + return 0; +} + +int bench_uprobe_baseline(int argc, const char **argv) +{ + return bench_uprobe(argc, argv, BENCH_UPROBE__BASELINE); +} + +int bench_uprobe_empty(int argc, const char **argv) +{ + return bench_uprobe(argc, argv, BENCH_UPROBE__EMPTY); +} + +int bench_uprobe_trace_printk(int argc, const char **argv) +{ + return bench_uprobe(argc, argv, BENCH_UPROBE__TRACE_PRINTK); +} diff --git a/tools/perf/builtin-bench.c b/tools/perf/builtin-bench.c index 5033e8bab276..1a8898d5b560 100644 --- a/tools/perf/builtin-bench.c +++ b/tools/perf/builtin-bench.c @@ -105,6 +105,13 @@ static struct bench breakpoint_benchmarks[] = { { NULL, NULL, NULL }, }; +static struct bench uprobe_benchmarks[] = { + { "baseline", "Baseline libc usleep(1000) call", bench_uprobe_baseline, }, + { "empty", "Attach empty BPF prog to uprobe on usleep, system wide", bench_uprobe_empty, }, + { "trace_printk", "Attach trace_printk BPF prog to uprobe on usleep syswide", bench_uprobe_trace_printk, }, + { NULL, NULL, NULL }, +}; + struct collection { const char *name; const char *summary; @@ -124,6 +131,7 @@ static struct collection collections[] = { #endif { "internals", "Perf-internals benchmarks", internals_benchmarks }, { "breakpoint", "Breakpoint benchmarks", breakpoint_benchmarks }, + { "uprobe", "uprobe benchmarks", uprobe_benchmarks }, { "all", "All benchmarks", NULL }, { NULL, NULL, NULL } }; diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c index e8a1b16aa5f8..57d300d8e570 100644 --- a/tools/perf/builtin-diff.c +++ b/tools/perf/builtin-diff.c @@ -1915,8 +1915,8 @@ static int data_init(int argc, const char **argv) struct perf_data *data = &d->data; data->path = use_default ? 
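The new bench/uprobe.c above times loops of usleep(1000) and reports microseconds per operation, so the empty and trace_printk variants can be compared against the baseline ('perf bench uprobe baseline' and friends, as registered in builtin-bench.c below). A cut-down, self-contained version of just the measurement loop, with no BPF attachment, kept deliberately close to the code above:

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	#define NSEC_PER_SEC	1000000000ULL
	#define NSEC_PER_USEC	1000ULL
	#define USEC_PER_MSEC	1000

	int main(void)
	{
		const int loops = 1000;		/* mirrors LOOPS_DEFAULT above */
		struct timespec start, end;
		uint64_t diff;

		clock_gettime(CLOCK_REALTIME, &start);

		for (int i = 0; i < loops; i++)
			usleep(USEC_PER_MSEC);	/* usleep(1000): the probed libc call */

		clock_gettime(CLOCK_REALTIME, &end);

		diff = end.tv_sec * NSEC_PER_SEC + end.tv_nsec -
		       (start.tv_sec * NSEC_PER_SEC + start.tv_nsec);
		diff /= NSEC_PER_USEC;

		printf("%" PRIu64 " usecs total, %.3f usecs/op\n",
		       diff, (double)diff / loops);
		return 0;
	}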
defaults[i] : argv[i]; - data->mode = PERF_DATA_MODE_READ, - data->force = force, + data->mode = PERF_DATA_MODE_READ; + data->force = force; d->idx = i; } diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c index 7fec2cca759f..a343823c8ddf 100644 --- a/tools/perf/builtin-list.c +++ b/tools/perf/builtin-list.c @@ -145,9 +145,20 @@ static void default_print_event(void *ps, const char *pmu_name, const char *topi putchar('\n'); if (desc && print_state->desc) { + char *desc_with_unit = NULL; + int desc_len = -1; + + if (pmu_name && strcmp(pmu_name, "default_core")) { + desc_len = strlen(desc); + desc_len = asprintf(&desc_with_unit, + desc[desc_len - 1] != '.' + ? "%s. Unit: %s" : "%s Unit: %s", + desc, pmu_name); + } printf("%*s", 8, "["); - wordwrap(desc, 8, pager_get_columns(), 0); + wordwrap(desc_len > 0 ? desc_with_unit : desc, 8, pager_get_columns(), 0); printf("]\n"); + free(desc_with_unit); } long_desc = long_desc ?: desc; if (long_desc && print_state->long_desc) { @@ -423,6 +434,13 @@ static void json_print_metric(void *ps __maybe_unused, const char *group, strbuf_release(&buf); } +static bool default_skip_duplicate_pmus(void *ps) +{ + struct print_state *print_state = ps; + + return !print_state->long_desc; +} + int cmd_list(int argc, const char **argv) { int i, ret = 0; @@ -434,6 +452,7 @@ int cmd_list(int argc, const char **argv) .print_end = default_print_end, .print_event = default_print_event, .print_metric = default_print_metric, + .skip_duplicate_pmus = default_skip_duplicate_pmus, }; const char *cputype = NULL; const char *unit_name = NULL; @@ -502,7 +521,7 @@ int cmd_list(int argc, const char **argv) ret = -1; goto out; } - default_ps.pmu_glob = pmu->name; + default_ps.pmu_glob = strdup(pmu->name); } } print_cb.print_start(ps); diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c index c15386cb1033..b141f2134274 100644 --- a/tools/perf/builtin-lock.c +++ b/tools/perf/builtin-lock.c @@ -2052,6 +2052,7 @@ static int __cmd_contention(int argc, const char **argv) if (IS_ERR(session)) { pr_err("Initializing perf session failed\n"); err = PTR_ERR(session); + session = NULL; goto out_delete; } @@ -2506,7 +2507,7 @@ int cmd_lock(int argc, const char **argv) OPT_CALLBACK('M', "map-nr-entries", &bpf_map_entries, "num", "Max number of BPF map entries", parse_map_entry), OPT_CALLBACK(0, "max-stack", &max_stack_depth, "num", - "Set the maximum stack depth when collecting lopck contention, " + "Set the maximum stack depth when collecting lock contention, " "Default: " __stringify(CONTENTION_STACK_DEPTH), parse_max_stack), OPT_INTEGER(0, "stack-skip", &stack_skip, "Set the number of stack depth to skip when finding a lock caller, " diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c index aec18db7ff23..34bb31f08bb5 100644 --- a/tools/perf/builtin-record.c +++ b/tools/perf/builtin-record.c @@ -37,8 +37,6 @@ #include "util/parse-branch-options.h" #include "util/parse-regs-options.h" #include "util/perf_api_probe.h" -#include "util/llvm-utils.h" -#include "util/bpf-loader.h" #include "util/trigger.h" #include "util/perf-hooks.h" #include "util/cpu-set-sched.h" @@ -2465,16 +2463,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv) } } - err = bpf__apply_obj_config(); - if (err) { - char errbuf[BUFSIZ]; - - bpf__strerror_apply_obj_config(err, errbuf, sizeof(errbuf)); - pr_err("ERROR: Apply config to BPF failed: %s\n", - errbuf); - goto out_free_threads; - } - /* * Normally perf_session__new would do this, but it doesn't have 
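The builtin-diff.c hunk above replaces trailing commas with semicolons: the original compiled only because the comma operator chains the two assignments into a single statement, which is easy to break silently when lines are added or reordered. A tiny illustration of why the two spellings happen to behave the same here:

	#include <stdio.h>

	struct data {
		int mode;
		int force;
	};

	int main(void)
	{
		struct data a = {0}, b = {0};

		/* Comma operator: both assignments form one expression statement. */
		a.mode = 1, a.force = 2;

		/* Separate statements: the intended, robust form. */
		b.mode = 1;
		b.force = 2;

		printf("a = {%d, %d}, b = {%d, %d}\n", a.mode, a.force, b.mode, b.force);
		return 0;
	}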
the * evlist. @@ -3486,10 +3474,6 @@ static struct option __record_options[] = { "collect kernel callchains"), OPT_BOOLEAN(0, "user-callchains", &record.opts.user_callchains, "collect user callchains"), - OPT_STRING(0, "clang-path", &llvm_param.clang_path, "clang path", - "clang binary to use for compiling BPF scriptlets"), - OPT_STRING(0, "clang-opt", &llvm_param.clang_opt, "clang options", - "options passed to clang when compiling BPF scriptlets"), OPT_STRING(0, "vmlinux", &symbol_conf.vmlinux_name, "file", "vmlinux pathname"), OPT_BOOLEAN(0, "buildid-all", &record.buildid_all, @@ -3967,27 +3951,6 @@ int cmd_record(int argc, const char **argv) setlocale(LC_ALL, ""); -#ifndef HAVE_LIBBPF_SUPPORT -# define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, "NO_LIBBPF=1", c) - set_nobuild('\0', "clang-path", true); - set_nobuild('\0', "clang-opt", true); -# undef set_nobuild -#endif - -#ifndef HAVE_BPF_PROLOGUE -# if !defined (HAVE_DWARF_SUPPORT) -# define REASON "NO_DWARF=1" -# elif !defined (HAVE_LIBBPF_SUPPORT) -# define REASON "NO_LIBBPF=1" -# else -# define REASON "this architecture doesn't support BPF prologue" -# endif -# define set_nobuild(s, l, c) set_option_nobuild(record_options, s, l, REASON, c) - set_nobuild('\0', "vmlinux", true); -# undef set_nobuild -# undef REASON -#endif - #ifndef HAVE_BPF_SKEL # define set_nobuild(s, l, m, c) set_option_nobuild(record_options, s, l, m, c) set_nobuild('\0', "off-cpu", "no BUILD_BPF_SKEL=1", true); @@ -4116,14 +4079,6 @@ int cmd_record(int argc, const char **argv) if (dry_run) goto out; - err = bpf__setup_stdout(rec->evlist); - if (err) { - bpf__strerror_setup_stdout(rec->evlist, err, errbuf, sizeof(errbuf)); - pr_err("ERROR: Setup BPF stdout failed: %s\n", - errbuf); - goto out; - } - err = -ENOMEM; if (rec->no_buildid_cache || rec->no_buildid) { diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c index 200b3e7ea8da..517bf25750c8 100644 --- a/tools/perf/builtin-script.c +++ b/tools/perf/builtin-script.c @@ -2199,6 +2199,17 @@ static void process_event(struct perf_script *script, if (PRINT_FIELD(RETIRE_LAT)) fprintf(fp, "%16" PRIu16, sample->retire_lat); + if (PRINT_FIELD(CGROUP)) { + const char *cgrp_name; + struct cgroup *cgrp = cgroup__find(machine->env, + sample->cgroup); + if (cgrp != NULL) + cgrp_name = cgrp->name; + else + cgrp_name = "unknown"; + fprintf(fp, " %s", cgrp_name); + } + if (PRINT_FIELD(IP)) { struct callchain_cursor *cursor = NULL; @@ -2243,17 +2254,6 @@ static void process_event(struct perf_script *script, if (PRINT_FIELD(CODE_PAGE_SIZE)) fprintf(fp, " %s", get_page_size_name(sample->code_page_size, str)); - if (PRINT_FIELD(CGROUP)) { - const char *cgrp_name; - struct cgroup *cgrp = cgroup__find(machine->env, - sample->cgroup); - if (cgrp != NULL) - cgrp_name = cgrp->name; - else - cgrp_name = "unknown"; - fprintf(fp, " %s", cgrp_name); - } - perf_sample__fprintf_ipc(sample, attr, fp); fprintf(fp, "\n"); diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c index 1baa2acb3ced..ea8c7eca5eee 100644 --- a/tools/perf/builtin-top.c +++ b/tools/perf/builtin-top.c @@ -1805,6 +1805,7 @@ int cmd_top(int argc, const char **argv) top.session = perf_session__new(NULL, NULL); if (IS_ERR(top.session)) { status = PTR_ERR(top.session); + top.session = NULL; goto out_delete_evlist; } diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c index 6e73d0e95715..e541d0e2777a 100644 --- a/tools/perf/builtin-trace.c +++ b/tools/perf/builtin-trace.c @@ -18,6 +18,10 @@ #include 
<api/fs/tracing_path.h> #ifdef HAVE_LIBBPF_SUPPORT #include <bpf/bpf.h> +#include <bpf/libbpf.h> +#ifdef HAVE_BPF_SKEL +#include "bpf_skel/augmented_raw_syscalls.skel.h" +#endif #endif #include "util/bpf_map.h" #include "util/rlimit.h" @@ -53,7 +57,6 @@ #include "trace/beauty/beauty.h" #include "trace-event.h" #include "util/parse-events.h" -#include "util/bpf-loader.h" #include "util/tracepoint.h" #include "callchain.h" #include "print_binary.h" @@ -127,25 +130,19 @@ struct trace { struct syscalltbl *sctbl; struct { struct syscall *table; - struct { // per syscall BPF_MAP_TYPE_PROG_ARRAY - struct bpf_map *sys_enter, - *sys_exit; - } prog_array; struct { struct evsel *sys_enter, - *sys_exit, - *augmented; + *sys_exit, + *bpf_output; } events; - struct bpf_program *unaugmented_prog; } syscalls; - struct { - struct bpf_map *map; - } dump; +#ifdef HAVE_BPF_SKEL + struct augmented_raw_syscalls_bpf *skel; +#endif struct record_opts opts; struct evlist *evlist; struct machine *host; struct thread *current; - struct bpf_object *bpf_obj; struct cgroup *cgroup; u64 base_time; FILE *output; @@ -415,6 +412,7 @@ static int evsel__init_syscall_tp(struct evsel *evsel) if (evsel__init_tp_uint_field(evsel, &sc->id, "__syscall_nr") && evsel__init_tp_uint_field(evsel, &sc->id, "nr")) return -ENOENT; + return 0; } @@ -1296,6 +1294,22 @@ static struct thread_trace *thread_trace__new(void) return ttrace; } +static void thread_trace__free_files(struct thread_trace *ttrace); + +static void thread_trace__delete(void *pttrace) +{ + struct thread_trace *ttrace = pttrace; + + if (!ttrace) + return; + + intlist__delete(ttrace->syscall_stats); + ttrace->syscall_stats = NULL; + thread_trace__free_files(ttrace); + zfree(&ttrace->entry_str); + free(ttrace); +} + static struct thread_trace *thread__trace(struct thread *thread, FILE *fp) { struct thread_trace *ttrace; @@ -1333,6 +1347,17 @@ void syscall_arg__set_ret_scnprintf(struct syscall_arg *arg, static const size_t trace__entry_str_size = 2048; +static void thread_trace__free_files(struct thread_trace *ttrace) +{ + for (int i = 0; i < ttrace->files.max; ++i) { + struct file *file = ttrace->files.table + i; + zfree(&file->pathname); + } + + zfree(&ttrace->files.table); + ttrace->files.max = -1; +} + static struct file *thread_trace__files_entry(struct thread_trace *ttrace, int fd) { if (fd < 0) @@ -1635,6 +1660,8 @@ static int trace__symbols_init(struct trace *trace, struct evlist *evlist) if (trace->host == NULL) return -ENOMEM; + thread__set_priv_destructor(thread_trace__delete); + err = trace_event__register_resolver(trace->host, trace__machine__resolve_kernel_addr); if (err < 0) goto out; @@ -2816,7 +2843,7 @@ static int trace__event_handler(struct trace *trace, struct evsel *evsel, if (thread) trace__fprintf_comm_tid(trace, thread, trace->output); - if (evsel == trace->syscalls.events.augmented) { + if (evsel == trace->syscalls.events.bpf_output) { int id = perf_evsel__sc_tp_uint(evsel, id, sample); struct syscall *sc = trace__syscall_info(trace, evsel, id); @@ -3136,13 +3163,8 @@ static void evlist__free_syscall_tp_fields(struct evlist *evlist) struct evsel *evsel; evlist__for_each_entry(evlist, evsel) { - struct evsel_trace *et = evsel->priv; - - if (!et || !evsel->tp_format || strcmp(evsel->tp_format->system, "syscalls")) - continue; - - zfree(&et->fmt); - free(et); + evsel_trace__delete(evsel->priv); + evsel->priv = NULL; } } @@ -3254,35 +3276,16 @@ out_enomem: goto out; } -#ifdef HAVE_LIBBPF_SUPPORT -static struct bpf_map *trace__find_bpf_map_by_name(struct 
trace *trace, const char *name) -{ - if (trace->bpf_obj == NULL) - return NULL; - - return bpf_object__find_map_by_name(trace->bpf_obj, name); -} - -static void trace__set_bpf_map_filtered_pids(struct trace *trace) -{ - trace->filter_pids.map = trace__find_bpf_map_by_name(trace, "pids_filtered"); -} - -static void trace__set_bpf_map_syscalls(struct trace *trace) -{ - trace->syscalls.prog_array.sys_enter = trace__find_bpf_map_by_name(trace, "syscalls_sys_enter"); - trace->syscalls.prog_array.sys_exit = trace__find_bpf_map_by_name(trace, "syscalls_sys_exit"); -} - +#ifdef HAVE_BPF_SKEL static struct bpf_program *trace__find_bpf_program_by_title(struct trace *trace, const char *name) { struct bpf_program *pos, *prog = NULL; const char *sec_name; - if (trace->bpf_obj == NULL) + if (trace->skel->obj == NULL) return NULL; - bpf_object__for_each_program(pos, trace->bpf_obj) { + bpf_object__for_each_program(pos, trace->skel->obj) { sec_name = bpf_program__section_name(pos); if (sec_name && !strcmp(sec_name, name)) { prog = pos; @@ -3300,12 +3303,12 @@ static struct bpf_program *trace__find_syscall_bpf_prog(struct trace *trace, str if (prog_name == NULL) { char default_prog_name[256]; - scnprintf(default_prog_name, sizeof(default_prog_name), "!syscalls:sys_%s_%s", type, sc->name); + scnprintf(default_prog_name, sizeof(default_prog_name), "tp/syscalls/sys_%s_%s", type, sc->name); prog = trace__find_bpf_program_by_title(trace, default_prog_name); if (prog != NULL) goto out_found; if (sc->fmt && sc->fmt->alias) { - scnprintf(default_prog_name, sizeof(default_prog_name), "!syscalls:sys_%s_%s", type, sc->fmt->alias); + scnprintf(default_prog_name, sizeof(default_prog_name), "tp/syscalls/sys_%s_%s", type, sc->fmt->alias); prog = trace__find_bpf_program_by_title(trace, default_prog_name); if (prog != NULL) goto out_found; @@ -3323,7 +3326,7 @@ out_found: pr_debug("Couldn't find BPF prog \"%s\" to associate with syscalls:sys_%s_%s, not augmenting it\n", prog_name, type, sc->name); out_unaugmented: - return trace->syscalls.unaugmented_prog; + return trace->skel->progs.syscall_unaugmented; } static void trace__init_syscall_bpf_progs(struct trace *trace, int id) @@ -3340,13 +3343,13 @@ static void trace__init_syscall_bpf_progs(struct trace *trace, int id) static int trace__bpf_prog_sys_enter_fd(struct trace *trace, int id) { struct syscall *sc = trace__syscall_info(trace, NULL, id); - return sc ? bpf_program__fd(sc->bpf_prog.sys_enter) : bpf_program__fd(trace->syscalls.unaugmented_prog); + return sc ? bpf_program__fd(sc->bpf_prog.sys_enter) : bpf_program__fd(trace->skel->progs.syscall_unaugmented); } static int trace__bpf_prog_sys_exit_fd(struct trace *trace, int id) { struct syscall *sc = trace__syscall_info(trace, NULL, id); - return sc ? bpf_program__fd(sc->bpf_prog.sys_exit) : bpf_program__fd(trace->syscalls.unaugmented_prog); + return sc ? 
bpf_program__fd(sc->bpf_prog.sys_exit) : bpf_program__fd(trace->skel->progs.syscall_unaugmented); } static struct bpf_program *trace__find_usable_bpf_prog_entry(struct trace *trace, struct syscall *sc) @@ -3371,7 +3374,7 @@ try_to_find_pair: bool is_candidate = false; if (pair == NULL || pair == sc || - pair->bpf_prog.sys_enter == trace->syscalls.unaugmented_prog) + pair->bpf_prog.sys_enter == trace->skel->progs.syscall_unaugmented) continue; for (field = sc->args, candidate_field = pair->args; @@ -3395,6 +3398,19 @@ try_to_find_pair: if (strcmp(field->type, candidate_field->type)) goto next_candidate; + /* + * This is limited in the BPF program but sys_write + * uses "const char *" for its "buf" arg so we need to + * use some heuristic that is kinda future proof... + */ + if (strcmp(field->type, "const char *") == 0 && + !(strstr(field->name, "name") || + strstr(field->name, "path") || + strstr(field->name, "file") || + strstr(field->name, "root") || + strstr(field->name, "description"))) + goto next_candidate; + is_candidate = true; } @@ -3424,7 +3440,7 @@ try_to_find_pair: */ if (pair_prog == NULL) { pair_prog = trace__find_syscall_bpf_prog(trace, pair, pair->fmt ? pair->fmt->bpf_prog_name.sys_enter : NULL, "enter"); - if (pair_prog == trace->syscalls.unaugmented_prog) + if (pair_prog == trace->skel->progs.syscall_unaugmented) goto next_candidate; } @@ -3439,8 +3455,8 @@ try_to_find_pair: static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace) { - int map_enter_fd = bpf_map__fd(trace->syscalls.prog_array.sys_enter), - map_exit_fd = bpf_map__fd(trace->syscalls.prog_array.sys_exit); + int map_enter_fd = bpf_map__fd(trace->skel->maps.syscalls_sys_enter); + int map_exit_fd = bpf_map__fd(trace->skel->maps.syscalls_sys_exit); int err = 0, key; for (key = 0; key < trace->sctbl->syscalls.nr_entries; ++key) { @@ -3502,7 +3518,7 @@ static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace) * For now we're just reusing the sys_enter prog, and if it * already has an augmenter, we don't need to find one. 
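The hunk above only lets a "const char *" syscall argument reuse another syscall's augmenter when its name looks string-like ("name", "path", "file", "root", "description"), because e.g. sys_write's "buf" is also "const char *" but is not a C string. A small stand-alone sketch of that name-based heuristic; the helper name is invented for illustration:

	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	/* Hypothetical helper mirroring the heuristic used above. */
	static bool arg_looks_like_string(const char *type, const char *name)
	{
		if (strcmp(type, "const char *") != 0)
			return true;	/* the heuristic only applies to const char * args */

		return strstr(name, "name") || strstr(name, "path") ||
		       strstr(name, "file") || strstr(name, "root") ||
		       strstr(name, "description");
	}

	int main(void)
	{
		printf("openat filename: %d\n", arg_looks_like_string("const char *", "filename"));
		printf("write buf:       %d\n", arg_looks_like_string("const char *", "buf"));
		printf("write count:     %d\n", arg_looks_like_string("size_t", "count"));
		return 0;
	}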
*/ - if (sc->bpf_prog.sys_enter != trace->syscalls.unaugmented_prog) + if (sc->bpf_prog.sys_enter != trace->skel->progs.syscall_unaugmented) continue; /* @@ -3525,74 +3541,9 @@ static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace) break; } - return err; } - -static void trace__delete_augmented_syscalls(struct trace *trace) -{ - struct evsel *evsel, *tmp; - - evlist__remove(trace->evlist, trace->syscalls.events.augmented); - evsel__delete(trace->syscalls.events.augmented); - trace->syscalls.events.augmented = NULL; - - evlist__for_each_entry_safe(trace->evlist, tmp, evsel) { - if (evsel->bpf_obj == trace->bpf_obj) { - evlist__remove(trace->evlist, evsel); - evsel__delete(evsel); - } - - } - - bpf_object__close(trace->bpf_obj); - trace->bpf_obj = NULL; -} -#else // HAVE_LIBBPF_SUPPORT -static struct bpf_map *trace__find_bpf_map_by_name(struct trace *trace __maybe_unused, - const char *name __maybe_unused) -{ - return NULL; -} - -static void trace__set_bpf_map_filtered_pids(struct trace *trace __maybe_unused) -{ -} - -static void trace__set_bpf_map_syscalls(struct trace *trace __maybe_unused) -{ -} - -static struct bpf_program *trace__find_bpf_program_by_title(struct trace *trace __maybe_unused, - const char *name __maybe_unused) -{ - return NULL; -} - -static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace __maybe_unused) -{ - return 0; -} - -static void trace__delete_augmented_syscalls(struct trace *trace __maybe_unused) -{ -} -#endif // HAVE_LIBBPF_SUPPORT - -static bool trace__only_augmented_syscalls_evsels(struct trace *trace) -{ - struct evsel *evsel; - - evlist__for_each_entry(trace->evlist, evsel) { - if (evsel == trace->syscalls.events.augmented || - evsel->bpf_obj == trace->bpf_obj) - continue; - - return false; - } - - return true; -} +#endif // HAVE_BPF_SKEL static int trace__set_ev_qualifier_filter(struct trace *trace) { @@ -3956,23 +3907,31 @@ static int trace__run(struct trace *trace, int argc, const char **argv) err = evlist__open(evlist); if (err < 0) goto out_error_open; +#ifdef HAVE_BPF_SKEL + if (trace->syscalls.events.bpf_output) { + struct perf_cpu cpu; - err = bpf__apply_obj_config(); - if (err) { - char errbuf[BUFSIZ]; - - bpf__strerror_apply_obj_config(err, errbuf, sizeof(errbuf)); - pr_err("ERROR: Apply config to BPF failed: %s\n", - errbuf); - goto out_error_open; + /* + * Set up the __augmented_syscalls__ BPF map to hold for each + * CPU the bpf-output event's file descriptor. 
+ */ + perf_cpu_map__for_each_cpu(cpu, i, trace->syscalls.events.bpf_output->core.cpus) { + bpf_map__update_elem(trace->skel->maps.__augmented_syscalls__, + &cpu.cpu, sizeof(int), + xyarray__entry(trace->syscalls.events.bpf_output->core.fd, + cpu.cpu, 0), + sizeof(__u32), BPF_ANY); + } } - +#endif err = trace__set_filter_pids(trace); if (err < 0) goto out_error_mem; - if (trace->syscalls.prog_array.sys_enter) +#ifdef HAVE_BPF_SKEL + if (trace->skel && trace->skel->progs.sys_enter) trace__init_syscalls_bpf_prog_array_maps(trace); +#endif if (trace->ev_qualifier_ids.nr > 0) { err = trace__set_ev_qualifier_filter(trace); @@ -4005,9 +3964,6 @@ static int trace__run(struct trace *trace, int argc, const char **argv) if (err < 0) goto out_error_apply_filters; - if (trace->dump.map) - bpf_map__fprintf(trace->dump.map, trace->output); - err = evlist__mmap(evlist, trace->opts.mmap_pages); if (err < 0) goto out_error_mmap; @@ -4704,6 +4660,18 @@ static void trace__exit(struct trace *trace) zfree(&trace->perfconfig_events); } +#ifdef HAVE_BPF_SKEL +static int bpf__setup_bpf_output(struct evlist *evlist) +{ + int err = parse_event(evlist, "bpf-output/no-inherit=1,name=__augmented_syscalls__/"); + + if (err) + pr_debug("ERROR: failed to create the \"__augmented_syscalls__\" bpf-output event\n"); + + return err; +} +#endif + int cmd_trace(int argc, const char **argv) { const char *trace_usage[] = { @@ -4735,7 +4703,6 @@ int cmd_trace(int argc, const char **argv) .max_stack = UINT_MAX, .max_events = ULONG_MAX, }; - const char *map_dump_str = NULL; const char *output_name = NULL; const struct option trace_options[] = { OPT_CALLBACK('e', "event", &trace, "event", @@ -4769,9 +4736,6 @@ int cmd_trace(int argc, const char **argv) OPT_CALLBACK(0, "duration", &trace, "float", "show only events with duration > N.M ms", trace__set_duration), -#ifdef HAVE_LIBBPF_SUPPORT - OPT_STRING(0, "map-dump", &map_dump_str, "BPF map", "BPF map to periodically dump"), -#endif OPT_BOOLEAN(0, "sched", &trace.sched, "show blocking scheduler events"), OPT_INCR('v', "verbose", &verbose, "be more verbose"), OPT_BOOLEAN('T', "time", &trace.full_time, @@ -4898,87 +4862,48 @@ int cmd_trace(int argc, const char **argv) "cgroup monitoring only available in system-wide mode"); } - evsel = bpf__setup_output_event(trace.evlist, "__augmented_syscalls__"); - if (IS_ERR(evsel)) { - bpf__strerror_setup_output_event(trace.evlist, PTR_ERR(evsel), bf, sizeof(bf)); - pr_err("ERROR: Setup trace syscalls enter failed: %s\n", bf); - goto out; - } - - if (evsel) { - trace.syscalls.events.augmented = evsel; +#ifdef HAVE_BPF_SKEL + if (!trace.trace_syscalls) + goto skip_augmentation; - evsel = evlist__find_tracepoint_by_name(trace.evlist, "raw_syscalls:sys_enter"); - if (evsel == NULL) { - pr_err("ERROR: raw_syscalls:sys_enter not found in the augmented BPF object\n"); - goto out; - } + trace.skel = augmented_raw_syscalls_bpf__open(); + if (!trace.skel) { + pr_debug("Failed to open augmented syscalls BPF skeleton"); + } else { + /* + * Disable attaching the BPF programs except for sys_enter and + * sys_exit that tail call into this as necessary. 
+ */ + struct bpf_program *prog; - if (evsel->bpf_obj == NULL) { - pr_err("ERROR: raw_syscalls:sys_enter not associated to a BPF object\n"); - goto out; + bpf_object__for_each_program(prog, trace.skel->obj) { + if (prog != trace.skel->progs.sys_enter && prog != trace.skel->progs.sys_exit) + bpf_program__set_autoattach(prog, /*autoattach=*/false); } - trace.bpf_obj = evsel->bpf_obj; + err = augmented_raw_syscalls_bpf__load(trace.skel); - /* - * If we have _just_ the augmenter event but don't have a - * explicit --syscalls, then assume we want all strace-like - * syscalls: - */ - if (!trace.trace_syscalls && trace__only_augmented_syscalls_evsels(&trace)) - trace.trace_syscalls = true; - /* - * So, if we have a syscall augmenter, but trace_syscalls, aka - * strace-like syscall tracing is not set, then we need to trow - * away the augmenter, i.e. all the events that were created - * from that BPF object file. - * - * This is more to fix the current .perfconfig trace.add_events - * style of setting up the strace-like eBPF based syscall point - * payload augmenter. - * - * All this complexity will be avoided by adding an alternative - * to trace.add_events in the form of - * trace.bpf_augmented_syscalls, that will be only parsed if we - * need it. - * - * .perfconfig trace.add_events is still useful if we want, for - * instance, have msr_write.msr in some .perfconfig profile based - * 'perf trace --config determinism.profile' mode, where for some - * particular goal/workload type we want a set of events and - * output mode (with timings, etc) instead of having to add - * all via the command line. - * - * Also --config to specify an alternate .perfconfig file needs - * to be implemented. - */ - if (!trace.trace_syscalls) { - trace__delete_augmented_syscalls(&trace); + if (err < 0) { + libbpf_strerror(err, bf, sizeof(bf)); + pr_debug("Failed to load augmented syscalls BPF skeleton: %s\n", bf); } else { - trace__set_bpf_map_filtered_pids(&trace); - trace__set_bpf_map_syscalls(&trace); - trace.syscalls.unaugmented_prog = trace__find_bpf_program_by_title(&trace, "!raw_syscalls:unaugmented"); + augmented_raw_syscalls_bpf__attach(trace.skel); + trace__add_syscall_newtp(&trace); } } - err = bpf__setup_stdout(trace.evlist); + err = bpf__setup_bpf_output(trace.evlist); if (err) { - bpf__strerror_setup_stdout(trace.evlist, err, bf, sizeof(bf)); - pr_err("ERROR: Setup BPF stdout failed: %s\n", bf); + libbpf_strerror(err, bf, sizeof(bf)); + pr_err("ERROR: Setup BPF output event failed: %s\n", bf); goto out; } - + trace.syscalls.events.bpf_output = evlist__last(trace.evlist); + assert(!strcmp(evsel__name(trace.syscalls.events.bpf_output), "__augmented_syscalls__")); +skip_augmentation: +#endif err = -1; - if (map_dump_str) { - trace.dump.map = trace__find_bpf_map_by_name(&trace, map_dump_str); - if (trace.dump.map == NULL) { - pr_err("ERROR: BPF map \"%s\" not found\n", map_dump_str); - goto out; - } - } - if (trace.trace_pgfaults) { trace.opts.sample_address = true; trace.opts.sample_time = true; @@ -5029,7 +4954,7 @@ int cmd_trace(int argc, const char **argv) * buffers that are being copied from kernel to userspace, think 'read' * syscall. 
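Condensing the cmd_trace() wiring above: the augmented_raw_syscalls skeleton is opened, every program except the two raw_syscalls entry points has autoattach disabled (those extra programs are reached via tail calls instead), and only then is the skeleton loaded and attached. A sketch of that sequence reusing the names from the hunks; it builds only inside the perf tree with BUILD_BPF_SKEL=1:

	#include <bpf/libbpf.h>
	#include "bpf_skel/augmented_raw_syscalls.skel.h"

	/*
	 * Open the skeleton, keep autoattach only for the sys_enter/sys_exit
	 * entry points, then load and attach it; destroy on any failure.
	 */
	static struct augmented_raw_syscalls_bpf *open_and_attach(void)
	{
		struct augmented_raw_syscalls_bpf *skel = augmented_raw_syscalls_bpf__open();
		struct bpf_program *prog;

		if (!skel)
			return NULL;

		bpf_object__for_each_program(prog, skel->obj) {
			if (prog != skel->progs.sys_enter && prog != skel->progs.sys_exit)
				bpf_program__set_autoattach(prog, /*autoattach=*/false);
		}

		if (augmented_raw_syscalls_bpf__load(skel) ||
		    augmented_raw_syscalls_bpf__attach(skel)) {
			augmented_raw_syscalls_bpf__destroy(skel);
			return NULL;
		}

		return skel;
	}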
*/ - if (trace.syscalls.events.augmented) { + if (trace.syscalls.events.bpf_output) { evlist__for_each_entry(trace.evlist, evsel) { bool raw_syscalls_sys_exit = strcmp(evsel__name(evsel), "raw_syscalls:sys_exit") == 0; @@ -5038,9 +4963,9 @@ int cmd_trace(int argc, const char **argv) goto init_augmented_syscall_tp; } - if (trace.syscalls.events.augmented->priv == NULL && + if (trace.syscalls.events.bpf_output->priv == NULL && strstr(evsel__name(evsel), "syscalls:sys_enter")) { - struct evsel *augmented = trace.syscalls.events.augmented; + struct evsel *augmented = trace.syscalls.events.bpf_output; if (evsel__init_augmented_syscall_tp(augmented, evsel) || evsel__init_augmented_syscall_tp_args(augmented)) goto out; @@ -5145,5 +5070,8 @@ out_close: fclose(trace.output); out: trace__exit(&trace); +#ifdef HAVE_BPF_SKEL + augmented_raw_syscalls_bpf__destroy(trace.skel); +#endif return err; } diff --git a/tools/perf/check-headers.sh b/tools/perf/check-headers.sh index a0f1d8adce60..4314c9197850 100755 --- a/tools/perf/check-headers.sh +++ b/tools/perf/check-headers.sh @@ -123,7 +123,7 @@ check () { shift - check_2 "tools/$file" "$file" $* + check_2 "tools/$file" "$file" "$@" } beauty_check () { @@ -131,7 +131,7 @@ beauty_check () { shift - check_2 "tools/perf/trace/beauty/$file" "$file" $* + check_2 "tools/perf/trace/beauty/$file" "$file" "$@" } # Check if we have the kernel headers (tools/perf/../../include), else @@ -183,7 +183,7 @@ done check_2 tools/perf/util/hashmap.h tools/lib/bpf/hashmap.h check_2 tools/perf/util/hashmap.c tools/lib/bpf/hashmap.c -cd tools/perf +cd tools/perf || exit if [ ${#FAILURES[@]} -gt 0 ] then diff --git a/tools/perf/dlfilters/dlfilter-test-api-v0.c b/tools/perf/dlfilters/dlfilter-test-api-v0.c index b1f51efd67d6..72f263d49121 100644 --- a/tools/perf/dlfilters/dlfilter-test-api-v0.c +++ b/tools/perf/dlfilters/dlfilter-test-api-v0.c @@ -254,6 +254,30 @@ static int check_addr_al(void *ctx) return 0; } +static int check_address_al(void *ctx, const struct perf_dlfilter_sample *sample) +{ + struct perf_dlfilter_al address_al; + const struct perf_dlfilter_al *al; + + al = perf_dlfilter_fns.resolve_ip(ctx); + if (!al) + return test_fail("resolve_ip() failed"); + + address_al.size = sizeof(address_al); + if (perf_dlfilter_fns.resolve_address(ctx, sample->ip, &address_al)) + return test_fail("resolve_address() failed"); + + CHECK(address_al.sym && al->sym); + CHECK(!strcmp(address_al.sym, al->sym)); + CHECK(address_al.addr == al->addr); + CHECK(address_al.sym_start == al->sym_start); + CHECK(address_al.sym_end == al->sym_end); + CHECK(address_al.dso && al->dso); + CHECK(!strcmp(address_al.dso, al->dso)); + + return 0; +} + static int check_attr(void *ctx) { struct perf_event_attr *attr = perf_dlfilter_fns.attr(ctx); @@ -290,7 +314,7 @@ static int do_checks(void *data, const struct perf_dlfilter_sample *sample, void if (early && !d->do_early) return 0; - if (check_al(ctx) || check_addr_al(ctx)) + if (check_al(ctx) || check_addr_al(ctx) || check_address_al(ctx, sample)) return -1; if (early) diff --git a/tools/perf/dlfilters/dlfilter-test-api-v2.c b/tools/perf/dlfilters/dlfilter-test-api-v2.c new file mode 100644 index 000000000000..38e593d92920 --- /dev/null +++ b/tools/perf/dlfilters/dlfilter-test-api-v2.c @@ -0,0 +1,377 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test v2 API for perf --dlfilter shared object + * Copyright (c) 2023, Intel Corporation. 
+ */ +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <stdbool.h> + +/* + * Copy v2 API instead of including current API + */ +#include <linux/perf_event.h> +#include <linux/types.h> + +/* + * The following macro can be used to determine if this header defines + * perf_dlfilter_sample machine_pid and vcpu. + */ +#define PERF_DLFILTER_HAS_MACHINE_PID + +/* Definitions for perf_dlfilter_sample flags */ +enum { + PERF_DLFILTER_FLAG_BRANCH = 1ULL << 0, + PERF_DLFILTER_FLAG_CALL = 1ULL << 1, + PERF_DLFILTER_FLAG_RETURN = 1ULL << 2, + PERF_DLFILTER_FLAG_CONDITIONAL = 1ULL << 3, + PERF_DLFILTER_FLAG_SYSCALLRET = 1ULL << 4, + PERF_DLFILTER_FLAG_ASYNC = 1ULL << 5, + PERF_DLFILTER_FLAG_INTERRUPT = 1ULL << 6, + PERF_DLFILTER_FLAG_TX_ABORT = 1ULL << 7, + PERF_DLFILTER_FLAG_TRACE_BEGIN = 1ULL << 8, + PERF_DLFILTER_FLAG_TRACE_END = 1ULL << 9, + PERF_DLFILTER_FLAG_IN_TX = 1ULL << 10, + PERF_DLFILTER_FLAG_VMENTRY = 1ULL << 11, + PERF_DLFILTER_FLAG_VMEXIT = 1ULL << 12, +}; + +/* + * perf sample event information (as per perf script and <linux/perf_event.h>) + */ +struct perf_dlfilter_sample { + __u32 size; /* Size of this structure (for compatibility checking) */ + __u16 ins_lat; /* Refer PERF_SAMPLE_WEIGHT_TYPE in <linux/perf_event.h> */ + __u16 p_stage_cyc; /* Refer PERF_SAMPLE_WEIGHT_TYPE in <linux/perf_event.h> */ + __u64 ip; + __s32 pid; + __s32 tid; + __u64 time; + __u64 addr; + __u64 id; + __u64 stream_id; + __u64 period; + __u64 weight; /* Refer PERF_SAMPLE_WEIGHT_TYPE in <linux/perf_event.h> */ + __u64 transaction; /* Refer PERF_SAMPLE_TRANSACTION in <linux/perf_event.h> */ + __u64 insn_cnt; /* For instructions-per-cycle (IPC) */ + __u64 cyc_cnt; /* For instructions-per-cycle (IPC) */ + __s32 cpu; + __u32 flags; /* Refer PERF_DLFILTER_FLAG_* above */ + __u64 data_src; /* Refer PERF_SAMPLE_DATA_SRC in <linux/perf_event.h> */ + __u64 phys_addr; /* Refer PERF_SAMPLE_PHYS_ADDR in <linux/perf_event.h> */ + __u64 data_page_size; /* Refer PERF_SAMPLE_DATA_PAGE_SIZE in <linux/perf_event.h> */ + __u64 code_page_size; /* Refer PERF_SAMPLE_CODE_PAGE_SIZE in <linux/perf_event.h> */ + __u64 cgroup; /* Refer PERF_SAMPLE_CGROUP in <linux/perf_event.h> */ + __u8 cpumode; /* Refer CPUMODE_MASK etc in <linux/perf_event.h> */ + __u8 addr_correlates_sym; /* True => resolve_addr() can be called */ + __u16 misc; /* Refer perf_event_header in <linux/perf_event.h> */ + __u32 raw_size; /* Refer PERF_SAMPLE_RAW in <linux/perf_event.h> */ + const void *raw_data; /* Refer PERF_SAMPLE_RAW in <linux/perf_event.h> */ + __u64 brstack_nr; /* Number of brstack entries */ + const struct perf_branch_entry *brstack; /* Refer <linux/perf_event.h> */ + __u64 raw_callchain_nr; /* Number of raw_callchain entries */ + const __u64 *raw_callchain; /* Refer <linux/perf_event.h> */ + const char *event; + __s32 machine_pid; + __s32 vcpu; +}; + +/* + * Address location (as per perf script) + */ +struct perf_dlfilter_al { + __u32 size; /* Size of this structure (for compatibility checking) */ + __u32 symoff; + const char *sym; + __u64 addr; /* Mapped address (from dso) */ + __u64 sym_start; + __u64 sym_end; + const char *dso; + __u8 sym_binding; /* STB_LOCAL, STB_GLOBAL or STB_WEAK, refer <elf.h> */ + __u8 is_64_bit; /* Only valid if dso is not NULL */ + __u8 is_kernel_ip; /* True if in kernel space */ + __u32 buildid_size; + __u8 *buildid; + /* Below members are only populated by resolve_ip() */ + __u8 filtered; /* True if this sample event will be filtered out */ + const char *comm; + void *priv; /* Private data (v2 API) 
*/ +}; + +struct perf_dlfilter_fns { + /* Return information about ip */ + const struct perf_dlfilter_al *(*resolve_ip)(void *ctx); + /* Return information about addr (if addr_correlates_sym) */ + const struct perf_dlfilter_al *(*resolve_addr)(void *ctx); + /* Return arguments from --dlarg option */ + char **(*args)(void *ctx, int *dlargc); + /* + * Return information about address (al->size must be set before + * calling). Returns 0 on success, -1 otherwise. Call al_cleanup() + * when 'al' data is no longer needed. + */ + __s32 (*resolve_address)(void *ctx, __u64 address, struct perf_dlfilter_al *al); + /* Return instruction bytes and length */ + const __u8 *(*insn)(void *ctx, __u32 *length); + /* Return source file name and line number */ + const char *(*srcline)(void *ctx, __u32 *line_number); + /* Return perf_event_attr, refer <linux/perf_event.h> */ + struct perf_event_attr *(*attr)(void *ctx); + /* Read object code, return numbers of bytes read */ + __s32 (*object_code)(void *ctx, __u64 ip, void *buf, __u32 len); + /* + * If present (i.e. must check al_cleanup != NULL), call after + * resolve_address() to free any associated resources. (v2 API) + */ + void (*al_cleanup)(void *ctx, struct perf_dlfilter_al *al); + /* Reserved */ + void *(*reserved[119])(void *); +}; + +struct perf_dlfilter_fns perf_dlfilter_fns; + +static int verbose; + +#define pr_debug(fmt, ...) do { \ + if (verbose > 0) \ + fprintf(stderr, fmt, ##__VA_ARGS__); \ + } while (0) + +static int test_fail(const char *msg) +{ + pr_debug("%s\n", msg); + return -1; +} + +#define CHECK(x) do { \ + if (!(x)) \ + return test_fail("Check '" #x "' failed\n"); \ + } while (0) + +struct filter_data { + __u64 ip; + __u64 addr; + int do_early; + int early_filter_cnt; + int filter_cnt; +}; + +static struct filter_data *filt_dat; + +int start(void **data, void *ctx) +{ + int dlargc; + char **dlargv; + struct filter_data *d; + static bool called; + + verbose = 1; + + CHECK(!filt_dat && !called); + called = true; + + d = calloc(1, sizeof(*d)); + if (!d) + test_fail("Failed to allocate memory"); + filt_dat = d; + *data = d; + + dlargv = perf_dlfilter_fns.args(ctx, &dlargc); + + CHECK(dlargc == 6); + CHECK(!strcmp(dlargv[0], "first")); + verbose = strtol(dlargv[1], NULL, 0); + d->ip = strtoull(dlargv[2], NULL, 0); + d->addr = strtoull(dlargv[3], NULL, 0); + d->do_early = strtol(dlargv[4], NULL, 0); + CHECK(!strcmp(dlargv[5], "last")); + + pr_debug("%s API\n", __func__); + + return 0; +} + +#define CHECK_SAMPLE(x) do { \ + if (sample->x != expected.x) \ + return test_fail("'" #x "' not expected value\n"); \ + } while (0) + +static int check_sample(struct filter_data *d, const struct perf_dlfilter_sample *sample) +{ + struct perf_dlfilter_sample expected = { + .ip = d->ip, + .pid = 12345, + .tid = 12346, + .time = 1234567890, + .addr = d->addr, + .id = 99, + .stream_id = 101, + .period = 543212345, + .cpu = 31, + .cpumode = PERF_RECORD_MISC_USER, + .addr_correlates_sym = 1, + .misc = PERF_RECORD_MISC_USER, + }; + + CHECK(sample->size >= sizeof(struct perf_dlfilter_sample)); + + CHECK_SAMPLE(ip); + CHECK_SAMPLE(pid); + CHECK_SAMPLE(tid); + CHECK_SAMPLE(time); + CHECK_SAMPLE(addr); + CHECK_SAMPLE(id); + CHECK_SAMPLE(stream_id); + CHECK_SAMPLE(period); + CHECK_SAMPLE(cpu); + CHECK_SAMPLE(cpumode); + CHECK_SAMPLE(addr_correlates_sym); + CHECK_SAMPLE(misc); + + CHECK(!sample->raw_data); + CHECK_SAMPLE(brstack_nr); + CHECK(!sample->brstack); + CHECK_SAMPLE(raw_callchain_nr); + CHECK(!sample->raw_callchain); + +#define EVENT_NAME "branches:" + 
CHECK(!strncmp(sample->event, EVENT_NAME, strlen(EVENT_NAME))); + + return 0; +} + +static int check_al(void *ctx) +{ + const struct perf_dlfilter_al *al; + + al = perf_dlfilter_fns.resolve_ip(ctx); + if (!al) + return test_fail("resolve_ip() failed"); + + CHECK(al->sym && !strcmp("foo", al->sym)); + CHECK(!al->symoff); + + return 0; +} + +static int check_addr_al(void *ctx) +{ + const struct perf_dlfilter_al *addr_al; + + addr_al = perf_dlfilter_fns.resolve_addr(ctx); + if (!addr_al) + return test_fail("resolve_addr() failed"); + + CHECK(addr_al->sym && !strcmp("bar", addr_al->sym)); + CHECK(!addr_al->symoff); + + return 0; +} + +static int check_address_al(void *ctx, const struct perf_dlfilter_sample *sample) +{ + struct perf_dlfilter_al address_al; + const struct perf_dlfilter_al *al; + + al = perf_dlfilter_fns.resolve_ip(ctx); + if (!al) + return test_fail("resolve_ip() failed"); + + address_al.size = sizeof(address_al); + if (perf_dlfilter_fns.resolve_address(ctx, sample->ip, &address_al)) + return test_fail("resolve_address() failed"); + + CHECK(address_al.sym && al->sym); + CHECK(!strcmp(address_al.sym, al->sym)); + CHECK(address_al.addr == al->addr); + CHECK(address_al.sym_start == al->sym_start); + CHECK(address_al.sym_end == al->sym_end); + CHECK(address_al.dso && al->dso); + CHECK(!strcmp(address_al.dso, al->dso)); + + /* al_cleanup() is v2 API so may not be present */ + if (perf_dlfilter_fns.al_cleanup) + perf_dlfilter_fns.al_cleanup(ctx, &address_al); + + return 0; +} + +static int check_attr(void *ctx) +{ + struct perf_event_attr *attr = perf_dlfilter_fns.attr(ctx); + + CHECK(attr); + CHECK(attr->type == PERF_TYPE_HARDWARE); + CHECK(attr->config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS); + + return 0; +} + +static int do_checks(void *data, const struct perf_dlfilter_sample *sample, void *ctx, bool early) +{ + struct filter_data *d = data; + + CHECK(data && filt_dat == data); + + if (early) { + CHECK(!d->early_filter_cnt); + d->early_filter_cnt += 1; + } else { + CHECK(!d->filter_cnt); + CHECK(d->early_filter_cnt); + CHECK(d->do_early != 2); + d->filter_cnt += 1; + } + + if (check_sample(data, sample)) + return -1; + + if (check_attr(ctx)) + return -1; + + if (early && !d->do_early) + return 0; + + if (check_al(ctx) || check_addr_al(ctx) || check_address_al(ctx, sample)) + return -1; + + if (early) + return d->do_early == 2; + + return 1; +} + +int filter_event_early(void *data, const struct perf_dlfilter_sample *sample, void *ctx) +{ + pr_debug("%s API\n", __func__); + + return do_checks(data, sample, ctx, true); +} + +int filter_event(void *data, const struct perf_dlfilter_sample *sample, void *ctx) +{ + pr_debug("%s API\n", __func__); + + return do_checks(data, sample, ctx, false); +} + +int stop(void *data, void *ctx) +{ + static bool called; + + pr_debug("%s API\n", __func__); + + CHECK(data && filt_dat == data && !called); + called = true; + + free(data); + filt_dat = NULL; + return 0; +} + +const char *filter_description(const char **long_description) +{ + *long_description = "Filter used by the 'dlfilter C API' perf test"; + return "dlfilter to test v2 C API"; +} diff --git a/tools/perf/examples/bpf/5sec.c b/tools/perf/examples/bpf/5sec.c deleted file mode 100644 index 3bd7fc17631f..000000000000 --- a/tools/perf/examples/bpf/5sec.c +++ /dev/null @@ -1,53 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - Description: - - . Disable strace like syscall tracing (--no-syscalls), or try tracing - just some (-e *sleep). - - . 
Attach a filter function to a kernel function, returning when it should - be considered, i.e. appear on the output. - - . Run it system wide, so that any sleep of >= 5 seconds and < than 6 - seconds gets caught. - - . Ask for callgraphs using DWARF info, so that userspace can be unwound - - . While this is running, run something like "sleep 5s". - - . If we decide to add tv_nsec as well, then it becomes: - - int probe(hrtimer_nanosleep, rqtp->tv_sec rqtp->tv_nsec)(void *ctx, int err, long sec, long nsec) - - I.e. add where it comes from (rqtp->tv_nsec) and where it will be - accessible in the function body (nsec) - - # perf trace --no-syscalls -e tools/perf/examples/bpf/5sec.c/call-graph=dwarf/ - 0.000 perf_bpf_probe:func:(ffffffff9811b5f0) tv_sec=5 - hrtimer_nanosleep ([kernel.kallsyms]) - __x64_sys_nanosleep ([kernel.kallsyms]) - do_syscall_64 ([kernel.kallsyms]) - entry_SYSCALL_64 ([kernel.kallsyms]) - __GI___nanosleep (/usr/lib64/libc-2.26.so) - rpl_nanosleep (/usr/bin/sleep) - xnanosleep (/usr/bin/sleep) - main (/usr/bin/sleep) - __libc_start_main (/usr/lib64/libc-2.26.so) - _start (/usr/bin/sleep) - ^C# - - Copyright (C) 2018 Red Hat, Inc., Arnaldo Carvalho de Melo <acme@redhat.com> -*/ - -#include <linux/bpf.h> -#include <bpf/bpf_helpers.h> - -#define NSEC_PER_SEC 1000000000L - -SEC("hrtimer_nanosleep=hrtimer_nanosleep rqtp") -int hrtimer_nanosleep(void *ctx, int err, long long sec) -{ - return sec / NSEC_PER_SEC == 5ULL; -} - -char _license[] SEC("license") = "GPL"; diff --git a/tools/perf/examples/bpf/empty.c b/tools/perf/examples/bpf/empty.c deleted file mode 100644 index 3e296c0c53d7..000000000000 --- a/tools/perf/examples/bpf/empty.c +++ /dev/null @@ -1,12 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include <linux/bpf.h> -#include <bpf/bpf_helpers.h> - -struct syscall_enter_args; - -SEC("raw_syscalls:sys_enter") -int sys_enter(struct syscall_enter_args *args) -{ - return 0; -} -char _license[] SEC("license") = "GPL"; diff --git a/tools/perf/examples/bpf/hello.c b/tools/perf/examples/bpf/hello.c deleted file mode 100644 index e9080b0df158..000000000000 --- a/tools/perf/examples/bpf/hello.c +++ /dev/null @@ -1,27 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include <linux/bpf.h> -#include <bpf/bpf_helpers.h> - -struct __bpf_stdout__ { - __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); - __type(key, int); - __type(value, __u32); - __uint(max_entries, __NR_CPUS__); -} __bpf_stdout__ SEC(".maps"); - -#define puts(from) \ - ({ const int __len = sizeof(from); \ - char __from[sizeof(from)] = from; \ - bpf_perf_event_output(args, &__bpf_stdout__, BPF_F_CURRENT_CPU, \ - &__from, __len & (sizeof(from) - 1)); }) - -struct syscall_enter_args; - -SEC("raw_syscalls:sys_enter") -int sys_enter(struct syscall_enter_args *args) -{ - puts("Hello, world\n"); - return 0; -} - -char _license[] SEC("license") = "GPL"; diff --git a/tools/perf/examples/bpf/sys_enter_openat.c b/tools/perf/examples/bpf/sys_enter_openat.c deleted file mode 100644 index c4481c390d23..000000000000 --- a/tools/perf/examples/bpf/sys_enter_openat.c +++ /dev/null @@ -1,33 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Hook into 'openat' syscall entry tracepoint - * - * Test it with: - * - * perf trace -e tools/perf/examples/bpf/sys_enter_openat.c cat /etc/passwd > /dev/null - * - * It'll catch some openat syscalls related to the dynamic linked and - * the last one should be the one for '/etc/passwd'. - * - * The syscall_enter_openat_args can be used to get the syscall fields - * and use them for filtering calls, i.e. 
use in expressions for - * the return value. - */ - -#include <bpf/bpf.h> - -struct syscall_enter_openat_args { - unsigned long long unused; - long syscall_nr; - long dfd; - char *filename_ptr; - long flags; - long mode; -}; - -int syscall_enter(openat)(struct syscall_enter_openat_args *args) -{ - return 1; -} - -license(GPL); diff --git a/tools/perf/include/perf/perf_dlfilter.h b/tools/perf/include/perf/perf_dlfilter.h index a26e2f129f83..16fc4568ac53 100644 --- a/tools/perf/include/perf/perf_dlfilter.h +++ b/tools/perf/include/perf/perf_dlfilter.h @@ -91,6 +91,7 @@ struct perf_dlfilter_al { /* Below members are only populated by resolve_ip() */ __u8 filtered; /* True if this sample event will be filtered out */ const char *comm; + void *priv; /* Private data. Do not change */ }; struct perf_dlfilter_fns { @@ -102,7 +103,8 @@ struct perf_dlfilter_fns { char **(*args)(void *ctx, int *dlargc); /* * Return information about address (al->size must be set before - * calling). Returns 0 on success, -1 otherwise. + * calling). Returns 0 on success, -1 otherwise. Call al_cleanup() + * when 'al' data is no longer needed. */ __s32 (*resolve_address)(void *ctx, __u64 address, struct perf_dlfilter_al *al); /* Return instruction bytes and length */ @@ -113,8 +115,13 @@ struct perf_dlfilter_fns { struct perf_event_attr *(*attr)(void *ctx); /* Read object code, return numbers of bytes read */ __s32 (*object_code)(void *ctx, __u64 ip, void *buf, __u32 len); + /* + * If present (i.e. must check al_cleanup != NULL), call after + * resolve_address() to free any associated resources. + */ + void (*al_cleanup)(void *ctx, struct perf_dlfilter_al *al); /* Reserved */ - void *(*reserved[120])(void *); + void *(*reserved[119])(void *); }; /* diff --git a/tools/perf/perf.c b/tools/perf/perf.c index 38cae4721583..d3fc8090413c 100644 --- a/tools/perf/perf.c +++ b/tools/perf/perf.c @@ -18,7 +18,6 @@ #include <subcmd/run-command.h> #include "util/parse-events.h" #include <subcmd/parse-options.h> -#include "util/bpf-loader.h" #include "util/debug.h" #include "util/event.h" #include "util/util.h" // usage() @@ -324,7 +323,6 @@ static int run_builtin(struct cmd_struct *p, int argc, const char **argv) perf_config__exit(); exit_browser(status); perf_env__exit(&perf_env); - bpf__clear(); if (status) return status & 0xff; diff --git a/tools/perf/pmu-events/Build b/tools/perf/pmu-events/Build index 150765f2baee..1d18bb89402e 100644 --- a/tools/perf/pmu-events/Build +++ b/tools/perf/pmu-events/Build @@ -35,3 +35,9 @@ $(PMU_EVENTS_C): $(JSON) $(JSON_TEST) $(JEVENTS_PY) $(METRIC_PY) $(METRIC_TEST_L $(call rule_mkdir) $(Q)$(call echo-cmd,gen)$(PYTHON) $(JEVENTS_PY) $(JEVENTS_ARCH) $(JEVENTS_MODEL) pmu-events/arch $@ endif + +# pmu-events.c file is generated in the OUTPUT directory so it needs a +# separate rule to depend on it properly +$(OUTPUT)pmu-events/pmu-events.o: $(PMU_EVENTS_C) + $(call rule_mkdir) + $(call if_changed_dep,cc_o_c) diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/cache.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/cache.json index fc0633054211..7a2b7b200f14 100644 --- a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/cache.json +++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/cache.json @@ -93,9 +93,6 @@ "ArchStdEvent": "L1D_CACHE_LMISS_RD" }, { - "ArchStdEvent": "L1D_CACHE_LMISS" - }, - { "ArchStdEvent": "L1I_CACHE_LMISS" }, { diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/core-imp-def.json 
b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/core-imp-def.json index 95c30243f2b2..88b23b85e33c 100644 --- a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/core-imp-def.json +++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/core-imp-def.json @@ -534,66 +534,6 @@ "BriefDescription": "L2D OTB allocate" }, { - "PublicDescription": "DTLB Translation cache hit on S1L2 walk cache entry", - "EventCode": "0xD801", - "EventName": "MMU_D_TRANS_CACHE_HIT_S1L2_WALK", - "BriefDescription": "DTLB Translation cache hit on S1L2 walk cache entry" - }, - { - "PublicDescription": "DTLB Translation cache hit on S1L1 walk cache entry", - "EventCode": "0xD802", - "EventName": "MMU_D_TRANS_CACHE_HIT_S1L1_WALK", - "BriefDescription": "DTLB Translation cache hit on S1L1 walk cache entry" - }, - { - "PublicDescription": "DTLB Translation cache hit on S1L0 walk cache entry", - "EventCode": "0xD803", - "EventName": "MMU_D_TRANS_CACHE_HIT_S1L0_WALK", - "BriefDescription": "DTLB Translation cache hit on S1L0 walk cache entry" - }, - { - "PublicDescription": "DTLB Translation cache hit on S2L2 walk cache entry", - "EventCode": "0xD804", - "EventName": "MMU_D_TRANS_CACHE_HIT_S2L2_WALK", - "BriefDescription": "DTLB Translation cache hit on S2L2 walk cache entry" - }, - { - "PublicDescription": "DTLB Translation cache hit on S2L1 walk cache entry", - "EventCode": "0xD805", - "EventName": "MMU_D_TRANS_CACHE_HIT_S2L1_WALK", - "BriefDescription": "DTLB Translation cache hit on S2L1 walk cache entry" - }, - { - "PublicDescription": "DTLB Translation cache hit on S2L0 walk cache entry", - "EventCode": "0xD806", - "EventName": "MMU_D_TRANS_CACHE_HIT_S2L0_WALK", - "BriefDescription": "DTLB Translation cache hit on S2L0 walk cache entry" - }, - { - "PublicDescription": "D-side S1 Page walk cache lookup", - "EventCode": "0xD807", - "EventName": "MMU_D_S1_WALK_CACHE_LOOKUP", - "BriefDescription": "D-side S1 Page walk cache lookup" - }, - { - "PublicDescription": "D-side S1 Page walk cache refill", - "EventCode": "0xD808", - "EventName": "MMU_D_S1_WALK_CACHE_REFILL", - "BriefDescription": "D-side S1 Page walk cache refill" - }, - { - "PublicDescription": "D-side S2 Page walk cache lookup", - "EventCode": "0xD809", - "EventName": "MMU_D_S2_WALK_CACHE_LOOKUP", - "BriefDescription": "D-side S2 Page walk cache lookup" - }, - { - "PublicDescription": "D-side S2 Page walk cache refill", - "EventCode": "0xD80A", - "EventName": "MMU_D_S2_WALK_CACHE_REFILL", - "BriefDescription": "D-side S2 Page walk cache refill" - }, - { "PublicDescription": "D-side Stage1 tablewalk fault", "EventCode": "0xD80B", "EventName": "MMU_D_S1_WALK_FAULT", @@ -618,66 +558,6 @@ "BriefDescription": "L2I OTB allocate" }, { - "PublicDescription": "ITLB Translation cache hit on S1L2 walk cache entry", - "EventCode": "0xD901", - "EventName": "MMU_I_TRANS_CACHE_HIT_S1L2_WALK", - "BriefDescription": "ITLB Translation cache hit on S1L2 walk cache entry" - }, - { - "PublicDescription": "ITLB Translation cache hit on S1L1 walk cache entry", - "EventCode": "0xD902", - "EventName": "MMU_I_TRANS_CACHE_HIT_S1L1_WALK", - "BriefDescription": "ITLB Translation cache hit on S1L1 walk cache entry" - }, - { - "PublicDescription": "ITLB Translation cache hit on S1L0 walk cache entry", - "EventCode": "0xD903", - "EventName": "MMU_I_TRANS_CACHE_HIT_S1L0_WALK", - "BriefDescription": "ITLB Translation cache hit on S1L0 walk cache entry" - }, - { - "PublicDescription": "ITLB Translation cache hit on S2L2 walk cache entry", - "EventCode": "0xD904", - "EventName": 
"MMU_I_TRANS_CACHE_HIT_S2L2_WALK", - "BriefDescription": "ITLB Translation cache hit on S2L2 walk cache entry" - }, - { - "PublicDescription": "ITLB Translation cache hit on S2L1 walk cache entry", - "EventCode": "0xD905", - "EventName": "MMU_I_TRANS_CACHE_HIT_S2L1_WALK", - "BriefDescription": "ITLB Translation cache hit on S2L1 walk cache entry" - }, - { - "PublicDescription": "ITLB Translation cache hit on S2L0 walk cache entry", - "EventCode": "0xD906", - "EventName": "MMU_I_TRANS_CACHE_HIT_S2L0_WALK", - "BriefDescription": "ITLB Translation cache hit on S2L0 walk cache entry" - }, - { - "PublicDescription": "I-side S1 Page walk cache lookup", - "EventCode": "0xD907", - "EventName": "MMU_I_S1_WALK_CACHE_LOOKUP", - "BriefDescription": "I-side S1 Page walk cache lookup" - }, - { - "PublicDescription": "I-side S1 Page walk cache refill", - "EventCode": "0xD908", - "EventName": "MMU_I_S1_WALK_CACHE_REFILL", - "BriefDescription": "I-side S1 Page walk cache refill" - }, - { - "PublicDescription": "I-side S2 Page walk cache lookup", - "EventCode": "0xD909", - "EventName": "MMU_I_S2_WALK_CACHE_LOOKUP", - "BriefDescription": "I-side S2 Page walk cache lookup" - }, - { - "PublicDescription": "I-side S2 Page walk cache refill", - "EventCode": "0xD90A", - "EventName": "MMU_I_S2_WALK_CACHE_REFILL", - "BriefDescription": "I-side S2 Page walk cache refill" - }, - { "PublicDescription": "I-side Stage1 tablewalk fault", "EventCode": "0xD90B", "EventName": "MMU_I_S1_WALK_FAULT", diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/metrics.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/metrics.json new file mode 100644 index 000000000000..1e7e8901a445 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/metrics.json @@ -0,0 +1,362 @@ +[ + { + "MetricExpr": "BR_MIS_PRED / BR_PRED", + "BriefDescription": "Branch predictor misprediction rate. 
May not count branches that are never resolved because they are in the misprediction shadow of an earlier branch", + "MetricGroup": "Branch Prediction", + "MetricName": "Misprediction" + }, + { + "MetricExpr": "BR_MIS_PRED_RETIRED / BR_RETIRED", + "BriefDescription": "Branch predictor misprediction rate", + "MetricGroup": "Branch Prediction", + "MetricName": "Misprediction (retired)" + }, + { + "MetricExpr": "BUS_ACCESS / ( BUS_CYCLES * 1)", + "BriefDescription": "Core-to-uncore bus utilization", + "MetricGroup": "Bus", + "MetricName": "Bus utilization" + }, + { + "MetricExpr": "L1D_CACHE_REFILL / L1D_CACHE", + "BriefDescription": "L1D cache miss rate", + "MetricGroup": "Cache", + "MetricName": "L1D cache miss" + }, + { + "MetricExpr": "L1D_CACHE_LMISS_RD / L1D_CACHE_RD", + "BriefDescription": "L1D cache read miss rate", + "MetricGroup": "Cache", + "MetricName": "L1D cache read miss" + }, + { + "MetricExpr": "L1I_CACHE_REFILL / L1I_CACHE", + "BriefDescription": "L1I cache miss rate", + "MetricGroup": "Cache", + "MetricName": "L1I cache miss" + }, + { + "MetricExpr": "L2D_CACHE_REFILL / L2D_CACHE", + "BriefDescription": "L2 cache miss rate", + "MetricGroup": "Cache", + "MetricName": "L2 cache miss" + }, + { + "MetricExpr": "L1I_CACHE_LMISS / L1I_CACHE", + "BriefDescription": "L1I cache read miss rate", + "MetricGroup": "Cache", + "MetricName": "L1I cache read miss" + }, + { + "MetricExpr": "L2D_CACHE_LMISS_RD / L2D_CACHE_RD", + "BriefDescription": "L2 cache read miss rate", + "MetricGroup": "Cache", + "MetricName": "L2 cache read miss" + }, + { + "MetricExpr": "(L1D_CACHE_LMISS_RD * 1000) / INST_RETIRED", + "BriefDescription": "Misses per thousand instructions (data)", + "MetricGroup": "Cache", + "MetricName": "MPKI data" + }, + { + "MetricExpr": "(L1I_CACHE_LMISS * 1000) / INST_RETIRED", + "BriefDescription": "Misses per thousand instructions (instruction)", + "MetricGroup": "Cache", + "MetricName": "MPKI instruction" + }, + { + "MetricExpr": "ASE_SPEC / OP_SPEC", + "BriefDescription": "Proportion of advanced SIMD data processing operations (excluding DP_SPEC/LD_SPEC) operations", + "MetricGroup": "Instruction", + "MetricName": "ASE mix" + }, + { + "MetricExpr": "CRYPTO_SPEC / OP_SPEC", + "BriefDescription": "Proportion of crypto data processing operations", + "MetricGroup": "Instruction", + "MetricName": "Crypto mix" + }, + { + "MetricExpr": "VFP_SPEC / (duration_time *1000000000)", + "BriefDescription": "Giga-floating point operations per second", + "MetricGroup": "Instruction", + "MetricName": "GFLOPS_ISSUED" + }, + { + "MetricExpr": "DP_SPEC / OP_SPEC", + "BriefDescription": "Proportion of integer data processing operations", + "MetricGroup": "Instruction", + "MetricName": "Integer mix" + }, + { + "MetricExpr": "INST_RETIRED / CPU_CYCLES", + "BriefDescription": "Instructions per cycle", + "MetricGroup": "Instruction", + "MetricName": "IPC" + }, + { + "MetricExpr": "LD_SPEC / OP_SPEC", + "BriefDescription": "Proportion of load operations", + "MetricGroup": "Instruction", + "MetricName": "Load mix" + }, + { + "MetricExpr": "LDST_SPEC/ OP_SPEC", + "BriefDescription": "Proportion of load & store operations", + "MetricGroup": "Instruction", + "MetricName": "Load-store mix" + }, + { + "MetricExpr": "INST_RETIRED / (duration_time * 1000000)", + "BriefDescription": "Millions of instructions per second", + "MetricGroup": "Instruction", + "MetricName": "MIPS_RETIRED" + }, + { + "MetricExpr": "INST_SPEC / (duration_time * 1000000)", + "BriefDescription": "Millions of instructions per second", + 
"MetricGroup": "Instruction", + "MetricName": "MIPS_UTILIZATION" + }, + { + "MetricExpr": "PC_WRITE_SPEC / OP_SPEC", + "BriefDescription": "Proportion of software change of PC operations", + "MetricGroup": "Instruction", + "MetricName": "PC write mix" + }, + { + "MetricExpr": "ST_SPEC / OP_SPEC", + "BriefDescription": "Proportion of store operations", + "MetricGroup": "Instruction", + "MetricName": "Store mix" + }, + { + "MetricExpr": "VFP_SPEC / OP_SPEC", + "BriefDescription": "Proportion of FP operations", + "MetricGroup": "Instruction", + "MetricName": "VFP mix" + }, + { + "MetricExpr": "1 - (OP_RETIRED/ (CPU_CYCLES * 4))", + "BriefDescription": "Proportion of slots lost", + "MetricGroup": "Speculation / TDA", + "MetricName": "CPU lost" + }, + { + "MetricExpr": "OP_RETIRED/ (CPU_CYCLES * 4)", + "BriefDescription": "Proportion of slots retiring", + "MetricGroup": "Speculation / TDA", + "MetricName": "CPU utilization" + }, + { + "MetricExpr": "OP_RETIRED - OP_SPEC", + "BriefDescription": "Operations lost due to misspeculation", + "MetricGroup": "Speculation / TDA", + "MetricName": "Operations lost" + }, + { + "MetricExpr": "1 - (OP_RETIRED / OP_SPEC)", + "BriefDescription": "Proportion of operations lost", + "MetricGroup": "Speculation / TDA", + "MetricName": "Operations lost (ratio)" + }, + { + "MetricExpr": "OP_RETIRED / OP_SPEC", + "BriefDescription": "Proportion of operations retired", + "MetricGroup": "Speculation / TDA", + "MetricName": "Operations retired" + }, + { + "MetricExpr": "STALL_BACKEND_CACHE / CPU_CYCLES", + "BriefDescription": "Proportion of cycles stalled and no operations issued to backend and cache miss", + "MetricGroup": "Stall", + "MetricName": "Stall backend cache cycles" + }, + { + "MetricExpr": "STALL_BACKEND_RESOURCE / CPU_CYCLES", + "BriefDescription": "Proportion of cycles stalled and no operations issued to backend and resource full", + "MetricGroup": "Stall", + "MetricName": "Stall backend resource cycles" + }, + { + "MetricExpr": "STALL_BACKEND_TLB / CPU_CYCLES", + "BriefDescription": "Proportion of cycles stalled and no operations issued to backend and TLB miss", + "MetricGroup": "Stall", + "MetricName": "Stall backend tlb cycles" + }, + { + "MetricExpr": "STALL_FRONTEND_CACHE / CPU_CYCLES", + "BriefDescription": "Proportion of cycles stalled and no ops delivered from frontend and cache miss", + "MetricGroup": "Stall", + "MetricName": "Stall frontend cache cycles" + }, + { + "MetricExpr": "STALL_FRONTEND_TLB / CPU_CYCLES", + "BriefDescription": "Proportion of cycles stalled and no ops delivered from frontend and TLB miss", + "MetricGroup": "Stall", + "MetricName": "Stall frontend tlb cycles" + }, + { + "MetricExpr": "DTLB_WALK / L1D_TLB", + "BriefDescription": "D-side walk per d-side translation request", + "MetricGroup": "TLB", + "MetricName": "DTLB walks" + }, + { + "MetricExpr": "ITLB_WALK / L1I_TLB", + "BriefDescription": "I-side walk per i-side translation request", + "MetricGroup": "TLB", + "MetricName": "ITLB walks" + }, + { + "MetricExpr": "STALL_SLOT_BACKEND / (CPU_CYCLES * 4)", + "BriefDescription": "Fraction of slots backend bound", + "MetricGroup": "TopDownL1", + "MetricName": "backend" + }, + { + "MetricExpr": "1 - (retiring + lost + backend)", + "BriefDescription": "Fraction of slots frontend bound", + "MetricGroup": "TopDownL1", + "MetricName": "frontend" + }, + { + "MetricExpr": "((OP_SPEC - OP_RETIRED) / (CPU_CYCLES * 4))", + "BriefDescription": "Fraction of slots lost due to misspeculation", + "MetricGroup": "TopDownL1", + "MetricName": 
"lost" + }, + { + "MetricExpr": "(OP_RETIRED / (CPU_CYCLES * 4))", + "BriefDescription": "Fraction of slots retiring, useful work", + "MetricGroup": "TopDownL1", + "MetricName": "retiring" + }, + { + "MetricExpr": "backend - backend_memory", + "BriefDescription": "Fraction of slots the CPU was stalled due to backend non-memory subsystem issues", + "MetricGroup": "TopDownL2", + "MetricName": "backend_core" + }, + { + "MetricExpr": "(STALL_BACKEND_TLB + STALL_BACKEND_CACHE + STALL_BACKEND_MEM) / CPU_CYCLES ", + "BriefDescription": "Fraction of slots the CPU was stalled due to backend memory subsystem issues (cache/tlb miss)", + "MetricGroup": "TopDownL2", + "MetricName": "backend_memory" + }, + { + "MetricExpr": " (BR_MIS_PRED_RETIRED / GPC_FLUSH) * lost", + "BriefDescription": "Fraction of slots lost due to branch misprediciton", + "MetricGroup": "TopDownL2", + "MetricName": "branch_mispredict" + }, + { + "MetricExpr": "frontend - frontend_latency", + "BriefDescription": "Fraction of slots the CPU did not dispatch at full bandwidth - able to dispatch partial slots only (1, 2, or 3 uops)", + "MetricGroup": "TopDownL2", + "MetricName": "frontend_bandwidth" + }, + { + "MetricExpr": "(STALL_FRONTEND - ((STALL_SLOT_FRONTEND - (frontend * CPU_CYCLES * 4)) / 4)) / CPU_CYCLES", + "BriefDescription": "Fraction of slots the CPU was stalled due to frontend latency issues (cache/tlb miss); nothing to dispatch", + "MetricGroup": "TopDownL2", + "MetricName": "frontend_latency" + }, + { + "MetricExpr": "lost - branch_mispredict", + "BriefDescription": "Fraction of slots lost due to other/non-branch misprediction misspeculation", + "MetricGroup": "TopDownL2", + "MetricName": "other_clears" + }, + { + "MetricExpr": "(IXU_NUM_UOPS_ISSUED + FSU_ISSUED) / (CPU_CYCLES * 6)", + "BriefDescription": "Fraction of execute slots utilized", + "MetricGroup": "TopDownL2", + "MetricName": "pipe_utilization" + }, + { + "MetricExpr": "STALL_BACKEND_MEM / CPU_CYCLES", + "BriefDescription": "Fraction of cycles the CPU was stalled due to data L2 cache miss", + "MetricGroup": "TopDownL3", + "MetricName": "d_cache_l2_miss" + }, + { + "MetricExpr": "STALL_BACKEND_CACHE / CPU_CYCLES", + "BriefDescription": "Fraction of cycles the CPU was stalled due to data cache miss", + "MetricGroup": "TopDownL3", + "MetricName": "d_cache_miss" + }, + { + "MetricExpr": "STALL_BACKEND_TLB / CPU_CYCLES", + "BriefDescription": "Fraction of cycles the CPU was stalled due to data TLB miss", + "MetricGroup": "TopDownL3", + "MetricName": "d_tlb_miss" + }, + { + "MetricExpr": "FSU_ISSUED / (CPU_CYCLES * 2)", + "BriefDescription": "Fraction of FSU execute slots utilized", + "MetricGroup": "TopDownL3", + "MetricName": "fsu_pipe_utilization" + }, + { + "MetricExpr": "STALL_FRONTEND_CACHE / CPU_CYCLES", + "BriefDescription": "Fraction of cycles the CPU was stalled due to instruction cache miss", + "MetricGroup": "TopDownL3", + "MetricName": "i_cache_miss" + }, + { + "MetricExpr": " STALL_FRONTEND_TLB / CPU_CYCLES ", + "BriefDescription": "Fraction of cycles the CPU was stalled due to instruction TLB miss", + "MetricGroup": "TopDownL3", + "MetricName": "i_tlb_miss" + }, + { + "MetricExpr": "IXU_NUM_UOPS_ISSUED / (CPU_CYCLES / 4)", + "BriefDescription": "Fraction of IXU execute slots utilized", + "MetricGroup": "TopDownL3", + "MetricName": "ixu_pipe_utilization" + }, + { + "MetricExpr": "IDR_STALL_FLUSH / CPU_CYCLES", + "BriefDescription": "Fraction of cycles the CPU was stalled due to flush recovery", + "MetricGroup": "TopDownL3", + "MetricName": "recovery" 
+ }, + { + "MetricExpr": "STALL_BACKEND_RESOURCE / CPU_CYCLES", + "BriefDescription": "Fraction of cycles the CPU was stalled due to core resource shortage", + "MetricGroup": "TopDownL3", + "MetricName": "resource" + }, + { + "MetricExpr": "IDR_STALL_FSU_SCHED / CPU_CYCLES ", + "BriefDescription": "Fraction of cycles the CPU was stalled and FSU was full", + "MetricGroup": "TopDownL4", + "MetricName": "stall_fsu_sched" + }, + { + "MetricExpr": "IDR_STALL_IXU_SCHED / CPU_CYCLES ", + "BriefDescription": "Fraction of cycles the CPU was stalled and IXU was full", + "MetricGroup": "TopDownL4", + "MetricName": "stall_ixu_sched" + }, + { + "MetricExpr": "IDR_STALL_LOB_ID / CPU_CYCLES ", + "BriefDescription": "Fraction of cycles the CPU was stalled and LOB was full", + "MetricGroup": "TopDownL4", + "MetricName": "stall_lob_id" + }, + { + "MetricExpr": "IDR_STALL_ROB_ID / CPU_CYCLES", + "BriefDescription": "Fraction of cycles the CPU was stalled and ROB was full", + "MetricGroup": "TopDownL4", + "MetricName": "stall_rob_id" + }, + { + "MetricExpr": "IDR_STALL_SOB_ID / CPU_CYCLES ", + "BriefDescription": "Fraction of cycles the CPU was stalled and SOB was full", + "MetricGroup": "TopDownL4", + "MetricName": "stall_sob_id" + } +] diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/pipeline.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/pipeline.json index f9fae15f7555..711028377f3e 100644 --- a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/pipeline.json +++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/pipeline.json @@ -1,18 +1,24 @@ [ { - "ArchStdEvent": "STALL_FRONTEND" + "ArchStdEvent": "STALL_FRONTEND", + "Errata": "Errata AC03_CPU_29", + "BriefDescription": "Impacted by errata, use metrics instead -" }, { "ArchStdEvent": "STALL_BACKEND" }, { - "ArchStdEvent": "STALL" + "ArchStdEvent": "STALL", + "Errata": "Errata AC03_CPU_29", + "BriefDescription": "Impacted by errata, use metrics instead -" }, { "ArchStdEvent": "STALL_SLOT_BACKEND" }, { - "ArchStdEvent": "STALL_SLOT_FRONTEND" + "ArchStdEvent": "STALL_SLOT_FRONTEND", + "Errata": "Errata AC03_CPU_29", + "BriefDescription": "Impacted by errata, use metrics instead -" }, { "ArchStdEvent": "STALL_SLOT" diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/branch.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/branch.json deleted file mode 100644 index 79f2016c53b0..000000000000 --- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/branch.json +++ /dev/null @@ -1,8 +0,0 @@ -[ - { - "ArchStdEvent": "BR_MIS_PRED" - }, - { - "ArchStdEvent": "BR_PRED" - } -] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/bus.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/bus.json index 579c1c993d17..2e11a8c4a484 100644 --- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/bus.json +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/bus.json @@ -1,20 +1,18 @@ [ { - "ArchStdEvent": "CPU_CYCLES" + "ArchStdEvent": "BUS_ACCESS", + "PublicDescription": "Counts memory transactions issued by the CPU to the external bus, including snoop requests and snoop responses. Each beat of data is counted individually." }, { - "ArchStdEvent": "BUS_ACCESS" + "ArchStdEvent": "BUS_CYCLES", + "PublicDescription": "Counts bus cycles in the CPU. Bus cycles represent a clock cycle in which a transaction could be sent or received on the interface from the CPU to the external bus. Since that interface is driven at the same clock speed as the CPU, this event is a duplicate of CPU_CYCLES." 
}, { - "ArchStdEvent": "BUS_CYCLES" + "ArchStdEvent": "BUS_ACCESS_RD", + "PublicDescription": "Counts memory read transactions seen on the external bus. Each beat of data is counted individually." }, { - "ArchStdEvent": "BUS_ACCESS_RD" - }, - { - "ArchStdEvent": "BUS_ACCESS_WR" - }, - { - "ArchStdEvent": "CNT_CYCLES" + "ArchStdEvent": "BUS_ACCESS_WR", + "PublicDescription": "Counts memory write transactions seen on the external bus. Each beat of data is counted individually." } ] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/cache.json deleted file mode 100644 index 0141f749bff3..000000000000 --- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/cache.json +++ /dev/null @@ -1,155 +0,0 @@ -[ - { - "ArchStdEvent": "L1I_CACHE_REFILL" - }, - { - "ArchStdEvent": "L1I_TLB_REFILL" - }, - { - "ArchStdEvent": "L1D_CACHE_REFILL" - }, - { - "ArchStdEvent": "L1D_CACHE" - }, - { - "ArchStdEvent": "L1D_TLB_REFILL" - }, - { - "ArchStdEvent": "L1I_CACHE" - }, - { - "ArchStdEvent": "L1D_CACHE_WB" - }, - { - "ArchStdEvent": "L2D_CACHE" - }, - { - "ArchStdEvent": "L2D_CACHE_REFILL" - }, - { - "ArchStdEvent": "L2D_CACHE_WB" - }, - { - "ArchStdEvent": "L2D_CACHE_ALLOCATE" - }, - { - "ArchStdEvent": "L1D_TLB" - }, - { - "ArchStdEvent": "L1I_TLB" - }, - { - "ArchStdEvent": "L3D_CACHE_ALLOCATE" - }, - { - "ArchStdEvent": "L3D_CACHE_REFILL" - }, - { - "ArchStdEvent": "L3D_CACHE" - }, - { - "ArchStdEvent": "L2D_TLB_REFILL" - }, - { - "ArchStdEvent": "L2D_TLB" - }, - { - "ArchStdEvent": "DTLB_WALK" - }, - { - "ArchStdEvent": "ITLB_WALK" - }, - { - "ArchStdEvent": "LL_CACHE_RD" - }, - { - "ArchStdEvent": "LL_CACHE_MISS_RD" - }, - { - "ArchStdEvent": "L1D_CACHE_LMISS_RD" - }, - { - "ArchStdEvent": "L1D_CACHE_RD" - }, - { - "ArchStdEvent": "L1D_CACHE_WR" - }, - { - "ArchStdEvent": "L1D_CACHE_REFILL_RD" - }, - { - "ArchStdEvent": "L1D_CACHE_REFILL_WR" - }, - { - "ArchStdEvent": "L1D_CACHE_REFILL_INNER" - }, - { - "ArchStdEvent": "L1D_CACHE_REFILL_OUTER" - }, - { - "ArchStdEvent": "L1D_CACHE_WB_VICTIM" - }, - { - "ArchStdEvent": "L1D_CACHE_WB_CLEAN" - }, - { - "ArchStdEvent": "L1D_CACHE_INVAL" - }, - { - "ArchStdEvent": "L1D_TLB_REFILL_RD" - }, - { - "ArchStdEvent": "L1D_TLB_REFILL_WR" - }, - { - "ArchStdEvent": "L1D_TLB_RD" - }, - { - "ArchStdEvent": "L1D_TLB_WR" - }, - { - "ArchStdEvent": "L2D_CACHE_RD" - }, - { - "ArchStdEvent": "L2D_CACHE_WR" - }, - { - "ArchStdEvent": "L2D_CACHE_REFILL_RD" - }, - { - "ArchStdEvent": "L2D_CACHE_REFILL_WR" - }, - { - "ArchStdEvent": "L2D_CACHE_WB_VICTIM" - }, - { - "ArchStdEvent": "L2D_CACHE_WB_CLEAN" - }, - { - "ArchStdEvent": "L2D_CACHE_INVAL" - }, - { - "ArchStdEvent": "L2D_TLB_REFILL_RD" - }, - { - "ArchStdEvent": "L2D_TLB_REFILL_WR" - }, - { - "ArchStdEvent": "L2D_TLB_RD" - }, - { - "ArchStdEvent": "L2D_TLB_WR" - }, - { - "ArchStdEvent": "L3D_CACHE_RD" - }, - { - "ArchStdEvent": "L1I_CACHE_LMISS" - }, - { - "ArchStdEvent": "L2D_CACHE_LMISS_RD" - }, - { - "ArchStdEvent": "L3D_CACHE_LMISS_RD" - } -] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/exception.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/exception.json index 344a2d552ad5..4404b8e91690 100644 --- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/exception.json +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/exception.json @@ -1,47 +1,62 @@ [ { - "ArchStdEvent": "EXC_TAKEN" + "ArchStdEvent": "EXC_TAKEN", + "PublicDescription": "Counts any taken architecturally visible exceptions such as 
IRQ, FIQ, SError, and other synchronous exceptions. Exceptions are counted whether or not they are taken locally." }, { - "ArchStdEvent": "MEMORY_ERROR" + "ArchStdEvent": "EXC_RETURN", + "PublicDescription": "Counts any architecturally executed exception return instructions. Eg: AArch64: ERET" }, { - "ArchStdEvent": "EXC_UNDEF" + "ArchStdEvent": "EXC_UNDEF", + "PublicDescription": "Counts the number of synchronous exceptions which are taken locally that are due to attempting to execute an instruction that is UNDEFINED. Attempting to execute instruction bit patterns that have not been allocated. Attempting to execute instructions when they are disabled. Attempting to execute instructions at an inappropriate Exception level. Attempting to execute an instruction when the value of PSTATE.IL is 1." }, { - "ArchStdEvent": "EXC_SVC" + "ArchStdEvent": "EXC_SVC", + "PublicDescription": "Counts SVC exceptions taken locally." }, { - "ArchStdEvent": "EXC_PABORT" + "ArchStdEvent": "EXC_PABORT", + "PublicDescription": "Counts synchronous exceptions that are taken locally and caused by Instruction Aborts." }, { - "ArchStdEvent": "EXC_DABORT" + "ArchStdEvent": "EXC_DABORT", + "PublicDescription": "Counts exceptions that are taken locally and are caused by data aborts or SErrors. Conditions that could cause those exceptions are attempting to read or write memory where the MMU generates a fault, attempting to read or write memory with a misaligned address, interrupts from the nSEI inputs and internally generated SErrors." }, { - "ArchStdEvent": "EXC_IRQ" + "ArchStdEvent": "EXC_IRQ", + "PublicDescription": "Counts IRQ exceptions including the virtual IRQs that are taken locally." }, { - "ArchStdEvent": "EXC_FIQ" + "ArchStdEvent": "EXC_FIQ", + "PublicDescription": "Counts FIQ exceptions including the virtual FIQs that are taken locally." }, { - "ArchStdEvent": "EXC_SMC" + "ArchStdEvent": "EXC_SMC", + "PublicDescription": "Counts SMC exceptions take to EL3." }, { - "ArchStdEvent": "EXC_HVC" + "ArchStdEvent": "EXC_HVC", + "PublicDescription": "Counts HVC exceptions taken to EL2." }, { - "ArchStdEvent": "EXC_TRAP_PABORT" + "ArchStdEvent": "EXC_TRAP_PABORT", + "PublicDescription": "Counts exceptions which are traps not taken locally and are caused by Instruction Aborts. For example, attempting to execute an instruction with a misaligned PC." }, { - "ArchStdEvent": "EXC_TRAP_DABORT" + "ArchStdEvent": "EXC_TRAP_DABORT", + "PublicDescription": "Counts exceptions which are traps not taken locally and are caused by Data Aborts or SError interrupts. Conditions that could cause those exceptions are:\n\n1. Attempting to read or write memory where the MMU generates a fault,\n2. Attempting to read or write memory with a misaligned address,\n3. Interrupts from the SEI input.\n4. internally generated SErrors." }, { - "ArchStdEvent": "EXC_TRAP_OTHER" + "ArchStdEvent": "EXC_TRAP_OTHER", + "PublicDescription": "Counts the number of synchronous trap exceptions which are not taken locally and are not SVC, SMC, HVC, data aborts, Instruction Aborts, or interrupts." }, { - "ArchStdEvent": "EXC_TRAP_IRQ" + "ArchStdEvent": "EXC_TRAP_IRQ", + "PublicDescription": "Counts IRQ exceptions including the virtual IRQs that are not taken locally." }, { - "ArchStdEvent": "EXC_TRAP_FIQ" + "ArchStdEvent": "EXC_TRAP_FIQ", + "PublicDescription": "Counts FIQs which are not taken locally but taken from EL0, EL1,\n or EL2 to EL3 (which would be the normal behavior for FIQs when not executing\n in EL3)." 
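(Side note, illustrative usage only: the PublicDescription strings added above are what perf surfaces as the long event description, and the events can be counted by name as listed by 'perf list' on the target CPU, for example:)

  # perf list --long-desc exc_taken
  # perf stat -a -e exc_taken,exc_irq,exc_svc -- sleep 1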
} ] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/fp_operation.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/fp_operation.json new file mode 100644 index 000000000000..cec3435ac766 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/fp_operation.json @@ -0,0 +1,22 @@ +[ + { + "ArchStdEvent": "FP_HP_SPEC", + "PublicDescription": "Counts speculatively executed half precision floating point operations." + }, + { + "ArchStdEvent": "FP_SP_SPEC", + "PublicDescription": "Counts speculatively executed single precision floating point operations." + }, + { + "ArchStdEvent": "FP_DP_SPEC", + "PublicDescription": "Counts speculatively executed double precision floating point operations." + }, + { + "ArchStdEvent": "FP_SCALE_OPS_SPEC", + "PublicDescription": "Counts speculatively executed scalable single precision floating point operations." + }, + { + "ArchStdEvent": "FP_FIXED_OPS_SPEC", + "PublicDescription": "Counts speculatively executed non-scalable single precision floating point operations." + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/general.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/general.json new file mode 100644 index 000000000000..428810f855b8 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/general.json @@ -0,0 +1,10 @@ +[ + { + "ArchStdEvent": "CPU_CYCLES", + "PublicDescription": "Counts CPU clock cycles (not timer cycles). The clock measured by this event is defined as the physical clock driving the CPU logic." + }, + { + "ArchStdEvent": "CNT_CYCLES", + "PublicDescription": "Counts constant frequency cycles" + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/instruction.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/instruction.json deleted file mode 100644 index e57cd55937c6..000000000000 --- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/instruction.json +++ /dev/null @@ -1,143 +0,0 @@ -[ - { - "ArchStdEvent": "SW_INCR" - }, - { - "ArchStdEvent": "INST_RETIRED" - }, - { - "ArchStdEvent": "EXC_RETURN" - }, - { - "ArchStdEvent": "CID_WRITE_RETIRED" - }, - { - "ArchStdEvent": "INST_SPEC" - }, - { - "ArchStdEvent": "TTBR_WRITE_RETIRED" - }, - { - "ArchStdEvent": "BR_RETIRED" - }, - { - "ArchStdEvent": "BR_MIS_PRED_RETIRED" - }, - { - "ArchStdEvent": "OP_RETIRED" - }, - { - "ArchStdEvent": "OP_SPEC" - }, - { - "ArchStdEvent": "LDREX_SPEC" - }, - { - "ArchStdEvent": "STREX_PASS_SPEC" - }, - { - "ArchStdEvent": "STREX_FAIL_SPEC" - }, - { - "ArchStdEvent": "STREX_SPEC" - }, - { - "ArchStdEvent": "LD_SPEC" - }, - { - "ArchStdEvent": "ST_SPEC" - }, - { - "ArchStdEvent": "DP_SPEC" - }, - { - "ArchStdEvent": "ASE_SPEC" - }, - { - "ArchStdEvent": "VFP_SPEC" - }, - { - "ArchStdEvent": "PC_WRITE_SPEC" - }, - { - "ArchStdEvent": "CRYPTO_SPEC" - }, - { - "ArchStdEvent": "BR_IMMED_SPEC" - }, - { - "ArchStdEvent": "BR_RETURN_SPEC" - }, - { - "ArchStdEvent": "BR_INDIRECT_SPEC" - }, - { - "ArchStdEvent": "ISB_SPEC" - }, - { - "ArchStdEvent": "DSB_SPEC" - }, - { - "ArchStdEvent": "DMB_SPEC" - }, - { - "ArchStdEvent": "RC_LD_SPEC" - }, - { - "ArchStdEvent": "RC_ST_SPEC" - }, - { - "ArchStdEvent": "ASE_INST_SPEC" - }, - { - "ArchStdEvent": "SVE_INST_SPEC" - }, - { - "ArchStdEvent": "FP_HP_SPEC" - }, - { - "ArchStdEvent": "FP_SP_SPEC" - }, - { - "ArchStdEvent": "FP_DP_SPEC" - }, - { - "ArchStdEvent": "SVE_PRED_SPEC" - }, - { - "ArchStdEvent": "SVE_PRED_EMPTY_SPEC" - }, - { - "ArchStdEvent": "SVE_PRED_FULL_SPEC" - }, - { - "ArchStdEvent": 
"SVE_PRED_PARTIAL_SPEC" - }, - { - "ArchStdEvent": "SVE_PRED_NOT_FULL_SPEC" - }, - { - "ArchStdEvent": "SVE_LDFF_SPEC" - }, - { - "ArchStdEvent": "SVE_LDFF_FAULT_SPEC" - }, - { - "ArchStdEvent": "FP_SCALE_OPS_SPEC" - }, - { - "ArchStdEvent": "FP_FIXED_OPS_SPEC" - }, - { - "ArchStdEvent": "ASE_SVE_INT8_SPEC" - }, - { - "ArchStdEvent": "ASE_SVE_INT16_SPEC" - }, - { - "ArchStdEvent": "ASE_SVE_INT32_SPEC" - }, - { - "ArchStdEvent": "ASE_SVE_INT64_SPEC" - } -] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1d_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1d_cache.json new file mode 100644 index 000000000000..da7c129f2569 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1d_cache.json @@ -0,0 +1,54 @@ +[ + { + "ArchStdEvent": "L1D_CACHE_REFILL", + "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load or store operations that missed in the level 1 data cache. This event only counts one event per cache line. This event does not count cache line allocations from preload instructions or from hardware cache prefetching." + }, + { + "ArchStdEvent": "L1D_CACHE", + "PublicDescription": "Counts level 1 data cache accesses from any load/store operations. Atomic operations that resolve in the CPUs caches (near atomic operations) count as both a write access and read access. Each access to a cache line is counted including the multiple accesses caused by single instructions such as LDM or STM. Each access to other level 1 data or unified memory structures, for example refill buffers, write buffers, and write-back buffers, are also counted." + }, + { + "ArchStdEvent": "L1D_CACHE_WB", + "PublicDescription": "Counts write-backs of dirty data from the L1 data cache to the L2 cache. This occurs when either a dirty cache line is evicted from L1 data cache and allocated in the L2 cache or dirty data is written to the L2 and possibly to the next level of cache. This event counts both victim cache line evictions and cache write-backs from snoops or cache maintenance operations. The following cache operations are not counted:\n\n1. Invalidations which do not result in data being transferred out of the L1 (such as evictions of clean data),\n2. Full line writes which write to L2 without writing L1, such as write streaming mode." + }, + { + "ArchStdEvent": "L1D_CACHE_LMISS_RD", + "PublicDescription": "Counts cache line refills into the level 1 data cache from any memory read operations, that incurred additional latency." + }, + { + "ArchStdEvent": "L1D_CACHE_RD", + "PublicDescription": "Counts level 1 data cache accesses from any load operation. Atomic load operations that resolve in the CPUs caches count as both a write access and read access." + }, + { + "ArchStdEvent": "L1D_CACHE_WR", + "PublicDescription": "Counts level 1 data cache accesses generated by store operations. This event also counts accesses caused by a DC ZVA (data cache zero, specified by virtual address) instruction. Near atomic operations that resolve in the CPUs caches count as a write access and read access." + }, + { + "ArchStdEvent": "L1D_CACHE_REFILL_RD", + "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load instructions where the memory read operation misses in the level 1 data cache. This event only counts one event per cache line." 
+ }, + { + "ArchStdEvent": "L1D_CACHE_REFILL_WR", + "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed store instructions where the memory write operation misses in the level 1 data cache. This event only counts one event per cache line." + }, + { + "ArchStdEvent": "L1D_CACHE_REFILL_INNER", + "PublicDescription": "Counts level 1 data cache refills where the cache line data came from caches inside the immediate cluster of the core." + }, + { + "ArchStdEvent": "L1D_CACHE_REFILL_OUTER", + "PublicDescription": "Counts level 1 data cache refills for which the cache line data came from outside the immediate cluster of the core, like an SLC in the system interconnect or DRAM." + }, + { + "ArchStdEvent": "L1D_CACHE_WB_VICTIM", + "PublicDescription": "Counts dirty cache line evictions from the level 1 data cache caused by a new cache line allocation. This event does not count evictions caused by cache maintenance operations." + }, + { + "ArchStdEvent": "L1D_CACHE_WB_CLEAN", + "PublicDescription": "Counts write-backs from the level 1 data cache that are a result of a coherency operation made by another CPU. Event count includes cache maintenance operations." + }, + { + "ArchStdEvent": "L1D_CACHE_INVAL", + "PublicDescription": "Counts each explicit invalidation of a cache line in the level 1 data cache caused by:\n\n- Cache Maintenance Operations (CMO) that operate by a virtual address.\n- Broadcast cache coherency operations from another CPU in the system.\n\nThis event does not count for the following conditions:\n\n1. A cache refill invalidates a cache line.\n2. A CMO which is executed on that CPU and invalidates a cache line specified by set/way.\n\nNote that CMOs that operate by set/way cannot be broadcast from one CPU to another." + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1i_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1i_cache.json new file mode 100644 index 000000000000..633f1030359d --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1i_cache.json @@ -0,0 +1,14 @@ +[ + { + "ArchStdEvent": "L1I_CACHE_REFILL", + "PublicDescription": "Counts cache line refills in the level 1 instruction cache caused by a missed instruction fetch. Instruction fetches may include accessing multiple instructions, but the single cache line allocation is counted once." + }, + { + "ArchStdEvent": "L1I_CACHE", + "PublicDescription": "Counts instruction fetches which access the level 1 instruction cache. Instruction cache accesses caused by cache maintenance operations are not counted." + }, + { + "ArchStdEvent": "L1I_CACHE_LMISS", + "PublicDescription": "Counts cache line refills into the level 1 instruction cache, that incurred additional latency." + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l2_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l2_cache.json new file mode 100644 index 000000000000..0e31d0daf88b --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l2_cache.json @@ -0,0 +1,50 @@ +[ + { + "ArchStdEvent": "L2D_CACHE", + "PublicDescription": "Counts level 2 cache accesses. level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the first level caches or translation resolutions due to accesses. This event also counts write back of dirty data from level 1 data cache to the L2 cache." 
+ }, + { + "ArchStdEvent": "L2D_CACHE_REFILL", + "PublicDescription": "Counts cache line refills into the level 2 cache. level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the level 1 caches or translation resolutions due to accesses." + }, + { + "ArchStdEvent": "L2D_CACHE_WB", + "PublicDescription": "Counts write-backs of data from the L2 cache to outside the CPU. This includes snoops to the L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line invalidations which do not write data outside the CPU and snoops which return data from an L1 cache are not counted. Data would not be written outside the cache when invalidating a clean cache line." + }, + { + "ArchStdEvent": "L2D_CACHE_ALLOCATE", + "PublicDescription": "TBD" + }, + { + "ArchStdEvent": "L2D_CACHE_RD", + "PublicDescription": "Counts level 2 cache accesses due to memory read operations. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses." + }, + { + "ArchStdEvent": "L2D_CACHE_WR", + "PublicDescription": "Counts level 2 cache accesses due to memory write operations. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses." + }, + { + "ArchStdEvent": "L2D_CACHE_REFILL_RD", + "PublicDescription": "Counts refills for memory accesses due to memory read operation counted by L2D_CACHE_RD. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses." + }, + { + "ArchStdEvent": "L2D_CACHE_REFILL_WR", + "PublicDescription": "Counts refills for memory accesses due to memory write operation counted by L2D_CACHE_WR. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses." + }, + { + "ArchStdEvent": "L2D_CACHE_WB_VICTIM", + "PublicDescription": "Counts evictions from the level 2 cache because of a line being allocated into the L2 cache." + }, + { + "ArchStdEvent": "L2D_CACHE_WB_CLEAN", + "PublicDescription": "Counts write-backs from the level 2 cache that are a result of either:\n\n1. Cache maintenance operations,\n\n2. Snoop responses or,\n\n3. Direct cache transfers to another CPU due to a forwarding snoop request." + }, + { + "ArchStdEvent": "L2D_CACHE_INVAL", + "PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by cache maintenance operations that operate by a virtual address, or by external coherency operations. This event does not count if either:\n\n1. A cache refill invalidates a cache line or,\n2. A Cache Maintenance Operation (CMO), which invalidates a cache line specified by set/way, is executed on that CPU.\n\nCMOs that operate by set/way cannot be broadcast from one CPU to another." + }, + { + "ArchStdEvent": "L2D_CACHE_LMISS_RD", + "PublicDescription": "Counts cache line refills into the level 2 unified cache from any memory read operations that incurred additional latency." 
+ } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l3_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l3_cache.json new file mode 100644 index 000000000000..45bfba532df7 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l3_cache.json @@ -0,0 +1,22 @@ +[ + { + "ArchStdEvent": "L3D_CACHE_ALLOCATE", + "PublicDescription": "Counts level 3 cache line allocates that do not fetch data from outside the level 3 data or unified cache. For example, allocates due to streaming stores." + }, + { + "ArchStdEvent": "L3D_CACHE_REFILL", + "PublicDescription": "Counts level 3 accesses that receive data from outside the L3 cache." + }, + { + "ArchStdEvent": "L3D_CACHE", + "PublicDescription": "Counts level 3 cache accesses. level 3 cache is a unified cache for data and instruction accesses. Accesses are for misses in the lower level caches or translation resolutions due to accesses." + }, + { + "ArchStdEvent": "L3D_CACHE_RD", + "PublicDescription": "TBD" + }, + { + "ArchStdEvent": "L3D_CACHE_LMISS_RD", + "PublicDescription": "Counts any cache line refill into the level 3 cache from memory read operations that incurred additional latency." + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/ll_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/ll_cache.json new file mode 100644 index 000000000000..bb712d57d58a --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/ll_cache.json @@ -0,0 +1,10 @@ +[ + { + "ArchStdEvent": "LL_CACHE_RD", + "PublicDescription": "Counts read transactions that were returned from outside the core cluster. This event counts when the system register CPUECTLR.EXTLLC bit is set. This event counts read transactions returned from outside the core if those transactions are either hit in the system level cache or missed in the SLC and are returned from any other external sources." + }, + { + "ArchStdEvent": "LL_CACHE_MISS_RD", + "PublicDescription": "Counts read transactions that were returned from outside the core cluster but missed in the system level cache. This event counts when the system register CPUECTLR.EXTLLC bit is set. This event counts read transactions returned from outside the core if those transactions are missed in the System level Cache. The data source of the transaction is indicated by a field in the CHI transaction returning to the CPU. This event does not count reads caused by cache maintenance operations." + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/memory.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/memory.json index 7b2b21ac150f..106a97f8b2e7 100644 --- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/memory.json +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/memory.json @@ -1,41 +1,46 @@ [ { - "ArchStdEvent": "MEM_ACCESS" + "ArchStdEvent": "MEM_ACCESS", + "PublicDescription": "Counts memory accesses issued by the CPU load store unit, where those accesses are issued due to load or store operations. This event counts memory accesses no matter whether the data is received from any level of cache hierarchy or external memory. If memory accesses are broken up into smaller transactions than what were specified in the load or store instructions, then the event counts those smaller memory transactions." }, { - "ArchStdEvent": "REMOTE_ACCESS" + "ArchStdEvent": "MEMORY_ERROR", + "PublicDescription": "Counts any detected correctable or uncorrectable physical memory errors (ECC or parity) in protected CPUs RAMs. 
On the core, this event counts errors in the caches (including data and tag rams). Any detected memory error (from either a speculative and abandoned access, or an architecturally executed access) is counted. Note that errors are only detected when the actual protected memory is accessed by an operation." }, { - "ArchStdEvent": "MEM_ACCESS_RD" + "ArchStdEvent": "REMOTE_ACCESS", + "PublicDescription": "Counts accesses to another chip, which is implemented as a different CMN mesh in the system. If the CHI bus response back to the core indicates that the data source is from another chip (mesh), then the counter is updated. If no data is returned, even if the system snoops another chip/mesh, then the counter is not updated." }, { - "ArchStdEvent": "MEM_ACCESS_WR" + "ArchStdEvent": "MEM_ACCESS_RD", + "PublicDescription": "Counts memory accesses issued by the CPU due to load operations. The event counts any memory load access, no matter whether the data is received from any level of cache hierarchy or external memory. The event also counts atomic load operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions." }, { - "ArchStdEvent": "UNALIGNED_LD_SPEC" + "ArchStdEvent": "MEM_ACCESS_WR", + "PublicDescription": "Counts memory accesses issued by the CPU due to store operations. The event counts any memory store access, no matter whether the data is located in any level of cache or external memory. The event also counts atomic load and store operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions." }, { - "ArchStdEvent": "UNALIGNED_ST_SPEC" + "ArchStdEvent": "LDST_ALIGN_LAT", + "PublicDescription": "Counts the number of memory read and write accesses in a cycle that incurred additional latency, due to the alignment of the address and the size of data being accessed, which results in store crossing a single cache line." }, { - "ArchStdEvent": "UNALIGNED_LDST_SPEC" + "ArchStdEvent": "LD_ALIGN_LAT", + "PublicDescription": "Counts the number of memory read accesses in a cycle that incurred additional latency, due to the alignment of the address and size of data being accessed, which results in load crossing a single cache line." }, { - "ArchStdEvent": "LDST_ALIGN_LAT" + "ArchStdEvent": "ST_ALIGN_LAT", + "PublicDescription": "Counts the number of memory write access in a cycle that incurred additional latency, due to the alignment of the address and size of data being accessed incurred additional latency." }, { - "ArchStdEvent": "LD_ALIGN_LAT" + "ArchStdEvent": "MEM_ACCESS_CHECKED", + "PublicDescription": "Counts the number of memory read and write accesses in a cycle that are tag checked by the Memory Tagging Extension (MTE)." }, { - "ArchStdEvent": "ST_ALIGN_LAT" + "ArchStdEvent": "MEM_ACCESS_CHECKED_RD", + "PublicDescription": "Counts the number of memory read accesses in a cycle that are tag checked by the Memory Tagging Extension (MTE)." }, { - "ArchStdEvent": "MEM_ACCESS_CHECKED" - }, - { - "ArchStdEvent": "MEM_ACCESS_CHECKED_RD" - }, - { - "ArchStdEvent": "MEM_ACCESS_CHECKED_WR" + "ArchStdEvent": "MEM_ACCESS_CHECKED_WR", + "PublicDescription": "Counts the number of memory write accesses in a cycle that is tag checked by the Memory Tagging Extension (MTE)." 
} ] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/metrics.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/metrics.json index 8ad15b726dca..5f449270b448 100644 --- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/metrics.json +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/metrics.json @@ -1,272 +1,303 @@ [ { - "ArchStdEvent": "FRONTEND_BOUND", - "MetricExpr": "((stall_slot_frontend) if (#slots - 5) else (stall_slot_frontend - cpu_cycles)) / (#slots * cpu_cycles)" + "ArchStdEvent": "backend_bound", + "MetricExpr": "(100 * ((STALL_SLOT_BACKEND / (CPU_CYCLES * #slots)) - ((BR_MIS_PRED * 3) / CPU_CYCLES)))" }, { - "ArchStdEvent": "BAD_SPECULATION", - "MetricExpr": "(1 - op_retired / op_spec) * (1 - (stall_slot if (#slots - 5) else (stall_slot - cpu_cycles)) / (#slots * cpu_cycles))" + "MetricName": "backend_stalled_cycles", + "MetricExpr": "((STALL_BACKEND / CPU_CYCLES) * 100)", + "BriefDescription": "This metric is the percentage of cycles that were stalled due to resource constraints in the backend unit of the processor.", + "MetricGroup": "Cycle_Accounting", + "ScaleUnit": "1percent of cycles" }, { - "ArchStdEvent": "RETIRING", - "MetricExpr": "(op_retired / op_spec) * (1 - (stall_slot if (#slots - 5) else (stall_slot - cpu_cycles)) / (#slots * cpu_cycles))" + "ArchStdEvent": "bad_speculation", + "MetricExpr": "(100 * (((1 - (OP_RETIRED / OP_SPEC)) * (1 - (((STALL_SLOT) if (strcmp_cpuid_str(0x410fd493) | strcmp_cpuid_str(0x410fd490) ^ 1) else (STALL_SLOT - CPU_CYCLES)) / (CPU_CYCLES * #slots)))) + ((BR_MIS_PRED * 4) / CPU_CYCLES)))" }, { - "ArchStdEvent": "BACKEND_BOUND" + "MetricName": "branch_misprediction_ratio", + "MetricExpr": "(BR_MIS_PRED_RETIRED / BR_RETIRED)", + "BriefDescription": "This metric measures the ratio of branches mispredicted to the total number of branches architecturally executed. 
This gives an indication of the effectiveness of the branch prediction unit.", + "MetricGroup": "Miss_Ratio;Branch_Effectiveness", + "ScaleUnit": "1per branch" }, { - "MetricExpr": "L1D_TLB_REFILL / L1D_TLB", - "BriefDescription": "The rate of L1D TLB refill to the overall L1D TLB lookups", - "MetricGroup": "TLB", - "MetricName": "l1d_tlb_miss_rate", - "ScaleUnit": "100%" + "MetricName": "branch_mpki", + "MetricExpr": "((BR_MIS_PRED_RETIRED / INST_RETIRED) * 1000)", + "BriefDescription": "This metric measures the number of branch mispredictions per thousand instructions executed.", + "MetricGroup": "MPKI;Branch_Effectiveness", + "ScaleUnit": "1MPKI" }, { - "MetricExpr": "L1I_TLB_REFILL / L1I_TLB", - "BriefDescription": "The rate of L1I TLB refill to the overall L1I TLB lookups", - "MetricGroup": "TLB", - "MetricName": "l1i_tlb_miss_rate", - "ScaleUnit": "100%" + "MetricName": "branch_percentage", + "MetricExpr": "(((BR_IMMED_SPEC + BR_INDIRECT_SPEC) / INST_SPEC) * 100)", + "BriefDescription": "This metric measures branch operations as a percentage of operations speculatively executed.", + "MetricGroup": "Operation_Mix", + "ScaleUnit": "1percent of operations" }, { - "MetricExpr": "L2D_TLB_REFILL / L2D_TLB", - "BriefDescription": "The rate of L2D TLB refill to the overall L2D TLB lookups", - "MetricGroup": "TLB", - "MetricName": "l2_tlb_miss_rate", - "ScaleUnit": "100%" + "MetricName": "crypto_percentage", + "MetricExpr": "((CRYPTO_SPEC / INST_SPEC) * 100)", + "BriefDescription": "This metric measures crypto operations as a percentage of operations speculatively executed.", + "MetricGroup": "Operation_Mix", + "ScaleUnit": "1percent of operations" }, { - "MetricExpr": "DTLB_WALK / INST_RETIRED * 1000", - "BriefDescription": "The rate of TLB Walks per kilo instructions for data accesses", - "MetricGroup": "TLB", "MetricName": "dtlb_mpki", + "MetricExpr": "((DTLB_WALK / INST_RETIRED) * 1000)", + "BriefDescription": "This metric measures the number of data TLB Walks per thousand instructions executed.", + "MetricGroup": "MPKI;DTLB_Effectiveness", "ScaleUnit": "1MPKI" }, { - "MetricExpr": "DTLB_WALK / L1D_TLB", - "BriefDescription": "The rate of DTLB Walks to the overall L1D TLB lookups", - "MetricGroup": "TLB", - "MetricName": "dtlb_walk_rate", - "ScaleUnit": "100%" + "MetricName": "dtlb_walk_ratio", + "MetricExpr": "(DTLB_WALK / L1D_TLB)", + "BriefDescription": "This metric measures the ratio of data TLB Walks to the total number of data TLB accesses. 
This gives an indication of the effectiveness of the data TLB accesses.", + "MetricGroup": "Miss_Ratio;DTLB_Effectiveness", + "ScaleUnit": "1per TLB access" }, { - "MetricExpr": "ITLB_WALK / INST_RETIRED * 1000", - "BriefDescription": "The rate of TLB Walks per kilo instructions for instruction accesses", - "MetricGroup": "TLB", - "MetricName": "itlb_mpki", - "ScaleUnit": "1MPKI" + "ArchStdEvent": "frontend_bound", + "MetricExpr": "(100 * ((((STALL_SLOT_FRONTEND) if (strcmp_cpuid_str(0x410fd493) | strcmp_cpuid_str(0x410fd490) ^ 1) else (STALL_SLOT_FRONTEND - CPU_CYCLES)) / (CPU_CYCLES * #slots)) - (BR_MIS_PRED / CPU_CYCLES)))" }, { - "MetricExpr": "ITLB_WALK / L1I_TLB", - "BriefDescription": "The rate of ITLB Walks to the overall L1I TLB lookups", - "MetricGroup": "TLB", - "MetricName": "itlb_walk_rate", - "ScaleUnit": "100%" + "MetricName": "frontend_stalled_cycles", + "MetricExpr": "((STALL_FRONTEND / CPU_CYCLES) * 100)", + "BriefDescription": "This metric is the percentage of cycles that were stalled due to resource constraints in the frontend unit of the processor.", + "MetricGroup": "Cycle_Accounting", + "ScaleUnit": "1percent of cycles" }, { - "MetricExpr": "L1I_CACHE_REFILL / INST_RETIRED * 1000", - "BriefDescription": "The rate of L1 I-Cache misses per kilo instructions", - "MetricGroup": "Cache", - "MetricName": "l1i_cache_mpki", + "MetricName": "integer_dp_percentage", + "MetricExpr": "((DP_SPEC / INST_SPEC) * 100)", + "BriefDescription": "This metric measures scalar integer operations as a percentage of operations speculatively executed.", + "MetricGroup": "Operation_Mix", + "ScaleUnit": "1percent of operations" + }, + { + "MetricName": "ipc", + "MetricExpr": "(INST_RETIRED / CPU_CYCLES)", + "BriefDescription": "This metric measures the number of instructions retired per cycle.", + "MetricGroup": "General", + "ScaleUnit": "1per cycle" + }, + { + "MetricName": "itlb_mpki", + "MetricExpr": "((ITLB_WALK / INST_RETIRED) * 1000)", + "BriefDescription": "This metric measures the number of instruction TLB Walks per thousand instructions executed.", + "MetricGroup": "MPKI;ITLB_Effectiveness", "ScaleUnit": "1MPKI" }, { - "MetricExpr": "L1I_CACHE_REFILL / L1I_CACHE", - "BriefDescription": "The rate of L1 I-Cache misses to the overall L1 I-Cache", - "MetricGroup": "Cache", - "MetricName": "l1i_cache_miss_rate", - "ScaleUnit": "100%" + "MetricName": "itlb_walk_ratio", + "MetricExpr": "(ITLB_WALK / L1I_TLB)", + "BriefDescription": "This metric measures the ratio of instruction TLB Walks to the total number of instruction TLB accesses. This gives an indication of the effectiveness of the instruction TLB accesses.", + "MetricGroup": "Miss_Ratio;ITLB_Effectiveness", + "ScaleUnit": "1per TLB access" + }, + { + "MetricName": "l1d_cache_miss_ratio", + "MetricExpr": "(L1D_CACHE_REFILL / L1D_CACHE)", + "BriefDescription": "This metric measures the ratio of level 1 data cache accesses missed to the total number of level 1 data cache accesses. 
This gives an indication of the effectiveness of the level 1 data cache.", + "MetricGroup": "Miss_Ratio;L1D_Cache_Effectiveness", + "ScaleUnit": "1per cache access" }, { - "MetricExpr": "L1D_CACHE_REFILL / INST_RETIRED * 1000", - "BriefDescription": "The rate of L1 D-Cache misses per kilo instructions", - "MetricGroup": "Cache", "MetricName": "l1d_cache_mpki", + "MetricExpr": "((L1D_CACHE_REFILL / INST_RETIRED) * 1000)", + "BriefDescription": "This metric measures the number of level 1 data cache accesses missed per thousand instructions executed.", + "MetricGroup": "MPKI;L1D_Cache_Effectiveness", "ScaleUnit": "1MPKI" }, { - "MetricExpr": "L1D_CACHE_REFILL / L1D_CACHE", - "BriefDescription": "The rate of L1 D-Cache misses to the overall L1 D-Cache", - "MetricGroup": "Cache", - "MetricName": "l1d_cache_miss_rate", - "ScaleUnit": "100%" + "MetricName": "l1d_tlb_miss_ratio", + "MetricExpr": "(L1D_TLB_REFILL / L1D_TLB)", + "BriefDescription": "This metric measures the ratio of level 1 data TLB accesses missed to the total number of level 1 data TLB accesses. This gives an indication of the effectiveness of the level 1 data TLB.", + "MetricGroup": "Miss_Ratio;DTLB_Effectiveness", + "ScaleUnit": "1per TLB access" }, { - "MetricExpr": "L2D_CACHE_REFILL / INST_RETIRED * 1000", - "BriefDescription": "The rate of L2 D-Cache misses per kilo instructions", - "MetricGroup": "Cache", - "MetricName": "l2d_cache_mpki", + "MetricName": "l1d_tlb_mpki", + "MetricExpr": "((L1D_TLB_REFILL / INST_RETIRED) * 1000)", + "BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed per thousand instructions executed.", + "MetricGroup": "MPKI;DTLB_Effectiveness", "ScaleUnit": "1MPKI" }, { - "MetricExpr": "L2D_CACHE_REFILL / L2D_CACHE", - "BriefDescription": "The rate of L2 D-Cache misses to the overall L2 D-Cache", - "MetricGroup": "Cache", - "MetricName": "l2d_cache_miss_rate", - "ScaleUnit": "100%" + "MetricName": "l1i_cache_miss_ratio", + "MetricExpr": "(L1I_CACHE_REFILL / L1I_CACHE)", + "BriefDescription": "This metric measures the ratio of level 1 instruction cache accesses missed to the total number of level 1 instruction cache accesses. This gives an indication of the effectiveness of the level 1 instruction cache.", + "MetricGroup": "Miss_Ratio;L1I_Cache_Effectiveness", + "ScaleUnit": "1per cache access" }, { - "MetricExpr": "L3D_CACHE_REFILL / INST_RETIRED * 1000", - "BriefDescription": "The rate of L3 D-Cache misses per kilo instructions", - "MetricGroup": "Cache", - "MetricName": "l3d_cache_mpki", + "MetricName": "l1i_cache_mpki", + "MetricExpr": "((L1I_CACHE_REFILL / INST_RETIRED) * 1000)", + "BriefDescription": "This metric measures the number of level 1 instruction cache accesses missed per thousand instructions executed.", + "MetricGroup": "MPKI;L1I_Cache_Effectiveness", "ScaleUnit": "1MPKI" }, { - "MetricExpr": "L3D_CACHE_REFILL / L3D_CACHE", - "BriefDescription": "The rate of L3 D-Cache misses to the overall L3 D-Cache", - "MetricGroup": "Cache", - "MetricName": "l3d_cache_miss_rate", - "ScaleUnit": "100%" + "MetricName": "l1i_tlb_miss_ratio", + "MetricExpr": "(L1I_TLB_REFILL / L1I_TLB)", + "BriefDescription": "This metric measures the ratio of level 1 instruction TLB accesses missed to the total number of level 1 instruction TLB accesses. 
This gives an indication of the effectiveness of the level 1 instruction TLB.", + "MetricGroup": "Miss_Ratio;ITLB_Effectiveness", + "ScaleUnit": "1per TLB access" }, { - "MetricExpr": "LL_CACHE_MISS_RD / INST_RETIRED * 1000", - "BriefDescription": "The rate of LL Cache read misses per kilo instructions", - "MetricGroup": "Cache", - "MetricName": "ll_cache_read_mpki", + "MetricName": "l1i_tlb_mpki", + "MetricExpr": "((L1I_TLB_REFILL / INST_RETIRED) * 1000)", + "BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed per thousand instructions executed.", + "MetricGroup": "MPKI;ITLB_Effectiveness", "ScaleUnit": "1MPKI" }, { - "MetricExpr": "LL_CACHE_MISS_RD / LL_CACHE_RD", - "BriefDescription": "The rate of LL Cache read misses to the overall LL Cache read", - "MetricGroup": "Cache", - "MetricName": "ll_cache_read_miss_rate", - "ScaleUnit": "100%" + "MetricName": "l2_cache_miss_ratio", + "MetricExpr": "(L2D_CACHE_REFILL / L2D_CACHE)", + "BriefDescription": "This metric measures the ratio of level 2 cache accesses missed to the total number of level 2 cache accesses. This gives an indication of the effectiveness of the level 2 cache, which is a unified cache that stores both data and instruction. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a unified cache.", + "MetricGroup": "Miss_Ratio;L2_Cache_Effectiveness", + "ScaleUnit": "1per cache access" }, { - "MetricExpr": "(LL_CACHE_RD - LL_CACHE_MISS_RD) / LL_CACHE_RD", - "BriefDescription": "The rate of LL Cache read hit to the overall LL Cache read", - "MetricGroup": "Cache", - "MetricName": "ll_cache_read_hit_rate", - "ScaleUnit": "100%" + "MetricName": "l2_cache_mpki", + "MetricExpr": "((L2D_CACHE_REFILL / INST_RETIRED) * 1000)", + "BriefDescription": "This metric measures the number of level 2 unified cache accesses missed per thousand instructions executed. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a unified cache.", + "MetricGroup": "MPKI;L2_Cache_Effectiveness", + "ScaleUnit": "1MPKI" }, { - "MetricExpr": "BR_MIS_PRED_RETIRED / INST_RETIRED * 1000", - "BriefDescription": "The rate of branches mis-predicted per kilo instructions", - "MetricGroup": "Branch", - "MetricName": "branch_mpki", + "MetricName": "l2_tlb_miss_ratio", + "MetricExpr": "(L2D_TLB_REFILL / L2D_TLB)", + "BriefDescription": "This metric measures the ratio of level 2 unified TLB accesses missed to the total number of level 2 unified TLB accesses. This gives an indication of the effectiveness of the level 2 TLB.", + "MetricGroup": "Miss_Ratio;ITLB_Effectiveness;DTLB_Effectiveness", + "ScaleUnit": "1per TLB access" + }, + { + "MetricName": "l2_tlb_mpki", + "MetricExpr": "((L2D_TLB_REFILL / INST_RETIRED) * 1000)", + "BriefDescription": "This metric measures the number of level 2 unified TLB accesses missed per thousand instructions executed.", + "MetricGroup": "MPKI;ITLB_Effectiveness;DTLB_Effectiveness", "ScaleUnit": "1MPKI" }, { - "MetricExpr": "BR_RETIRED / INST_RETIRED * 1000", - "BriefDescription": "The rate of branches retired per kilo instructions", - "MetricGroup": "Branch", - "MetricName": "branch_pki", - "ScaleUnit": "1PKI" + "MetricName": "ll_cache_read_hit_ratio", + "MetricExpr": "((LL_CACHE_RD - LL_CACHE_MISS_RD) / LL_CACHE_RD)", + "BriefDescription": "This metric measures the ratio of last level cache read accesses hit in the cache to the total number of last level cache accesses. 
This gives an indication of the effectiveness of the last level cache for read traffic. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a system level cache.", + "MetricGroup": "LL_Cache_Effectiveness", + "ScaleUnit": "1per cache access" }, { - "MetricExpr": "BR_MIS_PRED_RETIRED / BR_RETIRED", - "BriefDescription": "The rate of branches mis-predited to the overall branches", - "MetricGroup": "Branch", - "MetricName": "branch_miss_pred_rate", - "ScaleUnit": "100%" + "MetricName": "ll_cache_read_miss_ratio", + "MetricExpr": "(LL_CACHE_MISS_RD / LL_CACHE_RD)", + "BriefDescription": "This metric measures the ratio of last level cache read accesses missed to the total number of last level cache accesses. This gives an indication of the effectiveness of the last level cache for read traffic. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a system level cache.", + "MetricGroup": "Miss_Ratio;LL_Cache_Effectiveness", + "ScaleUnit": "1per cache access" }, { - "MetricExpr": "instructions / CPU_CYCLES", - "BriefDescription": "The average number of instructions executed for each cycle.", - "MetricGroup": "PEutilization", - "MetricName": "ipc" + "MetricName": "ll_cache_read_mpki", + "MetricExpr": "((LL_CACHE_MISS_RD / INST_RETIRED) * 1000)", + "BriefDescription": "This metric measures the number of last level cache read accesses missed per thousand instructions executed.", + "MetricGroup": "MPKI;LL_Cache_Effectiveness", + "ScaleUnit": "1MPKI" }, { - "MetricExpr": "ipc / 5", - "BriefDescription": "IPC percentage of peak. The peak of IPC is 5.", - "MetricGroup": "PEutilization", - "MetricName": "ipc_rate", - "ScaleUnit": "100%" + "MetricName": "load_percentage", + "MetricExpr": "((LD_SPEC / INST_SPEC) * 100)", + "BriefDescription": "This metric measures load operations as a percentage of operations speculatively executed.", + "MetricGroup": "Operation_Mix", + "ScaleUnit": "1percent of operations" }, { - "MetricExpr": "INST_RETIRED / CPU_CYCLES", - "BriefDescription": "Architecturally executed Instructions Per Cycle (IPC)", - "MetricGroup": "PEutilization", - "MetricName": "retired_ipc" + "ArchStdEvent": "retiring", + "MetricExpr": "(100 * ((OP_RETIRED / OP_SPEC) * (1 - (((STALL_SLOT) if (strcmp_cpuid_str(0x410fd493) | strcmp_cpuid_str(0x410fd490) ^ 1) else (STALL_SLOT - CPU_CYCLES)) / (CPU_CYCLES * #slots)))))" }, { - "MetricExpr": "INST_SPEC / CPU_CYCLES", - "BriefDescription": "Speculatively executed Instructions Per Cycle (IPC)", - "MetricGroup": "PEutilization", - "MetricName": "spec_ipc" + "MetricName": "scalar_fp_percentage", + "MetricExpr": "((VFP_SPEC / INST_SPEC) * 100)", + "BriefDescription": "This metric measures scalar floating point operations as a percentage of operations speculatively executed.", + "MetricGroup": "Operation_Mix", + "ScaleUnit": "1percent of operations" }, { - "MetricExpr": "OP_RETIRED / OP_SPEC", - "BriefDescription": "Of all the micro-operations issued, what percentage are retired(committed)", - "MetricGroup": "PEutilization", - "MetricName": "retired_rate", - "ScaleUnit": "100%" + "MetricName": "simd_percentage", + "MetricExpr": "((ASE_SPEC / INST_SPEC) * 100)", + "BriefDescription": "This metric measures advanced SIMD operations as a percentage of total operations speculatively executed.", + "MetricGroup": "Operation_Mix", + "ScaleUnit": "1percent of operations" }, { - "MetricExpr": "1 - OP_RETIRED / OP_SPEC", - "BriefDescription": "Of all the micro-operations 
issued, what percentage are not retired(committed)", - "MetricGroup": "PEutilization", - "MetricName": "wasted_rate", - "ScaleUnit": "100%" + "MetricName": "store_percentage", + "MetricExpr": "((ST_SPEC / INST_SPEC) * 100)", + "BriefDescription": "This metric measures store operations as a percentage of operations speculatively executed.", + "MetricGroup": "Operation_Mix", + "ScaleUnit": "1percent of operations" }, { - "MetricExpr": "OP_RETIRED / OP_SPEC * (1 - (STALL_SLOT if (#slots - 5) else (STALL_SLOT - CPU_CYCLES)) / (#slots * CPU_CYCLES))", - "BriefDescription": "The truly effective ratio of micro-operations executed by the CPU, which means that misprediction and stall are not included", - "MetricGroup": "PEutilization", - "MetricName": "cpu_utilization", - "ScaleUnit": "100%" + "MetricExpr": "L3D_CACHE_REFILL / INST_RETIRED * 1000", + "BriefDescription": "The rate of L3 D-Cache misses per kilo instructions", + "MetricGroup": "MPKI;L3_Cache_Effectiveness", + "MetricName": "l3d_cache_mpki", + "ScaleUnit": "1MPKI" }, { - "MetricExpr": "LD_SPEC / INST_SPEC", - "BriefDescription": "The rate of load instructions speculatively executed to overall instructions speclatively executed", - "MetricGroup": "InstructionMix", - "MetricName": "load_spec_rate", + "MetricExpr": "L3D_CACHE_REFILL / L3D_CACHE", + "BriefDescription": "The rate of L3 D-Cache misses to the overall L3 D-Cache", + "MetricGroup": "Miss_Ratio;L3_Cache_Effectiveness", + "MetricName": "l3d_cache_miss_rate", "ScaleUnit": "100%" }, { - "MetricExpr": "ST_SPEC / INST_SPEC", - "BriefDescription": "The rate of store instructions speculatively executed to overall instructions speclatively executed", - "MetricGroup": "InstructionMix", - "MetricName": "store_spec_rate", - "ScaleUnit": "100%" + "MetricExpr": "BR_RETIRED / INST_RETIRED * 1000", + "BriefDescription": "The rate of branches retired per kilo instructions", + "MetricGroup": "MPKI;Branch_Effectiveness", + "MetricName": "branch_pki", + "ScaleUnit": "1PKI" }, { - "MetricExpr": "DP_SPEC / INST_SPEC", - "BriefDescription": "The rate of integer data-processing instructions speculatively executed to overall instructions speclatively executed", - "MetricGroup": "InstructionMix", - "MetricName": "data_process_spec_rate", + "MetricExpr": "ipc / #slots", + "BriefDescription": "IPC percentage of peak. 
The peak of IPC is the number of slots.", + "MetricGroup": "General", + "MetricName": "ipc_rate", "ScaleUnit": "100%" }, { - "MetricExpr": "ASE_SPEC / INST_SPEC", - "BriefDescription": "The rate of advanced SIMD instructions speculatively executed to overall instructions speclatively executed", - "MetricGroup": "InstructionMix", - "MetricName": "advanced_simd_spec_rate", - "ScaleUnit": "100%" + "MetricExpr": "INST_SPEC / CPU_CYCLES", + "BriefDescription": "Speculatively executed Instructions Per Cycle (IPC)", + "MetricGroup": "General", + "MetricName": "spec_ipc" }, { - "MetricExpr": "VFP_SPEC / INST_SPEC", - "BriefDescription": "The rate of floating point instructions speculatively executed to overall instructions speclatively executed", - "MetricGroup": "InstructionMix", - "MetricName": "float_point_spec_rate", + "MetricExpr": "OP_RETIRED / OP_SPEC", + "BriefDescription": "Of all the micro-operations issued, what percentage are retired(committed)", + "MetricGroup": "General", + "MetricName": "retired_rate", "ScaleUnit": "100%" }, { - "MetricExpr": "CRYPTO_SPEC / INST_SPEC", - "BriefDescription": "The rate of crypto instructions speculatively executed to overall instructions speclatively executed", - "MetricGroup": "InstructionMix", - "MetricName": "crypto_spec_rate", + "MetricExpr": "1 - OP_RETIRED / OP_SPEC", + "BriefDescription": "Of all the micro-operations issued, what percentage are not retired(committed)", + "MetricGroup": "General", + "MetricName": "wasted_rate", "ScaleUnit": "100%" }, { "MetricExpr": "BR_IMMED_SPEC / INST_SPEC", - "BriefDescription": "The rate of branch immediate instructions speculatively executed to overall instructions speclatively executed", - "MetricGroup": "InstructionMix", + "BriefDescription": "The rate of branch immediate instructions speculatively executed to overall instructions speculatively executed", + "MetricGroup": "Operation_Mix", "MetricName": "branch_immed_spec_rate", "ScaleUnit": "100%" }, { "MetricExpr": "BR_RETURN_SPEC / INST_SPEC", - "BriefDescription": "The rate of procedure return instructions speculatively executed to overall instructions speclatively executed", - "MetricGroup": "InstructionMix", + "BriefDescription": "The rate of procedure return instructions speculatively executed to overall instructions speculatively executed", + "MetricGroup": "Operation_Mix", "MetricName": "branch_return_spec_rate", "ScaleUnit": "100%" }, { "MetricExpr": "BR_INDIRECT_SPEC / INST_SPEC", - "BriefDescription": "The rate of indirect branch instructions speculatively executed to overall instructions speclatively executed", - "MetricGroup": "InstructionMix", + "BriefDescription": "The rate of indirect branch instructions speculatively executed to overall instructions speculatively executed", + "MetricGroup": "Operation_Mix", "MetricName": "branch_indirect_spec_rate", "ScaleUnit": "100%" } diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/pipeline.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/pipeline.json deleted file mode 100644 index f9fae15f7555..000000000000 --- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/pipeline.json +++ /dev/null @@ -1,23 +0,0 @@ -[ - { - "ArchStdEvent": "STALL_FRONTEND" - }, - { - "ArchStdEvent": "STALL_BACKEND" - }, - { - "ArchStdEvent": "STALL" - }, - { - "ArchStdEvent": "STALL_SLOT_BACKEND" - }, - { - "ArchStdEvent": "STALL_SLOT_FRONTEND" - }, - { - "ArchStdEvent": "STALL_SLOT" - }, - { - "ArchStdEvent": "STALL_BACKEND_MEM" - } -] diff --git 
a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/retired.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/retired.json new file mode 100644 index 000000000000..f297b049b62f --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/retired.json @@ -0,0 +1,30 @@ +[ + { + "ArchStdEvent": "SW_INCR", + "PublicDescription": "Counts software writes to the PMSWINC_EL0 (software PMU increment) register. The PMSWINC_EL0 register is a manually updated counter for use by application software.\n\nThis event could be used to measure any user program event, such as accesses to a particular data structure (by writing to the PMSWINC_EL0 register each time the data structure is accessed).\n\nTo use the PMSWINC_EL0 register and event, developers must insert instructions that write to the PMSWINC_EL0 register into the source code.\n\nSince the SW_INCR event records writes to the PMSWINC_EL0 register, there is no need to do a read/increment/write sequence to the PMSWINC_EL0 register." + }, + { + "ArchStdEvent": "INST_RETIRED", + "PublicDescription": "Counts instructions that have been architecturally executed." + }, + { + "ArchStdEvent": "CID_WRITE_RETIRED", + "PublicDescription": "Counts architecturally executed writes to the CONTEXTIDR register, which usually contain the kernel PID and can be output with hardware trace." + }, + { + "ArchStdEvent": "TTBR_WRITE_RETIRED", + "PublicDescription": "Counts architectural writes to TTBR0/1_EL1. If virtualization host extensions are enabled (by setting the HCR_EL2.E2H bit to 1), then accesses to TTBR0/1_EL1 that are redirected to TTBR0/1_EL2, or accesses to TTBR0/1_EL12, are counted. TTBRn registers are typically updated when the kernel is swapping user-space threads or applications." + }, + { + "ArchStdEvent": "BR_RETIRED", + "PublicDescription": "Counts architecturally executed branches, whether the branch is taken or not. Instructions that explicitly write to the PC are also counted." + }, + { + "ArchStdEvent": "BR_MIS_PRED_RETIRED", + "PublicDescription": "Counts branches counted by BR_RETIRED which were mispredicted and caused a pipeline flush." + }, + { + "ArchStdEvent": "OP_RETIRED", + "PublicDescription": "Counts micro-operations that are architecturally executed. This is a count of number of micro-operations retired from the commit queue in a single cycle." + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spe.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spe.json index 20f2165c85fe..5de8b0f3a440 100644 --- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spe.json +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spe.json @@ -1,14 +1,18 @@ [ { - "ArchStdEvent": "SAMPLE_POP" + "ArchStdEvent": "SAMPLE_POP", + "PublicDescription": "Counts statistical profiling sample population, the count of all operations that could be sampled but may or may not be chosen for sampling." }, { - "ArchStdEvent": "SAMPLE_FEED" + "ArchStdEvent": "SAMPLE_FEED", + "PublicDescription": "Counts statistical profiling samples taken for sampling." }, { - "ArchStdEvent": "SAMPLE_FILTRATE" + "ArchStdEvent": "SAMPLE_FILTRATE", + "PublicDescription": "Counts statistical profiling samples taken which are not removed by filtering." }, { - "ArchStdEvent": "SAMPLE_COLLISION" + "ArchStdEvent": "SAMPLE_COLLISION", + "PublicDescription": "Counts statistical profiling samples that have collided with a previous sample and so therefore not taken." 
} ] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spec_operation.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spec_operation.json new file mode 100644 index 000000000000..1af961f8a6c8 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spec_operation.json @@ -0,0 +1,110 @@ +[ + { + "ArchStdEvent": "BR_MIS_PRED", + "PublicDescription": "Counts branches which are speculatively executed and mispredicted." + }, + { + "ArchStdEvent": "BR_PRED", + "PublicDescription": "Counts branches speculatively executed and were predicted right." + }, + { + "ArchStdEvent": "INST_SPEC", + "PublicDescription": "Counts operations that have been speculatively executed." + }, + { + "ArchStdEvent": "OP_SPEC", + "PublicDescription": "Counts micro-operations speculatively executed. This is the count of the number of micro-operations dispatched in a cycle." + }, + { + "ArchStdEvent": "UNALIGNED_LD_SPEC", + "PublicDescription": "Counts unaligned memory read operations issued by the CPU. This event counts unaligned accesses (as defined by the actual instruction), even if they are subsequently issued as multiple aligned accesses. The event does not count preload operations (PLD, PLI)." + }, + { + "ArchStdEvent": "UNALIGNED_ST_SPEC", + "PublicDescription": "Counts unaligned memory write operations issued by the CPU. This event counts unaligned accesses (as defined by the actual instruction), even if they are subsequently issued as multiple aligned accesses." + }, + { + "ArchStdEvent": "UNALIGNED_LDST_SPEC", + "PublicDescription": "Counts unaligned memory operations issued by the CPU. This event counts unaligned accesses (as defined by the actual instruction), even if they are subsequently issued as multiple aligned accesses." + }, + { + "ArchStdEvent": "LDREX_SPEC", + "PublicDescription": "Counts Load-Exclusive operations that have been speculatively executed. Eg: LDREX, LDX" + }, + { + "ArchStdEvent": "STREX_PASS_SPEC", + "PublicDescription": "Counts store-exclusive operations that have been speculatively executed and have successfully completed the store operation." + }, + { + "ArchStdEvent": "STREX_FAIL_SPEC", + "PublicDescription": "Counts store-exclusive operations that have been speculatively executed and have not successfully completed the store operation." + }, + { + "ArchStdEvent": "STREX_SPEC", + "PublicDescription": "Counts store-exclusive operations that have been speculatively executed." + }, + { + "ArchStdEvent": "LD_SPEC", + "PublicDescription": "Counts speculatively executed load operations including Single Instruction Multiple Data (SIMD) load operations." + }, + { + "ArchStdEvent": "ST_SPEC", + "PublicDescription": "Counts speculatively executed store operations including Single Instruction Multiple Data (SIMD) store operations." + }, + { + "ArchStdEvent": "DP_SPEC", + "PublicDescription": "Counts speculatively executed logical or arithmetic instructions such as MOV/MVN operations." + }, + { + "ArchStdEvent": "ASE_SPEC", + "PublicDescription": "Counts speculatively executed Advanced SIMD operations excluding load, store and move micro-operations that move data to or from SIMD (vector) registers." + }, + { + "ArchStdEvent": "VFP_SPEC", + "PublicDescription": "Counts speculatively executed floating point operations. This event does not count operations that move data to or from floating point (vector) registers." 
+ }, + { + "ArchStdEvent": "PC_WRITE_SPEC", + "PublicDescription": "Counts speculatively executed operations which cause software changes of the PC. Those operations include all taken branch operations." + }, + { + "ArchStdEvent": "CRYPTO_SPEC", + "PublicDescription": "Counts speculatively executed cryptographic operations except for PMULL and VMULL operations." + }, + { + "ArchStdEvent": "BR_IMMED_SPEC", + "PublicDescription": "Counts immediate branch operations which are speculatively executed." + }, + { + "ArchStdEvent": "BR_RETURN_SPEC", + "PublicDescription": "Counts procedure return operations (RET) which are speculatively executed." + }, + { + "ArchStdEvent": "BR_INDIRECT_SPEC", + "PublicDescription": "Counts indirect branch operations including procedure returns, which are speculatively executed. This includes operations that force a software change of the PC, other than exception-generating operations. Eg: BR Xn, RET" + }, + { + "ArchStdEvent": "ISB_SPEC", + "PublicDescription": "Counts ISB operations that are executed." + }, + { + "ArchStdEvent": "DSB_SPEC", + "PublicDescription": "Counts DSB operations that are speculatively issued to Load/Store unit in the CPU." + }, + { + "ArchStdEvent": "DMB_SPEC", + "PublicDescription": "Counts DMB operations that are speculatively issued to the Load/Store unit in the CPU. This event does not count implied barriers from load acquire/store release operations." + }, + { + "ArchStdEvent": "RC_LD_SPEC", + "PublicDescription": "Counts any load acquire operations that are speculatively executed. Eg: LDAR, LDARH, LDARB" + }, + { + "ArchStdEvent": "RC_ST_SPEC", + "PublicDescription": "Counts any store release operations that are speculatively executed. Eg: STLR, STLRH, STLRB'" + }, + { + "ArchStdEvent": "ASE_INST_SPEC", + "PublicDescription": "Counts speculatively executed Advanced SIMD operations." + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/stall.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/stall.json new file mode 100644 index 000000000000..bbbebc805034 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/stall.json @@ -0,0 +1,30 @@ +[ + { + "ArchStdEvent": "STALL_FRONTEND", + "PublicDescription": "Counts cycles when frontend could not send any micro-operations to the rename stage because of frontend resource stalls caused by fetch memory latency or branch prediction flow stalls. All the frontend slots were empty during the cycle when this event counts." + }, + { + "ArchStdEvent": "STALL_BACKEND", + "PublicDescription": "Counts cycles whenever the rename unit is unable to send any micro-operations to the backend of the pipeline because of backend resource constraints. Backend resource constraints can include issue stage fullness, execution stage fullness, or other internal pipeline resource fullness. All the backend slots were empty during the cycle when this event counts." + }, + { + "ArchStdEvent": "STALL", + "PublicDescription": "Counts cycles when no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall)." + }, + { + "ArchStdEvent": "STALL_SLOT_BACKEND", + "PublicDescription": "Counts slots per cycle in which no operations are sent from the rename unit to the backend due to backend resource constraints." 
+ }, + { + "ArchStdEvent": "STALL_SLOT_FRONTEND", + "PublicDescription": "Counts slots per cycle in which no operations are sent to the rename unit from the frontend due to frontend resource constraints." + }, + { + "ArchStdEvent": "STALL_SLOT", + "PublicDescription": "Counts slots per cycle in which no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall)." + }, + { + "ArchStdEvent": "STALL_BACKEND_MEM", + "PublicDescription": "Counts cycles when the backend is stalled because there is a pending demand load request in progress in the last level core cache." + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/sve.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/sve.json new file mode 100644 index 000000000000..51dab48cb2ba --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/sve.json @@ -0,0 +1,50 @@ +[ + { + "ArchStdEvent": "SVE_INST_SPEC", + "PublicDescription": "Counts speculatively executed operations that are SVE operations." + }, + { + "ArchStdEvent": "SVE_PRED_SPEC", + "PublicDescription": "Counts speculatively executed predicated SVE operations." + }, + { + "ArchStdEvent": "SVE_PRED_EMPTY_SPEC", + "PublicDescription": "Counts speculatively executed predicated SVE operations with no active predicate elements." + }, + { + "ArchStdEvent": "SVE_PRED_FULL_SPEC", + "PublicDescription": "Counts speculatively executed predicated SVE operations with all predicate elements active." + }, + { + "ArchStdEvent": "SVE_PRED_PARTIAL_SPEC", + "PublicDescription": "Counts speculatively executed predicated SVE operations with at least one but not all active predicate elements." + }, + { + "ArchStdEvent": "SVE_PRED_NOT_FULL_SPEC", + "PublicDescription": "Counts speculatively executed predicated SVE operations with at least one non active predicate elements." + }, + { + "ArchStdEvent": "SVE_LDFF_SPEC", + "PublicDescription": "Counts speculatively executed SVE first fault or non-fault load operations." + }, + { + "ArchStdEvent": "SVE_LDFF_FAULT_SPEC", + "PublicDescription": "Counts speculatively executed SVE first fault or non-fault load operations that clear at least one bit in the FFR." + }, + { + "ArchStdEvent": "ASE_SVE_INT8_SPEC", + "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type an 8-bit integer." + }, + { + "ArchStdEvent": "ASE_SVE_INT16_SPEC", + "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type a 16-bit integer." + }, + { + "ArchStdEvent": "ASE_SVE_INT32_SPEC", + "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type a 32-bit integer." + }, + { + "ArchStdEvent": "ASE_SVE_INT64_SPEC", + "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type a 64-bit integer." + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/tlb.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/tlb.json new file mode 100644 index 000000000000..b550af1831f5 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/tlb.json @@ -0,0 +1,66 @@ +[ + { + "ArchStdEvent": "L1I_TLB_REFILL", + "PublicDescription": "Counts level 1 instruction TLB refills from any Instruction fetch. If there are multiple misses in the TLB that are resolved by the refill, then this event only counts once. 
This event will not count if the translation table walk results in a fault (such as a translation or access fault), since there is no new translation created for the TLB." + }, + { + "ArchStdEvent": "L1D_TLB_REFILL", + "PublicDescription": "Counts level 1 data TLB accesses that resulted in TLB refills. If there are multiple misses in the TLB that are resolved by the refill, then this event only counts once. This event counts for refills caused by preload instructions or hardware prefetch accesses. This event counts regardless of whether the miss hits in L2 or results in a translation table walk. This event will not count if the translation table walk results in a fault (such as a translation or access fault), since there is no new translation created for the TLB. This event will not count on an access from an AT(address translation) instruction." + }, + { + "ArchStdEvent": "L1D_TLB", + "PublicDescription": "Counts level 1 data TLB accesses caused by any memory load or store operation. Note that load or store instructions can be broken up into multiple memory operations. This event does not count TLB maintenance operations." + }, + { + "ArchStdEvent": "L1I_TLB", + "PublicDescription": "Counts level 1 instruction TLB accesses, whether the access hits or misses in the TLB. This event counts both demand accesses and prefetch or preload generated accesses." + }, + { + "ArchStdEvent": "L2D_TLB_REFILL", + "PublicDescription": "Counts level 2 TLB refills caused by memory operations from both data and instruction fetch, except for those caused by TLB maintenance operations and hardware prefetches." + }, + { + "ArchStdEvent": "L2D_TLB", + "PublicDescription": "Counts level 2 TLB accesses except those caused by TLB maintenance operations." + }, + { + "ArchStdEvent": "DTLB_WALK", + "PublicDescription": "Counts data memory translation table walks caused by a miss in the L2 TLB driven by a memory access. Note that partial translations that also cause a table walk are counted. This event does not count table walks caused by TLB maintenance operations." + }, + { + "ArchStdEvent": "ITLB_WALK", + "PublicDescription": "Counts instruction memory translation table walks caused by a miss in the L2 TLB driven by a memory access. Partial translations that also cause a table walk are counted. This event does not count table walks caused by TLB maintenance operations." + }, + { + "ArchStdEvent": "L1D_TLB_REFILL_RD", + "PublicDescription": "Counts level 1 data TLB refills caused by memory read operations. If there are multiple misses in the TLB that are resolved by the refill, then this event only counts once. This event counts for refills caused by preload instructions or hardware prefetch accesses. This event counts regardless of whether the miss hits in L2 or results in a translation table walk. This event will not count if the translation table walk results in a fault (such as a translation or access fault), since there is no new translation created for the TLB. This event will not count on an access from an Address Translation (AT) instruction." + }, + { + "ArchStdEvent": "L1D_TLB_REFILL_WR", + "PublicDescription": "Counts level 1 data TLB refills caused by data side memory write operations. If there are multiple misses in the TLB that are resolved by the refill, then this event only counts once. This event counts for refills caused by preload instructions or hardware prefetch accesses. This event counts regardless of whether the miss hits in L2 or results in a translation table walk. 
This event will not count if the table walk results in a fault (such as a translation or access fault), since there is no new translation created for the TLB. This event will not count with an access from an Address Translation (AT) instruction." + }, + { + "ArchStdEvent": "L1D_TLB_RD", + "PublicDescription": "Counts level 1 data TLB accesses caused by memory read operations. This event counts whether the access hits or misses in the TLB. This event does not count TLB maintenance operations." + }, + { + "ArchStdEvent": "L1D_TLB_WR", + "PublicDescription": "Counts any L1 data side TLB accesses caused by memory write operations. This event counts whether the access hits or misses in the TLB. This event does not count TLB maintenance operations." + }, + { + "ArchStdEvent": "L2D_TLB_REFILL_RD", + "PublicDescription": "Counts level 2 TLB refills caused by memory read operations from both data and instruction fetch except for those caused by TLB maintenance operations or hardware prefetches." + }, + { + "ArchStdEvent": "L2D_TLB_REFILL_WR", + "PublicDescription": "Counts level 2 TLB refills caused by memory write operations from both data and instruction fetch except for those caused by TLB maintenance operations." + }, + { + "ArchStdEvent": "L2D_TLB_RD", + "PublicDescription": "Counts level 2 TLB accesses caused by memory read operations from both data and instruction fetch except for those caused by TLB maintenance operations." + }, + { + "ArchStdEvent": "L2D_TLB_WR", + "PublicDescription": "Counts level 2 TLB accesses caused by memory write operations from both data and instruction fetch except for those caused by TLB maintenance operations." + } +] diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/trace.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/trace.json index 3116135c59e2..98f6fabfebc7 100644 --- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/trace.json +++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/trace.json @@ -1,29 +1,38 @@ [ { - "ArchStdEvent": "TRB_WRAP" + "ArchStdEvent": "TRB_WRAP", + "PublicDescription": "This event is generated each time the current write pointer is wrapped to the base pointer." }, { - "ArchStdEvent": "TRCEXTOUT0" + "ArchStdEvent": "TRCEXTOUT0", + "PublicDescription": "This event is generated each time an event is signaled by ETE external event 0." }, { - "ArchStdEvent": "TRCEXTOUT1" + "ArchStdEvent": "TRCEXTOUT1", + "PublicDescription": "This event is generated each time an event is signaled by ETE external event 1." }, { - "ArchStdEvent": "TRCEXTOUT2" + "ArchStdEvent": "TRCEXTOUT2", + "PublicDescription": "This event is generated each time an event is signaled by ETE external event 2." }, { - "ArchStdEvent": "TRCEXTOUT3" + "ArchStdEvent": "TRCEXTOUT3", + "PublicDescription": "This event is generated each time an event is signaled by ETE external event 3." }, { - "ArchStdEvent": "CTI_TRIGOUT4" + "ArchStdEvent": "CTI_TRIGOUT4", + "PublicDescription": "This event is generated each time an event is signaled on CTI output trigger 4." }, { - "ArchStdEvent": "CTI_TRIGOUT5" + "ArchStdEvent": "CTI_TRIGOUT5", + "PublicDescription": "This event is generated each time an event is signaled on CTI output trigger 5." }, { - "ArchStdEvent": "CTI_TRIGOUT6" + "ArchStdEvent": "CTI_TRIGOUT6", + "PublicDescription": "This event is generated each time an event is signaled on CTI output trigger 6." 
}, { - "ArchStdEvent": "CTI_TRIGOUT7" + "ArchStdEvent": "CTI_TRIGOUT7", + "PublicDescription": "This event is generated each time an event is signaled on CTI output trigger 7." } ] diff --git a/tools/perf/pmu-events/arch/arm64/freescale/yitian710/sys/ali_drw.json b/tools/perf/pmu-events/arch/arm64/freescale/yitian710/sys/ali_drw.json new file mode 100644 index 000000000000..e21c469a8ef0 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/freescale/yitian710/sys/ali_drw.json @@ -0,0 +1,373 @@ +[ + { + "BriefDescription": "A Write or Read Op at HIF interface. The unit is 64B.", + "ConfigCode": "0x0", + "EventName": "hif_rd_or_wr", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Write Op at HIF interface. The unit is 64B.", + "ConfigCode": "0x1", + "EventName": "hif_wr", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Read Op at HIF interface. The unit is 64B.", + "ConfigCode": "0x2", + "EventName": "hif_rd", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Read-Modify-Write Op at HIF interface. The unit is 64B.", + "ConfigCode": "0x3", + "EventName": "hif_rmw", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A high priority Read at HIF interface. The unit is 64B.", + "ConfigCode": "0x4", + "EventName": "hif_hi_pri_rd", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A write data cycle at DFI interface (to DRAM).", + "ConfigCode": "0x7", + "EventName": "dfi_wr_data_cycles", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A read data cycle at DFI interface (to DRAM).", + "ConfigCode": "0x8", + "EventName": "dfi_rd_data_cycles", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A high priority read becomes critical.", + "ConfigCode": "0x9", + "EventName": "hpr_xact_when_critical", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A low priority read becomes critical.", + "ConfigCode": "0xA", + "EventName": "lpr_xact_when_critical", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A write becomes critical.", + "ConfigCode": "0xB", + "EventName": "wr_xact_when_critical", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "An Activate(ACT) command to DRAM.", + "ConfigCode": "0xC", + "EventName": "op_is_activate", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Read or Write CAS command to DRAM.", + "ConfigCode": "0xD", + "EventName": "op_is_rd_or_wr", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "An Activate(ACT) command for read to DRAM.", + "ConfigCode": "0xE", + "EventName": "op_is_rd_activate", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Read CAS command to DRAM.", + "ConfigCode": "0xF", + "EventName": "op_is_rd", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Write CAS command to DRAM.", + "ConfigCode": "0x10", + "EventName": "op_is_wr", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Masked Write command to DRAM.", + "ConfigCode": "0x11", + "EventName": "op_is_mwr", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Precharge(PRE) command to DRAM.", + "ConfigCode": "0x12", + "EventName": "op_is_precharge", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A 
Precharge(PRE) required by read or write.", + "ConfigCode": "0x13", + "EventName": "precharge_for_rdwr", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Precharge(PRE) required by other conditions.", + "ConfigCode": "0x14", + "EventName": "precharge_for_other", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A read-write turnaround.", + "ConfigCode": "0x15", + "EventName": "rdwr_transitions", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A write combine(merge) in write data buffer.", + "ConfigCode": "0x16", + "EventName": "write_combine", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Write-After-Read hazard.", + "ConfigCode": "0x17", + "EventName": "war_hazard", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Read-After-Write hazard.", + "ConfigCode": "0x18", + "EventName": "raw_hazard", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Write-After-Write hazard.", + "ConfigCode": "0x19", + "EventName": "waw_hazard", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "Rank0 enters self-refresh(SRE).", + "ConfigCode": "0x1A", + "EventName": "op_is_enter_selfref_rk0", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "Rank1 enters self-refresh(SRE).", + "ConfigCode": "0x1B", + "EventName": "op_is_enter_selfref_rk1", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "Rank2 enters self-refresh(SRE).", + "ConfigCode": "0x1C", + "EventName": "op_is_enter_selfref_rk2", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "Rank3 enters self-refresh(SRE).", + "ConfigCode": "0x1D", + "EventName": "op_is_enter_selfref_rk3", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "Rank0 enters power-down(PDE).", + "ConfigCode": "0x1E", + "EventName": "op_is_enter_powerdown_rk0", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "Rank1 enters power-down(PDE).", + "ConfigCode": "0x1F", + "EventName": "op_is_enter_powerdown_rk1", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "Rank2 enters power-down(PDE).", + "ConfigCode": "0x20", + "EventName": "op_is_enter_powerdown_rk2", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "Rank3 enters power-down(PDE).", + "ConfigCode": "0x21", + "EventName": "op_is_enter_powerdown_rk3", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A cycle that Rank0 stays in self-refresh mode.", + "ConfigCode": "0x26", + "EventName": "selfref_mode_rk0", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A cycle that Rank1 stays in self-refresh mode.", + "ConfigCode": "0x27", + "EventName": "selfref_mode_rk1", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A cycle that Rank2 stays in self-refresh mode.", + "ConfigCode": "0x28", + "EventName": "selfref_mode_rk2", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A cycle that Rank3 stays in self-refresh mode.", + "ConfigCode": "0x29", + "EventName": "selfref_mode_rk3", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "An auto-refresh(REF) command to DRAM.", + "ConfigCode": "0x2A", + "EventName": "op_is_refresh", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + 
"BriefDescription": "A critical auto-refresh(REF) command to DRAM.", + "ConfigCode": "0x2B", + "EventName": "op_is_crit_ref", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "An MRR or MRW command to DRAM.", + "ConfigCode": "0x2D", + "EventName": "op_is_load_mode", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A ZQCal command to DRAM.", + "ConfigCode": "0x2E", + "EventName": "op_is_zqcl", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "At least one entry in read queue reaches the visible window limit.", + "ConfigCode": "0x30", + "EventName": "visible_window_limit_reached_rd", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "At least one entry in write queue reaches the visible window limit.", + "ConfigCode": "0x31", + "EventName": "visible_window_limit_reached_wr", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A DQS Oscillator MPC command to DRAM.", + "ConfigCode": "0x34", + "EventName": "op_is_dqsosc_mpc", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A DQS Oscillator MRR command to DRAM.", + "ConfigCode": "0x35", + "EventName": "op_is_dqsosc_mrr", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A Temperature Compensated Refresh(TCR) MRR command to DRAM.", + "ConfigCode": "0x36", + "EventName": "op_is_tcr_mrr", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A ZQCal Start command to DRAM.", + "ConfigCode": "0x37", + "EventName": "op_is_zqstart", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A ZQCal Latch command to DRAM.", + "ConfigCode": "0x38", + "EventName": "op_is_zqlatch", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A packet at CHI TXREQ interface (request).", + "ConfigCode": "0x39", + "EventName": "chi_txreq", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A packet at CHI TXDAT interface (read data).", + "ConfigCode": "0x3A", + "EventName": "chi_txdat", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A packet at CHI RXDAT interface (write data).", + "ConfigCode": "0x3B", + "EventName": "chi_rxdat", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A packet at CHI RXRSP interface.", + "ConfigCode": "0x3C", + "EventName": "chi_rxrsp", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "A violation detected in TZC.", + "ConfigCode": "0x3D", + "EventName": "tsz_vio", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "BriefDescription": "The ddr cycles.", + "ConfigCode": "0x80", + "EventName": "ddr_cycles", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + } +] diff --git a/tools/perf/pmu-events/arch/arm64/freescale/yitian710/sys/metrics.json b/tools/perf/pmu-events/arch/arm64/freescale/yitian710/sys/metrics.json new file mode 100644 index 000000000000..bc865b374b6a --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/freescale/yitian710/sys/metrics.json @@ -0,0 +1,20 @@ +[ + { + "MetricName": "ddr_read_bandwidth.all", + "BriefDescription": "The ddr read bandwidth(MB/s).", + "MetricGroup": "ali_drw", + "MetricExpr": "hif_rd * 64 / 1e6 / duration_time", + "ScaleUnit": "1MB/s", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + }, + { + "MetricName": "ddr_write_bandwidth.all", + "BriefDescription": "The ddr write bandwidth(MB/s).", + "MetricGroup": 
"ali_drw", + "MetricExpr": "(hif_wr + hif_rmw) * 64 / 1e6 / duration_time", + "ScaleUnit": "1MB/s", + "Unit": "ali_drw", + "Compat": "ali_drw_pmu" + } +] diff --git a/tools/perf/pmu-events/arch/arm64/sbsa.json b/tools/perf/pmu-events/arch/arm64/sbsa.json index f90b338261ac..4eed79a28f6e 100644 --- a/tools/perf/pmu-events/arch/arm64/sbsa.json +++ b/tools/perf/pmu-events/arch/arm64/sbsa.json @@ -1,34 +1,34 @@ [ { - "MetricExpr": "stall_slot_frontend / (#slots * cpu_cycles)", - "BriefDescription": "Frontend bound L1 topdown metric", + "MetricExpr": "100 * (stall_slot_frontend / (#slots * cpu_cycles))", + "BriefDescription": "This metric is the percentage of total slots that were stalled due to resource constraints in the frontend of the processor.", "DefaultMetricgroupName": "TopdownL1", "MetricGroup": "Default;TopdownL1", "MetricName": "frontend_bound", - "ScaleUnit": "100%" + "ScaleUnit": "1percent of slots" }, { - "MetricExpr": "(1 - op_retired / op_spec) * (1 - stall_slot / (#slots * cpu_cycles))", - "BriefDescription": "Bad speculation L1 topdown metric", + "MetricExpr": "100 * ((1 - op_retired / op_spec) * (1 - stall_slot / (#slots * cpu_cycles)))", + "BriefDescription": "This metric is the percentage of total slots that executed operations and didn't retire due to a pipeline flush.\nThis indicates cycles that were utilized but inefficiently.", "DefaultMetricgroupName": "TopdownL1", "MetricGroup": "Default;TopdownL1", "MetricName": "bad_speculation", - "ScaleUnit": "100%" + "ScaleUnit": "1percent of slots" }, { - "MetricExpr": "(op_retired / op_spec) * (1 - stall_slot / (#slots * cpu_cycles))", - "BriefDescription": "Retiring L1 topdown metric", + "MetricExpr": "100 * ((op_retired / op_spec) * (1 - stall_slot / (#slots * cpu_cycles)))", + "BriefDescription": "This metric is the percentage of total slots that retired operations, which indicates cycles that were utilized efficiently.", "DefaultMetricgroupName": "TopdownL1", "MetricGroup": "Default;TopdownL1", "MetricName": "retiring", - "ScaleUnit": "100%" + "ScaleUnit": "1percent of slots" }, { - "MetricExpr": "stall_slot_backend / (#slots * cpu_cycles)", - "BriefDescription": "Backend Bound L1 topdown metric", + "MetricExpr": "100 * (stall_slot_backend / (#slots * cpu_cycles))", + "BriefDescription": "This metric is the percentage of total slots that were stalled due to resource constraints in the backend of the processor.", "DefaultMetricgroupName": "TopdownL1", "MetricGroup": "Default;TopdownL1", "MetricName": "backend_bound", - "ScaleUnit": "100%" + "ScaleUnit": "1percent of slots" } ] diff --git a/tools/perf/pmu-events/arch/powerpc/power10/cache.json b/tools/perf/pmu-events/arch/powerpc/power10/cache.json index 605be14f441c..839ae26945fb 100644 --- a/tools/perf/pmu-events/arch/powerpc/power10/cache.json +++ b/tools/perf/pmu-events/arch/powerpc/power10/cache.json @@ -1,53 +1,8 @@ [ { - "EventCode": "0x1003C", - "EventName": "PM_EXEC_STALL_DMISS_L2L3", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from either the local L2 or local L3." - }, - { - "EventCode": "0x1E054", - "EventName": "PM_EXEC_STALL_DMISS_L21_L31", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from another core's L2 or L3 on the same chip." 
- }, - { - "EventCode": "0x34054", - "EventName": "PM_EXEC_STALL_DMISS_L2L3_NOCONFLICT", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from the local L2 or local L3, without a dispatch conflict." - }, - { - "EventCode": "0x34056", - "EventName": "PM_EXEC_STALL_LOAD_FINISH", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was finishing a load after its data was reloaded from a data source beyond the local L1; cycles in which the LSU was processing an L1-hit; cycles in which the NTF instruction merged with another load in the LMQ; cycles in which the NTF instruction is waiting for a data reload for a load miss, but the data comes back with a non-NTF instruction." - }, - { - "EventCode": "0x3006C", - "EventName": "PM_RUN_CYC_SMT2_MODE", - "BriefDescription": "Cycles when this thread's run latch is set and the core is in SMT2 mode." - }, - { "EventCode": "0x300F4", "EventName": "PM_RUN_INST_CMPL_CONC", - "BriefDescription": "PowerPC instructions completed by this thread when all threads in the core had the run-latch set." - }, - { - "EventCode": "0x4C016", - "EventName": "PM_EXEC_STALL_DMISS_L2L3_CONFLICT", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from the local L2 or local L3, with a dispatch conflict." - }, - { - "EventCode": "0x4D014", - "EventName": "PM_EXEC_STALL_LOAD", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a load instruction executing in the Load Store Unit." - }, - { - "EventCode": "0x4D016", - "EventName": "PM_EXEC_STALL_PTESYNC", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a PTESYNC instruction executing in the Load Store Unit." - }, - { - "EventCode": "0x401EA", - "EventName": "PM_THRESH_EXC_128", - "BriefDescription": "Threshold counter exceeded a value of 128." + "BriefDescription": "PowerPC instruction completed by this thread when all threads in the core had the run-latch set." }, { "EventCode": "0x400F6", diff --git a/tools/perf/pmu-events/arch/powerpc/power10/floating_point.json b/tools/perf/pmu-events/arch/powerpc/power10/floating_point.json index 54acb55e2c8c..e816cd10c129 100644 --- a/tools/perf/pmu-events/arch/powerpc/power10/floating_point.json +++ b/tools/perf/pmu-events/arch/powerpc/power10/floating_point.json @@ -1,7 +1,67 @@ [ { - "EventCode": "0x4016E", - "EventName": "PM_THRESH_NOT_MET", - "BriefDescription": "Threshold counter did not meet threshold." + "EventCode": "0x100F4", + "EventName": "PM_FLOP_CMPL", + "BriefDescription": "Floating Point Operations Completed. Includes any type. It counts once for each 1, 2, 4 or 8 flop instruction. Use PM_1|2|4|8_FLOP_CMPL events to count flops." + }, + { + "EventCode": "0x45050", + "EventName": "PM_1FLOP_CMPL", + "BriefDescription": "One floating point instruction completed (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg)." + }, + { + "EventCode": "0x45052", + "EventName": "PM_4FLOP_CMPL", + "BriefDescription": "Four floating point instruction completed (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg)." + }, + { + "EventCode": "0x45054", + "EventName": "PM_FMA_CMPL", + "BriefDescription": "Two floating point instruction completed (FMA class of instructions: fmadd, fnmadd, fmsub, fnmsub). Scalar instructions only." 
+ }, + { + "EventCode": "0x45056", + "EventName": "PM_SCALAR_FLOP_CMPL", + "BriefDescription": "Scalar floating point instruction completed." + }, + { + "EventCode": "0x4505A", + "EventName": "PM_SP_FLOP_CMPL", + "BriefDescription": "Single Precision floating point instruction completed." + }, + { + "EventCode": "0x4505C", + "EventName": "PM_MATH_FLOP_CMPL", + "BriefDescription": "Math floating point instruction completed." + }, + { + "EventCode": "0x4D052", + "EventName": "PM_2FLOP_CMPL", + "BriefDescription": "Double Precision vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg completed." + }, + { + "EventCode": "0x4D054", + "EventName": "PM_8FLOP_CMPL", + "BriefDescription": "Four Double Precision vector instruction completed." + }, + { + "EventCode": "0x4D056", + "EventName": "PM_NON_FMA_FLOP_CMPL", + "BriefDescription": "Non FMA instruction completed." + }, + { + "EventCode": "0x4D058", + "EventName": "PM_VECTOR_FLOP_CMPL", + "BriefDescription": "Vector floating point instruction completed." + }, + { + "EventCode": "0x4D05A", + "EventName": "PM_NON_MATH_FLOP_CMPL", + "BriefDescription": "Non Math instruction completed." + }, + { + "EventCode": "0x4D05C", + "EventName": "PM_DPP_FLOP_CMPL", + "BriefDescription": "Double-Precision or Quad-Precision instruction completed." } ] diff --git a/tools/perf/pmu-events/arch/powerpc/power10/frontend.json b/tools/perf/pmu-events/arch/powerpc/power10/frontend.json index 558f9530f54e..5977f5e64212 100644 --- a/tools/perf/pmu-events/arch/powerpc/power10/frontend.json +++ b/tools/perf/pmu-events/arch/powerpc/power10/frontend.json @@ -1,43 +1,13 @@ [ { - "EventCode": "0x10004", - "EventName": "PM_EXEC_STALL_TRANSLATION", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline suffered a TLB miss or ERAT miss and waited for it to resolve." + "EventCode": "0x1D054", + "EventName": "PM_DTLB_HIT_2M", + "BriefDescription": "Data TLB hit (DERAT reload) page size 2M. Implies radix translation. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { - "EventCode": "0x10006", - "EventName": "PM_DISP_STALL_HELD_OTHER_CYC", - "BriefDescription": "Cycles in which the NTC instruction is held at dispatch for any other reason." - }, - { - "EventCode": "0x10010", - "EventName": "PM_PMC4_OVERFLOW", - "BriefDescription": "The event selected for PMC4 caused the event counter to overflow." - }, - { - "EventCode": "0x10020", - "EventName": "PM_PMC4_REWIND", - "BriefDescription": "The speculative event selected for PMC4 rewinds and the counter for PMC4 is not charged." - }, - { - "EventCode": "0x10038", - "EventName": "PM_DISP_STALL_TRANSLATION", - "BriefDescription": "Cycles when dispatch was stalled for this thread because the MMU was handling a translation miss." - }, - { - "EventCode": "0x1003A", - "EventName": "PM_DISP_STALL_BR_MPRED_IC_L2", - "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L2 after suffering a branch mispredict." - }, - { - "EventCode": "0x1D05E", - "EventName": "PM_DISP_STALL_HELD_HALT_CYC", - "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because of power management." - }, - { - "EventCode": "0x1E050", - "EventName": "PM_DISP_STALL_HELD_STF_MAPPER_CYC", - "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because the STF mapper/SRB was full. Includes GPR (count, link, tar), VSR, VMR, FPR." 
+ "EventCode": "0x1D058", + "EventName": "PM_ITLB_HIT_64K", + "BriefDescription": "Instruction TLB hit (IERAT reload) page size 64K. When MMCR1[17]=0 this event counts only for demand misses. When MMCR1[17]=1 this event includes demand misses and prefetches." }, { "EventCode": "0x1F054", @@ -45,21 +15,6 @@ "BriefDescription": "The PTE required by the instruction was resident in the TLB (data TLB access). When MMCR1[16]=0 this event counts only demand hits. When MMCR1[16]=1 this event includes demand and prefetch. Applies to both HPT and RPT." }, { - "EventCode": "0x10064", - "EventName": "PM_DISP_STALL_IC_L2", - "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L2." - }, - { - "EventCode": "0x101E8", - "EventName": "PM_THRESH_EXC_256", - "BriefDescription": "Threshold counter exceeded a count of 256." - }, - { - "EventCode": "0x101EC", - "EventName": "PM_THRESH_MET", - "BriefDescription": "Threshold exceeded." - }, - { "EventCode": "0x100F2", "EventName": "PM_1PLUS_PPC_CMPL", "BriefDescription": "Cycles in which at least one instruction is completed by this thread." @@ -67,57 +22,7 @@ { "EventCode": "0x100F6", "EventName": "PM_IERAT_MISS", - "BriefDescription": "IERAT Reloaded to satisfy an IERAT miss. All page sizes are counted by this event." - }, - { - "EventCode": "0x100F8", - "EventName": "PM_DISP_STALL_CYC", - "BriefDescription": "Cycles the ICT has no itags assigned to this thread (no instructions were dispatched during these cycles)." - }, - { - "EventCode": "0x20006", - "EventName": "PM_DISP_STALL_HELD_ISSQ_FULL_CYC", - "BriefDescription": "Cycles in which the NTC instruction is held at dispatch due to Issue queue full. Includes issue queue and branch queue." - }, - { - "EventCode": "0x20114", - "EventName": "PM_MRK_L2_RC_DISP", - "BriefDescription": "Marked instruction RC dispatched in L2." - }, - { - "EventCode": "0x2C010", - "EventName": "PM_EXEC_STALL_LSU", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in the Load Store Unit. This does not include simple fixed point instructions." - }, - { - "EventCode": "0x2C016", - "EventName": "PM_DISP_STALL_IERAT_ONLY_MISS", - "BriefDescription": "Cycles when dispatch was stalled while waiting to resolve an instruction ERAT miss." - }, - { - "EventCode": "0x2C01E", - "EventName": "PM_DISP_STALL_BR_MPRED_IC_L3", - "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L3 after suffering a branch mispredict." - }, - { - "EventCode": "0x2D01A", - "EventName": "PM_DISP_STALL_IC_MISS", - "BriefDescription": "Cycles when dispatch was stalled for this thread due to an Icache Miss." - }, - { - "EventCode": "0x2E018", - "EventName": "PM_DISP_STALL_FETCH", - "BriefDescription": "Cycles when dispatch was stalled for this thread because Fetch was being held." - }, - { - "EventCode": "0x2E01A", - "EventName": "PM_DISP_STALL_HELD_XVFC_MAPPER_CYC", - "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because the XVFC mapper/SRB was full." - }, - { - "EventCode": "0x2C142", - "EventName": "PM_MRK_XFER_FROM_SRC_PMC2", - "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[15:27]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." 
+ "BriefDescription": "IERAT Reloaded to satisfy an IERAT miss. All page sizes are counted by this event. This event only counts instruction demand access." }, { "EventCode": "0x24050", @@ -135,11 +40,6 @@ "BriefDescription": "Branch Taken instruction completed." }, { - "EventCode": "0x30004", - "EventName": "PM_DISP_STALL_FLUSH", - "BriefDescription": "Cycles when dispatch was stalled because of a flush that happened to an instruction(s) that was not yet NTC. PM_EXEC_STALL_NTC_FLUSH only includes instructions that were flushed after becoming NTC." - }, - { "EventCode": "0x3000A", "EventName": "PM_DISP_STALL_ITLB_MISS", "BriefDescription": "Cycles when dispatch was stalled while waiting to resolve an instruction TLB miss." @@ -150,59 +50,24 @@ "BriefDescription": "The instruction that was next to complete (oldest in the pipeline) did not complete because it suffered a flush." }, { - "EventCode": "0x30014", - "EventName": "PM_EXEC_STALL_STORE", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a store instruction executing in the Load Store Unit." - }, - { - "EventCode": "0x30018", - "EventName": "PM_DISP_STALL_HELD_SCOREBOARD_CYC", - "BriefDescription": "Cycles in which the NTC instruction is held at dispatch while waiting on the Scoreboard. This event combines VSCR and FPSCR together." - }, - { - "EventCode": "0x30026", - "EventName": "PM_EXEC_STALL_STORE_MISS", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a store whose cache line was not resident in the L1 and was waiting for allocation of the missing line into the L1." - }, - { - "EventCode": "0x3012A", - "EventName": "PM_MRK_L2_RC_DONE", - "BriefDescription": "L2 RC machine completed the transaction for the marked instruction." - }, - { "EventCode": "0x3F046", "EventName": "PM_ITLB_HIT_1G", "BriefDescription": "Instruction TLB hit (IERAT reload) page size 1G, which implies Radix Page Table translation is in use. When MMCR1[17]=0 this event counts only for demand misses. When MMCR1[17]=1 this event includes demand misses and prefetches." }, { - "EventCode": "0x34058", - "EventName": "PM_DISP_STALL_BR_MPRED_ICMISS", - "BriefDescription": "Cycles when dispatch was stalled after a mispredicted branch resulted in an instruction cache miss." - }, - { - "EventCode": "0x3D05C", - "EventName": "PM_DISP_STALL_HELD_RENAME_CYC", - "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because the mapper/SRB was full. Includes GPR (count, link, tar), VSR, VMR, FPR and XVFC." - }, - { - "EventCode": "0x3E052", - "EventName": "PM_DISP_STALL_IC_L3", - "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L3." + "EventCode": "0x3C05A", + "EventName": "PM_DTLB_HIT_64K", + "BriefDescription": "Data TLB hit (DERAT reload) page size 64K. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { "EventCode": "0x3E054", "EventName": "PM_LD_MISS_L1", - "BriefDescription": "Load Missed L1, counted at execution time (can be greater than loads finished). LMQ merges are not included in this count. i.e. if a load instruction misses on an address that is already allocated on the LMQ, this event will not increment for that load). Note that this count is per slice, so if a load spans multiple slices this event will increment multiple times for a single load." 
- }, - { - "EventCode": "0x301EA", - "EventName": "PM_THRESH_EXC_1024", - "BriefDescription": "Threshold counter exceeded a value of 1024." + "BriefDescription": "Load missed L1, counted at finish time. LMQ merges are not included in this count. i.e. if a load instruction misses on an address that is already allocated on the LMQ, this event will not increment for that load). Note that this count is per slice, so if a load spans multiple slices this event will increment multiple times for a single load." }, { "EventCode": "0x300FA", "EventName": "PM_INST_FROM_L3MISS", - "BriefDescription": "The processor's instruction cache was reloaded from a source other than the local core's L1, L2, or L3 due to a demand miss." + "BriefDescription": "The processor's instruction cache was reloaded from beyond the local core's L3 due to a demand miss." }, { "EventCode": "0x40006", @@ -210,38 +75,18 @@ "BriefDescription": "Cycles in which an instruction or group of instructions were cancelled after being issued. This event increments once per occurrence, regardless of how many instructions are included in the issue group." }, { - "EventCode": "0x40116", - "EventName": "PM_MRK_LARX_FIN", - "BriefDescription": "Marked load and reserve instruction (LARX) finished. LARX and STCX are instructions used to acquire a lock." - }, - { - "EventCode": "0x4C010", - "EventName": "PM_DISP_STALL_BR_MPRED_IC_L3MISS", - "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from sources beyond the local L3 after suffering a mispredicted branch." - }, - { - "EventCode": "0x4D01E", - "EventName": "PM_DISP_STALL_BR_MPRED", - "BriefDescription": "Cycles when dispatch was stalled for this thread due to a mispredicted branch." - }, - { - "EventCode": "0x4E010", - "EventName": "PM_DISP_STALL_IC_L3MISS", - "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from any source beyond the local L3." - }, - { - "EventCode": "0x4E01A", - "EventName": "PM_DISP_STALL_HELD_CYC", - "BriefDescription": "Cycles in which the NTC instruction is held at dispatch for any reason." + "EventCode": "0x44056", + "EventName": "PM_VECTOR_ST_CMPL", + "BriefDescription": "Vector store instruction completed." }, { - "EventCode": "0x4003C", - "EventName": "PM_DISP_STALL_HELD_SYNC_CYC", - "BriefDescription": "Cycles in which the NTC instruction is held at dispatch because of a synchronizing instruction that requires the ICT to be empty before dispatch." + "EventCode": "0x4E054", + "EventName": "PM_DTLB_HIT_1G", + "BriefDescription": "Data TLB hit (DERAT reload) page size 1G. Implies radix translation. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { - "EventCode": "0x44056", - "EventName": "PM_VECTOR_ST_CMPL", - "BriefDescription": "Vector store instructions completed." + "EventCode": "0x400FC", + "EventName": "PM_ITLB_MISS", + "BriefDescription": "Instruction TLB reload (after a miss), all page sizes. Includes only demand misses." } ] diff --git a/tools/perf/pmu-events/arch/powerpc/power10/marked.json b/tools/perf/pmu-events/arch/powerpc/power10/marked.json index 58b5dfe3a273..78f71a9eadfd 100644 --- a/tools/perf/pmu-events/arch/powerpc/power10/marked.json +++ b/tools/perf/pmu-events/arch/powerpc/power10/marked.json @@ -1,15 +1,35 @@ [ { - "EventCode": "0x1002C", - "EventName": "PM_LD_PREFETCH_CACHE_LINE_MISS", - "BriefDescription": "The L1 cache was reloaded with a line that fulfills a prefetch request." 
- }, - { "EventCode": "0x10132", "EventName": "PM_MRK_INST_ISSUED", "BriefDescription": "Marked instruction issued. Note that stores always get issued twice, the address gets issued to the LSU and the data gets issued to the VSU. Also, issues can sometimes get killed/cancelled and cause multiple sequential issues for the same instruction." }, { + "EventCode": "0x10134", + "EventName": "PM_MRK_ST_DONE_L2", + "BriefDescription": "Marked store completed in L2." + }, + { + "EventCode": "0x1C142", + "EventName": "PM_MRK_XFER_FROM_SRC_PMC1", + "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[0:12]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." + }, + { + "EventCode": "0x1C144", + "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC1", + "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[0:12]." + }, + { + "EventCode": "0x1D15C", + "EventName": "PM_MRK_DTLB_MISS_1G", + "BriefDescription": "Marked Data TLB reload (after a miss) page size 1G. Implies radix translation was used. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." + }, + { + "EventCode": "0x1F150", + "EventName": "PM_MRK_ST_L2_CYC", + "BriefDescription": "Cycles from L2 RC dispatch to L2 RC completion." + }, + { "EventCode": "0x101E0", "EventName": "PM_MRK_INST_DISP", "BriefDescription": "The thread has dispatched a randomly sampled marked instruction." @@ -20,14 +40,39 @@ "BriefDescription": "Marked Branch Taken instruction completed." }, { - "EventCode": "0x20112", - "EventName": "PM_MRK_NTF_FIN", - "BriefDescription": "The marked instruction became the oldest in the pipeline before it finished. It excludes instructions that finish at dispatch." + "EventCode": "0x101E4", + "EventName": "PM_MRK_L1_ICACHE_MISS", + "BriefDescription": "Marked instruction suffered an instruction cache miss." + }, + { + "EventCode": "0x101EA", + "EventName": "PM_MRK_L1_RELOAD_VALID", + "BriefDescription": "Marked demand reload." }, { - "EventCode": "0x2C01C", - "EventName": "PM_EXEC_STALL_DMISS_OFF_CHIP", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from a remote chip." + "EventCode": "0x20114", + "EventName": "PM_MRK_L2_RC_DISP", + "BriefDescription": "Marked instruction RC dispatched in L2." + }, + { + "EventCode": "0x2011C", + "EventName": "PM_MRK_NTF_CYC", + "BriefDescription": "Cycles in which the marked instruction is the oldest in the pipeline (next-to-finish or next-to-complete)." + }, + { + "EventCode": "0x20130", + "EventName": "PM_MRK_INST_DECODED", + "BriefDescription": "An instruction was marked at decode time. Random Instruction Sampling (RIS) only." + }, + { + "EventCode": "0x20132", + "EventName": "PM_MRK_DFU_ISSUE", + "BriefDescription": "The marked instruction was a decimal floating point operation issued to the VSU. Measured at issue time." + }, + { + "EventCode": "0x20134", + "EventName": "PM_MRK_FXU_ISSUE", + "BriefDescription": "The marked instruction was a fixed point operation issued to the VSU. Measured at issue time." }, { "EventCode": "0x20138", @@ -40,6 +85,16 @@ "BriefDescription": "Marked Branch instruction finished." 
}, { + "EventCode": "0x2013C", + "EventName": "PM_MRK_FX_LSU_FIN", + "BriefDescription": "The marked instruction was simple fixed point that was issued to the store unit. Measured at finish time." + }, + { + "EventCode": "0x2C142", + "EventName": "PM_MRK_XFER_FROM_SRC_PMC2", + "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[15:27]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." + }, + { "EventCode": "0x2C144", "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC2", "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[15:27]." @@ -60,19 +115,54 @@ "BriefDescription": "A marked branch completed. All branches are included." }, { - "EventCode": "0x200FD", - "EventName": "PM_L1_ICACHE_MISS", - "BriefDescription": "Demand iCache Miss." + "EventCode": "0x2D154", + "EventName": "PM_MRK_DERAT_MISS_64K", + "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 64K for a marked instruction. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." + }, + { + "EventCode": "0x201E0", + "EventName": "PM_MRK_DATA_FROM_MEMORY", + "BriefDescription": "The processor's data cache was reloaded from local, remote, or distant memory due to a demand miss for a marked load." }, { - "EventCode": "0x30130", - "EventName": "PM_MRK_INST_FIN", - "BriefDescription": "marked instruction finished. Excludes instructions that finish at dispatch. Note that stores always finish twice since the address gets issued to the LSU and the data gets issued to the VSU." + "EventCode": "0x201E2", + "EventName": "PM_MRK_LD_MISS_L1", + "BriefDescription": "Marked demand data load miss counted at finish time." + }, + { + "EventCode": "0x201E4", + "EventName": "PM_MRK_DATA_FROM_L3MISS", + "BriefDescription": "The processor's data cache was reloaded from a source other than the local core's L1, L2, or L3 due to a demand miss for a marked load." + }, + { + "EventCode": "0x3012A", + "EventName": "PM_MRK_L2_RC_DONE", + "BriefDescription": "L2 RC machine completed the transaction for the marked instruction." + }, + { + "EventCode": "0x3012E", + "EventName": "PM_MRK_DTLB_MISS_2M", + "BriefDescription": "Marked Data TLB reload (after a miss) page size 2M, which implies Radix Page Table translation was used. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." + }, + { + "EventCode": "0x30132", + "EventName": "PM_MRK_VSU_FIN", + "BriefDescription": "VSU marked instruction finished. Excludes simple FX instructions issued to the Store Unit." }, { "EventCode": "0x34146", "EventName": "PM_MRK_LD_CMPL", - "BriefDescription": "Marked loads completed." + "BriefDescription": "Marked load instruction completed." + }, + { + "EventCode": "0x3C142", + "EventName": "PM_MRK_XFER_FROM_SRC_PMC3", + "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[30:42]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." 
+ }, + { + "EventCode": "0x3C144", + "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC3", + "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[30:42]." }, { "EventCode": "0x3E158", @@ -82,12 +172,22 @@ { "EventCode": "0x3E15A", "EventName": "PM_MRK_ST_FIN", - "BriefDescription": "The marked instruction was a store of any kind." + "BriefDescription": "Marked store instruction finished." + }, + { + "EventCode": "0x3F150", + "EventName": "PM_MRK_ST_DRAIN_CYC", + "BriefDescription": "Cycles in which the marked store drained from the core to the L2." }, { - "EventCode": "0x30068", - "EventName": "PM_L1_ICACHE_RELOADED_PREF", - "BriefDescription": "Counts all Icache prefetch reloads ( includes demand turned into prefetch)." + "EventCode": "0x30162", + "EventName": "PM_MRK_ISSUE_DEPENDENT_LOAD", + "BriefDescription": "The marked instruction was dependent on a load. It is eligible for issue kill." + }, + { + "EventCode": "0x301E2", + "EventName": "PM_MRK_ST_CMPL", + "BriefDescription": "Marked store completed and sent to nest. Note that this count excludes cache-inhibited stores." }, { "EventCode": "0x301E4", @@ -95,48 +195,78 @@ "BriefDescription": "Marked Branch Mispredicted. Includes direction and target." }, { - "EventCode": "0x300F6", - "EventName": "PM_LD_DEMAND_MISS_L1", - "BriefDescription": "The L1 cache was reloaded with a line that fulfills a demand miss request. Counted at reload time, before finish." + "EventCode": "0x301E6", + "EventName": "PM_MRK_DERAT_MISS", + "BriefDescription": "Marked Erat Miss (Data TLB Access) All page sizes. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." + }, + { + "EventCode": "0x4010E", + "EventName": "PM_MRK_TLBIE_FIN", + "BriefDescription": "Marked TLBIE instruction finished. Includes TLBIE and TLBIEL instructions." + }, + { + "EventCode": "0x40116", + "EventName": "PM_MRK_LARX_FIN", + "BriefDescription": "Marked load and reserve instruction (LARX) finished. LARX and STCX are instructions used to acquire a lock." + }, + { + "EventCode": "0x40132", + "EventName": "PM_MRK_LSU_FIN", + "BriefDescription": "LSU marked instruction finish." + }, + { + "EventCode": "0x44146", + "EventName": "PM_MRK_STCX_CORE_CYC", + "BriefDescription": "Cycles spent in the core portion of a marked STCX instruction. It starts counting when the instruction is decoded and stops counting when it drains into the L2." }, { - "EventCode": "0x300FE", - "EventName": "PM_DATA_FROM_L3MISS", - "BriefDescription": "The processor's data cache was reloaded from a source other than the local core's L1, L2, or L3 due to a demand miss." + "EventCode": "0x4C142", + "EventName": "PM_MRK_XFER_FROM_SRC_PMC4", + "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[45:57]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." }, { - "EventCode": "0x40012", - "EventName": "PM_L1_ICACHE_RELOADED_ALL", - "BriefDescription": "Counts all Icache reloads includes demand, prefetch, prefetch turned into demand and demand turned into prefetch." + "EventCode": "0x4C144", + "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC4", + "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[45:57]." 
}, { - "EventCode": "0x40134", - "EventName": "PM_MRK_INST_TIMEO", - "BriefDescription": "Marked instruction finish timeout (instruction was lost)." + "EventCode": "0x4C15C", + "EventName": "PM_MRK_DERAT_MISS_1G", + "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 1G for a marked instruction. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { - "EventCode": "0x4505A", - "EventName": "PM_SP_FLOP_CMPL", - "BriefDescription": "Single Precision floating point instructions completed." + "EventCode": "0x4C15E", + "EventName": "PM_MRK_DTLB_MISS_64K", + "BriefDescription": "Marked Data TLB reload (after a miss) page size 64K. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { - "EventCode": "0x4D058", - "EventName": "PM_VECTOR_FLOP_CMPL", - "BriefDescription": "Vector floating point instructions completed." + "EventCode": "0x4E15E", + "EventName": "PM_MRK_INST_FLUSHED", + "BriefDescription": "The marked instruction was flushed." }, { - "EventCode": "0x4D05A", - "EventName": "PM_NON_MATH_FLOP_CMPL", - "BriefDescription": "Non Math instructions completed." + "EventCode": "0x40164", + "EventName": "PM_MRK_DERAT_MISS_2M", + "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 2M for a marked instruction. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { "EventCode": "0x401E0", "EventName": "PM_MRK_INST_CMPL", - "BriefDescription": "marked instruction completed." + "BriefDescription": "Marked instruction completed." + }, + { + "EventCode": "0x401E4", + "EventName": "PM_MRK_DTLB_MISS", + "BriefDescription": "The DPTEG required for the marked load/store instruction in execution was missing from the TLB. This event only counts for demand misses." + }, + { + "EventCode": "0x401E6", + "EventName": "PM_MRK_INST_FROM_L3MISS", + "BriefDescription": "The processor's instruction cache was reloaded from beyond the local core's L3 due to a demand miss for a marked instruction." }, { - "EventCode": "0x400FE", - "EventName": "PM_DATA_FROM_MEMORY", - "BriefDescription": "The processor's data cache was reloaded from local, remote, or distant memory due to a demand miss." + "EventCode": "0x401E8", + "EventName": "PM_MRK_DATA_FROM_L2MISS", + "BriefDescription": "The processor's L1 data cache was reloaded from a source beyond the local core's L2 due to a demand miss for a marked instruction." } ] diff --git a/tools/perf/pmu-events/arch/powerpc/power10/memory.json b/tools/perf/pmu-events/arch/powerpc/power10/memory.json index 843b51f531e9..885262957beb 100644 --- a/tools/perf/pmu-events/arch/powerpc/power10/memory.json +++ b/tools/perf/pmu-events/arch/powerpc/power10/memory.json @@ -1,25 +1,10 @@ [ { - "EventCode": "0x1000A", - "EventName": "PM_PMC3_REWIND", - "BriefDescription": "The speculative event selected for PMC3 rewinds and the counter for PMC3 is not charged." - }, - { "EventCode": "0x1C040", "EventName": "PM_XFER_FROM_SRC_PMC1", "BriefDescription": "The processor's L1 data cache was reloaded from the source specified in MMCR3[0:12]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." 
}, { - "EventCode": "0x1C142", - "EventName": "PM_MRK_XFER_FROM_SRC_PMC1", - "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[0:12]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." - }, - { - "EventCode": "0x1C144", - "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC1", - "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[0:12]." - }, - { "EventCode": "0x1C056", "EventName": "PM_DERAT_MISS_4K", "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 4K. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." @@ -35,24 +20,9 @@ "BriefDescription": "Data TLB reload (after a miss) page size 2M. Implies radix translation was used. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { - "EventCode": "0x1E056", - "EventName": "PM_EXEC_STALL_STORE_PIPE", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in the store unit. This does not include cycles spent handling store misses, PTESYNC instructions or TLBIE instructions." - }, - { - "EventCode": "0x1F150", - "EventName": "PM_MRK_ST_L2_CYC", - "BriefDescription": "Cycles from L2 RC dispatch to L2 RC completion." - }, - { "EventCode": "0x10062", "EventName": "PM_LD_L3MISS_PEND_CYC", - "BriefDescription": "Cycles L3 miss was pending for this thread." - }, - { - "EventCode": "0x20010", - "EventName": "PM_PMC1_OVERFLOW", - "BriefDescription": "The event selected for PMC1 caused the event counter to overflow." + "BriefDescription": "Cycles in which an L3 miss was pending for this thread." }, { "EventCode": "0x2001A", @@ -80,9 +50,9 @@ "BriefDescription": "Data TLB reload (after a miss) page size 4K. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { - "EventCode": "0x2D154", - "EventName": "PM_MRK_DERAT_MISS_64K", - "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 64K for a marked instruction. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." + "EventCode": "0x2C05A", + "EventName": "PM_DERAT_MISS_1G", + "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 1G. Implies radix translation. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { "EventCode": "0x200F6", @@ -90,9 +60,9 @@ "BriefDescription": "DERAT Reloaded to satisfy a DERAT miss. All page sizes are counted by this event. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { - "EventCode": "0x30016", - "EventName": "PM_EXEC_STALL_DERAT_DTLB_MISS", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline suffered a TLB miss and waited for it resolve." + "EventCode": "0x34044", + "EventName": "PM_DERAT_MISS_PREF", + "BriefDescription": "DERAT miss (TLB access) while servicing a data prefetch." 
}, { "EventCode": "0x3C040", @@ -100,16 +70,6 @@ "BriefDescription": "The processor's L1 data cache was reloaded from the source specified in MMCR3[30:42]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." }, { - "EventCode": "0x3C142", - "EventName": "PM_MRK_XFER_FROM_SRC_PMC3", - "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[30:42]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." - }, - { - "EventCode": "0x3C144", - "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC3", - "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[30:42]." - }, - { "EventCode": "0x3C054", "EventName": "PM_DERAT_MISS_16M", "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 16M. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." @@ -125,24 +85,14 @@ "BriefDescription": "Load and reserve instruction (LARX) finished. LARX and STCX are instructions used to acquire a lock." }, { - "EventCode": "0x301E2", - "EventName": "PM_MRK_ST_CMPL", - "BriefDescription": "Marked store completed and sent to nest. Note that this count excludes cache-inhibited stores." - }, - { "EventCode": "0x300FC", "EventName": "PM_DTLB_MISS", - "BriefDescription": "The DPTEG required for the load/store instruction in execution was missing from the TLB. It includes pages of all sizes for demand and prefetch activity." - }, - { - "EventCode": "0x4D02C", - "EventName": "PM_PMC1_REWIND", - "BriefDescription": "The speculative event selected for PMC1 rewinds and the counter for PMC1 is not charged." + "BriefDescription": "The DPTEG required for the load/store instruction in execution was missing from the TLB. This event only counts for demand misses." }, { "EventCode": "0x4003E", "EventName": "PM_LD_CMPL", - "BriefDescription": "Loads completed." + "BriefDescription": "Load instruction completed." }, { "EventCode": "0x4C040", @@ -150,16 +100,6 @@ "BriefDescription": "The processor's L1 data cache was reloaded from the source specified in MMCR3[45:57]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." }, { - "EventCode": "0x4C142", - "EventName": "PM_MRK_XFER_FROM_SRC_PMC4", - "BriefDescription": "For a marked data transfer instruction, the processor's L1 data cache was reloaded from the source specified in MMCR3[45:57]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads." - }, - { - "EventCode": "0x4C144", - "EventName": "PM_MRK_XFER_FROM_SRC_CYC_PMC4", - "BriefDescription": "Cycles taken for a marked demand miss to reload a line from the source specified in MMCR3[45:57]." - }, - { "EventCode": "0x4C056", "EventName": "PM_DTLB_MISS_16M", "BriefDescription": "Data TLB reload (after a miss) page size 16M. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." 
@@ -168,20 +108,5 @@ "EventCode": "0x4C05A", "EventName": "PM_DTLB_MISS_1G", "BriefDescription": "Data TLB reload (after a miss) page size 1G. Implies radix translation was used. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." - }, - { - "EventCode": "0x4C15E", - "EventName": "PM_MRK_DTLB_MISS_64K", - "BriefDescription": "Marked Data TLB reload (after a miss) page size 64K. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." - }, - { - "EventCode": "0x4D056", - "EventName": "PM_NON_FMA_FLOP_CMPL", - "BriefDescription": "Non FMA instruction completed." - }, - { - "EventCode": "0x40164", - "EventName": "PM_MRK_DERAT_MISS_2M", - "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 2M for a marked instruction. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." } ] diff --git a/tools/perf/pmu-events/arch/powerpc/power10/metrics.json b/tools/perf/pmu-events/arch/powerpc/power10/metrics.json index 6f53583a0c62..4d66b75c6ad5 100644 --- a/tools/perf/pmu-events/arch/powerpc/power10/metrics.json +++ b/tools/perf/pmu-events/arch/powerpc/power10/metrics.json @@ -16,133 +16,139 @@ "BriefDescription": "Average cycles per completed instruction when dispatch was stalled for any reason", "MetricExpr": "PM_DISP_STALL_CYC / PM_RUN_INST_CMPL", "MetricGroup": "CPI;CPI_STALL_RATIO", - "MetricName": "DISPATCHED_CPI" + "MetricName": "DISPATCH_STALL_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled because there was a flush", "MetricExpr": "PM_DISP_STALL_FLUSH / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_FLUSH_CPI" + "MetricName": "DISPATCH_STALL_FLUSH_CPI" + }, + { + "BriefDescription": "Average cycles per completed instruction when dispatch was stalled because Fetch was being held, so there was nothing in the pipeline for this thread", + "MetricExpr": "PM_DISP_STALL_FETCH / PM_RUN_INST_CMPL", + "MetricGroup": "CPI", + "MetricName": "DISPATCH_STALL_FETCH_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled because the MMU was handling a translation miss", "MetricExpr": "PM_DISP_STALL_TRANSLATION / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_TRANSLATION_CPI" + "MetricName": "DISPATCH_STALL_TRANSLATION_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled waiting to resolve an instruction ERAT miss", "MetricExpr": "PM_DISP_STALL_IERAT_ONLY_MISS / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_IERAT_ONLY_MISS_CPI" + "MetricName": "DISPATCH_STALL_IERAT_ONLY_MISS_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled waiting to resolve an instruction TLB miss", "MetricExpr": "PM_DISP_STALL_ITLB_MISS / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_ITLB_MISS_CPI" + "MetricName": "DISPATCH_STALL_ITLB_MISS_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled due to an icache miss", "MetricExpr": "PM_DISP_STALL_IC_MISS / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_IC_MISS_CPI" + "MetricName": "DISPATCH_STALL_IC_MISS_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled while the instruction 
was fetched from the local L2", "MetricExpr": "PM_DISP_STALL_IC_L2 / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_IC_L2_CPI" + "MetricName": "DISPATCH_STALL_IC_L2_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled while the instruction was fetched from the local L3", "MetricExpr": "PM_DISP_STALL_IC_L3 / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_IC_L3_CPI" + "MetricName": "DISPATCH_STALL_IC_L3_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled while the instruction was fetched from any source beyond the local L3", "MetricExpr": "PM_DISP_STALL_IC_L3MISS / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_IC_L3MISS_CPI" + "MetricName": "DISPATCH_STALL_IC_L3MISS_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled due to an icache miss after a branch mispredict", "MetricExpr": "PM_DISP_STALL_BR_MPRED_ICMISS / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_BR_MPRED_ICMISS_CPI" + "MetricName": "DISPATCH_STALL_BR_MPRED_ICMISS_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled while instruction was fetched from the local L2 after suffering a branch mispredict", "MetricExpr": "PM_DISP_STALL_BR_MPRED_IC_L2 / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_BR_MPRED_IC_L2_CPI" + "MetricName": "DISPATCH_STALL_BR_MPRED_IC_L2_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled while instruction was fetched from the local L3 after suffering a branch mispredict", "MetricExpr": "PM_DISP_STALL_BR_MPRED_IC_L3 / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_BR_MPRED_IC_L3_CPI" + "MetricName": "DISPATCH_STALL_BR_MPRED_IC_L3_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled while instruction was fetched from any source beyond the local L3 after suffering a branch mispredict", "MetricExpr": "PM_DISP_STALL_BR_MPRED_IC_L3MISS / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_BR_MPRED_IC_L3MISS_CPI" + "MetricName": "DISPATCH_STALL_BR_MPRED_IC_L3MISS_CPI" }, { "BriefDescription": "Average cycles per completed instruction when dispatch was stalled due to a branch mispredict", "MetricExpr": "PM_DISP_STALL_BR_MPRED / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_BR_MPRED_CPI" + "MetricName": "DISPATCH_STALL_BR_MPRED_CPI" }, { "BriefDescription": "Average cycles per completed instruction when the NTC instruction was held at dispatch for any reason", "MetricExpr": "PM_DISP_STALL_HELD_CYC / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_HELD_CPI" + "MetricName": "DISPATCH_STALL_HELD_CPI" }, { "BriefDescription": "Average cycles per completed instruction when the NTC instruction was held at dispatch because of a synchronizing instruction that requires the ICT to be empty before dispatch", "MetricExpr": "PM_DISP_STALL_HELD_SYNC_CYC / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISP_HELD_STALL_SYNC_CPI" + "MetricName": "DISPATCH_STALL_HELD_SYNC_CPI" }, { "BriefDescription": "Average cycles per completed instruction when the NTC instruction was held at dispatch while waiting on the scoreboard", "MetricExpr": "PM_DISP_STALL_HELD_SCOREBOARD_CYC / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISP_HELD_STALL_SCOREBOARD_CPI" + 
"MetricName": "DISPATCH_STALL_HELD_SCOREBOARD_CPI" }, { "BriefDescription": "Average cycles per completed instruction when the NTC instruction was held at dispatch due to issue queue full", "MetricExpr": "PM_DISP_STALL_HELD_ISSQ_FULL_CYC / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISP_HELD_STALL_ISSQ_FULL_CPI" + "MetricName": "DISPATCH_STALL_HELD_ISSQ_FULL_CPI" }, { "BriefDescription": "Average cycles per completed instruction when the NTC instruction was held at dispatch because the mapper/SRB was full", "MetricExpr": "PM_DISP_STALL_HELD_RENAME_CYC / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_HELD_RENAME_CPI" + "MetricName": "DISPATCH_STALL_HELD_RENAME_CPI" }, { "BriefDescription": "Average cycles per completed instruction when the NTC instruction was held at dispatch because the STF mapper/SRB was full", "MetricExpr": "PM_DISP_STALL_HELD_STF_MAPPER_CYC / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_HELD_STF_MAPPER_CPI" + "MetricName": "DISPATCH_STALL_HELD_STF_MAPPER_CPI" }, { "BriefDescription": "Average cycles per completed instruction when the NTC instruction was held at dispatch because the XVFC mapper/SRB was full", "MetricExpr": "PM_DISP_STALL_HELD_XVFC_MAPPER_CYC / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_HELD_XVFC_MAPPER_CPI" + "MetricName": "DISPATCH_STALL_HELD_XVFC_MAPPER_CPI" }, { "BriefDescription": "Average cycles per completed instruction when the NTC instruction was held at dispatch for any other reason", "MetricExpr": "PM_DISP_STALL_HELD_OTHER_CYC / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_HELD_OTHER_CPI" + "MetricName": "DISPATCH_STALL_HELD_OTHER_CPI" }, { "BriefDescription": "Average cycles per completed instruction when the NTC instruction has been dispatched but not issued for any reason", @@ -352,13 +358,13 @@ "BriefDescription": "Average cycles per completed instruction when dispatch was stalled because fetch was being held, so there was nothing in the pipeline for this thread", "MetricExpr": "PM_DISP_STALL_FETCH / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_FETCH_CPI" + "MetricName": "DISPATCH_STALL_FETCH_CPI" }, { "BriefDescription": "Average cycles per completed instruction when the NTC instruction was held at dispatch because of power management", "MetricExpr": "PM_DISP_STALL_HELD_HALT_CYC / PM_RUN_INST_CMPL", "MetricGroup": "CPI", - "MetricName": "DISPATCHED_HELD_HALT_CPI" + "MetricName": "DISPATCH_STALL_HELD_HALT_CPI" }, { "BriefDescription": "Percentage of flushes per completed instruction", @@ -395,6 +401,13 @@ "ScaleUnit": "1%" }, { + "BriefDescription": "Percentage of completed instructions that were stores that missed the L1", + "MetricExpr": "PM_ST_MISS_L1 * 100 / PM_RUN_INST_CMPL", + "MetricGroup": "Others", + "MetricName": "L1_ST_MISS_RATE", + "ScaleUnit": "1%" + }, + { "BriefDescription": "Percentage of completed instructions when the DPTEG required for the load/store instruction in execution was missing from the TLB", "MetricExpr": "PM_DTLB_MISS / PM_RUN_INST_CMPL * 100", "MetricGroup": "Others", @@ -454,12 +467,6 @@ "MetricName": "LOADS_PER_INST" }, { - "BriefDescription": "Average number of finished stores per completed instruction", - "MetricExpr": "PM_ST_FIN / PM_RUN_INST_CMPL", - "MetricGroup": "General", - "MetricName": "STORES_PER_INST" - }, - { "BriefDescription": "Percentage of demand loads that reloaded from beyond the L2 per completed instruction", "MetricExpr": "PM_DATA_FROM_L2MISS / 
PM_RUN_INST_CMPL * 100", "MetricGroup": "dL1_Reloads", @@ -474,6 +481,13 @@ "ScaleUnit": "1%" }, { + "BriefDescription": "Percentage of ITLB misses per completed run instruction", + "MetricExpr": "PM_ITLB_MISS / PM_RUN_INST_CMPL * 100", + "MetricGroup": "General", + "MetricName": "ITLB_MISS_RATE", + "ScaleUnit": "1%" + }, + { "BriefDescription": "Percentage of DERAT misses with 4k page size per completed instruction", "MetricExpr": "PM_DERAT_MISS_4K / PM_RUN_INST_CMPL * 100", "MetricGroup": "Translation", @@ -566,7 +580,7 @@ "BriefDescription": "Average number of STCX instructions finished per completed instruction", "MetricExpr": "PM_STCX_FIN / PM_RUN_INST_CMPL", "MetricGroup": "General", - "MetricName": "STXC_PER_INST" + "MetricName": "STCX_PER_INST" }, { "BriefDescription": "Average number of LARX instructions finished per completed instruction", "MetricExpr": "PM_LARX_FIN / PM_RUN_INST_CMPL", "MetricGroup": "General", @@ -629,6 +643,13 @@ "ScaleUnit": "1%" }, { + "BriefDescription": "Percentage of DERAT misses with 1G page size per completed run instruction", + "MetricExpr": "PM_DERAT_MISS_1G * 100 / PM_RUN_INST_CMPL", + "MetricGroup": "Translation", + "MetricName": "DERAT_1G_MISS_RATE", + "ScaleUnit": "1%" + }, + { "BriefDescription": "DERAT miss ratio for 4K page size", "MetricExpr": "PM_DERAT_MISS_4K / PM_DERAT_MISS", "MetricGroup": "Translation", @@ -647,6 +668,12 @@ "MetricName": "DERAT_16M_MISS_RATIO" }, { + "BriefDescription": "DERAT miss ratio for 1G page size", + "MetricExpr": "PM_DERAT_MISS_1G / PM_DERAT_MISS", + "MetricGroup": "Translation", + "MetricName": "DERAT_1G_MISS_RATIO" + }, + { "BriefDescription": "DERAT miss ratio for 64K page size", "MetricExpr": "PM_DERAT_MISS_64K / PM_DERAT_MISS", "MetricGroup": "Translation", diff --git a/tools/perf/pmu-events/arch/powerpc/power10/others.json b/tools/perf/pmu-events/arch/powerpc/power10/others.json index a771e4b6bec5..0e21e7ba1959 100644 --- a/tools/perf/pmu-events/arch/powerpc/power10/others.json +++ b/tools/perf/pmu-events/arch/powerpc/power10/others.json @@ -1,28 +1,13 @@ [ { - "EventCode": "0x10016", - "EventName": "PM_VSU0_ISSUE", - "BriefDescription": "VSU instructions issued to VSU pipe 0." - }, - { - "EventCode": "0x1001C", - "EventName": "PM_ULTRAVISOR_INST_CMPL", - "BriefDescription": "PowerPC instructions that completed while the thread was in ultravisor state." - }, - { - "EventCode": "0x100F0", - "EventName": "PM_CYC", - "BriefDescription": "Processor cycles." - }, - { - "EventCode": "0x10134", - "EventName": "PM_MRK_ST_DONE_L2", - "BriefDescription": "Marked stores completed in L2 (RC machine done)." + "EventCode": "0x1002C", + "EventName": "PM_LD_PREFETCH_CACHE_LINE_MISS", + "BriefDescription": "The L1 cache was reloaded with a line that fulfills a prefetch request." }, { "EventCode": "0x1505E", "EventName": "PM_LD_HIT_L1", - "BriefDescription": "Loads that finished without experiencing an L1 miss." + "BriefDescription": "Load finished without experiencing an L1 miss." }, { "EventCode": "0x1F056", @@ -30,9 +15,9 @@ "BriefDescription": "Cycles in which Superslice 0 dispatches either 1 or 2 instructions." }, { - "EventCode": "0x1F15C", - "EventName": "PM_MRK_STCX_L2_CYC", - "BriefDescription": "Cycles spent in the nest portion of a marked Stcx instruction. It starts counting when the operation starts to drain to the L2 and it stops counting when the instruction retires from the Instruction Completion Table (ICT) in the Instruction Sequencing Unit (ISU)."
+ "EventCode": "0x1F05A", + "EventName": "PM_DISP_HELD_SYNC_CYC", + "BriefDescription": "Cycles dispatch is held because of a synchronizing instruction that requires the ICT to be empty before dispatch." }, { "EventCode": "0x10066", @@ -40,39 +25,14 @@ "BriefDescription": "Cycles in which the thread is in Adjunct state. MSR[S HV PR] bits = 011." }, { - "EventCode": "0x101E4", - "EventName": "PM_MRK_L1_ICACHE_MISS", - "BriefDescription": "Marked Instruction suffered an icache Miss." - }, - { - "EventCode": "0x101EA", - "EventName": "PM_MRK_L1_RELOAD_VALID", - "BriefDescription": "Marked demand reload." - }, - { - "EventCode": "0x100F4", - "EventName": "PM_FLOP_CMPL", - "BriefDescription": "Floating Point Operations Completed. Includes any type. It counts once for each 1, 2, 4 or 8 flop instruction. Use PM_1|2|4|8_FLOP_CMPL events to count flops." - }, - { - "EventCode": "0x100FA", - "EventName": "PM_RUN_LATCH_ANY_THREAD_CYC", - "BriefDescription": "Cycles when at least one thread has the run latch set." - }, - { "EventCode": "0x100FC", "EventName": "PM_LD_REF_L1", "BriefDescription": "All L1 D cache load references counted at finish, gated by reject. In P9 and earlier this event counted only cacheable loads but in P10 both cacheable and non-cacheable loads are included." }, { - "EventCode": "0x2000C", - "EventName": "PM_RUN_LATCH_ALL_THREADS_CYC", - "BriefDescription": "Cycles when the run latch is set for all threads." - }, - { "EventCode": "0x2E010", "EventName": "PM_ADJUNCT_INST_CMPL", - "BriefDescription": "PowerPC instructions that completed while the thread is in Adjunct state." + "BriefDescription": "PowerPC instruction completed while the thread was in Adjunct state." }, { "EventCode": "0x2E014", @@ -80,26 +40,6 @@ "BriefDescription": "Conditional store instruction (STCX) finished. LARX and STCX are instructions used to acquire a lock." }, { - "EventCode": "0x20130", - "EventName": "PM_MRK_INST_DECODED", - "BriefDescription": "An instruction was marked at decode time. Random Instruction Sampling (RIS) only." - }, - { - "EventCode": "0x20132", - "EventName": "PM_MRK_DFU_ISSUE", - "BriefDescription": "The marked instruction was a decimal floating point operation issued to the VSU. Measured at issue time." - }, - { - "EventCode": "0x20134", - "EventName": "PM_MRK_FXU_ISSUE", - "BriefDescription": "The marked instruction was a fixed point operation issued to the VSU. Measured at issue time." - }, - { - "EventCode": "0x2505C", - "EventName": "PM_VSU_ISSUE", - "BriefDescription": "At least one VSU instruction was issued to one of the VSU pipes. Up to 4 per cycle. Includes fixed point operations." - }, - { "EventCode": "0x2F054", "EventName": "PM_DISP_SS1_2_INSTR_CYC", "BriefDescription": "Cycles in which Superslice 1 dispatches either 1 or 2 instructions." @@ -110,39 +50,14 @@ "BriefDescription": "Cycles in which Superslice 1 dispatches either 3 or 4 instructions." }, { - "EventCode": "0x2006C", - "EventName": "PM_RUN_CYC_SMT4_MODE", - "BriefDescription": "Cycles when this thread's run latch is set and the core is in SMT4 mode." - }, - { - "EventCode": "0x201E0", - "EventName": "PM_MRK_DATA_FROM_MEMORY", - "BriefDescription": "The processor's data cache was reloaded from local, remote, or distant memory due to a demand miss for a marked load." - }, - { - "EventCode": "0x201E4", - "EventName": "PM_MRK_DATA_FROM_L3MISS", - "BriefDescription": "The processor's data cache was reloaded from a source other than the local core's L1, L2, or L3 due to a demand miss for a marked load." 
- }, - { - "EventCode": "0x201E8", - "EventName": "PM_THRESH_EXC_512", - "BriefDescription": "Threshold counter exceeded a value of 512." - }, - { "EventCode": "0x200F2", "EventName": "PM_INST_DISP", - "BriefDescription": "PowerPC instructions dispatched." - }, - { - "EventCode": "0x30132", - "EventName": "PM_MRK_VSU_FIN", - "BriefDescription": "VSU marked instructions finished. Excludes simple FX instructions issued to the Store Unit." + "BriefDescription": "PowerPC instruction dispatched." }, { - "EventCode": "0x30038", - "EventName": "PM_EXEC_STALL_DMISS_LMEM", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from the local memory, local OpenCapp cache, or local OpenCapp memory." + "EventCode": "0x200FD", + "EventName": "PM_L1_ICACHE_MISS", + "BriefDescription": "Demand instruction cache miss." }, { "EventCode": "0x3F04A", @@ -152,12 +67,7 @@ { "EventCode": "0x3405A", "EventName": "PM_PRIVILEGED_INST_CMPL", - "BriefDescription": "PowerPC Instructions that completed while the thread is in Privileged state." - }, - { - "EventCode": "0x3F150", - "EventName": "PM_MRK_ST_DRAIN_CYC", - "BriefDescription": "cycles to drain st from core to L2." + "BriefDescription": "PowerPC instruction completed while the thread was in Privileged state." }, { "EventCode": "0x3F054", @@ -170,74 +80,29 @@ "BriefDescription": "Cycles in which Superslice 0 dispatches either 5, 6, 7 or 8 instructions." }, { - "EventCode": "0x30162", - "EventName": "PM_MRK_ISSUE_DEPENDENT_LOAD", - "BriefDescription": "The marked instruction was dependent on a load. It is eligible for issue kill." - }, - { - "EventCode": "0x40114", - "EventName": "PM_MRK_START_PROBE_NOP_DISP", - "BriefDescription": "Marked Start probe nop dispatched. Instruction AND R0,R0,R0." - }, - { - "EventCode": "0x4001C", - "EventName": "PM_VSU_FIN", - "BriefDescription": "VSU instructions finished." - }, - { - "EventCode": "0x4C01A", - "EventName": "PM_EXEC_STALL_DMISS_OFF_NODE", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from a distant chip." - }, - { - "EventCode": "0x4D012", - "EventName": "PM_PMC3_SAVED", - "BriefDescription": "The conditions for the speculative event selected for PMC3 are met and PMC3 is charged." - }, - { - "EventCode": "0x4D022", - "EventName": "PM_HYPERVISOR_INST_CMPL", - "BriefDescription": "PowerPC instructions that completed while the thread is in hypervisor state." - }, - { - "EventCode": "0x4D026", - "EventName": "PM_ULTRAVISOR_CYC", - "BriefDescription": "Cycles when the thread is in Ultravisor state. MSR[S HV PR]=110." + "EventCode": "0x30068", + "EventName": "PM_L1_ICACHE_RELOADED_PREF", + "BriefDescription": "Counts all instruction cache prefetch reloads (includes demand turned into prefetch)." }, { - "EventCode": "0x4D028", - "EventName": "PM_PRIVILEGED_CYC", - "BriefDescription": "Cycles when the thread is in Privileged state. MSR[S HV PR]=x00." + "EventCode": "0x300F6", + "EventName": "PM_LD_DEMAND_MISS_L1", + "BriefDescription": "The L1 cache was reloaded with a line that fulfills a demand miss request. Counted at reload time, before finish." }, { - "EventCode": "0x40030", - "EventName": "PM_INST_FIN", - "BriefDescription": "Instructions finished." + "EventCode": "0x300FE", + "EventName": "PM_DATA_FROM_L3MISS", + "BriefDescription": "The processor's data cache was reloaded from a source other than the local core's L1, L2, or L3 due to a demand miss." 
}, { - "EventCode": "0x44146", - "EventName": "PM_MRK_STCX_CORE_CYC", - "BriefDescription": "Cycles spent in the core portion of a marked Stcx instruction. It starts counting when the instruction is decoded and stops counting when it drains into the L2." + "EventCode": "0x40012", + "EventName": "PM_L1_ICACHE_RELOADED_ALL", + "BriefDescription": "Counts all instruction cache reloads includes demand, prefetch, prefetch turned into demand and demand turned into prefetch." }, { "EventCode": "0x44054", "EventName": "PM_VECTOR_LD_CMPL", - "BriefDescription": "Vector load instructions completed." - }, - { - "EventCode": "0x45054", - "EventName": "PM_FMA_CMPL", - "BriefDescription": "Two floating point instructions completed (FMA class of instructions: fmadd, fnmadd, fmsub, fnmsub). Scalar instructions only." - }, - { - "EventCode": "0x45056", - "EventName": "PM_SCALAR_FLOP_CMPL", - "BriefDescription": "Scalar floating point instructions completed." - }, - { - "EventCode": "0x4505C", - "EventName": "PM_MATH_FLOP_CMPL", - "BriefDescription": "Math floating point instructions completed." + "BriefDescription": "Vector load instruction completed." }, { "EventCode": "0x4D05E", @@ -245,28 +110,13 @@ "BriefDescription": "A branch completed. All branches are included." }, { - "EventCode": "0x4E15E", - "EventName": "PM_MRK_INST_FLUSHED", - "BriefDescription": "The marked instruction was flushed." - }, - { - "EventCode": "0x401E6", - "EventName": "PM_MRK_INST_FROM_L3MISS", - "BriefDescription": "The processor's instruction cache was reloaded from a source other than the local core's L1, L2, or L3 due to a demand miss for a marked instruction." - }, - { - "EventCode": "0x401E8", - "EventName": "PM_MRK_DATA_FROM_L2MISS", - "BriefDescription": "The processor's data cache was reloaded from a source other than the local core's L1 or L2 due to a demand miss for a marked load." - }, - { "EventCode": "0x400F0", "EventName": "PM_LD_DEMAND_MISS_L1_FIN", - "BriefDescription": "Load Missed L1, counted at finish time." + "BriefDescription": "Load missed L1, counted at finish time." }, { - "EventCode": "0x500FA", - "EventName": "PM_RUN_INST_CMPL", - "BriefDescription": "Completed PowerPC instructions gated by the run latch." + "EventCode": "0x400FE", + "EventName": "PM_DATA_FROM_MEMORY", + "BriefDescription": "The processor's data cache was reloaded from local, remote, or distant memory due to a demand miss." } ] diff --git a/tools/perf/pmu-events/arch/powerpc/power10/pipeline.json b/tools/perf/pmu-events/arch/powerpc/power10/pipeline.json index b8aded6045fa..21b23bb55d0d 100644 --- a/tools/perf/pmu-events/arch/powerpc/power10/pipeline.json +++ b/tools/perf/pmu-events/arch/powerpc/power10/pipeline.json @@ -1,8 +1,13 @@ [ { - "EventCode": "0x100FE", - "EventName": "PM_INST_CMPL", - "BriefDescription": "PowerPC instructions completed." + "EventCode": "0x10004", + "EventName": "PM_EXEC_STALL_TRANSLATION", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline suffered a TLB miss or ERAT miss and waited for it to resolve." + }, + { + "EventCode": "0x10006", + "EventName": "PM_DISP_STALL_HELD_OTHER_CYC", + "BriefDescription": "Cycles in which the next-to-complete (NTC) instruction is held at dispatch for any other reason." }, { "EventCode": "0x1000C", @@ -12,7 +17,7 @@ { "EventCode": "0x1000E", "EventName": "PM_MMA_ISSUED", - "BriefDescription": "MMA instructions issued." + "BriefDescription": "MMA instruction issued." 
}, { "EventCode": "0x10012", @@ -30,14 +35,24 @@ "BriefDescription": "Cycles in which an instruction reload is pending to satisfy a demand miss." }, { - "EventCode": "0x10022", - "EventName": "PM_PMC2_SAVED", - "BriefDescription": "The conditions for the speculative event selected for PMC2 are met and PMC2 is charged." + "EventCode": "0x10028", + "EventName": "PM_NTC_FLUSH", + "BriefDescription": "The instruction was flushed after becoming next-to-complete (NTC)." + }, + { + "EventCode": "0x10038", + "EventName": "PM_DISP_STALL_TRANSLATION", + "BriefDescription": "Cycles when dispatch was stalled for this thread because the MMU was handling a translation miss." + }, + { + "EventCode": "0x1003A", + "EventName": "PM_DISP_STALL_BR_MPRED_IC_L2", + "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L2 after suffering a branch mispredict." }, { - "EventCode": "0x10024", - "EventName": "PM_PMC5_OVERFLOW", - "BriefDescription": "The event selected for PMC5 caused the event counter to overflow." + "EventCode": "0x1003C", + "EventName": "PM_EXEC_STALL_DMISS_L2L3", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from either the local L2 or local L3." }, { "EventCode": "0x10058", @@ -55,11 +70,41 @@ "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 2M. Implies radix translation. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches." }, { + "EventCode": "0x1D05E", + "EventName": "PM_DISP_STALL_HELD_HALT_CYC", + "BriefDescription": "Cycles in which the next-to-complete (NTC) instruction is held at dispatch because of power management." + }, + { + "EventCode": "0x1E050", + "EventName": "PM_DISP_STALL_HELD_STF_MAPPER_CYC", + "BriefDescription": "Cycles in which the next-to-complete (NTC) instruction is held at dispatch because the STF mapper/SRB was full. Includes GPR (count, link, tar), VSR, VMR, FPR." + }, + { + "EventCode": "0x1E054", + "EventName": "PM_EXEC_STALL_DMISS_L21_L31", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from another core's L2 or L3 on the same chip." + }, + { + "EventCode": "0x1E056", + "EventName": "PM_EXEC_STALL_STORE_PIPE", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in the store unit. This does not include cycles spent handling store misses, PTESYNC instructions or TLBIE instructions." + }, + { "EventCode": "0x1E05A", "EventName": "PM_CMPL_STALL_LWSYNC", "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a lwsync waiting to complete." }, { + "EventCode": "0x1F058", + "EventName": "PM_DISP_HELD_CYC", + "BriefDescription": "Cycles dispatch is held." + }, + { + "EventCode": "0x10064", + "EventName": "PM_DISP_STALL_IC_L2", + "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L2." + }, + { "EventCode": "0x10068", "EventName": "PM_BR_FIN", "BriefDescription": "A branch instruction finished. Includes predicted/mispredicted/unconditional." @@ -70,9 +115,9 @@ "BriefDescription": "Simple fixed point instruction issued to the store unit. Measured at finish time." }, { - "EventCode": "0x1006C", - "EventName": "PM_RUN_CYC_ST_MODE", - "BriefDescription": "Cycles when the run latch is set and the core is in ST mode." 
+ "EventCode": "0x100F8", + "EventName": "PM_DISP_STALL_CYC", + "BriefDescription": "Cycles the ICT has no itags assigned to this thread (no instructions were dispatched during these cycles)." }, { "EventCode": "0x20004", @@ -80,9 +125,9 @@ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was dispatched but not issued yet." }, { - "EventCode": "0x2000A", - "EventName": "PM_HYPERVISOR_CYC", - "BriefDescription": "Cycles when the thread is in Hypervisor state. MSR[S HV PR]=010." + "EventCode": "0x20006", + "EventName": "PM_DISP_STALL_HELD_ISSQ_FULL_CYC", + "BriefDescription": "Cycles in which the next-to-complete (NTC) instruction is held at dispatch due to Issue queue full. Includes issue queue and branch queue." }, { "EventCode": "0x2000E", @@ -90,24 +135,59 @@ "BriefDescription": "LSU Finished an internal operation in LD1 port." }, { + "EventCode": "0x2C010", + "EventName": "PM_EXEC_STALL_LSU", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in the Load Store Unit. This does not include simple fixed point instructions." + }, + { "EventCode": "0x2C014", "EventName": "PM_CMPL_STALL_SPECIAL", "BriefDescription": "Cycles in which the oldest instruction in the pipeline required special handling before completing." }, { + "EventCode": "0x2C016", + "EventName": "PM_DISP_STALL_IERAT_ONLY_MISS", + "BriefDescription": "Cycles when dispatch was stalled while waiting to resolve an instruction ERAT miss." + }, + { "EventCode": "0x2C018", "EventName": "PM_EXEC_STALL_DMISS_L3MISS", "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from a source beyond the local L2 or local L3." }, { + "EventCode": "0x2C01C", + "EventName": "PM_EXEC_STALL_DMISS_OFF_CHIP", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from a remote chip." + }, + { + "EventCode": "0x2C01E", + "EventName": "PM_DISP_STALL_BR_MPRED_IC_L3", + "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L3 after suffering a branch mispredict." + }, + { "EventCode": "0x2D010", "EventName": "PM_LSU_ST1_FIN", "BriefDescription": "LSU Finished an internal operation in ST1 port." }, { + "EventCode": "0x10016", + "EventName": "PM_VSU0_ISSUE", + "BriefDescription": "VSU instruction issued to VSU pipe 0." + }, + { "EventCode": "0x2D012", "EventName": "PM_VSU1_ISSUE", - "BriefDescription": "VSU instructions issued to VSU pipe 1." + "BriefDescription": "VSU instruction issued to VSU pipe 1." + }, + { + "EventCode": "0x2505C", + "EventName": "PM_VSU_ISSUE", + "BriefDescription": "At least one VSU instruction was issued to one of the VSU pipes. Up to 4 per cycle. Includes fixed point operations." + }, + { + "EventCode": "0x4001C", + "EventName": "PM_VSU_FIN", + "BriefDescription": "VSU instruction finished." }, { "EventCode": "0x2D018", @@ -115,19 +195,34 @@ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in the VSU (includes FXU, VSU, CRU)." }, { + "EventCode": "0x2D01A", + "EventName": "PM_DISP_STALL_IC_MISS", + "BriefDescription": "Cycles when dispatch was stalled for this thread due to an instruction cache miss." + }, + { "EventCode": "0x2D01C", "EventName": "PM_CMPL_STALL_STCX", "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a stcx waiting for resolution from the nest before completing." 
}, { - "EventCode": "0x2E01E", - "EventName": "PM_EXEC_STALL_NTC_FLUSH", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in any unit before it was flushed. Note that if the flush of the oldest instruction happens after finish, the cycles from dispatch to issue will be included in PM_DISP_STALL and the cycles from issue to finish will be included in PM_EXEC_STALL and its corresponding children. This event will also count cycles when the previous NTF instruction is still completing and the new NTF instruction is stalled at dispatch." + "EventCode": "0x2E018", + "EventName": "PM_DISP_STALL_FETCH", + "BriefDescription": "Cycles when dispatch was stalled for this thread because Fetch was being held." + }, + { + "EventCode": "0x2E01A", + "EventName": "PM_DISP_STALL_HELD_XVFC_MAPPER_CYC", + "BriefDescription": "Cycles in which the next-to-complete (NTC) instruction is held at dispatch because the XVFC mapper/SRB was full." + }, + { + "EventCode": "0x2E01C", + "EventName": "PM_EXEC_STALL_TLBIE", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a TLBIE instruction executing in the Load Store Unit." }, { - "EventCode": "0x2013C", - "EventName": "PM_MRK_FX_LSU_FIN", - "BriefDescription": "The marked instruction was simple fixed point that was issued to the store unit. Measured at finish time." + "EventCode": "0x2E01E", + "EventName": "PM_EXEC_STALL_NTC_FLUSH", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in any unit before it was flushed. Note that if the flush of the oldest instruction happens after finish, the cycles from dispatch to issue will be included in PM_DISP_STALL and the cycles from issue to finish will be included in PM_EXEC_STALL and its corresponding children. This event will also count cycles when the previous next-to-finish (NTF) instruction is still completing and the new NTF instruction is stalled at dispatch." }, { "EventCode": "0x2405A", @@ -135,14 +230,19 @@ "BriefDescription": "Cycles in which the oldest instruction in the pipeline (NTC) finishes. Note that instructions can finish out of order, therefore not all the instructions that finish have a Next-to-complete status." }, { - "EventCode": "0x201E2", - "EventName": "PM_MRK_LD_MISS_L1", - "BriefDescription": "Marked DL1 Demand Miss counted at finish time." + "EventCode": "0x20066", + "EventName": "PM_DISP_HELD_OTHER_CYC", + "BriefDescription": "Cycles dispatch is held for any other reason." + }, + { + "EventCode": "0x2006A", + "EventName": "PM_DISP_HELD_STF_MAPPER_CYC", + "BriefDescription": "Cycles dispatch is held because the STF mapper/SRB was full. Includes GPR (count, link, tar), VSR, VMR, FPR." }, { - "EventCode": "0x200F4", - "EventName": "PM_RUN_CYC", - "BriefDescription": "Processor cycles gated by the run latch." + "EventCode": "0x30004", + "EventName": "PM_DISP_STALL_FLUSH", + "BriefDescription": "Cycles when dispatch was stalled because of a flush that happened to an instruction(s) that was not yet next-to-complete (NTC). PM_EXEC_STALL_NTC_FLUSH only includes instructions that were flushed after becoming NTC." }, { "EventCode": "0x30008", @@ -150,29 +250,34 @@ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting to finish in one of the execution units (BRU, LSU, VSU). Only cycles between issue and finish are counted in this category." 
}, { - "EventCode": "0x3001A", - "EventName": "PM_LSU_ST2_FIN", - "BriefDescription": "LSU Finished an internal operation in ST2 port." + "EventCode": "0x30014", + "EventName": "PM_EXEC_STALL_STORE", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a store instruction executing in the Load Store Unit." + }, + { + "EventCode": "0x30016", + "EventName": "PM_EXEC_STALL_DERAT_DTLB_MISS", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline suffered a TLB miss and waited for it to resolve." }, { - "EventCode": "0x30020", - "EventName": "PM_PMC2_REWIND", - "BriefDescription": "The speculative event selected for PMC2 rewinds and the counter for PMC2 is not charged." + "EventCode": "0x30018", + "EventName": "PM_DISP_STALL_HELD_SCOREBOARD_CYC", + "BriefDescription": "Cycles in which the next-to-complete (NTC) instruction is held at dispatch while waiting on the Scoreboard. This event combines VSCR and FPSCR together." }, { - "EventCode": "0x30022", - "EventName": "PM_PMC4_SAVED", - "BriefDescription": "The conditions for the speculative event selected for PMC4 are met and PMC4 is charged." + "EventCode": "0x3001A", + "EventName": "PM_LSU_ST2_FIN", + "BriefDescription": "LSU Finished an internal operation in ST2 port." }, { - "EventCode": "0x30024", - "EventName": "PM_PMC6_OVERFLOW", - "BriefDescription": "The event selected for PMC6 caused the event counter to overflow." + "EventCode": "0x30026", + "EventName": "PM_EXEC_STALL_STORE_MISS", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a store whose cache line was not resident in the L1 and was waiting for allocation of the missing line into the L1." }, { "EventCode": "0x30028", "EventName": "PM_CMPL_STALL_MEM_ECC", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for the non-speculative finish of either a stcx waiting for its result or a load waiting for non-critical sectors of data and ECC." + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for the non-speculative finish of either a STCX waiting for its result or a load waiting for non-critical sectors of data and ECC." }, { "EventCode": "0x30036", @@ -180,6 +285,11 @@ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a simple fixed point instruction executing in the Load Store Unit." }, { + "EventCode": "0x30038", + "EventName": "PM_EXEC_STALL_DMISS_LMEM", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from the local memory, local OpenCAPI cache, or local OpenCAPI memory." + }, + { "EventCode": "0x3003A", "EventName": "PM_CMPL_STALL_EXCEPTION", "BriefDescription": "Cycles in which the oldest instruction in the pipeline was not allowed to complete because it was interrupted by ANY exception, which has to be serviced before the instruction can complete." @@ -187,17 +297,42 @@ { "EventCode": "0x3F044", "EventName": "PM_VSU2_ISSUE", - "BriefDescription": "VSU instructions issued to VSU pipe 2." + "BriefDescription": "VSU instruction issued to VSU pipe 2." }, { "EventCode": "0x30058", "EventName": "PM_TLBIE_FIN", - "BriefDescription": "TLBIE instructions finished in the LSU. Two TLBIEs can finish each cycle. All will be counted." + "BriefDescription": "TLBIE instruction finished in the LSU. Two TLBIEs can finish each cycle. All will be counted." 
}, { - "EventCode": "0x3D058", - "EventName": "PM_SCALAR_FSQRT_FDIV_ISSUE", - "BriefDescription": "Scalar versions of four floating point operations: fdiv,fsqrt (xvdivdp, xvdivsp, xvsqrtdp, xvsqrtsp)." + "EventCode": "0x34054", + "EventName": "PM_EXEC_STALL_DMISS_L2L3_NOCONFLICT", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from the local L2 or local L3, without a dispatch conflict." + }, + { + "EventCode": "0x34056", + "EventName": "PM_EXEC_STALL_LOAD_FINISH", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was finishing a load after its data was reloaded from a data source beyond the local L1; cycles in which the LSU was processing an L1-hit; cycles in which the next-to-finish (NTF) instruction merged with another load in the LMQ; cycles in which the NTF instruction is waiting for a data reload for a load miss, but the data comes back with a non-NTF instruction." + }, + { + "EventCode": "0x34058", + "EventName": "PM_DISP_STALL_BR_MPRED_ICMISS", + "BriefDescription": "Cycles when dispatch was stalled after a mispredicted branch resulted in an instruction cache miss." + }, + { + "EventCode": "0x3D05C", + "EventName": "PM_DISP_STALL_HELD_RENAME_CYC", + "BriefDescription": "Cycles in which the next-to-complete (NTC) instruction is held at dispatch because the mapper/SRB was full. Includes GPR (count, link, tar), VSR, VMR, FPR and XVFC." + }, + { + "EventCode": "0x3E052", + "EventName": "PM_DISP_STALL_IC_L3", + "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L3." + }, + { + "EventCode": "0x30060", + "EventName": "PM_DISP_HELD_XVFC_MAPPER_CYC", + "BriefDescription": "Cycles dispatch is held because the XVFC mapper/SRB was full." }, { "EventCode": "0x30066", @@ -215,9 +350,9 @@ "BriefDescription": "Cycles in which both instructions in the ICT entry pair show as finished. These are the cycles between finish and completion for the oldest pair of instructions in the pipeline." }, { - "EventCode": "0x40010", - "EventName": "PM_PMC3_OVERFLOW", - "BriefDescription": "The event selected for PMC3 caused the event counter to overflow." + "EventCode": "0x4C010", + "EventName": "PM_DISP_STALL_BR_MPRED_IC_L3MISS", + "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from sources beyond the local L3 after suffering a mispredicted branch." }, { "EventCode": "0x4C012", @@ -225,16 +360,36 @@ "BriefDescription": "Cycles in which the oldest instruction in the pipeline suffered an ERAT miss and waited for it resolve." }, { + "EventCode": "0x4C016", + "EventName": "PM_EXEC_STALL_DMISS_L2L3_CONFLICT", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from the local L2 or local L3, with a dispatch conflict." + }, + { "EventCode": "0x4C018", "EventName": "PM_CMPL_STALL", "BriefDescription": "Cycles in which the oldest instruction in the pipeline cannot complete because the thread was blocked for any reason." }, { + "EventCode": "0x4C01A", + "EventName": "PM_EXEC_STALL_DMISS_OFF_NODE", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was waiting for a load miss to resolve from a distant chip." + }, + { "EventCode": "0x4C01E", "EventName": "PM_LSU_ST3_FIN", "BriefDescription": "LSU Finished an internal operation in ST3 port." 
}, { + "EventCode": "0x4D014", + "EventName": "PM_EXEC_STALL_LOAD", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a load instruction executing in the Load Store Unit." + }, + { + "EventCode": "0x4D016", + "EventName": "PM_EXEC_STALL_PTESYNC", + "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a PTESYNC instruction executing in the Load Store Unit." + }, + { "EventCode": "0x4D018", "EventName": "PM_EXEC_STALL_BRU", "BriefDescription": "Cycles in which the oldest instruction in the pipeline was executing in the Branch unit." @@ -250,9 +405,24 @@ "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a TLBIEL instruction executing in the Load Store Unit. TLBIEL instructions have lower overhead than TLBIE instructions because they don't get set to the nest." }, { + "EventCode": "0x4D01E", + "EventName": "PM_DISP_STALL_BR_MPRED", + "BriefDescription": "Cycles when dispatch was stalled for this thread due to a mispredicted branch." + }, + { + "EventCode": "0x4E010", + "EventName": "PM_DISP_STALL_IC_L3MISS", + "BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from any source beyond the local L3." + }, + { "EventCode": "0x4E012", "EventName": "PM_EXEC_STALL_UNKNOWN", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline completed without an ntf_type pulse. The ntf_pulse was missed by the ISU because the NTF finishes and completions came too close together." + "BriefDescription": "Cycles in which the oldest instruction in the pipeline completed without an ntf_type pulse. The ntf_pulse was missed by the ISU because the next-to-finish (NTF) instruction finishes and completions came too close together." + }, + { + "EventCode": "0x4E01A", + "EventName": "PM_DISP_STALL_HELD_CYC", + "BriefDescription": "Cycles in which the next-to-complete (NTC) instruction is held at dispatch for any reason." }, { "EventCode": "0x4D020", @@ -260,24 +430,24 @@ "BriefDescription": "VSU instruction was issued to VSU pipe 3." }, { - "EventCode": "0x40132", - "EventName": "PM_MRK_LSU_FIN", - "BriefDescription": "LSU marked instruction finish." + "EventCode": "0x4003C", + "EventName": "PM_DISP_STALL_HELD_SYNC_CYC", + "BriefDescription": "Cycles in which the next-to-complete (NTC) instruction is held at dispatch because of a synchronizing instruction that requires the ICT to be empty before dispatch." }, { "EventCode": "0x45058", "EventName": "PM_IC_MISS_CMPL", - "BriefDescription": "Non-speculative icache miss, counted at completion." + "BriefDescription": "Non-speculative instruction cache miss, counted at completion." }, { - "EventCode": "0x4D050", - "EventName": "PM_VSU_NON_FLOP_CMPL", - "BriefDescription": "Non-floating point VSU instructions completed." + "EventCode": "0x40060", + "EventName": "PM_DISP_HELD_SCOREBOARD_CYC", + "BriefDescription": "Cycles dispatch is held while waiting on the Scoreboard. This event combines VSCR and FPSCR together." }, { - "EventCode": "0x4D052", - "EventName": "PM_2FLOP_CMPL", - "BriefDescription": "Double Precision vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg completed." + "EventCode": "0x40062", + "EventName": "PM_DISP_HELD_RENAME_CYC", + "BriefDescription": "Cycles dispatch is held because the mapper/SRB was full. Includes GPR (count, link, tar), VSR, VMR, FPR and XVFC." 
}, { "EventCode": "0x400F2", diff --git a/tools/perf/pmu-events/arch/powerpc/power10/pmc.json b/tools/perf/pmu-events/arch/powerpc/power10/pmc.json index b5d1bd39cfb2..c606ae03cd27 100644 --- a/tools/perf/pmu-events/arch/powerpc/power10/pmc.json +++ b/tools/perf/pmu-events/arch/powerpc/power10/pmc.json @@ -1,22 +1,202 @@ [ { + "EventCode": "0x100FE", + "EventName": "PM_INST_CMPL", + "BriefDescription": "PowerPC instruction completed." + }, + { + "EventCode": "0x1000A", + "EventName": "PM_PMC3_REWIND", + "BriefDescription": "The speculative event selected for PMC3 rewinds and the counter for PMC3 is not charged." + }, + { + "EventCode": "0x10010", + "EventName": "PM_PMC4_OVERFLOW", + "BriefDescription": "The event selected for PMC4 caused the event counter to overflow." + }, + { + "EventCode": "0x1001C", + "EventName": "PM_ULTRAVISOR_INST_CMPL", + "BriefDescription": "PowerPC instruction completed while the thread was in ultravisor state." + }, + { + "EventCode": "0x100F0", + "EventName": "PM_CYC", + "BriefDescription": "Processor cycles." + }, + { + "EventCode": "0x10020", + "EventName": "PM_PMC4_REWIND", + "BriefDescription": "The speculative event selected for PMC4 rewinds and the counter for PMC4 is not charged." + }, + { + "EventCode": "0x10022", + "EventName": "PM_PMC2_SAVED", + "BriefDescription": "The conditions for the speculative event selected for PMC2 are met and PMC2 is charged." + }, + { + "EventCode": "0x10024", + "EventName": "PM_PMC5_OVERFLOW", + "BriefDescription": "The event selected for PMC5 caused the event counter to overflow." + }, + { + "EventCode": "0x1002A", + "EventName": "PM_PMC3_HELD_CYC", + "BriefDescription": "Cycles when the speculative counter for PMC3 is frozen." + }, + { + "EventCode": "0x1F15E", + "EventName": "PM_MRK_START_PROBE_NOP_CMPL", + "BriefDescription": "Marked Start probe nop (AND R0,R0,R0) completed." + }, + { + "EventCode": "0x1006C", + "EventName": "PM_RUN_CYC_ST_MODE", + "BriefDescription": "Cycles when the run latch is set and the core is in ST mode." + }, + { + "EventCode": "0x101E8", + "EventName": "PM_THRESH_EXC_256", + "BriefDescription": "Threshold counter exceeded a count of 256." + }, + { + "EventCode": "0x101EC", + "EventName": "PM_THRESH_MET", + "BriefDescription": "Threshold exceeded." + }, + { + "EventCode": "0x100FA", + "EventName": "PM_RUN_LATCH_ANY_THREAD_CYC", + "BriefDescription": "Cycles when at least one thread has the run latch set." + }, + { + "EventCode": "0x2000A", + "EventName": "PM_HYPERVISOR_CYC", + "BriefDescription": "Cycles when the thread is in Hypervisor state. MSR[S HV PR]=010." + }, + { + "EventCode": "0x2000C", + "EventName": "PM_RUN_LATCH_ALL_THREADS_CYC", + "BriefDescription": "Cycles when the run latch is set for all threads." + }, + { + "EventCode": "0x20010", + "EventName": "PM_PMC1_OVERFLOW", + "BriefDescription": "The event selected for PMC1 caused the event counter to overflow." + }, + { + "EventCode": "0x2006C", + "EventName": "PM_RUN_CYC_SMT4_MODE", + "BriefDescription": "Cycles when this thread's run latch is set and the core is in SMT4 mode." + }, + { + "EventCode": "0x201E6", + "EventName": "PM_THRESH_EXC_32", + "BriefDescription": "Threshold counter exceeded a value of 32." + }, + { + "EventCode": "0x201E8", + "EventName": "PM_THRESH_EXC_512", + "BriefDescription": "Threshold counter exceeded a value of 512." + }, + { + "EventCode": "0x200F4", + "EventName": "PM_RUN_CYC", + "BriefDescription": "Processor cycles gated by the run latch." 
+ }, + { + "EventCode": "0x30010", + "EventName": "PM_PMC2_OVERFLOW", + "BriefDescription": "The event selected for PMC2 caused the event counter to overflow." + }, + { + "EventCode": "0x30020", + "EventName": "PM_PMC2_REWIND", + "BriefDescription": "The speculative event selected for PMC2 rewinds and the counter for PMC2 is not charged." + }, + { + "EventCode": "0x30022", + "EventName": "PM_PMC4_SAVED", + "BriefDescription": "The conditions for the speculative event selected for PMC4 are met and PMC4 is charged." + }, + { + "EventCode": "0x30024", + "EventName": "PM_PMC6_OVERFLOW", + "BriefDescription": "The event selected for PMC6 caused the event counter to overflow." + }, + { + "EventCode": "0x3006C", + "EventName": "PM_RUN_CYC_SMT2_MODE", + "BriefDescription": "Cycles when this thread's run latch is set and the core is in SMT2 mode." + }, + { "EventCode": "0x301E8", "EventName": "PM_THRESH_EXC_64", "BriefDescription": "Threshold counter exceeded a value of 64." }, { - "EventCode": "0x45050", - "EventName": "PM_1FLOP_CMPL", - "BriefDescription": "One floating point instruction completed (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg)." + "EventCode": "0x301EA", + "EventName": "PM_THRESH_EXC_1024", + "BriefDescription": "Threshold counter exceeded a value of 1024." + }, + { + "EventCode": "0x40010", + "EventName": "PM_PMC3_OVERFLOW", + "BriefDescription": "The event selected for PMC3 caused the event counter to overflow." + }, + { + "EventCode": "0x40114", + "EventName": "PM_MRK_START_PROBE_NOP_DISP", + "BriefDescription": "Marked Start probe nop dispatched. Instruction AND R0,R0,R0." + }, + { + "EventCode": "0x4D010", + "EventName": "PM_PMC1_SAVED", + "BriefDescription": "The conditions for the speculative event selected for PMC1 are met and PMC1 is charged." + }, + { + "EventCode": "0x4D012", + "EventName": "PM_PMC3_SAVED", + "BriefDescription": "The conditions for the speculative event selected for PMC3 are met and PMC3 is charged." + }, + { + "EventCode": "0x4D022", + "EventName": "PM_HYPERVISOR_INST_CMPL", + "BriefDescription": "PowerPC instruction completed while the thread was in hypervisor state." + }, + { + "EventCode": "0x4D026", + "EventName": "PM_ULTRAVISOR_CYC", + "BriefDescription": "Cycles when the thread is in Ultravisor state. MSR[S HV PR]=110." + }, + { + "EventCode": "0x4D028", + "EventName": "PM_PRIVILEGED_CYC", + "BriefDescription": "Cycles when the thread is in Privileged state. MSR[S HV PR]=x00." + }, + { + "EventCode": "0x4D02C", + "EventName": "PM_PMC1_REWIND", + "BriefDescription": "The speculative event selected for PMC1 rewinds and the counter for PMC1 is not charged." + }, + { + "EventCode": "0x40030", + "EventName": "PM_INST_FIN", + "BriefDescription": "Instruction finished." + }, + { + "EventCode": "0x40134", + "EventName": "PM_MRK_INST_TIMEO", + "BriefDescription": "Marked instruction finish timeout (instruction was lost)." }, { - "EventCode": "0x45052", - "EventName": "PM_4FLOP_CMPL", - "BriefDescription": "Four floating point instructions completed (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg)." + "EventCode": "0x401EA", + "EventName": "PM_THRESH_EXC_128", + "BriefDescription": "Threshold counter exceeded a value of 128." }, { - "EventCode": "0x4D054", - "EventName": "PM_8FLOP_CMPL", - "BriefDescription": "Four Double Precision vector instructions completed." + "EventCode": "0x400FA", + "EventName": "PM_RUN_INST_CMPL", + "BriefDescription": "PowerPC instruction completed while the run latch is set." 
} ] diff --git a/tools/perf/pmu-events/arch/powerpc/power10/translation.json b/tools/perf/pmu-events/arch/powerpc/power10/translation.json index db3766dca07c..ea73900d248a 100644 --- a/tools/perf/pmu-events/arch/powerpc/power10/translation.json +++ b/tools/perf/pmu-events/arch/powerpc/power10/translation.json @@ -1,35 +1,10 @@ [ { - "EventCode": "0x1F15E", - "EventName": "PM_MRK_START_PROBE_NOP_CMPL", - "BriefDescription": "Marked Start probe nop (AND R0,R0,R0) completed." - }, - { - "EventCode": "0x20016", - "EventName": "PM_ST_FIN", - "BriefDescription": "Store finish count. Includes speculative activity." - }, - { "EventCode": "0x20018", "EventName": "PM_ST_FWD", "BriefDescription": "Store forwards that finished." }, { - "EventCode": "0x2011C", - "EventName": "PM_MRK_NTF_CYC", - "BriefDescription": "Cycles during which the marked instruction is the oldest in the pipeline (NTF or NTC)." - }, - { - "EventCode": "0x2E01C", - "EventName": "PM_EXEC_STALL_TLBIE", - "BriefDescription": "Cycles in which the oldest instruction in the pipeline was a TLBIE instruction executing in the Load Store Unit." - }, - { - "EventCode": "0x201E6", - "EventName": "PM_THRESH_EXC_32", - "BriefDescription": "Threshold counter exceeded a value of 32." - }, - { "EventCode": "0x200F0", "EventName": "PM_ST_CMPL", "BriefDescription": "Stores completed from S2Q (2nd-level store queue). This event includes regular stores, stcx and cache inhibited stores. The following operations are excluded (pteupdate, snoop tlbie complete, store atomics, miso, load atomic payloads, tlbie, tlbsync, slbieg, isync, msgsnd, slbiag, cpabort, copy, tcheck, tend, stsync, dcbst, icbi, dcbf, hwsync, lwsync, ptesync, eieio, msgsync)." @@ -37,21 +12,11 @@ { "EventCode": "0x200FE", "EventName": "PM_DATA_FROM_L2MISS", - "BriefDescription": "The processor's data cache was reloaded from a source other than the local core's L1 or L2 due to a demand miss." - }, - { - "EventCode": "0x30010", - "EventName": "PM_PMC2_OVERFLOW", - "BriefDescription": "The event selected for PMC2 caused the event counter to overflow." - }, - { - "EventCode": "0x4D010", - "EventName": "PM_PMC1_SAVED", - "BriefDescription": "The conditions for the speculative event selected for PMC1 are met and PMC1 is charged." + "BriefDescription": "The processor's L1 data cache was reloaded from a source beyond the local core's L2 due to a demand miss." }, { - "EventCode": "0x4D05C", - "EventName": "PM_DPP_FLOP_CMPL", - "BriefDescription": "Double-Precision or Quad-Precision instructions completed." + "EventCode": "0x300F0", + "EventName": "PM_ST_MISS_L1", + "BriefDescription": "Store Missed L1." 
} ] diff --git a/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json b/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json index daf9458f0b77..c6780d5c456b 100644 --- a/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json +++ b/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json @@ -558,6 +558,7 @@ }, { "BriefDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the Last Level Cache (LLC) or other core with HITE/F/M.", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "cpu_atom@MEM_BOUND_STALLS.LOAD_LLC_HIT@ / tma_info_core_clks - max((cpu_atom@MEM_BOUND_STALLS.LOAD@ - cpu_atom@LD_HEAD.L1_MISS_AT_RET@) / tma_info_core_clks, 0) * cpu_atom@MEM_BOUND_STALLS.LOAD_LLC_HIT@ / cpu_atom@MEM_BOUND_STALLS.LOAD@", "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group", "MetricName": "tma_l3_bound", @@ -800,6 +801,7 @@ }, { "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a store forward block.", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "LD_HEAD.ST_ADDR_AT_RET / tma_info_core_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_store_fwd_blk", @@ -1058,7 +1060,6 @@ }, { "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", "MetricGroup": "HPC;TopdownL3;tma_L3_group;tma_light_operations_group", "MetricName": "tma_fp_arith", @@ -1230,6 +1231,7 @@ }, { "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", "MetricGroup": "Fed;FetchLat;IcMiss;tma_issueFL", "MetricName": "tma_info_botlnk_l2_ic_misses", @@ -1267,6 +1269,7 @@ }, { "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk))", "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW", "MetricName": "tma_info_bottleneck_memory_bandwidth", @@ -1355,7 +1358,6 @@ }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(cpu_core@FP_ARITH_INST_RETIRED.SCALAR_SINGLE@ + cpu_core@FP_ARITH_INST_RETIRED.SCALAR_DOUBLE@ + 2 * cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + 4 * (cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE@ + cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE@) + 8 * cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@) / tma_info_core_core_clks", "MetricGroup": "Flops;Ret", "MetricName": "tma_info_core_flopc", @@ -1363,7 +1365,6 @@ }, { "BriefDescription": "Actual per-core usage 
of the Floating Point non-X87 execution units (regardless of precision or vector-width)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(cpu_core@FP_ARITH_DISPATCHED.PORT_0@ + cpu_core@FP_ARITH_DISPATCHED.PORT_1@ + cpu_core@FP_ARITH_DISPATCHED.PORT_5@) / (2 * tma_info_core_core_clks)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "tma_info_core_fp_arith_utilization", @@ -1769,7 +1770,6 @@ }, { "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu_core@UOPS_RETIRED.SLOTS\\,cmask\\=1@", "MetricGroup": "Pipeline;Ret", "MetricName": "tma_info_pipeline_retire", @@ -2002,6 +2002,7 @@ }, { "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "(cpu_core@MEMORY_ACTIVITY.STALLS_L2_MISS@ - cpu_core@MEMORY_ACTIVITY.STALLS_L3_MISS@) / tma_info_thread_clks", "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group", "MetricName": "tma_l3_bound", @@ -2375,6 +2376,7 @@ }, { "BriefDescription": "This metric represents rate of split store accesses", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group", "MetricName": "tma_split_stores", @@ -2405,6 +2407,7 @@ }, { "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "13 * cpu_core@LD_BLOCKS.STORE_FORWARD@ / tma_info_thread_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_store_fwd_blk", diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json b/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json index 0f1628d698da..06e67e34e1bf 100644 --- a/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json +++ b/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json @@ -466,6 +466,7 @@ }, { "BriefDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the Last Level Cache (LLC) or other core with HITE/F/M.", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "MEM_BOUND_STALLS.LOAD_LLC_HIT / tma_info_core_clks - max((MEM_BOUND_STALLS.LOAD - LD_HEAD.L1_MISS_AT_RET) / tma_info_core_clks, 0) * MEM_BOUND_STALLS.LOAD_LLC_HIT / MEM_BOUND_STALLS.LOAD", "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group", "MetricName": "tma_l3_bound", @@ -682,6 +683,7 @@ }, { "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a store forward block.", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "LD_HEAD.ST_ADDR_AT_RET / tma_info_core_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_store_fwd_blk", diff --git a/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json b/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json index 8fcc05c4e0a1..a6eed0d9a26d 100644 --- a/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json +++ b/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json @@ -85,6 +85,7 @@ }, { "BriefDescription": "This metric estimates how often memory load 
accesses were aliased by preceding stores (in program order) with a 4K address offset", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / tma_info_thread_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_4k_aliasing", @@ -319,7 +320,6 @@ }, { "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", "MetricGroup": "HPC;TopdownL3;tma_L3_group;tma_light_operations_group", "MetricName": "tma_fp_arith", @@ -464,6 +464,7 @@ }, { "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", "MetricGroup": "Fed;FetchLat;IcMiss;tma_issueFL", "MetricName": "tma_info_botlnk_l2_ic_misses", @@ -497,6 +498,7 @@ }, { "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk))", "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW", "MetricName": "tma_info_bottleneck_memory_bandwidth", @@ -574,14 +576,12 @@ }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks", "MetricGroup": "Flops;Ret", "MetricName": "tma_info_core_flopc" }, { "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@) / (2 * tma_info_core_core_clks)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "tma_info_core_fp_arith_utilization", @@ -927,7 +927,6 @@ }, { "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@", "MetricGroup": "Pipeline;Ret", "MetricName": "tma_info_pipeline_retire" @@ -1100,6 +1099,7 @@ }, { "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - 
CYCLE_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks", "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group", "MetricName": "tma_l3_bound", @@ -1419,6 +1419,7 @@ }, { "BriefDescription": "This metric represents rate of split store accesses", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group", "MetricName": "tma_split_stores", @@ -1446,6 +1447,7 @@ }, { "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_store_fwd_blk", diff --git a/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json b/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json index 9bb7e3f20f7f..7082ad5ba961 100644 --- a/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json +++ b/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json @@ -289,6 +289,7 @@ }, { "BriefDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / tma_info_thread_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_4k_aliasing", @@ -523,7 +524,6 @@ }, { "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", "MetricGroup": "HPC;TopdownL3;tma_L3_group;tma_light_operations_group", "MetricName": "tma_fp_arith", @@ -668,6 +668,7 @@ }, { "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", "MetricGroup": "Fed;FetchLat;IcMiss;tma_issueFL", "MetricName": "tma_info_botlnk_l2_ic_misses", @@ -701,6 +702,7 @@ }, { "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk))", "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW", "MetricName": "tma_info_bottleneck_memory_bandwidth", @@ -778,14 +780,12 @@ }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": 
"(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks", "MetricGroup": "Flops;Ret", "MetricName": "tma_info_core_flopc" }, { "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@) / (2 * tma_info_core_core_clks)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "tma_info_core_fp_arith_utilization", @@ -1144,7 +1144,6 @@ }, { "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@", "MetricGroup": "Pipeline;Ret", "MetricName": "tma_info_pipeline_retire" @@ -1369,6 +1368,7 @@ }, { "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks", "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group", "MetricName": "tma_l3_bound", @@ -1715,6 +1715,7 @@ }, { "BriefDescription": "This metric represents rate of split store accesses", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group", "MetricName": "tma_split_stores", @@ -1742,6 +1743,7 @@ }, { "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_store_fwd_blk", diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv index 6650100830c4..3a8770e29fe8 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -19,12 +19,12 @@ GenuineIntel-6-3A,v24,ivybridge,core GenuineIntel-6-3E,v23,ivytown,core GenuineIntel-6-2D,v23,jaketown,core GenuineIntel-6-(57|85),v10,knightslanding,core -GenuineIntel-6-A[AC],v1.03,meteorlake,core +GenuineIntel-6-A[AC],v1.04,meteorlake,core GenuineIntel-6-1[AEF],v3,nehalemep,core GenuineIntel-6-2E,v3,nehalemex,core GenuineIntel-6-A7,v1.01,rocketlake,core GenuineIntel-6-2A,v19,sandybridge,core -GenuineIntel-6-(8F|CF),v1.14,sapphirerapids,core +GenuineIntel-6-(8F|CF),v1.15,sapphirerapids,core GenuineIntel-6-AF,v1.00,sierraforest,core GenuineIntel-6-(37|4A|4C|4D|5A),v15,silvermont,core GenuineIntel-6-(4E|5E|8E|9E|A5|A6),v57,skylake,core diff --git a/tools/perf/pmu-events/arch/x86/meteorlake/cache.json b/tools/perf/pmu-events/arch/x86/meteorlake/cache.json index e1ae7c92f38e..1de0200b32f6 100644 --- a/tools/perf/pmu-events/arch/x86/meteorlake/cache.json +++ b/tools/perf/pmu-events/arch/x86/meteorlake/cache.json @@ -37,6 
+37,15 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Number of cycles a demand request has waited due to L1D due to lack of L2 resources.", + "EventCode": "0x48", + "EventName": "L1D_PEND_MISS.L2_STALLS", + "PublicDescription": "Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", + "SampleAfterValue": "1000003", + "UMask": "0x4", + "Unit": "cpu_core" + }, + { "BriefDescription": "Number of L1D misses that are outstanding", "EventCode": "0x48", "EventName": "L1D_PEND_MISS.PENDING", @@ -261,6 +270,15 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Cycles when L1D is locked", + "EventCode": "0x42", + "EventName": "LOCK_CYCLES.CACHE_LOCK_DURATION", + "PublicDescription": "This event counts the number of cycles when the L1D is locked. It is a superset of the 0x1 mask (BUS_LOCK_CLOCKS.BUS_LOCK_DURATION).", + "SampleAfterValue": "2000003", + "UMask": "0x2", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts the number of cacheable memory requests that miss in the LLC. Counts on a per core basis.", "EventCode": "0x2e", "EventName": "LONGEST_LAT_CACHE.MISS", @@ -515,6 +533,17 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.", + "Data_LA": "1", + "EventCode": "0xd2", + "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS", + "PEBS": "1", + "PublicDescription": "Counts the retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.", + "SampleAfterValue": "20011", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { "BriefDescription": "Retired load instructions whose data sources were hits in L3 without snoops required", "Data_LA": "1", "EventCode": "0xd2", @@ -731,6 +760,14 @@ "Unit": "cpu_atom" }, { + "BriefDescription": "MEM_STORE_RETIRED.L2_HIT", + "EventCode": "0x44", + "EventName": "MEM_STORE_RETIRED.L2_HIT", + "SampleAfterValue": "200003", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts the number of load ops retired.", "Data_LA": "1", "EventCode": "0xd0", @@ -978,6 +1015,15 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Cacheable and Non-Cacheable code read requests", + "EventCode": "0x21", + "EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD", + "PublicDescription": "Counts both cacheable and Non-Cacheable code read requests.", + "SampleAfterValue": "100003", + "UMask": "0x2", + "Unit": "cpu_core" + }, + { "BriefDescription": "Demand Data Read requests sent to uncore", "EventCode": "0x21", "EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD", @@ -996,6 +1042,89 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore.", + "CounterMask": "1", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD", + "PublicDescription": "Counts cycles when offcore outstanding cacheable Core Data Read transactions are present in the super queue. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). 
See corresponding Umask under OFFCORE_REQUESTS.", + "SampleAfterValue": "1000003", + "UMask": "0x8", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Cycles with offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore.", + "CounterMask": "1", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD", + "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.", + "SampleAfterValue": "1000003", + "UMask": "0x2", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Cycles where at least 1 outstanding demand data read request is pending.", + "CounterMask": "1", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD", + "SampleAfterValue": "2000003", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Cycles with offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore.", + "CounterMask": "1", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO", + "PublicDescription": "Counts the number of offcore outstanding demand rfo Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.", + "SampleAfterValue": "1000003", + "UMask": "0x4", + "Unit": "cpu_core" + }, + { + "BriefDescription": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD", + "SampleAfterValue": "1000003", + "UMask": "0x8", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore, every cycle.", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD", + "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.", + "SampleAfterValue": "1000003", + "UMask": "0x2", + "Unit": "cpu_core" + }, + { + "BriefDescription": "For every cycle, increments by the number of outstanding demand data read requests pending.", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD", + "PublicDescription": "For every cycle, increments by the number of outstanding demand data read requests pending. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.", + "SampleAfterValue": "1000003", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue.", + "CounterMask": "6", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD_GE_6", + "SampleAfterValue": "2000003", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Store Read transactions pending for off-core. 
Highly correlated.", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO", + "PublicDescription": "Counts the number of off-core outstanding read-for-ownership (RFO) store transactions every cycle. An RFO transaction is considered to be in the Off-core outstanding state between L2 cache miss and transaction completion.", + "SampleAfterValue": "1000003", + "UMask": "0x4", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts bus locks, accounts for cache line split locks and UC locks.", "EventCode": "0x2c", "EventName": "SQ_MISC.BUS_LOCK", @@ -1005,6 +1134,42 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Number of PREFETCHNTA instructions executed.", + "EventCode": "0x40", + "EventName": "SW_PREFETCH_ACCESS.NTA", + "PublicDescription": "Counts the number of PREFETCHNTA instructions executed.", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Number of PREFETCHW instructions executed.", + "EventCode": "0x40", + "EventName": "SW_PREFETCH_ACCESS.PREFETCHW", + "PublicDescription": "Counts the number of PREFETCHW instructions executed.", + "SampleAfterValue": "100003", + "UMask": "0x8", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Number of PREFETCHT0 instructions executed.", + "EventCode": "0x40", + "EventName": "SW_PREFETCH_ACCESS.T0", + "PublicDescription": "Counts the number of PREFETCHT0 instructions executed.", + "SampleAfterValue": "100003", + "UMask": "0x2", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.", + "EventCode": "0x40", + "EventName": "SW_PREFETCH_ACCESS.T1_T2", + "PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.", + "SampleAfterValue": "100003", + "UMask": "0x4", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to an icache miss", "EventCode": "0x71", "EventName": "TOPDOWN_FE_BOUND.ICACHE", diff --git a/tools/perf/pmu-events/arch/x86/meteorlake/floating-point.json b/tools/perf/pmu-events/arch/x86/meteorlake/floating-point.json index 616489f0974a..f66506ee37ef 100644 --- a/tools/perf/pmu-events/arch/x86/meteorlake/floating-point.json +++ b/tools/perf/pmu-events/arch/x86/meteorlake/floating-point.json @@ -42,6 +42,14 @@ "Unit": "cpu_core" }, { + "BriefDescription": "FP_ARITH_DISPATCHED.PORT_5", + "EventCode": "0xb3", + "EventName": "FP_ARITH_DISPATCHED.PORT_5", + "SampleAfterValue": "2000003", + "UMask": "0x4", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. 
DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", "EventCode": "0xc7", "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE", diff --git a/tools/perf/pmu-events/arch/x86/meteorlake/frontend.json b/tools/perf/pmu-events/arch/x86/meteorlake/frontend.json index 0f064518d1c0..8264419500a5 100644 --- a/tools/perf/pmu-events/arch/x86/meteorlake/frontend.json +++ b/tools/perf/pmu-events/arch/x86/meteorlake/frontend.json @@ -44,6 +44,14 @@ "Unit": "cpu_core" }, { + "BriefDescription": "DSB_FILL.FB_STALL_OT", + "EventCode": "0x62", + "EventName": "DSB_FILL.FB_STALL_OT", + "SampleAfterValue": "1000003", + "UMask": "0x10", + "Unit": "cpu_core" + }, + { "BriefDescription": "Retired ANT branches", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.ANY_ANT", @@ -56,6 +64,30 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Retired Instructions who experienced DSB miss.", + "EventCode": "0xc6", + "EventName": "FRONTEND_RETIRED.ANY_DSB_MISS", + "MSRIndex": "0x3F7", + "MSRValue": "0x1", + "PEBS": "1", + "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss.", + "SampleAfterValue": "100007", + "UMask": "0x3", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Retired Instructions who experienced a critical DSB miss.", + "EventCode": "0xc6", + "EventName": "FRONTEND_RETIRED.DSB_MISS", + "MSRIndex": "0x3F7", + "MSRValue": "0x11", + "PEBS": "1", + "PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss.", + "SampleAfterValue": "100007", + "UMask": "0x3", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to ITLB miss", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.ITLB_MISS", @@ -89,6 +121,18 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.", + "EventCode": "0xc6", + "EventName": "FRONTEND_RETIRED.L2_MISS", + "MSRIndex": "0x3F7", + "MSRValue": "0x13", + "PEBS": "1", + "PublicDescription": "Counts retired Instructions who experienced Instruction L2 Cache true miss.", + "SampleAfterValue": "100007", + "UMask": "0x3", + "Unit": "cpu_core" + }, + { "BriefDescription": "Retired instructions after front-end starvation of at least 1 cycle", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.LATENCY_GE_1", @@ -244,6 +288,18 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.", + "EventCode": "0xc6", + "EventName": "FRONTEND_RETIRED.STLB_MISS", + "MSRIndex": "0x3F7", + "MSRValue": "0x15", + "PEBS": "1", + "PublicDescription": "Counts retired Instructions that experienced STLB (2nd level TLB) true miss.", + "SampleAfterValue": "100007", + "UMask": "0x3", + "Unit": "cpu_core" + }, + { "BriefDescription": "FRONTEND_RETIRED.UNKNOWN_BRANCH", "EventCode": "0xc6", "EventName": "FRONTEND_RETIRED.UNKNOWN_BRANCH", diff --git a/tools/perf/pmu-events/arch/x86/meteorlake/memory.json b/tools/perf/pmu-events/arch/x86/meteorlake/memory.json index 67e949b4c789..2605e1d0ba9f 100644 --- a/tools/perf/pmu-events/arch/x86/meteorlake/memory.json +++ b/tools/perf/pmu-events/arch/x86/meteorlake/memory.json @@ -67,6 +67,15 @@ "Unit": "cpu_atom" }, { + "BriefDescription": "Number of machine clears 
due to memory ordering conflicts.", + "EventCode": "0xc3", + "EventName": "MACHINE_CLEARS.MEMORY_ORDERING", + "PublicDescription": "Counts the number of Machine Clears detected dye to memory ordering. Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture", + "SampleAfterValue": "100003", + "UMask": "0x2", + "Unit": "cpu_core" + }, + { "BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.", "CounterMask": "3", "EventCode": "0x47", @@ -96,6 +105,35 @@ "Unit": "cpu_core" }, { + "BriefDescription": "MEMORY_ORDERING.MD_NUKE", + "EventCode": "0x09", + "EventName": "MEMORY_ORDERING.MD_NUKE", + "SampleAfterValue": "100003", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Counts the number of memory ordering machine clears due to memory renaming.", + "EventCode": "0x09", + "EventName": "MEMORY_ORDERING.MRN_NUKE", + "SampleAfterValue": "100003", + "UMask": "0x2", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles.", + "Data_LA": "1", + "EventCode": "0xcd", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_1024", + "MSRIndex": "0x3F6", + "MSRValue": "0x400", + "PEBS": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles. Reported latency may be longer than just the memory latency.", + "SampleAfterValue": "53", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.", "Data_LA": "1", "EventCode": "0xcd", @@ -122,6 +160,19 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 2048 cycles.", + "Data_LA": "1", + "EventCode": "0xcd", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_2048", + "MSRIndex": "0x3F6", + "MSRValue": "0x800", + "PEBS": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 2048 cycles. Reported latency may be longer than just the memory latency.", + "SampleAfterValue": "23", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.", "Data_LA": "1", "EventCode": "0xcd", @@ -235,5 +286,34 @@ "SampleAfterValue": "100003", "UMask": "0x10", "Unit": "cpu_core" + }, + { + "BriefDescription": "Cycles where data return is pending for a Demand Data Read request who miss L3 cache.", + "CounterMask": "1", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_L3_MISS_DEMAND_DATA_RD", + "PublicDescription": "Cycles with at least 1 Demand Data Read requests who miss L3 cache in the superQ.", + "SampleAfterValue": "1000003", + "UMask": "0x10", + "Unit": "cpu_core" + }, + { + "BriefDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache.", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD", + "PublicDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache. 
Note that this does not capture all elapsed cycles while requests are outstanding - only cycles from when the requests were known by the requesting core to have missed the L3 cache.", + "SampleAfterValue": "2000003", + "UMask": "0x10", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Cycles where the core is waiting on at least 6 outstanding demand data read requests known to have missed the L3 cache.", + "CounterMask": "6", + "EventCode": "0x20", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD_GE_6", + "PublicDescription": "Cycles where the core is waiting on at least 6 outstanding demand data read requests known to have missed the L3 cache. Note that this event does not capture all elapsed cycles while the requests are outstanding - only cycles from when the requests were known to have missed the L3 cache.", + "SampleAfterValue": "2000003", + "UMask": "0x10", + "Unit": "cpu_core" } ] diff --git a/tools/perf/pmu-events/arch/x86/meteorlake/other.json b/tools/perf/pmu-events/arch/x86/meteorlake/other.json index 2ec57f487525..f4c603599df4 100644 --- a/tools/perf/pmu-events/arch/x86/meteorlake/other.json +++ b/tools/perf/pmu-events/arch/x86/meteorlake/other.json @@ -1,5 +1,13 @@ [ { + "BriefDescription": "ASSISTS.PAGE_FAULT", + "EventCode": "0xc1", + "EventName": "ASSISTS.PAGE_FAULT", + "SampleAfterValue": "1000003", + "UMask": "0x8", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts streaming stores that have any type of response.", "EventCode": "0x2A,0x2B", "EventName": "OCR.STREAMING_WR.ANY_RESPONSE", @@ -31,6 +39,14 @@ "Unit": "cpu_core" }, { + "BriefDescription": "RS.EMPTY_RESOURCE", + "EventCode": "0xa5", + "EventName": "RS.EMPTY_RESOURCE", + "SampleAfterValue": "1000003", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts the number of issue slots in a UMWAIT or TPAUSE instruction where no uop issues due to the instruction putting the CPU into the C0.1 activity state. For Tremont, UMWAIT and TPAUSE will only put the CPU into C0.1 activity state (not C0.2 activity state)", "EventCode": "0x75", "EventName": "SERIALIZATION.C01_MS_SCB", diff --git a/tools/perf/pmu-events/arch/x86/meteorlake/pipeline.json b/tools/perf/pmu-events/arch/x86/meteorlake/pipeline.json index eeaa7a97f71c..352c5efafc06 100644 --- a/tools/perf/pmu-events/arch/x86/meteorlake/pipeline.json +++ b/tools/perf/pmu-events/arch/x86/meteorlake/pipeline.json @@ -312,6 +312,16 @@ "Unit": "cpu_core" }, { + "BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS", + "EventCode": "0xc5", + "EventName": "BR_MISP_RETIRED.RET", + "PEBS": "1", + "PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired.", + "SampleAfterValue": "100007", + "UMask": "0x8", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts the number of mispredicted near RET branch instructions retired.", "EventCode": "0xc5", "EventName": "BR_MISP_RETIRED.RETURN", @@ -330,6 +340,33 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Core clocks when the thread is in the C0.1 light-weight slower wakeup time but more power saving optimized state.", + "EventCode": "0xec", + "EventName": "CPU_CLK_UNHALTED.C01", + "PublicDescription": "Counts core clocks when the thread is in the C0.1 light-weight slower wakeup time but more power saving optimized state. 
This state can be entered via the TPAUSE or UMWAIT instructions.", + "SampleAfterValue": "2000003", + "UMask": "0x10", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Core clocks when the thread is in the C0.2 light-weight faster wakeup time but less power saving optimized state.", + "EventCode": "0xec", + "EventName": "CPU_CLK_UNHALTED.C02", + "PublicDescription": "Counts core clocks when the thread is in the C0.2 light-weight faster wakeup time but less power saving optimized state. This state can be entered via the TPAUSE or UMWAIT instructions.", + "SampleAfterValue": "2000003", + "UMask": "0x20", + "Unit": "cpu_core" + }, + { + "BriefDescription": "Core clocks when the thread is in the C0.1 or C0.2 or running a PAUSE in C0 ACPI state.", + "EventCode": "0xec", + "EventName": "CPU_CLK_UNHALTED.C0_WAIT", + "PublicDescription": "Counts core clocks when the thread is in the C0.1 or C0.2 power saving optimized states (TPAUSE or UMWAIT instructions) or running the PAUSE instruction.", + "SampleAfterValue": "2000003", + "UMask": "0x70", + "Unit": "cpu_core" + }, + { "BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles", "EventName": "CPU_CLK_UNHALTED.CORE", "SampleAfterValue": "2000003", @@ -362,6 +399,24 @@ "Unit": "cpu_core" }, { + "BriefDescription": "CPU_CLK_UNHALTED.PAUSE", + "EventCode": "0xec", + "EventName": "CPU_CLK_UNHALTED.PAUSE", + "SampleAfterValue": "2000003", + "UMask": "0x40", + "Unit": "cpu_core" + }, + { + "BriefDescription": "CPU_CLK_UNHALTED.PAUSE_INST", + "CounterMask": "1", + "EdgeDetect": "1", + "EventCode": "0xec", + "EventName": "CPU_CLK_UNHALTED.PAUSE_INST", + "SampleAfterValue": "2000003", + "UMask": "0x40", + "Unit": "cpu_core" + }, + { "BriefDescription": "Core crystal clock cycles. Cycle counts are evenly distributed between active threads in the Core.", "EventCode": "0x3c", "EventName": "CPU_CLK_UNHALTED.REF_DISTRIBUTED", @@ -603,6 +658,15 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Retired NOP instructions.", + "EventCode": "0xc0", + "EventName": "INST_RETIRED.NOP", + "PublicDescription": "Counts all retired NOP or ENDBR32/64 or PREFETCHIT0/1 instructions", + "SampleAfterValue": "2000003", + "UMask": "0x2", + "Unit": "cpu_core" + }, + { "BriefDescription": "Precise instruction retired with PEBS precise-distribution", "EventName": "INST_RETIRED.PREC_DIST", "PEBS": "1", @@ -612,6 +676,15 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Iterations of Repeat string retired instructions.", + "EventCode": "0xc0", + "EventName": "INST_RETIRED.REP_ITERATION", + "PublicDescription": "Number of iterations of Repeat (REP) string retired instructions such as MOVS, CMPS, and SCAS. Each has a byte, word, and doubleword version and string instructions can be repeated using a repetition prefix, REP, that allows their architectural execution to be repeated a number of times as specified by the RCX register. 
Note the number of iterations is implementation-dependent.", + "SampleAfterValue": "2000003", + "UMask": "0x8", + "Unit": "cpu_core" + }, + { "BriefDescription": "Cycles the Backend cluster is recovering after a miss-speculation or a Store Buffer or Load Buffer drain stall.", "CounterMask": "1", "EventCode": "0xad", @@ -622,6 +695,17 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Clears speculative count", + "CounterMask": "1", + "EdgeDetect": "1", + "EventCode": "0xad", + "EventName": "INT_MISC.CLEARS_COUNT", + "PublicDescription": "Counts the number of speculative clears due to any type of branch misprediction or machine clears", + "SampleAfterValue": "500009", + "UMask": "0x1", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.", "EventCode": "0xad", "EventName": "INT_MISC.CLEAR_RESTEER_CYCLES", @@ -631,6 +715,15 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Cycles when Resource Allocation Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for the thread", + "EventCode": "0xad", + "EventName": "INT_MISC.RAT_STALLS", + "PublicDescription": "This event counts the number of cycles during which Resource Allocation Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for the current thread. This also includes the cycles during which the Allocator is serving another thread.", + "SampleAfterValue": "1000003", + "UMask": "0x8", + "Unit": "cpu_core" + }, + { "BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread", "EventCode": "0xad", "EventName": "INT_MISC.RECOVERY_CYCLES", @@ -734,6 +827,15 @@ "Unit": "cpu_atom" }, { + "BriefDescription": "False dependencies in MOB due to partial compare on address.", + "EventCode": "0x03", + "EventName": "LD_BLOCKS.ADDRESS_ALIAS", + "PublicDescription": "Counts the number of times a load got blocked due to false dependencies in MOB due to partial compare on address.", + "SampleAfterValue": "100003", + "UMask": "0x4", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts the number of retired loads that are blocked because its address exactly matches an older store whose data is not ready.", "EventCode": "0x03", "EventName": "LD_BLOCKS.DATA_UNKNOWN", @@ -743,6 +845,15 @@ "Unit": "cpu_atom" }, { + "BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.", + "EventCode": "0x03", + "EventName": "LD_BLOCKS.NO_SR", + "PublicDescription": "Counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.", + "SampleAfterValue": "100003", + "UMask": "0x88", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts the number of retired loads that are blocked because its address partially overlapped with an older store.", "EventCode": "0x03", "EventName": "LD_BLOCKS.STORE_FORWARD", @@ -752,6 +863,15 @@ "Unit": "cpu_atom" }, { + "BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.", + "EventCode": "0x03", + "EventName": "LD_BLOCKS.STORE_FORWARD", + "PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. 
Note: See the table of not supported store forwards in the Optimization Guide.", + "SampleAfterValue": "100003", + "UMask": "0x82", + "Unit": "cpu_core" + }, + { "BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.", "CounterMask": "1", "EventCode": "0xa8", @@ -824,6 +944,24 @@ "Unit": "cpu_atom" }, { + "BriefDescription": "Self-modifying code (SMC) detected.", + "EventCode": "0xc3", + "EventName": "MACHINE_CLEARS.SMC", + "PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.", + "SampleAfterValue": "100003", + "UMask": "0x4", + "Unit": "cpu_core" + }, + { + "BriefDescription": "LFENCE instructions retired", + "EventCode": "0xe0", + "EventName": "MISC2_RETIRED.LFENCE", + "PublicDescription": "number of LFENCE retired instructions", + "SampleAfterValue": "400009", + "UMask": "0x20", + "Unit": "cpu_core" + }, + { "BriefDescription": "Counts cycles where the pipeline is stalled due to serializing operations.", "EventCode": "0xa2", "EventName": "RESOURCE_STALLS.SCOREBOARD", @@ -1261,6 +1399,16 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Cycles with retired uop(s).", + "CounterMask": "1", + "EventCode": "0xc2", + "EventName": "UOPS_RETIRED.CYCLES", + "PublicDescription": "Counts cycles where at least one uop has retired.", + "SampleAfterValue": "1000003", + "UMask": "0x2", + "Unit": "cpu_core" + }, + { "BriefDescription": "Retired uops except the last uop of each instruction.", "EventCode": "0xc2", "EventName": "UOPS_RETIRED.HEAVY", @@ -1307,6 +1455,17 @@ "Unit": "cpu_core" }, { + "BriefDescription": "Cycles without actually retired uops.", + "CounterMask": "1", + "EventCode": "0xc2", + "EventName": "UOPS_RETIRED.STALLS", + "Invert": "1", + "PublicDescription": "This event counts cycles without actually retired uops.", + "SampleAfterValue": "1000003", + "UMask": "0x2", + "Unit": "cpu_core" + }, + { "BriefDescription": "Cycles with less than 10 actually retired uops.", "CounterMask": "10", "EventCode": "0xc2", diff --git a/tools/perf/pmu-events/arch/x86/rocketlake/rkl-metrics.json b/tools/perf/pmu-events/arch/x86/rocketlake/rkl-metrics.json index 1bb9cededa56..a0191c8b708d 100644 --- a/tools/perf/pmu-events/arch/x86/rocketlake/rkl-metrics.json +++ b/tools/perf/pmu-events/arch/x86/rocketlake/rkl-metrics.json @@ -85,6 +85,7 @@ }, { "BriefDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / tma_info_thread_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_4k_aliasing", @@ -319,7 +320,6 @@ }, { "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", "MetricGroup": "HPC;TopdownL3;tma_L3_group;tma_light_operations_group", "MetricName": "tma_fp_arith", @@ -464,6 +464,7 @@ }, { "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", "MetricGroup": "Fed;FetchLat;IcMiss;tma_issueFL", "MetricName": "tma_info_botlnk_l2_ic_misses", @@ -497,6 +498,7 @@ }, { 
"BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk))", "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW", "MetricName": "tma_info_bottleneck_memory_bandwidth", @@ -574,14 +576,12 @@ }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks", "MetricGroup": "Flops;Ret", "MetricName": "tma_info_core_flopc" }, { "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@) / (2 * tma_info_core_core_clks)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "tma_info_core_fp_arith_utilization", @@ -933,7 +933,6 @@ }, { "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@", "MetricGroup": "Pipeline;Ret", "MetricName": "tma_info_pipeline_retire" @@ -1126,6 +1125,7 @@ }, { "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks", "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group", "MetricName": "tma_l3_bound", @@ -1445,6 +1445,7 @@ }, { "BriefDescription": "This metric represents rate of split store accesses", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group", "MetricName": "tma_split_stores", @@ -1472,6 +1473,7 @@ }, { "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_store_fwd_blk", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/other.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/other.json index 31b6be9fb8c7..442ef3807a9d 100644 --- 
a/tools/perf/pmu-events/arch/x86/sapphirerapids/other.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/other.json @@ -77,6 +77,24 @@ "UMask": "0x1" }, { + "BriefDescription": "Counts demand data reads that were supplied by PMM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM accesses that are controlled by the close or distant SNC Cluster.", + "EventCode": "0x2A,0x2B", + "EventName": "OCR.DEMAND_DATA_RD.LOCAL_SOCKET_PMM", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x700C00001", + "SampleAfterValue": "100003", + "UMask": "0x1" + }, + { + "BriefDescription": "Counts demand data reads that were supplied by PMM.", + "EventCode": "0x2A,0x2B", + "EventName": "OCR.DEMAND_DATA_RD.PMM", + "MSRIndex": "0x1a6,0x1a7", + "MSRValue": "0x703C00001", + "SampleAfterValue": "100003", + "UMask": "0x1" + }, + { "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to another socket.", "EventCode": "0x2A,0x2B", "EventName": "OCR.DEMAND_DATA_RD.REMOTE_DRAM", diff --git a/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json b/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json index c207c851a9f9..222212abd811 100644 --- a/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json +++ b/tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json @@ -553,7 +553,6 @@ }, { "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector + tma_fp_amx", "MetricGroup": "HPC;TopdownL3;tma_L3_group;tma_light_operations_group", "MetricName": "tma_fp_arith", @@ -717,6 +716,7 @@ }, { "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", "MetricGroup": "Fed;FetchLat;IcMiss;tma_issueFL", "MetricName": "tma_info_botlnk_l2_ic_misses", @@ -750,6 +750,7 @@ }, { "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk))", "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW", "MetricName": "tma_info_bottleneck_memory_bandwidth", @@ -827,14 +828,12 @@ }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + FP_ARITH_INST_RETIRED2.SCALAR_HALF + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF) + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF + 
cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF + 4 * AMX_OPS_RETIRED.BF16", "MetricGroup": "Flops;Ret", "MetricName": "tma_info_core_flopc" }, { "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(FP_ARITH_DISPATCHED.PORT_0 + FP_ARITH_DISPATCHED.PORT_1 + FP_ARITH_DISPATCHED.PORT_5) / (2 * tma_info_core_core_clks)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "tma_info_core_fp_arith_utilization", @@ -1216,7 +1215,6 @@ }, { "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@", "MetricGroup": "Pipeline;Ret", "MetricName": "tma_info_pipeline_retire" @@ -1467,6 +1465,7 @@ }, { "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L2_MISS - MEMORY_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks", "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group", "MetricName": "tma_l3_bound", @@ -1841,6 +1840,7 @@ }, { "BriefDescription": "This metric represents rate of split store accesses", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group", "MetricName": "tma_split_stores", @@ -1868,6 +1868,7 @@ }, { "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_store_fwd_blk", diff --git a/tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json b/tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json index 94cb38540b5a..2795a404bb58 100644 --- a/tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json +++ b/tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json @@ -923,7 +923,7 @@ }, { "BriefDescription": "Average number of parallel data read requests to external memory", - "MetricExpr": "UNC_ARB_TRK_OCCUPANCY.DATA_READ / UNC_ARB_TRK_OCCUPANCY.DATA_READ@thresh\\=1@", + "MetricExpr": "UNC_ARB_TRK_OCCUPANCY.DATA_READ / UNC_ARB_TRK_OCCUPANCY.DATA_READ@cmask\\=1@", "MetricGroup": "Mem;MemoryBW;SoC", "MetricName": "tma_info_system_mem_parallel_reads", "PublicDescription": "Average number of parallel data read requests to external memory. 
Accounts for demand loads and L1/L2 prefetches" diff --git a/tools/perf/pmu-events/arch/x86/tigerlake/tgl-metrics.json b/tools/perf/pmu-events/arch/x86/tigerlake/tgl-metrics.json index c7c2d6ab1a93..fab084e1bc69 100644 --- a/tools/perf/pmu-events/arch/x86/tigerlake/tgl-metrics.json +++ b/tools/perf/pmu-events/arch/x86/tigerlake/tgl-metrics.json @@ -79,6 +79,7 @@ }, { "BriefDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / tma_info_thread_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_4k_aliasing", @@ -313,7 +314,6 @@ }, { "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", "MetricGroup": "HPC;TopdownL3;tma_L3_group;tma_light_operations_group", "MetricName": "tma_fp_arith", @@ -458,6 +458,7 @@ }, { "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))", "MetricGroup": "Fed;FetchLat;IcMiss;tma_issueFL", "MetricName": "tma_info_botlnk_l2_ic_misses", @@ -491,6 +492,7 @@ }, { "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks", + "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk))", "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW", "MetricName": "tma_info_bottleneck_memory_bandwidth", @@ -568,14 +570,12 @@ }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks", "MetricGroup": "Flops;Ret", "MetricName": "tma_info_core_flopc" }, { "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@) / (2 * tma_info_core_core_clks)", "MetricGroup": "Cor;Flops;HPC", "MetricName": "tma_info_core_fp_arith_utilization", @@ -927,7 +927,6 @@ }, { "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.", - "MetricConstraint": "NO_GROUP_EVENTS", "MetricExpr": 
"tma_retiring * tma_info_thread_slots / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@", "MetricGroup": "Pipeline;Ret", "MetricName": "tma_info_pipeline_retire" @@ -1114,6 +1113,7 @@ }, { "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks", "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group", "MetricName": "tma_l3_bound", @@ -1433,6 +1433,7 @@ }, { "BriefDescription": "This metric represents rate of split store accesses", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group", "MetricName": "tma_split_stores", @@ -1460,6 +1461,7 @@ }, { "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores", + "MetricConstraint": "NO_GROUP_EVENTS_NMI", "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks", "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group", "MetricName": "tma_store_fwd_blk", diff --git a/tools/perf/pmu-events/empty-pmu-events.c b/tools/perf/pmu-events/empty-pmu-events.c index a630c617e879..12bd043a05e3 100644 --- a/tools/perf/pmu-events/empty-pmu-events.c +++ b/tools/perf/pmu-events/empty-pmu-events.c @@ -266,19 +266,53 @@ static const struct pmu_sys_events pmu_sys_event_tables[] = { }, }; -int pmu_events_table_for_each_event(const struct pmu_events_table *table, pmu_event_iter_fn fn, - void *data) +int pmu_events_table__for_each_event(const struct pmu_events_table *table, struct perf_pmu *pmu, + pmu_event_iter_fn fn, void *data) { for (const struct pmu_event *pe = &table->entries[0]; pe->name; pe++) { - int ret = fn(pe, table, data); + int ret; + if (pmu && !pmu__name_match(pmu, pe->pmu)) + continue; + + ret = fn(pe, table, data); if (ret) return ret; } return 0; } -int pmu_metrics_table_for_each_metric(const struct pmu_metrics_table *table, pmu_metric_iter_fn fn, +int pmu_events_table__find_event(const struct pmu_events_table *table, + struct perf_pmu *pmu, + const char *name, + pmu_event_iter_fn fn, + void *data) +{ + for (const struct pmu_event *pe = &table->entries[0]; pe->name; pe++) { + if (pmu && !pmu__name_match(pmu, pe->pmu)) + continue; + + if (!strcasecmp(pe->name, name)) + return fn(pe, table, data); + } + return -1000; +} + +size_t pmu_events_table__num_events(const struct pmu_events_table *table, + struct perf_pmu *pmu) +{ + size_t count = 0; + + for (const struct pmu_event *pe = &table->entries[0]; pe->name; pe++) { + if (pmu && !pmu__name_match(pmu, pe->pmu)) + continue; + + count++; + } + return count; +} + +int pmu_metrics_table__for_each_metric(const struct pmu_metrics_table *table, pmu_metric_iter_fn fn, void *data) { for (const struct pmu_metric *pm = &table->entries[0]; pm->metric_expr; pm++) { @@ -371,7 +405,8 @@ const struct pmu_metrics_table *find_core_metrics_table(const char *arch, const int pmu_for_each_core_event(pmu_event_iter_fn fn, void *data) { for (const struct pmu_events_map *tables = &pmu_events_map[0]; tables->arch; tables++) { - int ret = pmu_events_table_for_each_event(&tables->event_table, fn, data); + int ret = pmu_events_table__for_each_event(&tables->event_table, + /*pmu=*/ NULL, 
fn, data); if (ret) return ret; @@ -384,7 +419,7 @@ int pmu_for_each_core_metric(pmu_metric_iter_fn fn, void *data) for (const struct pmu_events_map *tables = &pmu_events_map[0]; tables->arch; tables++) { - int ret = pmu_metrics_table_for_each_metric(&tables->metric_table, fn, data); + int ret = pmu_metrics_table__for_each_metric(&tables->metric_table, fn, data); if (ret) return ret; @@ -408,7 +443,7 @@ int pmu_for_each_sys_event(pmu_event_iter_fn fn, void *data) for (const struct pmu_sys_events *tables = &pmu_sys_event_tables[0]; tables->name; tables++) { - int ret = pmu_events_table_for_each_event(&tables->table, fn, data); + int ret = pmu_events_table__for_each_event(&tables->table, /*pmu=*/ NULL, fn, data); if (ret) return ret; diff --git a/tools/perf/pmu-events/jevents.py b/tools/perf/pmu-events/jevents.py index 12e80bb7939b..a7e88332276d 100755 --- a/tools/perf/pmu-events/jevents.py +++ b/tools/perf/pmu-events/jevents.py @@ -42,7 +42,7 @@ _metricgroups = {} # Order specific JsonEvent attributes will be visited. _json_event_attributes = [ # cmp_sevent related attributes. - 'name', 'pmu', 'topic', 'desc', + 'name', 'topic', 'desc', # Seems useful, put it early. 'event', # Short things in alphabetical order. @@ -53,7 +53,7 @@ _json_event_attributes = [ # Attributes that are in pmu_metric rather than pmu_event. _json_metric_attributes = [ - 'pmu', 'metric_name', 'metric_group', 'metric_expr', 'metric_threshold', + 'metric_name', 'metric_group', 'metric_expr', 'metric_threshold', 'desc', 'long_desc', 'unit', 'compat', 'metricgroup_no_group', 'default_metricgroup_name', 'aggr_mode', 'event_grouping' ] @@ -113,13 +113,24 @@ class BigCString: strings: Set[str] big_string: Sequence[str] offsets: Dict[str, int] + insert_number: int + insert_point: Dict[str, int] + metrics: Set[str] def __init__(self): self.strings = set() + self.insert_number = 0; + self.insert_point = {} + self.metrics = set() - def add(self, s: str) -> None: + def add(self, s: str, metric: bool) -> None: """Called to add to the big string.""" - self.strings.add(s) + if s not in self.strings: + self.strings.add(s) + self.insert_point[s] = self.insert_number + self.insert_number += 1 + if metric: + self.metrics.add(s) def compute(self) -> None: """Called once all strings are added to compute the string and offsets.""" @@ -160,8 +171,11 @@ class BigCString: self.big_string = [] self.offsets = {} + def string_cmp_key(s: str) -> Tuple[bool, int, str]: + return (s in self.metrics, self.insert_point[s], s) + # Emit all strings that aren't folded in a sorted manner. - for s in sorted(self.strings): + for s in sorted(self.strings, key=string_cmp_key): if s not in folded_strings: self.offsets[s] = big_string_offset self.big_string.append(f'/* offset={big_string_offset} */ "') @@ -252,7 +266,7 @@ class JsonEvent: def unit_to_pmu(unit: str) -> Optional[str]: """Convert a JSON Unit to Linux PMU name.""" if not unit: - return None + return 'default_core' # Comment brought over from jevents.c: # it's not realistic to keep adding these, we need something more scalable ... table = { @@ -274,6 +288,7 @@ class JsonEvent: 'DFPMC': 'amd_df', 'cpu_core': 'cpu_core', 'cpu_atom': 'cpu_atom', + 'ali_drw': 'ali_drw', } return table[unit] if unit in table else f'uncore_{unit.lower()}' @@ -342,16 +357,15 @@ class JsonEvent: self.desc += extra_desc if self.long_desc and extra_desc: self.long_desc += extra_desc - if self.pmu: - if self.desc and not self.desc.endswith('. '): - self.desc += '. 
' - self.desc = (self.desc if self.desc else '') + ('Unit: ' + self.pmu + ' ') - if arch_std and arch_std.lower() in _arch_std_events: - event = _arch_std_events[arch_std.lower()].event - # Copy from the architecture standard event to self for undefined fields. - for attr, value in _arch_std_events[arch_std.lower()].__dict__.items(): - if hasattr(self, attr) and not getattr(self, attr): - setattr(self, attr, value) + if arch_std: + if arch_std.lower() in _arch_std_events: + event = _arch_std_events[arch_std.lower()].event + # Copy from the architecture standard event to self for undefined fields. + for attr, value in _arch_std_events[arch_std.lower()].__dict__.items(): + if hasattr(self, attr) and not getattr(self, attr): + setattr(self, attr, value) + else: + raise argparse.ArgumentTypeError('Cannot find arch std event:', arch_std) self.event = real_event(self.name, event) @@ -433,13 +447,13 @@ def add_events_table_entries(item: os.DirEntry, topic: str) -> None: def print_pending_events() -> None: """Optionally close events table.""" - def event_cmp_key(j: JsonEvent) -> Tuple[bool, str, str, str, str]: + def event_cmp_key(j: JsonEvent) -> Tuple[str, str, bool, str, str]: def fix_none(s: Optional[str]) -> str: if s is None: return '' return s - return (j.desc is not None, fix_none(j.topic), fix_none(j.name), fix_none(j.pmu), + return (fix_none(j.pmu).replace(',','_'), fix_none(j.name), j.desc is not None, fix_none(j.topic), fix_none(j.metric_name)) global _pending_events @@ -454,13 +468,36 @@ def print_pending_events() -> None: global event_tables _event_tables.append(_pending_events_tblname) - _args.output_file.write( - f'static const struct compact_pmu_event {_pending_events_tblname}[] = {{\n') - + first = True + last_pmu = None + pmus = set() for event in sorted(_pending_events, key=event_cmp_key): + if event.pmu != last_pmu: + if not first: + _args.output_file.write('};\n') + pmu_name = event.pmu.replace(',', '_') + _args.output_file.write( + f'static const struct compact_pmu_event {_pending_events_tblname}_{pmu_name}[] = {{\n') + first = False + last_pmu = event.pmu + pmus.add((event.pmu, pmu_name)) + _args.output_file.write(event.to_c_string(metric=False)) _pending_events = [] + _args.output_file.write(f""" +}}; + +const struct pmu_table_entry {_pending_events_tblname}[] = {{ +""") + for (pmu, tbl_pmu) in sorted(pmus): + pmu_name = f"{pmu}\\000" + _args.output_file.write(f"""{{ + .entries = {_pending_events_tblname}_{tbl_pmu}, + .num_entries = ARRAY_SIZE({_pending_events_tblname}_{tbl_pmu}), + .pmu_name = {{ {_bcs.offsets[pmu_name]} /* {pmu_name} */ }}, +}}, +""") _args.output_file.write('};\n\n') def print_pending_metrics() -> None: @@ -486,13 +523,36 @@ def print_pending_metrics() -> None: global metric_tables _metric_tables.append(_pending_metrics_tblname) - _args.output_file.write( - f'static const struct compact_pmu_event {_pending_metrics_tblname}[] = {{\n') - + first = True + last_pmu = None + pmus = set() for metric in sorted(_pending_metrics, key=metric_cmp_key): + if metric.pmu != last_pmu: + if not first: + _args.output_file.write('};\n') + pmu_name = metric.pmu.replace(',', '_') + _args.output_file.write( + f'static const struct compact_pmu_event {_pending_metrics_tblname}_{pmu_name}[] = {{\n') + first = False + last_pmu = metric.pmu + pmus.add((metric.pmu, pmu_name)) + _args.output_file.write(metric.to_c_string(metric=True)) _pending_metrics = [] + _args.output_file.write(f""" +}}; + +const struct pmu_table_entry {_pending_metrics_tblname}[] = {{ +""") + for (pmu, 
tbl_pmu) in sorted(pmus): + pmu_name = f"{pmu}\\000" + _args.output_file.write(f"""{{ + .entries = {_pending_metrics_tblname}_{tbl_pmu}, + .num_entries = ARRAY_SIZE({_pending_metrics_tblname}_{tbl_pmu}), + .pmu_name = {{ {_bcs.offsets[pmu_name]} /* {pmu_name} */ }}, +}}, +""") _args.output_file.write('};\n\n') def get_topic(topic: str) -> str: @@ -521,17 +581,20 @@ def preprocess_one_file(parents: Sequence[str], item: os.DirEntry) -> None: assert len(mgroup) > 1, parents description = f"{metricgroup_descriptions[mgroup]}\\000" mgroup = f"{mgroup}\\000" - _bcs.add(mgroup) - _bcs.add(description) + _bcs.add(mgroup, metric=True) + _bcs.add(description, metric=True) _metricgroups[mgroup] = description return topic = get_topic(item.name) for event in read_json_events(item.path, topic): + pmu_name = f"{event.pmu}\\000" if event.name: - _bcs.add(event.build_c_string(metric=False)) + _bcs.add(pmu_name, metric=False) + _bcs.add(event.build_c_string(metric=False), metric=False) if event.metric_name: - _bcs.add(event.build_c_string(metric=True)) + _bcs.add(pmu_name, metric=True) + _bcs.add(event.build_c_string(metric=True), metric=True) def process_one_file(parents: Sequence[str], item: os.DirEntry) -> None: """Process a JSON file during the main walk.""" @@ -573,14 +636,14 @@ def print_mapping_table(archs: Sequence[str]) -> None: _args.output_file.write(""" /* Struct used to make the PMU event table implementation opaque to callers. */ struct pmu_events_table { - const struct compact_pmu_event *entries; - size_t length; + const struct pmu_table_entry *pmus; + uint32_t num_pmus; }; /* Struct used to make the PMU metric table implementation opaque to callers. */ struct pmu_metrics_table { - const struct compact_pmu_event *entries; - size_t length; + const struct pmu_table_entry *pmus; + uint32_t num_pmus; }; /* @@ -610,12 +673,12 @@ const struct pmu_events_map pmu_events_map[] = { \t.arch = "testarch", \t.cpuid = "testcpu", \t.event_table = { -\t\t.entries = pmu_events__test_soc_cpu, -\t\t.length = ARRAY_SIZE(pmu_events__test_soc_cpu), +\t\t.pmus = pmu_events__test_soc_cpu, +\t\t.num_pmus = ARRAY_SIZE(pmu_events__test_soc_cpu), \t}, \t.metric_table = { -\t\t.entries = pmu_metrics__test_soc_cpu, -\t\t.length = ARRAY_SIZE(pmu_metrics__test_soc_cpu), +\t\t.pmus = pmu_metrics__test_soc_cpu, +\t\t.num_pmus = ARRAY_SIZE(pmu_metrics__test_soc_cpu), \t} }, """) @@ -645,12 +708,12 @@ const struct pmu_events_map pmu_events_map[] = { \t.arch = "{arch}", \t.cpuid = "{cpuid}", \t.event_table = {{ -\t\t.entries = {event_tblname}, -\t\t.length = {event_size} +\t\t.pmus = {event_tblname}, +\t\t.num_pmus = {event_size} \t}}, \t.metric_table = {{ -\t\t.entries = {metric_tblname}, -\t\t.length = {metric_size} +\t\t.pmus = {metric_tblname}, +\t\t.num_pmus = {metric_size} \t}} }}, """) @@ -681,15 +744,15 @@ static const struct pmu_sys_events pmu_sys_event_tables[] = { for tblname in _sys_event_tables: _args.output_file.write(f"""\t{{ \t\t.event_table = {{ -\t\t\t.entries = {tblname}, -\t\t\t.length = ARRAY_SIZE({tblname}) +\t\t\t.pmus = {tblname}, +\t\t\t.num_pmus = ARRAY_SIZE({tblname}) \t\t}},""") metric_tblname = _sys_event_table_to_metric_table_mapping[tblname] if metric_tblname in _sys_metric_tables: _args.output_file.write(f""" \t\t.metric_table = {{ -\t\t\t.entries = {metric_tblname}, -\t\t\t.length = ARRAY_SIZE({metric_tblname}) +\t\t\t.pmus = {metric_tblname}, +\t\t\t.num_pmus = ARRAY_SIZE({metric_tblname}) \t\t}},""") printed_metric_tables.append(metric_tblname) _args.output_file.write(f""" @@ -749,15 +812,18 @@ 
static void decompress_metric(int offset, struct pmu_metric *pm) _args.output_file.write('\twhile (*p++);') _args.output_file.write("""} -int pmu_events_table_for_each_event(const struct pmu_events_table *table, - pmu_event_iter_fn fn, - void *data) +static int pmu_events_table__for_each_event_pmu(const struct pmu_events_table *table, + const struct pmu_table_entry *pmu, + pmu_event_iter_fn fn, + void *data) { - for (size_t i = 0; i < table->length; i++) { - struct pmu_event pe; - int ret; + int ret; + struct pmu_event pe = { + .pmu = &big_c_string[pmu->pmu_name.offset], + }; - decompress_event(table->entries[i].offset, &pe); + for (uint32_t i = 0; i < pmu->num_entries; i++) { + decompress_event(pmu->entries[i].offset, &pe); if (!pe.name) continue; ret = fn(&pe, table, data); @@ -765,17 +831,119 @@ int pmu_events_table_for_each_event(const struct pmu_events_table *table, return ret; } return 0; + } + +static int pmu_events_table__find_event_pmu(const struct pmu_events_table *table, + const struct pmu_table_entry *pmu, + const char *name, + pmu_event_iter_fn fn, + void *data) +{ + struct pmu_event pe = { + .pmu = &big_c_string[pmu->pmu_name.offset], + }; + int low = 0, high = pmu->num_entries - 1; + + while (low <= high) { + int cmp, mid = (low + high) / 2; + + decompress_event(pmu->entries[mid].offset, &pe); + + if (!pe.name && !name) + goto do_call; + + if (!pe.name && name) { + low = mid + 1; + continue; + } + if (pe.name && !name) { + high = mid - 1; + continue; + } + + cmp = strcasecmp(pe.name, name); + if (cmp < 0) { + low = mid + 1; + continue; + } + if (cmp > 0) { + high = mid - 1; + continue; + } + do_call: + return fn ? fn(&pe, table, data) : 0; + } + return -1000; } -int pmu_metrics_table_for_each_metric(const struct pmu_metrics_table *table, - pmu_metric_iter_fn fn, - void *data) +int pmu_events_table__for_each_event(const struct pmu_events_table *table, + struct perf_pmu *pmu, + pmu_event_iter_fn fn, + void *data) +{ + for (size_t i = 0; i < table->num_pmus; i++) { + const struct pmu_table_entry *table_pmu = &table->pmus[i]; + const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset]; + int ret; + + if (pmu && !pmu__name_match(pmu, pmu_name)) + continue; + + ret = pmu_events_table__for_each_event_pmu(table, table_pmu, fn, data); + if (pmu || ret) + return ret; + } + return 0; +} + +int pmu_events_table__find_event(const struct pmu_events_table *table, + struct perf_pmu *pmu, + const char *name, + pmu_event_iter_fn fn, + void *data) { - for (size_t i = 0; i < table->length; i++) { - struct pmu_metric pm; + for (size_t i = 0; i < table->num_pmus; i++) { + const struct pmu_table_entry *table_pmu = &table->pmus[i]; + const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset]; int ret; - decompress_metric(table->entries[i].offset, &pm); + if (!pmu__name_match(pmu, pmu_name)) + continue; + + ret = pmu_events_table__find_event_pmu(table, table_pmu, name, fn, data); + if (ret != -1000) + return ret; + } + return -1000; +} + +size_t pmu_events_table__num_events(const struct pmu_events_table *table, + struct perf_pmu *pmu) +{ + size_t count = 0; + + for (size_t i = 0; i < table->num_pmus; i++) { + const struct pmu_table_entry *table_pmu = &table->pmus[i]; + const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset]; + + if (pmu__name_match(pmu, pmu_name)) + count += table_pmu->num_entries; + } + return count; +} + +static int pmu_metrics_table__for_each_metric_pmu(const struct pmu_metrics_table *table, + const struct pmu_table_entry *pmu, + pmu_metric_iter_fn fn, + 
void *data) +{ + int ret; + struct pmu_metric pm = { + .pmu = &big_c_string[pmu->pmu_name.offset], + }; + + for (uint32_t i = 0; i < pmu->num_entries; i++) { + decompress_metric(pmu->entries[i].offset, &pm); if (!pm.metric_expr) continue; ret = fn(&pm, table, data); @@ -785,11 +953,25 @@ int pmu_metrics_table_for_each_metric(const struct pmu_metrics_table *table, return 0; } +int pmu_metrics_table__for_each_metric(const struct pmu_metrics_table *table, + pmu_metric_iter_fn fn, + void *data) +{ + for (size_t i = 0; i < table->num_pmus; i++) { + int ret = pmu_metrics_table__for_each_metric_pmu(table, &table->pmus[i], + fn, data); + + if (ret) + return ret; + } + return 0; +} + const struct pmu_events_table *perf_pmu__find_events_table(struct perf_pmu *pmu) { const struct pmu_events_table *table = NULL; char *cpuid = perf_pmu__getcpuid(pmu); - int i; + size_t i; /* on some platforms which uses cpus map, cpuid can be NULL for * PMUs other than CORE PMUs. @@ -809,7 +991,17 @@ const struct pmu_events_table *perf_pmu__find_events_table(struct perf_pmu *pmu) } } free(cpuid); - return table; + if (!pmu) + return table; + + for (i = 0; i < table->num_pmus; i++) { + const struct pmu_table_entry *table_pmu = &table->pmus[i]; + const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset]; + + if (pmu__name_match(pmu, pmu_name)) + return table; + } + return NULL; } const struct pmu_metrics_table *perf_pmu__find_metrics_table(struct perf_pmu *pmu) @@ -866,7 +1058,8 @@ int pmu_for_each_core_event(pmu_event_iter_fn fn, void *data) for (const struct pmu_events_map *tables = &pmu_events_map[0]; tables->arch; tables++) { - int ret = pmu_events_table_for_each_event(&tables->event_table, fn, data); + int ret = pmu_events_table__for_each_event(&tables->event_table, + /*pmu=*/ NULL, fn, data); if (ret) return ret; @@ -879,7 +1072,7 @@ int pmu_for_each_core_metric(pmu_metric_iter_fn fn, void *data) for (const struct pmu_events_map *tables = &pmu_events_map[0]; tables->arch; tables++) { - int ret = pmu_metrics_table_for_each_metric(&tables->metric_table, fn, data); + int ret = pmu_metrics_table__for_each_metric(&tables->metric_table, fn, data); if (ret) return ret; @@ -903,7 +1096,8 @@ int pmu_for_each_sys_event(pmu_event_iter_fn fn, void *data) for (const struct pmu_sys_events *tables = &pmu_sys_event_tables[0]; tables->name; tables++) { - int ret = pmu_events_table_for_each_event(&tables->event_table, fn, data); + int ret = pmu_events_table__for_each_event(&tables->event_table, + /*pmu=*/ NULL, fn, data); if (ret) return ret; @@ -916,7 +1110,7 @@ int pmu_for_each_sys_metric(pmu_metric_iter_fn fn, void *data) for (const struct pmu_sys_events *tables = &pmu_sys_event_tables[0]; tables->name; tables++) { - int ret = pmu_metrics_table_for_each_metric(&tables->metric_table, fn, data); + int ret = pmu_metrics_table__for_each_metric(&tables->metric_table, fn, data); if (ret) return ret; @@ -999,14 +1193,20 @@ such as "arm/cortex-a34".''', _args = ap.parse_args() _args.output_file.write(""" -#include "pmu-events/pmu-events.h" +#include <pmu-events/pmu-events.h> #include "util/header.h" #include "util/pmu.h" #include <string.h> #include <stddef.h> struct compact_pmu_event { - int offset; + int offset; +}; + +struct pmu_table_entry { + const struct compact_pmu_event *entries; + uint32_t num_entries; + struct compact_pmu_event pmu_name; }; """) diff --git a/tools/perf/pmu-events/metric.py b/tools/perf/pmu-events/metric.py index 85a3545f5b6a..0e9ec65d92ae 100644 --- a/tools/perf/pmu-events/metric.py +++ 
b/tools/perf/pmu-events/metric.py @@ -413,6 +413,10 @@ def has_event(event: Event) -> Function: # pylint: disable=invalid-name return Function('has_event', event) +def strcmp_cpuid_str(event: str) -> Function: + # pylint: disable=redefined-builtin + # pylint: disable=invalid-name + return Function('strcmp_cpuid_str', event) class Metric: """An individual metric that will specifiable on the perf command line.""" @@ -541,14 +545,23 @@ def ParsePerfJson(orig: str) -> Expression: """ # pylint: disable=eval-used py = orig.strip() + # First try to convert everything that looks like a string (event name) into Event(r"EVENT_NAME"). + # This isn't very selective so is followed up by converting some unwanted conversions back again py = re.sub(r'([a-zA-Z][^-+/\* \\\(\),]*(?:\\.[^-+/\* \\\(\),]*)*)', r'Event(r"\1")', py) + # If it started with a # it should have been a literal, rather than an event name py = re.sub(r'#Event\(r"([^"]*)"\)', r'Literal("#\1")', py) + # Convert accidentally converted hex constants ("0Event(r"xDEADBEEF)"") back to a constant, + # but keep it wrapped in Event(), otherwise Python drops the 0x prefix and it gets interpreted as + # a double by the Bison parser + py = re.sub(r'0Event\(r"[xX]([0-9a-fA-F]*)"\)', r'Event("0x\1")', py) + # Convert accidentally converted scientific notation constants back py = re.sub(r'([0-9]+)Event\(r"(e[0-9]+)"\)', r'\1\2', py) - keywords = ['if', 'else', 'min', 'max', 'd_ratio', 'source_count', 'has_event'] + # Convert all the known keywords back from events to just the keyword + keywords = ['if', 'else', 'min', 'max', 'd_ratio', 'source_count', 'has_event', 'strcmp_cpuid_str', + 'cpuid_not_more_than'] for kw in keywords: py = re.sub(rf'Event\(r"{kw}"\)', kw, py) - try: parsed = ast.parse(py, mode='eval') except SyntaxError as e: diff --git a/tools/perf/pmu-events/pmu-events.h b/tools/perf/pmu-events/pmu-events.h index caf59f23cd64..f5aa96f1685c 100644 --- a/tools/perf/pmu-events/pmu-events.h +++ b/tools/perf/pmu-events/pmu-events.h @@ -3,6 +3,7 @@ #define PMU_EVENTS_H #include <stdbool.h> +#include <stddef.h> struct perf_pmu; @@ -77,9 +78,19 @@ typedef int (*pmu_metric_iter_fn)(const struct pmu_metric *pm, const struct pmu_metrics_table *table, void *data); -int pmu_events_table_for_each_event(const struct pmu_events_table *table, pmu_event_iter_fn fn, +int pmu_events_table__for_each_event(const struct pmu_events_table *table, + struct perf_pmu *pmu, + pmu_event_iter_fn fn, void *data); -int pmu_metrics_table_for_each_metric(const struct pmu_metrics_table *table, pmu_metric_iter_fn fn, +int pmu_events_table__find_event(const struct pmu_events_table *table, + struct perf_pmu *pmu, + const char *name, + pmu_event_iter_fn fn, + void *data); +size_t pmu_events_table__num_events(const struct pmu_events_table *table, + struct perf_pmu *pmu); + +int pmu_metrics_table__for_each_metric(const struct pmu_metrics_table *table, pmu_metric_iter_fn fn, void *data); const struct pmu_events_table *perf_pmu__find_events_table(struct perf_pmu *pmu); diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Build b/tools/perf/scripts/python/Perf-Trace-Util/Build index 7d0e33ce6aba..5b0b5ff7e14a 100644 --- a/tools/perf/scripts/python/Perf-Trace-Util/Build +++ b/tools/perf/scripts/python/Perf-Trace-Util/Build @@ -1,3 +1,4 @@ perf-y += Context.o -CFLAGS_Context.o += $(PYTHON_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-nested-externs +# -Wno-declaration-after-statement: The python headers have mixed code with declarations (decls 
after asserts, for instance) +CFLAGS_Context.o += $(PYTHON_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-nested-externs -Wno-declaration-after-statement diff --git a/tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py b/tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py index 7384dcb628c4..b75d31858e54 100644 --- a/tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py +++ b/tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py @@ -54,6 +54,7 @@ try: import audit machine_to_id = { 'x86_64': audit.MACH_86_64, + 'aarch64': audit.MACH_AARCH64, 'alpha' : audit.MACH_ALPHA, 'ia64' : audit.MACH_IA64, 'ppc' : audit.MACH_PPC, @@ -73,9 +74,9 @@ try: except: if not audit_package_warned: audit_package_warned = True - print("Install the audit-libs-python package to get syscall names.\n" - "For example:\n # apt-get install python-audit (Ubuntu)" - "\n # yum install audit-libs-python (Fedora)" + print("Install the python-audit package to get syscall names.\n" + "For example:\n # apt-get install python3-audit (Ubuntu)" + "\n # yum install python3-audit (Fedora)" "\n etc.\n") def syscall_name(id): diff --git a/tools/perf/scripts/python/bin/gecko-record b/tools/perf/scripts/python/bin/gecko-record new file mode 100644 index 000000000000..f0d1aa55f171 --- /dev/null +++ b/tools/perf/scripts/python/bin/gecko-record @@ -0,0 +1,2 @@ +#!/bin/bash +perf record -F 99 -g "$@" diff --git a/tools/perf/scripts/python/bin/gecko-report b/tools/perf/scripts/python/bin/gecko-report new file mode 100755 index 000000000000..1867ec8d9757 --- /dev/null +++ b/tools/perf/scripts/python/bin/gecko-report @@ -0,0 +1,7 @@ +#!/bin/bash +# description: create firefox gecko profile json format from perf.data +if [ "$*" = "-i -" ]; then +perf script -s "$PERF_EXEC_PATH"/scripts/python/gecko.py +else +perf script -s "$PERF_EXEC_PATH"/scripts/python/gecko.py -- "$@" +fi diff --git a/tools/perf/scripts/python/gecko.py b/tools/perf/scripts/python/gecko.py new file mode 100644 index 000000000000..bc5a72f94bfa --- /dev/null +++ b/tools/perf/scripts/python/gecko.py @@ -0,0 +1,395 @@ +# gecko.py - Convert perf record output to Firefox's gecko profile format +# SPDX-License-Identifier: GPL-2.0 +# +# The script converts perf.data to Gecko Profile Format, +# which can be read by https://profiler.firefox.com/. +# +# Usage: +# +# perf record -a -g -F 99 sleep 60 +# perf script report gecko +# +# Combined: +# +# perf script gecko -F 99 -a sleep 60 + +import os +import sys +import time +import json +import string +import random +import argparse +import threading +import webbrowser +import urllib.parse +from os import system +from functools import reduce +from dataclasses import dataclass, field +from http.server import HTTPServer, SimpleHTTPRequestHandler, test +from typing import List, Dict, Optional, NamedTuple, Set, Tuple, Any + +# Add the Perf-Trace-Util library to the Python path +sys.path.append(os.environ['PERF_EXEC_PATH'] + \ + '/scripts/python/Perf-Trace-Util/lib/Perf/Trace') + +from perf_trace_context import * +from Core import * + +StringID = int +StackID = int +FrameID = int +CategoryID = int +Milliseconds = float + +# start_time is intialiazed only once for the all event traces. +start_time = None + +# https://github.com/firefox-devtools/profiler/blob/53970305b51b9b472e26d7457fee1d66cd4e2737/src/types/profile.js#L425 +# Follow Brendan Gregg's Flamegraph convention: orange for kernel and yellow for user space by default. 
+CATEGORIES = None + +# The product name is used by the profiler UI to show the Operating system and Processor. +PRODUCT = os.popen('uname -op').read().strip() + +# store the output file +output_file = None + +# Here key = tid, value = Thread +tid_to_thread = dict() + +# The HTTP server is used to serve the profile to the profiler UI. +http_server_thread = None + +# The category index is used by the profiler UI to show the color of the flame graph. +USER_CATEGORY_INDEX = 0 +KERNEL_CATEGORY_INDEX = 1 + +# https://github.com/firefox-devtools/profiler/blob/53970305b51b9b472e26d7457fee1d66cd4e2737/src/types/gecko-profile.js#L156 +class Frame(NamedTuple): + string_id: StringID + relevantForJS: bool + innerWindowID: int + implementation: None + optimizations: None + line: None + column: None + category: CategoryID + subcategory: int + +# https://github.com/firefox-devtools/profiler/blob/53970305b51b9b472e26d7457fee1d66cd4e2737/src/types/gecko-profile.js#L216 +class Stack(NamedTuple): + prefix_id: Optional[StackID] + frame_id: FrameID + +# https://github.com/firefox-devtools/profiler/blob/53970305b51b9b472e26d7457fee1d66cd4e2737/src/types/gecko-profile.js#L90 +class Sample(NamedTuple): + stack_id: Optional[StackID] + time_ms: Milliseconds + responsiveness: int + +@dataclass +class Thread: + """A builder for a profile of the thread. + + Attributes: + comm: Thread command-line (name). + pid: process ID of containing process. + tid: thread ID. + samples: Timeline of profile samples. + frameTable: interned stack frame ID -> stack frame. + stringTable: interned string ID -> string. + stringMap: interned string -> string ID. + stackTable: interned stack ID -> stack. + stackMap: (stack prefix ID, leaf stack frame ID) -> interned Stack ID. + frameMap: Stack Frame string -> interned Frame ID. + comm: str + pid: int + tid: int + samples: List[Sample] = field(default_factory=list) + frameTable: List[Frame] = field(default_factory=list) + stringTable: List[str] = field(default_factory=list) + stringMap: Dict[str, int] = field(default_factory=dict) + stackTable: List[Stack] = field(default_factory=list) + stackMap: Dict[Tuple[Optional[int], int], int] = field(default_factory=dict) + frameMap: Dict[str, int] = field(default_factory=dict) + """ + comm: str + pid: int + tid: int + samples: List[Sample] = field(default_factory=list) + frameTable: List[Frame] = field(default_factory=list) + stringTable: List[str] = field(default_factory=list) + stringMap: Dict[str, int] = field(default_factory=dict) + stackTable: List[Stack] = field(default_factory=list) + stackMap: Dict[Tuple[Optional[int], int], int] = field(default_factory=dict) + frameMap: Dict[str, int] = field(default_factory=dict) + + def _intern_stack(self, frame_id: int, prefix_id: Optional[int]) -> int: + """Gets a matching stack, or saves the new stack. Returns a Stack ID.""" + key = f"{frame_id}" if prefix_id is None else f"{frame_id},{prefix_id}" + # key = (prefix_id, frame_id) + stack_id = self.stackMap.get(key) + if stack_id is None: + # return stack_id + stack_id = len(self.stackTable) + self.stackTable.append(Stack(prefix_id=prefix_id, frame_id=frame_id)) + self.stackMap[key] = stack_id + return stack_id + + def _intern_string(self, string: str) -> int: + """Gets a matching string, or saves the new string. 
Returns a String ID.""" + string_id = self.stringMap.get(string) + if string_id is not None: + return string_id + string_id = len(self.stringTable) + self.stringTable.append(string) + self.stringMap[string] = string_id + return string_id + + def _intern_frame(self, frame_str: str) -> int: + """Gets a matching stack frame, or saves the new frame. Returns a Frame ID.""" + frame_id = self.frameMap.get(frame_str) + if frame_id is not None: + return frame_id + frame_id = len(self.frameTable) + self.frameMap[frame_str] = frame_id + string_id = self._intern_string(frame_str) + + symbol_name_to_category = KERNEL_CATEGORY_INDEX if frame_str.find('kallsyms') != -1 \ + or frame_str.find('/vmlinux') != -1 \ + or frame_str.endswith('.ko)') \ + else USER_CATEGORY_INDEX + + self.frameTable.append(Frame( + string_id=string_id, + relevantForJS=False, + innerWindowID=0, + implementation=None, + optimizations=None, + line=None, + column=None, + category=symbol_name_to_category, + subcategory=None, + )) + return frame_id + + def _add_sample(self, comm: str, stack: List[str], time_ms: Milliseconds) -> None: + """Add a timestamped stack trace sample to the thread builder. + Args: + comm: command-line (name) of the thread at this sample + stack: sampled stack frames. Root first, leaf last. + time_ms: timestamp of sample in milliseconds. + """ + # Ihreads may not set their names right after they are created. + # Instead, they might do it later. In such situations, to use the latest name they have set. + if self.comm != comm: + self.comm = comm + + prefix_stack_id = reduce(lambda prefix_id, frame: self._intern_stack + (self._intern_frame(frame), prefix_id), stack, None) + if prefix_stack_id is not None: + self.samples.append(Sample(stack_id=prefix_stack_id, + time_ms=time_ms, + responsiveness=0)) + + def _to_json_dict(self) -> Dict: + """Converts current Thread to GeckoThread JSON format.""" + # Gecko profile format is row-oriented data as List[List], + # And a schema for interpreting each index. 
+ # Schema: + # https://github.com/firefox-devtools/profiler/blob/main/docs-developer/gecko-profile-format.md + # https://github.com/firefox-devtools/profiler/blob/53970305b51b9b472e26d7457fee1d66cd4e2737/src/types/gecko-profile.js#L230 + return { + "tid": self.tid, + "pid": self.pid, + "name": self.comm, + # https://github.com/firefox-devtools/profiler/blob/53970305b51b9b472e26d7457fee1d66cd4e2737/src/types/gecko-profile.js#L51 + "markers": { + "schema": { + "name": 0, + "startTime": 1, + "endTime": 2, + "phase": 3, + "category": 4, + "data": 5, + }, + "data": [], + }, + + # https://github.com/firefox-devtools/profiler/blob/53970305b51b9b472e26d7457fee1d66cd4e2737/src/types/gecko-profile.js#L90 + "samples": { + "schema": { + "stack": 0, + "time": 1, + "responsiveness": 2, + }, + "data": self.samples + }, + + # https://github.com/firefox-devtools/profiler/blob/53970305b51b9b472e26d7457fee1d66cd4e2737/src/types/gecko-profile.js#L156 + "frameTable": { + "schema": { + "location": 0, + "relevantForJS": 1, + "innerWindowID": 2, + "implementation": 3, + "optimizations": 4, + "line": 5, + "column": 6, + "category": 7, + "subcategory": 8, + }, + "data": self.frameTable, + }, + + # https://github.com/firefox-devtools/profiler/blob/53970305b51b9b472e26d7457fee1d66cd4e2737/src/types/gecko-profile.js#L216 + "stackTable": { + "schema": { + "prefix": 0, + "frame": 1, + }, + "data": self.stackTable, + }, + "stringTable": self.stringTable, + "registerTime": 0, + "unregisterTime": None, + "processType": "default", + } + +# Uses perf script python interface to parse each +# event and store the data in the thread builder. +def process_event(param_dict: Dict) -> None: + global start_time + global tid_to_thread + time_stamp = (param_dict['sample']['time'] // 1000) / 1000 + pid = param_dict['sample']['pid'] + tid = param_dict['sample']['tid'] + comm = param_dict['comm'] + + # Start time is the time of the first sample + if not start_time: + start_time = time_stamp + + # Parse and append the callchain of the current sample into a stack. + stack = [] + if param_dict['callchain']: + for call in param_dict['callchain']: + if 'sym' not in call: + continue + stack.append(f'{call["sym"]["name"]} (in {call["dso"]})') + if len(stack) != 0: + # Reverse the stack, as root come first and the leaf at the end. + stack = stack[::-1] + + # During perf record if -g is not used, the callchain is not available. + # In that case, the symbol and dso are available in the event parameters. + else: + func = param_dict['symbol'] if 'symbol' in param_dict else '[unknown]' + dso = param_dict['dso'] if 'dso' in param_dict else '[unknown]' + stack.append(f'{func} (in {dso})') + + # Add sample to the specific thread. + thread = tid_to_thread.get(tid) + if thread is None: + thread = Thread(comm=comm, pid=pid, tid=tid) + tid_to_thread[tid] = thread + thread._add_sample(comm=comm, stack=stack, time_ms=time_stamp) + +def trace_begin() -> None: + global output_file + if (output_file is None): + print("Staring Firefox Profiler on your default browser...") + global http_server_thread + http_server_thread = threading.Thread(target=test, args=(CORSRequestHandler, HTTPServer,)) + http_server_thread.daemon = True + http_server_thread.start() + +# Trace_end runs at the end and will be used to aggregate +# the data into the final json object and print it out to stdout. 
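For reference before trace_end() below stitches the threads together: a minimal, illustrative sketch (all values invented) of the row-oriented GeckoThread layout that _to_json_dict() emits once json.dump() turns the Sample/Stack/Frame named tuples into JSON arrays. Each "schema" map gives the column index for the corresponding "data" rows:

# Illustrative fragment only -- not part of gecko.py.
example_thread = {
    "samples": {
        "schema": {"stack": 0, "time": 1, "responsiveness": 2},
        "data": [[1, 12.5, 0]],          # stack id 1 sampled at 12.5 ms
    },
    "stackTable": {
        "schema": {"prefix": 0, "frame": 1},
        "data": [[None, 0], [0, 1]],     # stack 1 = frame 1 on top of stack 0
    },
    "frameTable": {
        "schema": {"location": 0, "relevantForJS": 1, "innerWindowID": 2,
                   "implementation": 3, "optimizations": 4, "line": 5,
                   "column": 6, "category": 7, "subcategory": 8},
        "data": [[0, False, 0, None, None, None, None, 0, None],
                 [1, False, 0, None, None, None, None, 0, None]],
    },
    "stringTable": ["main (in /usr/bin/example)", "do_work (in /usr/bin/example)"],
}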
+def trace_end() -> None: + global output_file + threads = [thread._to_json_dict() for thread in tid_to_thread.values()] + + # Schema: https://github.com/firefox-devtools/profiler/blob/53970305b51b9b472e26d7457fee1d66cd4e2737/src/types/gecko-profile.js#L305 + gecko_profile_with_meta = { + "meta": { + "interval": 1, + "processType": 0, + "product": PRODUCT, + "stackwalk": 1, + "debug": 0, + "gcpoison": 0, + "asyncstack": 1, + "startTime": start_time, + "shutdownTime": None, + "version": 24, + "presymbolicated": True, + "categories": CATEGORIES, + "markerSchema": [], + }, + "libs": [], + "threads": threads, + "processes": [], + "pausedRanges": [], + } + # launch the profiler on local host if not specified --save-only args, otherwise print to file + if (output_file is None): + output_file = 'gecko_profile.json' + with open(output_file, 'w') as f: + json.dump(gecko_profile_with_meta, f, indent=2) + launchFirefox(output_file) + time.sleep(1) + print(f'[ perf gecko: Captured and wrote into {output_file} ]') + else: + print(f'[ perf gecko: Captured and wrote into {output_file} ]') + with open(output_file, 'w') as f: + json.dump(gecko_profile_with_meta, f, indent=2) + +# Used to enable Cross-Origin Resource Sharing (CORS) for requests coming from 'https://profiler.firefox.com', allowing it to access resources from this server. +class CORSRequestHandler(SimpleHTTPRequestHandler): + def end_headers (self): + self.send_header('Access-Control-Allow-Origin', 'https://profiler.firefox.com') + SimpleHTTPRequestHandler.end_headers(self) + +# start a local server to serve the gecko_profile.json file to the profiler.firefox.com +def launchFirefox(file): + safe_string = urllib.parse.quote_plus(f'http://localhost:8000/{file}') + url = 'https://profiler.firefox.com/from-url/' + safe_string + webbrowser.open(f'{url}') + +def main() -> None: + global output_file + global CATEGORIES + parser = argparse.ArgumentParser(description="Convert perf.data to Firefox\'s Gecko Profile format which can be uploaded to profiler.firefox.com for visualization") + + # Add the command-line options + # Colors must be defined according to this: + # https://github.com/firefox-devtools/profiler/blob/50124adbfa488adba6e2674a8f2618cf34b59cd2/res/css/categories.css + parser.add_argument('--user-color', default='yellow', help='Color for the User category', choices=['yellow', 'blue', 'purple', 'green', 'orange', 'red', 'grey', 'magenta']) + parser.add_argument('--kernel-color', default='orange', help='Color for the Kernel category', choices=['yellow', 'blue', 'purple', 'green', 'orange', 'red', 'grey', 'magenta']) + # If --save-only is specified, the output will be saved to a file instead of opening Firefox's profiler directly. 
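Before the --save-only option is wired up just below: a minimal sketch (the file name shown is the default gecko.py falls back to when --save-only is not given) of the URL that launchFirefox() above constructs, given that http.server's test() helper serves on port 8000 by default:

import urllib.parse

# Hypothetical default output name used when --save-only is not passed.
profile = 'gecko_profile.json'
local = urllib.parse.quote_plus(f'http://localhost:8000/{profile}')
print('https://profiler.firefox.com/from-url/' + local)
# https://profiler.firefox.com/from-url/http%3A%2F%2Flocalhost%3A8000%2Fgecko_profile.json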
+ parser.add_argument('--save-only', help='Save the output to a file instead of opening Firefox\'s profiler') + + # Parse the command-line arguments + args = parser.parse_args() + # Access the values provided by the user + user_color = args.user_color + kernel_color = args.kernel_color + output_file = args.save_only + + CATEGORIES = [ + { + "name": 'User', + "color": user_color, + "subcategories": ['Other'] + }, + { + "name": 'Kernel', + "color": kernel_color, + "subcategories": ['Other'] + }, + ] + +if __name__ == '__main__': + main() diff --git a/tools/perf/tests/.gitignore b/tools/perf/tests/.gitignore deleted file mode 100644 index d053b325f728..000000000000 --- a/tools/perf/tests/.gitignore +++ /dev/null @@ -1,5 +0,0 @@ -# SPDX-License-Identifier: GPL-2.0-only -llvm-src-base.c -llvm-src-kbuild.c -llvm-src-prologue.c -llvm-src-relocation.c diff --git a/tools/perf/tests/Build b/tools/perf/tests/Build index fb9ac5dc4079..63d5e6d5f165 100644 --- a/tools/perf/tests/Build +++ b/tools/perf/tests/Build @@ -37,8 +37,6 @@ perf-y += sample-parsing.o perf-y += parse-no-sample-id-all.o perf-y += kmod-path.o perf-y += thread-map.o -perf-y += llvm.o llvm-src-base.o llvm-src-kbuild.o llvm-src-prologue.o llvm-src-relocation.o -perf-y += bpf.o perf-y += topology.o perf-y += mem.o perf-y += cpumap.o @@ -51,7 +49,6 @@ perf-y += sdt.o perf-y += is_printable_array.o perf-y += bitmap.o perf-y += perf-hooks.o -perf-y += clang.o perf-y += unit_number__scnprintf.o perf-y += mem2node.o perf-y += maps.o @@ -70,34 +67,6 @@ perf-y += sigtrap.o perf-y += event_groups.o perf-y += symbols.o -$(OUTPUT)tests/llvm-src-base.c: tests/bpf-script-example.c tests/Build - $(call rule_mkdir) - $(Q)echo '#include <tests/llvm.h>' > $@ - $(Q)echo 'const char test_llvm__bpf_base_prog[] =' >> $@ - $(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@ - $(Q)echo ';' >> $@ - -$(OUTPUT)tests/llvm-src-kbuild.c: tests/bpf-script-test-kbuild.c tests/Build - $(call rule_mkdir) - $(Q)echo '#include <tests/llvm.h>' > $@ - $(Q)echo 'const char test_llvm__bpf_test_kbuild_prog[] =' >> $@ - $(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@ - $(Q)echo ';' >> $@ - -$(OUTPUT)tests/llvm-src-prologue.c: tests/bpf-script-test-prologue.c tests/Build - $(call rule_mkdir) - $(Q)echo '#include <tests/llvm.h>' > $@ - $(Q)echo 'const char test_llvm__bpf_test_prologue_prog[] =' >> $@ - $(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@ - $(Q)echo ';' >> $@ - -$(OUTPUT)tests/llvm-src-relocation.c: tests/bpf-script-test-relocation.c tests/Build - $(call rule_mkdir) - $(Q)echo '#include <tests/llvm.h>' > $@ - $(Q)echo 'const char test_llvm__bpf_test_relocation[] =' >> $@ - $(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@ - $(Q)echo ';' >> $@ - ifeq ($(SRCARCH),$(filter $(SRCARCH),x86 arm arm64 powerpc)) perf-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o endif diff --git a/tools/perf/tests/bpf-script-example.c b/tools/perf/tests/bpf-script-example.c deleted file mode 100644 index b638cc99d5ae..000000000000 --- a/tools/perf/tests/bpf-script-example.c +++ /dev/null @@ -1,60 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * bpf-script-example.c - * Test basic LLVM building - */ -#ifndef LINUX_VERSION_CODE -# error Need LINUX_VERSION_CODE -# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig' -#endif -#define BPF_ANY 0 -#define BPF_MAP_TYPE_ARRAY 2 -#define BPF_FUNC_map_lookup_elem 1 -#define BPF_FUNC_map_update_elem 2 - -static void *(*bpf_map_lookup_elem)(void *map, void *key) = - 
(void *) BPF_FUNC_map_lookup_elem; -static void *(*bpf_map_update_elem)(void *map, void *key, void *value, int flags) = - (void *) BPF_FUNC_map_update_elem; - -/* - * Following macros are taken from tools/lib/bpf/bpf_helpers.h, - * and are used to create BTF defined maps. It is easier to take - * 2 simple macros, than being able to include above header in - * runtime. - * - * __uint - defines integer attribute of BTF map definition, - * Such attributes are represented using a pointer to an array, - * in which dimensionality of array encodes specified integer - * value. - * - * __type - defines pointer variable with typeof(val) type for - * attributes like key or value, which will be defined by the - * size of the type. - */ -#define __uint(name, val) int (*name)[val] -#define __type(name, val) typeof(val) *name - -#define SEC(NAME) __attribute__((section(NAME), used)) -struct { - __uint(type, BPF_MAP_TYPE_ARRAY); - __uint(max_entries, 1); - __type(key, int); - __type(value, int); -} flip_table SEC(".maps"); - -SEC("syscalls:sys_enter_epoll_pwait") -int bpf_func__SyS_epoll_pwait(void *ctx) -{ - int ind =0; - int *flag = bpf_map_lookup_elem(&flip_table, &ind); - int new_flag; - if (!flag) - return 0; - /* flip flag and store back */ - new_flag = !*flag; - bpf_map_update_elem(&flip_table, &ind, &new_flag, BPF_ANY); - return new_flag; -} -char _license[] SEC("license") = "GPL"; -int _version SEC("version") = LINUX_VERSION_CODE; diff --git a/tools/perf/tests/bpf-script-test-kbuild.c b/tools/perf/tests/bpf-script-test-kbuild.c deleted file mode 100644 index 219673aa278f..000000000000 --- a/tools/perf/tests/bpf-script-test-kbuild.c +++ /dev/null @@ -1,21 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * bpf-script-test-kbuild.c - * Test include from kernel header - */ -#ifndef LINUX_VERSION_CODE -# error Need LINUX_VERSION_CODE -# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig' -#endif -#define SEC(NAME) __attribute__((section(NAME), used)) - -#include <uapi/linux/fs.h> - -SEC("func=vfs_llseek") -int bpf_func__vfs_llseek(void *ctx) -{ - return 0; -} - -char _license[] SEC("license") = "GPL"; -int _version SEC("version") = LINUX_VERSION_CODE; diff --git a/tools/perf/tests/bpf-script-test-prologue.c b/tools/perf/tests/bpf-script-test-prologue.c deleted file mode 100644 index 91778b5c6125..000000000000 --- a/tools/perf/tests/bpf-script-test-prologue.c +++ /dev/null @@ -1,49 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * bpf-script-test-prologue.c - * Test BPF prologue - */ -#ifndef LINUX_VERSION_CODE -# error Need LINUX_VERSION_CODE -# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig' -#endif -#define SEC(NAME) __attribute__((section(NAME), used)) - -#include <uapi/linux/fs.h> - -/* - * If CONFIG_PROFILE_ALL_BRANCHES is selected, - * 'if' is redefined after include kernel header. - * Recover 'if' for BPF object code. - */ -#ifdef if -# undef if -#endif - -typedef unsigned int __bitwise fmode_t; - -#define FMODE_READ 0x1 -#define FMODE_WRITE 0x2 - -static void (*bpf_trace_printk)(const char *fmt, int fmt_size, ...) 
= - (void *) 6; - -SEC("func=null_lseek file->f_mode offset orig") -int bpf_func__null_lseek(void *ctx, int err, unsigned long _f_mode, - unsigned long offset, unsigned long orig) -{ - fmode_t f_mode = (fmode_t)_f_mode; - - if (err) - return 0; - if (f_mode & FMODE_WRITE) - return 0; - if (offset & 1) - return 0; - if (orig == SEEK_CUR) - return 0; - return 1; -} - -char _license[] SEC("license") = "GPL"; -int _version SEC("version") = LINUX_VERSION_CODE; diff --git a/tools/perf/tests/bpf-script-test-relocation.c b/tools/perf/tests/bpf-script-test-relocation.c deleted file mode 100644 index 74006e4b2d24..000000000000 --- a/tools/perf/tests/bpf-script-test-relocation.c +++ /dev/null @@ -1,51 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * bpf-script-test-relocation.c - * Test BPF loader checking relocation - */ -#ifndef LINUX_VERSION_CODE -# error Need LINUX_VERSION_CODE -# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig' -#endif -#define BPF_ANY 0 -#define BPF_MAP_TYPE_ARRAY 2 -#define BPF_FUNC_map_lookup_elem 1 -#define BPF_FUNC_map_update_elem 2 - -static void *(*bpf_map_lookup_elem)(void *map, void *key) = - (void *) BPF_FUNC_map_lookup_elem; -static void *(*bpf_map_update_elem)(void *map, void *key, void *value, int flags) = - (void *) BPF_FUNC_map_update_elem; - -struct bpf_map_def { - unsigned int type; - unsigned int key_size; - unsigned int value_size; - unsigned int max_entries; -}; - -#define SEC(NAME) __attribute__((section(NAME), used)) -struct bpf_map_def SEC("maps") my_table = { - .type = BPF_MAP_TYPE_ARRAY, - .key_size = sizeof(int), - .value_size = sizeof(int), - .max_entries = 1, -}; - -int this_is_a_global_val; - -SEC("func=sys_write") -int bpf_func__sys_write(void *ctx) -{ - int key = 0; - int value = 0; - - /* - * Incorrect relocation. Should not allow this program be - * loaded into kernel. - */ - bpf_map_update_elem(&this_is_a_global_val, &key, &value, 0); - return 0; -} -char _license[] SEC("license") = "GPL"; -int _version SEC("version") = LINUX_VERSION_CODE; diff --git a/tools/perf/tests/bpf.c b/tools/perf/tests/bpf.c deleted file mode 100644 index 8beb46066034..000000000000 --- a/tools/perf/tests/bpf.c +++ /dev/null @@ -1,389 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include <errno.h> -#include <stdio.h> -#include <stdlib.h> -#include <sys/epoll.h> -#include <sys/types.h> -#include <sys/stat.h> -#include <fcntl.h> -#include <util/record.h> -#include <util/util.h> -#include <util/bpf-loader.h> -#include <util/evlist.h> -#include <linux/filter.h> -#include <linux/kernel.h> -#include <linux/string.h> -#include <api/fs/fs.h> -#include <perf/mmap.h> -#include "tests.h" -#include "llvm.h" -#include "debug.h" -#include "parse-events.h" -#include "util/mmap.h" -#define NR_ITERS 111 -#define PERF_TEST_BPF_PATH "/sys/fs/bpf/perf_test" - -#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT) -#include <linux/bpf.h> -#include <bpf/bpf.h> - -static int epoll_pwait_loop(void) -{ - int i; - - /* Should fail NR_ITERS times */ - for (i = 0; i < NR_ITERS; i++) - epoll_pwait(-(i + 1), NULL, 0, 0, NULL); - return 0; -} - -#ifdef HAVE_BPF_PROLOGUE - -static int llseek_loop(void) -{ - int fds[2], i; - - fds[0] = open("/dev/null", O_RDONLY); - fds[1] = open("/dev/null", O_RDWR); - - if (fds[0] < 0 || fds[1] < 0) - return -1; - - for (i = 0; i < NR_ITERS; i++) { - lseek(fds[i % 2], i, (i / 2) % 2 ? SEEK_CUR : SEEK_SET); - lseek(fds[(i + 1) % 2], i, (i / 2) % 2 ? 
SEEK_CUR : SEEK_SET); - } - close(fds[0]); - close(fds[1]); - return 0; -} - -#endif - -static struct { - enum test_llvm__testcase prog_id; - const char *name; - const char *msg_compile_fail; - const char *msg_load_fail; - int (*target_func)(void); - int expect_result; - bool pin; -} bpf_testcase_table[] = { - { - .prog_id = LLVM_TESTCASE_BASE, - .name = "[basic_bpf_test]", - .msg_compile_fail = "fix 'perf test LLVM' first", - .msg_load_fail = "load bpf object failed", - .target_func = &epoll_pwait_loop, - .expect_result = (NR_ITERS + 1) / 2, - }, - { - .prog_id = LLVM_TESTCASE_BASE, - .name = "[bpf_pinning]", - .msg_compile_fail = "fix kbuild first", - .msg_load_fail = "check your vmlinux setting?", - .target_func = &epoll_pwait_loop, - .expect_result = (NR_ITERS + 1) / 2, - .pin = true, - }, -#ifdef HAVE_BPF_PROLOGUE - { - .prog_id = LLVM_TESTCASE_BPF_PROLOGUE, - .name = "[bpf_prologue_test]", - .msg_compile_fail = "fix kbuild first", - .msg_load_fail = "check your vmlinux setting?", - .target_func = &llseek_loop, - .expect_result = (NR_ITERS + 1) / 4, - }, -#endif -}; - -static int do_test(struct bpf_object *obj, int (*func)(void), - int expect) -{ - struct record_opts opts = { - .target = { - .uid = UINT_MAX, - .uses_mmap = true, - }, - .freq = 0, - .mmap_pages = 256, - .default_interval = 1, - }; - - char pid[16]; - char sbuf[STRERR_BUFSIZE]; - struct evlist *evlist; - int i, ret = TEST_FAIL, err = 0, count = 0; - - struct parse_events_state parse_state; - struct parse_events_error parse_error; - - parse_events_error__init(&parse_error); - bzero(&parse_state, sizeof(parse_state)); - parse_state.error = &parse_error; - INIT_LIST_HEAD(&parse_state.list); - - err = parse_events_load_bpf_obj(&parse_state, &parse_state.list, obj, NULL); - parse_events_error__exit(&parse_error); - if (err == -ENODATA) { - pr_debug("Failed to add events selected by BPF, debuginfo package not installed\n"); - return TEST_SKIP; - } - if (err || list_empty(&parse_state.list)) { - pr_debug("Failed to add events selected by BPF\n"); - return TEST_FAIL; - } - - snprintf(pid, sizeof(pid), "%d", getpid()); - pid[sizeof(pid) - 1] = '\0'; - opts.target.tid = opts.target.pid = pid; - - /* Instead of evlist__new_default, don't add default events */ - evlist = evlist__new(); - if (!evlist) { - pr_debug("Not enough memory to create evlist\n"); - return TEST_FAIL; - } - - err = evlist__create_maps(evlist, &opts.target); - if (err < 0) { - pr_debug("Not enough memory to create thread/cpu maps\n"); - goto out_delete_evlist; - } - - evlist__splice_list_tail(evlist, &parse_state.list); - - evlist__config(evlist, &opts, NULL); - - err = evlist__open(evlist); - if (err < 0) { - pr_debug("perf_evlist__open: %s\n", - str_error_r(errno, sbuf, sizeof(sbuf))); - goto out_delete_evlist; - } - - err = evlist__mmap(evlist, opts.mmap_pages); - if (err < 0) { - pr_debug("evlist__mmap: %s\n", - str_error_r(errno, sbuf, sizeof(sbuf))); - goto out_delete_evlist; - } - - evlist__enable(evlist); - (*func)(); - evlist__disable(evlist); - - for (i = 0; i < evlist->core.nr_mmaps; i++) { - union perf_event *event; - struct mmap *md; - - md = &evlist->mmap[i]; - if (perf_mmap__read_init(&md->core) < 0) - continue; - - while ((event = perf_mmap__read_event(&md->core)) != NULL) { - const u32 type = event->header.type; - - if (type == PERF_RECORD_SAMPLE) - count ++; - } - perf_mmap__read_done(&md->core); - } - - if (count != expect * evlist->core.nr_entries) { - pr_debug("BPF filter result incorrect, expected %d, got %d samples\n", expect * 
evlist->core.nr_entries, count); - goto out_delete_evlist; - } - - ret = TEST_OK; - -out_delete_evlist: - evlist__delete(evlist); - return ret; -} - -static struct bpf_object * -prepare_bpf(void *obj_buf, size_t obj_buf_sz, const char *name) -{ - struct bpf_object *obj; - - obj = bpf__prepare_load_buffer(obj_buf, obj_buf_sz, name); - if (IS_ERR(obj)) { - pr_debug("Compile BPF program failed.\n"); - return NULL; - } - return obj; -} - -static int __test__bpf(int idx) -{ - int ret; - void *obj_buf; - size_t obj_buf_sz; - struct bpf_object *obj; - - ret = test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz, - bpf_testcase_table[idx].prog_id, - false, NULL); - if (ret != TEST_OK || !obj_buf || !obj_buf_sz) { - pr_debug("Unable to get BPF object, %s\n", - bpf_testcase_table[idx].msg_compile_fail); - if ((idx == 0) || (ret == TEST_SKIP)) - return TEST_SKIP; - else - return TEST_FAIL; - } - - obj = prepare_bpf(obj_buf, obj_buf_sz, - bpf_testcase_table[idx].name); - if ((!!bpf_testcase_table[idx].target_func) != (!!obj)) { - if (!obj) - pr_debug("Fail to load BPF object: %s\n", - bpf_testcase_table[idx].msg_load_fail); - else - pr_debug("Success unexpectedly: %s\n", - bpf_testcase_table[idx].msg_load_fail); - ret = TEST_FAIL; - goto out; - } - - if (obj) { - ret = do_test(obj, - bpf_testcase_table[idx].target_func, - bpf_testcase_table[idx].expect_result); - if (ret != TEST_OK) - goto out; - if (bpf_testcase_table[idx].pin) { - int err; - - if (!bpf_fs__mount()) { - pr_debug("BPF filesystem not mounted\n"); - ret = TEST_FAIL; - goto out; - } - err = mkdir(PERF_TEST_BPF_PATH, 0777); - if (err && errno != EEXIST) { - pr_debug("Failed to make perf_test dir: %s\n", - strerror(errno)); - ret = TEST_FAIL; - goto out; - } - if (bpf_object__pin(obj, PERF_TEST_BPF_PATH)) - ret = TEST_FAIL; - if (rm_rf(PERF_TEST_BPF_PATH)) - ret = TEST_FAIL; - } - } - -out: - free(obj_buf); - bpf__clear(); - return ret; -} - -static int check_env(void) -{ - LIBBPF_OPTS(bpf_prog_load_opts, opts); - int err; - char license[] = "GPL"; - - struct bpf_insn insns[] = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }; - - err = fetch_kernel_version(&opts.kern_version, NULL, 0); - if (err) { - pr_debug("Unable to get kernel version\n"); - return err; - } - err = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, license, insns, - ARRAY_SIZE(insns), &opts); - if (err < 0) { - pr_err("Missing basic BPF support, skip this test: %s\n", - strerror(errno)); - return err; - } - close(err); - - return 0; -} - -static int test__bpf(int i) -{ - int err; - - if (i < 0 || i >= (int)ARRAY_SIZE(bpf_testcase_table)) - return TEST_FAIL; - - if (geteuid() != 0) { - pr_debug("Only root can run BPF test\n"); - return TEST_SKIP; - } - - if (check_env()) - return TEST_SKIP; - - err = __test__bpf(i); - return err; -} -#endif - -static int test__basic_bpf_test(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ -#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT) - return test__bpf(0); -#else - pr_debug("Skip BPF test because BPF or libtraceevent support is not compiled\n"); - return TEST_SKIP; -#endif -} - -static int test__bpf_pinning(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ -#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT) - return test__bpf(1); -#else - pr_debug("Skip BPF test because BPF or libtraceevent support is not compiled\n"); - return TEST_SKIP; -#endif -} - -static int test__bpf_prologue_test(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ -#if 
defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_BPF_PROLOGUE) && defined(HAVE_LIBTRACEEVENT) - return test__bpf(2); -#else - pr_debug("Skip BPF test because BPF or libtraceevent support is not compiled\n"); - return TEST_SKIP; -#endif -} - - -static struct test_case bpf_tests[] = { -#if defined(HAVE_LIBBPF_SUPPORT) && defined(HAVE_LIBTRACEEVENT) - TEST_CASE("Basic BPF filtering", basic_bpf_test), - TEST_CASE_REASON("BPF pinning", bpf_pinning, - "clang isn't installed or environment missing BPF support"), -#ifdef HAVE_BPF_PROLOGUE - TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test, - "clang/debuginfo isn't installed or environment missing BPF support"), -#else - TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test, "not compiled in"), -#endif -#else - TEST_CASE_REASON("Basic BPF filtering", basic_bpf_test, "not compiled in or missing libtraceevent support"), - TEST_CASE_REASON("BPF pinning", bpf_pinning, "not compiled in or missing libtraceevent support"), - TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test, "not compiled in or missing libtraceevent support"), -#endif - { .name = NULL, } -}; - -struct test_suite suite__bpf = { - .desc = "BPF filter", - .test_cases = bpf_tests, -}; diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c index 1f6557ce3b0a..0ad18cf6dd22 100644 --- a/tools/perf/tests/builtin-test.c +++ b/tools/perf/tests/builtin-test.c @@ -33,9 +33,18 @@ static bool dont_fork; const char *dso_to_test; -struct test_suite *__weak arch_tests[] = { +/* + * List of architecture specific tests. Not a weak symbol as the array length is + * dependent on the initialization, as such GCC with LTO complains of + * conflicting definitions with a weak symbol. + */ +#if defined(__i386__) || defined(__x86_64__) || defined(__aarch64__) || defined(__powerpc64__) +extern struct test_suite *arch_tests[]; +#else +static struct test_suite *arch_tests[] = { NULL, }; +#endif static struct test_suite *generic_tests[] = { &suite__vmlinux_matches_kallsyms, @@ -83,9 +92,7 @@ static struct test_suite *generic_tests[] = { &suite__fdarray__add, &suite__kmod_path__parse, &suite__thread_map, - &suite__llvm, &suite__session_topology, - &suite__bpf, &suite__thread_map_synthesize, &suite__thread_map_remove, &suite__cpu_map, @@ -99,7 +106,6 @@ static struct test_suite *generic_tests[] = { &suite__is_printable_array, &suite__bitmap_print, &suite__perf_hooks, - &suite__clang, &suite__unit_number__scnprint, &suite__mem2node, &suite__time_utils, diff --git a/tools/perf/tests/clang.c b/tools/perf/tests/clang.c deleted file mode 100644 index a7111005d5b9..000000000000 --- a/tools/perf/tests/clang.c +++ /dev/null @@ -1,32 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include "tests.h" -#include "c++/clang-c.h" -#include <linux/kernel.h> - -#ifndef HAVE_LIBCLANGLLVM_SUPPORT -static int test__clang_to_IR(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ - return TEST_SKIP; -} - -static int test__clang_to_obj(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ - return TEST_SKIP; -} -#endif - -static struct test_case clang_tests[] = { - TEST_CASE_REASON("builtin clang compile C source to IR", clang_to_IR, - "not compiled in"), - TEST_CASE_REASON("builtin clang compile C source to ELF object", - clang_to_obj, - "not compiled in"), - { .name = NULL, } -}; - -struct test_suite suite__clang = { - .desc = "builtin clang support", - .test_cases = clang_tests, -}; diff --git a/tools/perf/tests/config-fragments/README 
b/tools/perf/tests/config-fragments/README new file mode 100644 index 000000000000..fe7de5d93674 --- /dev/null +++ b/tools/perf/tests/config-fragments/README @@ -0,0 +1,7 @@ +This folder is for kernel config fragments that can be merged with +defconfig to give full test coverage of a perf test run. This is only +an optimistic set as some features require hardware support in order to +pass and not skip. + +'config' is shared across all platforms, and for arch specific files, +the file name should match that used in the ARCH=... make option. diff --git a/tools/perf/tests/config-fragments/arm64 b/tools/perf/tests/config-fragments/arm64 new file mode 100644 index 000000000000..64c4ab17cd58 --- /dev/null +++ b/tools/perf/tests/config-fragments/arm64 @@ -0,0 +1 @@ +CONFIG_CORESIGHT_SOURCE_ETM4X=y diff --git a/tools/perf/tests/config-fragments/config b/tools/perf/tests/config-fragments/config new file mode 100644 index 000000000000..c340b3195fca --- /dev/null +++ b/tools/perf/tests/config-fragments/config @@ -0,0 +1,11 @@ +CONFIG_TRACEPOINTS=y +CONFIG_STACKTRACE=y +CONFIG_NOP_TRACER=y +CONFIG_RING_BUFFER=y +CONFIG_EVENT_TRACING=y +CONFIG_CONTEXT_SWITCH_TRACER=y +CONFIG_TRACING=y +CONFIG_GENERIC_TRACER=y +CONFIG_FTRACE=y +CONFIG_FTRACE_SYSCALLS=y +CONFIG_BRANCH_PROFILE_NONE=y diff --git a/tools/perf/tests/dlfilter-test.c b/tools/perf/tests/dlfilter-test.c index 086fd2179e41..da3a9b50b1b1 100644 --- a/tools/perf/tests/dlfilter-test.c +++ b/tools/perf/tests/dlfilter-test.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* * Test dlfilter C API. A perf.data file is synthesized and then processed - * by perf script with a dlfilter named dlfilter-test-api-v0.so. Also a C file + * by perf script with dlfilters named dlfilter-test-api-v*.so. Also a C file * is compiled to provide a dso to match the synthesized perf.data file. */ @@ -37,6 +37,8 @@ #define MAP_START 0x400000 +#define DLFILTER_TEST_NAME_MAX 128 + struct test_data { struct perf_tool tool; struct machine *machine; @@ -45,6 +47,8 @@ struct test_data { u64 bar; u64 ip; u64 addr; + char name[DLFILTER_TEST_NAME_MAX]; + char desc[DLFILTER_TEST_NAME_MAX]; char perf[PATH_MAX]; char perf_data_file_name[PATH_MAX]; char c_file_name[PATH_MAX]; @@ -215,7 +219,7 @@ static int write_prog(char *file_name) return err ? 
-1 : 0; } -static int get_dlfilters_path(char *buf, size_t sz) +static int get_dlfilters_path(const char *name, char *buf, size_t sz) { char perf[PATH_MAX]; char path[PATH_MAX]; @@ -224,12 +228,12 @@ static int get_dlfilters_path(char *buf, size_t sz) perf_exe(perf, sizeof(perf)); perf_path = dirname(perf); - snprintf(path, sizeof(path), "%s/dlfilters/dlfilter-test-api-v0.so", perf_path); + snprintf(path, sizeof(path), "%s/dlfilters/%s", perf_path, name); if (access(path, R_OK)) { exec_path = get_argv_exec_path(); if (!exec_path) return -1; - snprintf(path, sizeof(path), "%s/dlfilters/dlfilter-test-api-v0.so", exec_path); + snprintf(path, sizeof(path), "%s/dlfilters/%s", exec_path, name); free(exec_path); if (access(path, R_OK)) return -1; @@ -244,9 +248,9 @@ static int check_filter_desc(struct test_data *td) char *desc = NULL; int ret; - if (get_filter_desc(td->dlfilters, "dlfilter-test-api-v0.so", &desc, &long_desc) && + if (get_filter_desc(td->dlfilters, td->name, &desc, &long_desc) && long_desc && !strcmp(long_desc, "Filter used by the 'dlfilter C API' perf test") && - desc && !strcmp(desc, "dlfilter to test v0 C API")) + desc && !strcmp(desc, td->desc)) ret = 0; else ret = -1; @@ -284,7 +288,7 @@ static int get_ip_addr(struct test_data *td) static int do_run_perf_script(struct test_data *td, int do_early) { return system_cmd("%s script -i %s " - "--dlfilter %s/dlfilter-test-api-v0.so " + "--dlfilter %s/%s " "--dlarg first " "--dlarg %d " "--dlarg %" PRIu64 " " @@ -292,7 +296,7 @@ static int do_run_perf_script(struct test_data *td, int do_early) "--dlarg %d " "--dlarg last", td->perf, td->perf_data_file_name, td->dlfilters, - verbose, td->ip, td->addr, do_early); + td->name, verbose, td->ip, td->addr, do_early); } static int run_perf_script(struct test_data *td) @@ -321,7 +325,7 @@ static int test__dlfilter_test(struct test_data *td) u64 id = 99; int err; - if (get_dlfilters_path(td->dlfilters, PATH_MAX)) + if (get_dlfilters_path(td->name, td->dlfilters, PATH_MAX)) return test_result("dlfilters not found", TEST_SKIP); if (check_filter_desc(td)) @@ -399,14 +403,18 @@ static void test_data__free(struct test_data *td) } } -static int test__dlfilter(struct test_suite *test __maybe_unused, int subtest __maybe_unused) +static int test__dlfilter_ver(int ver) { struct test_data td = {.fd = -1}; int pid = getpid(); int err; + pr_debug("\n-- Testing version %d API --\n", ver); + perf_exe(td.perf, sizeof(td.perf)); + snprintf(td.name, sizeof(td.name), "dlfilter-test-api-v%d.so", ver); + snprintf(td.desc, sizeof(td.desc), "dlfilter to test v%d C API", ver); snprintf(td.perf_data_file_name, PATH_MAX, "/tmp/dlfilter-test-%u-perf-data", pid); snprintf(td.c_file_name, PATH_MAX, "/tmp/dlfilter-test-%u-prog.c", pid); snprintf(td.prog_file_name, PATH_MAX, "/tmp/dlfilter-test-%u-prog", pid); @@ -416,4 +424,14 @@ static int test__dlfilter(struct test_suite *test __maybe_unused, int subtest __ return err; } +static int test__dlfilter(struct test_suite *test __maybe_unused, int subtest __maybe_unused) +{ + int err = test__dlfilter_ver(0); + + if (err) + return err; + /* No test for version 1 */ + return test__dlfilter_ver(2); +} + DEFINE_SUITE("dlfilter C API", dlfilter); diff --git a/tools/perf/tests/expr.c b/tools/perf/tests/expr.c index c1c3fcbc2753..81229fa4f1e9 100644 --- a/tools/perf/tests/expr.c +++ b/tools/perf/tests/expr.c @@ -70,7 +70,7 @@ static int test__expr(struct test_suite *t __maybe_unused, int subtest __maybe_u { struct expr_id_data *val_ptr; const char *p; - double val, num_cpus, 
num_cores, num_dies, num_packages; + double val, num_cpus_online, num_cpus, num_cores, num_dies, num_packages; int ret; struct expr_parse_ctx *ctx; bool is_intel = false; @@ -227,7 +227,10 @@ static int test__expr(struct test_suite *t __maybe_unused, int subtest __maybe_u /* Test toplogy constants appear well ordered. */ expr__ctx_clear(ctx); + TEST_ASSERT_VAL("#num_cpus_online", + expr__parse(&num_cpus_online, ctx, "#num_cpus_online") == 0); TEST_ASSERT_VAL("#num_cpus", expr__parse(&num_cpus, ctx, "#num_cpus") == 0); + TEST_ASSERT_VAL("#num_cpus >= #num_cpus_online", num_cpus >= num_cpus_online); TEST_ASSERT_VAL("#num_cores", expr__parse(&num_cores, ctx, "#num_cores") == 0); TEST_ASSERT_VAL("#num_cpus >= #num_cores", num_cpus >= num_cores); TEST_ASSERT_VAL("#num_dies", expr__parse(&num_dies, ctx, "#num_dies") == 0); diff --git a/tools/perf/tests/llvm.c b/tools/perf/tests/llvm.c deleted file mode 100644 index 0bc25a56cfef..000000000000 --- a/tools/perf/tests/llvm.c +++ /dev/null @@ -1,219 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include <stdio.h> -#include <stdlib.h> -#include <string.h> -#include "tests.h" -#include "debug.h" - -#ifdef HAVE_LIBBPF_SUPPORT -#include <bpf/libbpf.h> -#include <util/llvm-utils.h> -#include "llvm.h" -static int test__bpf_parsing(void *obj_buf, size_t obj_buf_sz) -{ - struct bpf_object *obj; - - obj = bpf_object__open_mem(obj_buf, obj_buf_sz, NULL); - if (libbpf_get_error(obj)) - return TEST_FAIL; - bpf_object__close(obj); - return TEST_OK; -} - -static struct { - const char *source; - const char *desc; - bool should_load_fail; -} bpf_source_table[__LLVM_TESTCASE_MAX] = { - [LLVM_TESTCASE_BASE] = { - .source = test_llvm__bpf_base_prog, - .desc = "Basic BPF llvm compile", - }, - [LLVM_TESTCASE_KBUILD] = { - .source = test_llvm__bpf_test_kbuild_prog, - .desc = "kbuild searching", - }, - [LLVM_TESTCASE_BPF_PROLOGUE] = { - .source = test_llvm__bpf_test_prologue_prog, - .desc = "Compile source for BPF prologue generation", - }, - [LLVM_TESTCASE_BPF_RELOCATION] = { - .source = test_llvm__bpf_test_relocation, - .desc = "Compile source for BPF relocation", - .should_load_fail = true, - }, -}; - -int -test_llvm__fetch_bpf_obj(void **p_obj_buf, - size_t *p_obj_buf_sz, - enum test_llvm__testcase idx, - bool force, - bool *should_load_fail) -{ - const char *source; - const char *desc; - const char *tmpl_old, *clang_opt_old; - char *tmpl_new = NULL, *clang_opt_new = NULL; - int err, old_verbose, ret = TEST_FAIL; - - if (idx >= __LLVM_TESTCASE_MAX) - return TEST_FAIL; - - source = bpf_source_table[idx].source; - desc = bpf_source_table[idx].desc; - if (should_load_fail) - *should_load_fail = bpf_source_table[idx].should_load_fail; - - /* - * Skip this test if user's .perfconfig doesn't set [llvm] section - * and clang is not found in $PATH - */ - if (!force && (!llvm_param.user_set_param && - llvm__search_clang())) { - pr_debug("No clang, skip this test\n"); - return TEST_SKIP; - } - - /* - * llvm is verbosity when error. Suppress all error output if - * not 'perf test -v'. - */ - old_verbose = verbose; - if (verbose == 0) - verbose = -1; - - *p_obj_buf = NULL; - *p_obj_buf_sz = 0; - - if (!llvm_param.clang_bpf_cmd_template) - goto out; - - if (!llvm_param.clang_opt) - llvm_param.clang_opt = strdup(""); - - err = asprintf(&tmpl_new, "echo '%s' | %s%s", source, - llvm_param.clang_bpf_cmd_template, - old_verbose ? 
"" : " 2>/dev/null"); - if (err < 0) - goto out; - err = asprintf(&clang_opt_new, "-xc %s", llvm_param.clang_opt); - if (err < 0) - goto out; - - tmpl_old = llvm_param.clang_bpf_cmd_template; - llvm_param.clang_bpf_cmd_template = tmpl_new; - clang_opt_old = llvm_param.clang_opt; - llvm_param.clang_opt = clang_opt_new; - - err = llvm__compile_bpf("-", p_obj_buf, p_obj_buf_sz); - - llvm_param.clang_bpf_cmd_template = tmpl_old; - llvm_param.clang_opt = clang_opt_old; - - verbose = old_verbose; - if (err) - goto out; - - ret = TEST_OK; -out: - free(tmpl_new); - free(clang_opt_new); - if (ret != TEST_OK) - pr_debug("Failed to compile test case: '%s'\n", desc); - return ret; -} - -static int test__llvm(int subtest) -{ - int ret; - void *obj_buf = NULL; - size_t obj_buf_sz = 0; - bool should_load_fail = false; - - if ((subtest < 0) || (subtest >= __LLVM_TESTCASE_MAX)) - return TEST_FAIL; - - ret = test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz, - subtest, false, &should_load_fail); - - if (ret == TEST_OK && !should_load_fail) { - ret = test__bpf_parsing(obj_buf, obj_buf_sz); - if (ret != TEST_OK) { - pr_debug("Failed to parse test case '%s'\n", - bpf_source_table[subtest].desc); - } - } - free(obj_buf); - - return ret; -} -#endif //HAVE_LIBBPF_SUPPORT - -static int test__llvm__bpf_base_prog(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ -#ifdef HAVE_LIBBPF_SUPPORT - return test__llvm(LLVM_TESTCASE_BASE); -#else - pr_debug("Skip LLVM test because BPF support is not compiled\n"); - return TEST_SKIP; -#endif -} - -static int test__llvm__bpf_test_kbuild_prog(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ -#ifdef HAVE_LIBBPF_SUPPORT - return test__llvm(LLVM_TESTCASE_KBUILD); -#else - pr_debug("Skip LLVM test because BPF support is not compiled\n"); - return TEST_SKIP; -#endif -} - -static int test__llvm__bpf_test_prologue_prog(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ -#ifdef HAVE_LIBBPF_SUPPORT - return test__llvm(LLVM_TESTCASE_BPF_PROLOGUE); -#else - pr_debug("Skip LLVM test because BPF support is not compiled\n"); - return TEST_SKIP; -#endif -} - -static int test__llvm__bpf_test_relocation(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ -#ifdef HAVE_LIBBPF_SUPPORT - return test__llvm(LLVM_TESTCASE_BPF_RELOCATION); -#else - pr_debug("Skip LLVM test because BPF support is not compiled\n"); - return TEST_SKIP; -#endif -} - - -static struct test_case llvm_tests[] = { -#ifdef HAVE_LIBBPF_SUPPORT - TEST_CASE("Basic BPF llvm compile", llvm__bpf_base_prog), - TEST_CASE("kbuild searching", llvm__bpf_test_kbuild_prog), - TEST_CASE("Compile source for BPF prologue generation", - llvm__bpf_test_prologue_prog), - TEST_CASE("Compile source for BPF relocation", llvm__bpf_test_relocation), -#else - TEST_CASE_REASON("Basic BPF llvm compile", llvm__bpf_base_prog, "not compiled in"), - TEST_CASE_REASON("kbuild searching", llvm__bpf_test_kbuild_prog, "not compiled in"), - TEST_CASE_REASON("Compile source for BPF prologue generation", - llvm__bpf_test_prologue_prog, "not compiled in"), - TEST_CASE_REASON("Compile source for BPF relocation", - llvm__bpf_test_relocation, "not compiled in"), -#endif - { .name = NULL, } -}; - -struct test_suite suite__llvm = { - .desc = "LLVM search and compile", - .test_cases = llvm_tests, -}; diff --git a/tools/perf/tests/llvm.h b/tools/perf/tests/llvm.h deleted file mode 100644 index f68b0d9b8ae2..000000000000 --- a/tools/perf/tests/llvm.h +++ /dev/null @@ -1,31 +0,0 @@ -/* 
SPDX-License-Identifier: GPL-2.0 */ -#ifndef PERF_TEST_LLVM_H -#define PERF_TEST_LLVM_H - -#ifdef __cplusplus -extern "C" { -#endif - -#include <stddef.h> /* for size_t */ -#include <stdbool.h> /* for bool */ - -extern const char test_llvm__bpf_base_prog[]; -extern const char test_llvm__bpf_test_kbuild_prog[]; -extern const char test_llvm__bpf_test_prologue_prog[]; -extern const char test_llvm__bpf_test_relocation[]; - -enum test_llvm__testcase { - LLVM_TESTCASE_BASE, - LLVM_TESTCASE_KBUILD, - LLVM_TESTCASE_BPF_PROLOGUE, - LLVM_TESTCASE_BPF_RELOCATION, - __LLVM_TESTCASE_MAX, -}; - -int test_llvm__fetch_bpf_obj(void **p_obj_buf, size_t *p_obj_buf_sz, - enum test_llvm__testcase index, bool force, - bool *should_load_fail); -#ifdef __cplusplus -} -#endif -#endif diff --git a/tools/perf/tests/make b/tools/perf/tests/make index 58cf96d762d0..ea4c341f5af1 100644 --- a/tools/perf/tests/make +++ b/tools/perf/tests/make @@ -95,7 +95,6 @@ make_with_babeltrace:= LIBBABELTRACE=1 make_with_coresight := CORESIGHT=1 make_no_sdt := NO_SDT=1 make_no_syscall_tbl := NO_SYSCALL_TABLE=1 -make_with_clangllvm := LIBCLANGLLVM=1 make_no_libpfm4 := NO_LIBPFM4=1 make_with_gtk2 := GTK2=1 make_refcnt_check := EXTRA_CFLAGS="-DREFCNT_CHECKING=1" diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c index 658fb9599d95..d47f1f871164 100644 --- a/tools/perf/tests/parse-events.c +++ b/tools/perf/tests/parse-events.c @@ -2170,7 +2170,7 @@ static const struct evlist_test test__events[] = { static const struct evlist_test test__events_pmu[] = { { - .name = "cpu/config=10,config1,config2=3,period=1000/u", + .name = "cpu/config=10,config1=1,config2=3,period=1000/u", .valid = test__pmu_cpu_valid, .check = test__checkevent_pmu, /* 0 */ @@ -2472,7 +2472,7 @@ static int test_term(const struct terms_test *t) INIT_LIST_HEAD(&terms); - ret = parse_events_terms(&terms, t->str); + ret = parse_events_terms(&terms, t->str, /*input=*/ NULL); if (ret) { pr_debug("failed to parse terms '%s', err %d\n", t->str , ret); diff --git a/tools/perf/tests/pmu-events.c b/tools/perf/tests/pmu-events.c index 64383fc34ef1..f5321fbdee79 100644 --- a/tools/perf/tests/pmu-events.c +++ b/tools/perf/tests/pmu-events.c @@ -44,6 +44,7 @@ struct perf_pmu_test_pmu { static const struct perf_pmu_test_event bp_l1_btb_correct = { .event = { + .pmu = "default_core", .name = "bp_l1_btb_correct", .event = "event=0x8a", .desc = "L1 BTB Correction", @@ -55,6 +56,7 @@ static const struct perf_pmu_test_event bp_l1_btb_correct = { static const struct perf_pmu_test_event bp_l2_btb_correct = { .event = { + .pmu = "default_core", .name = "bp_l2_btb_correct", .event = "event=0x8b", .desc = "L2 BTB Correction", @@ -66,6 +68,7 @@ static const struct perf_pmu_test_event bp_l2_btb_correct = { static const struct perf_pmu_test_event segment_reg_loads_any = { .event = { + .pmu = "default_core", .name = "segment_reg_loads.any", .event = "event=0x6,period=200000,umask=0x80", .desc = "Number of segment register loads", @@ -77,6 +80,7 @@ static const struct perf_pmu_test_event segment_reg_loads_any = { static const struct perf_pmu_test_event dispatch_blocked_any = { .event = { + .pmu = "default_core", .name = "dispatch_blocked.any", .event = "event=0x9,period=200000,umask=0x20", .desc = "Memory cluster signals to block micro-op dispatch for any reason", @@ -88,6 +92,7 @@ static const struct perf_pmu_test_event dispatch_blocked_any = { static const struct perf_pmu_test_event eist_trans = { .event = { + .pmu = "default_core", .name = "eist_trans", .event = 
"event=0x3a,period=200000,umask=0x0", .desc = "Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions", @@ -99,6 +104,7 @@ static const struct perf_pmu_test_event eist_trans = { static const struct perf_pmu_test_event l3_cache_rd = { .event = { + .pmu = "default_core", .name = "l3_cache_rd", .event = "event=0x40", .desc = "L3 cache access, read", @@ -123,7 +129,7 @@ static const struct perf_pmu_test_event uncore_hisi_ddrc_flux_wcmd = { .event = { .name = "uncore_hisi_ddrc.flux_wcmd", .event = "event=0x2", - .desc = "DDRC write commands. Unit: hisi_sccl,ddrc ", + .desc = "DDRC write commands", .topic = "uncore", .long_desc = "DDRC write commands", .pmu = "hisi_sccl,ddrc", @@ -137,7 +143,7 @@ static const struct perf_pmu_test_event unc_cbo_xsnp_response_miss_eviction = { .event = { .name = "unc_cbo_xsnp_response.miss_eviction", .event = "event=0x22,umask=0x81", - .desc = "A cross-core snoop resulted from L3 Eviction which misses in some processor core. Unit: uncore_cbox ", + .desc = "A cross-core snoop resulted from L3 Eviction which misses in some processor core", .topic = "uncore", .long_desc = "A cross-core snoop resulted from L3 Eviction which misses in some processor core", .pmu = "uncore_cbox", @@ -151,7 +157,7 @@ static const struct perf_pmu_test_event uncore_hyphen = { .event = { .name = "event-hyphen", .event = "event=0xe0,umask=0x00", - .desc = "UNC_CBO_HYPHEN. Unit: uncore_cbox ", + .desc = "UNC_CBO_HYPHEN", .topic = "uncore", .long_desc = "UNC_CBO_HYPHEN", .pmu = "uncore_cbox", @@ -165,7 +171,7 @@ static const struct perf_pmu_test_event uncore_two_hyph = { .event = { .name = "event-two-hyph", .event = "event=0xc0,umask=0x00", - .desc = "UNC_CBO_TWO_HYPH. Unit: uncore_cbox ", + .desc = "UNC_CBO_TWO_HYPH", .topic = "uncore", .long_desc = "UNC_CBO_TWO_HYPH", .pmu = "uncore_cbox", @@ -179,7 +185,7 @@ static const struct perf_pmu_test_event uncore_hisi_l3c_rd_hit_cpipe = { .event = { .name = "uncore_hisi_l3c.rd_hit_cpipe", .event = "event=0x7", - .desc = "Total read hits. Unit: hisi_sccl,l3c ", + .desc = "Total read hits", .topic = "uncore", .long_desc = "Total read hits", .pmu = "hisi_sccl,l3c", @@ -193,7 +199,7 @@ static const struct perf_pmu_test_event uncore_imc_free_running_cache_miss = { .event = { .name = "uncore_imc_free_running.cache_miss", .event = "event=0x12", - .desc = "Total cache misses. Unit: uncore_imc_free_running ", + .desc = "Total cache misses", .topic = "uncore", .long_desc = "Total cache misses", .pmu = "uncore_imc_free_running", @@ -207,7 +213,7 @@ static const struct perf_pmu_test_event uncore_imc_cache_hits = { .event = { .name = "uncore_imc.cache_hits", .event = "event=0x34", - .desc = "Total cache hits. Unit: uncore_imc ", + .desc = "Total cache hits", .topic = "uncore", .long_desc = "Total cache hits", .pmu = "uncore_imc", @@ -232,13 +238,13 @@ static const struct perf_pmu_test_event sys_ddr_pmu_write_cycles = { .event = { .name = "sys_ddr_pmu.write_cycles", .event = "event=0x2b", - .desc = "ddr write-cycles event. Unit: uncore_sys_ddr_pmu ", + .desc = "ddr write-cycles event", .topic = "uncore", .pmu = "uncore_sys_ddr_pmu", .compat = "v8", }, .alias_str = "event=0x2b", - .alias_long_desc = "ddr write-cycles event. Unit: uncore_sys_ddr_pmu ", + .alias_long_desc = "ddr write-cycles event", .matching_pmu = "uncore_sys_ddr_pmu", }; @@ -246,13 +252,13 @@ static const struct perf_pmu_test_event sys_ccn_pmu_read_cycles = { .event = { .name = "sys_ccn_pmu.read_cycles", .event = "config=0x2c", - .desc = "ccn read-cycles event. 
Unit: uncore_sys_ccn_pmu ", + .desc = "ccn read-cycles event", .topic = "uncore", .pmu = "uncore_sys_ccn_pmu", .compat = "0x01", }, .alias_str = "config=0x2c", - .alias_long_desc = "ccn read-cycles event. Unit: uncore_sys_ccn_pmu ", + .alias_long_desc = "ccn read-cycles event", .matching_pmu = "uncore_sys_ccn_pmu", }; @@ -341,7 +347,7 @@ static int compare_pmu_events(const struct pmu_event *e1, const struct pmu_event return 0; } -static int compare_alias_to_test_event(struct perf_pmu_alias *alias, +static int compare_alias_to_test_event(struct pmu_event_info *alias, struct perf_pmu_test_event const *test_event, char const *pmu_name) { @@ -385,8 +391,8 @@ static int compare_alias_to_test_event(struct perf_pmu_alias *alias, return -1; } - - if (!is_same(alias->pmu_name, test_event->event.pmu)) { + if (!is_same(alias->pmu_name, test_event->event.pmu) && + !is_same(alias->pmu_name, "default_core")) { pr_debug("testing aliases PMU %s: mismatched pmu_name, %s vs %s\n", pmu_name, alias->pmu_name, test_event->event.pmu); return -1; @@ -403,7 +409,7 @@ static int test__pmu_event_table_core_callback(const struct pmu_event *pe, struct perf_pmu_test_event const **test_event_table; bool found = false; - if (pe->pmu) + if (strcmp(pe->pmu, "default_core")) test_event_table = &uncore_events[0]; else test_event_table = &core_events[0]; @@ -477,12 +483,14 @@ static int test__pmu_event_table(struct test_suite *test __maybe_unused, if (!table || !sys_event_table) return -1; - err = pmu_events_table_for_each_event(table, test__pmu_event_table_core_callback, + err = pmu_events_table__for_each_event(table, /*pmu=*/ NULL, + test__pmu_event_table_core_callback, &map_events); if (err) return err; - err = pmu_events_table_for_each_event(sys_event_table, test__pmu_event_table_sys_callback, + err = pmu_events_table__for_each_event(sys_event_table, /*pmu=*/ NULL, + test__pmu_event_table_sys_callback, &map_events); if (err) return err; @@ -496,26 +504,30 @@ static int test__pmu_event_table(struct test_suite *test __maybe_unused, return 0; } -static struct perf_pmu_alias *find_alias(const char *test_event, struct list_head *aliases) -{ - struct perf_pmu_alias *alias; +struct test_core_pmu_event_aliases_cb_args { + struct perf_pmu_test_event const *test_event; + int *count; +}; - list_for_each_entry(alias, aliases, list) - if (!strcmp(test_event, alias->name)) - return alias; +static int test_core_pmu_event_aliases_cb(void *state, struct pmu_event_info *alias) +{ + struct test_core_pmu_event_aliases_cb_args *args = state; - return NULL; + if (compare_alias_to_test_event(alias, args->test_event, alias->pmu->name)) + return -1; + (*args->count)++; + pr_debug2("testing aliases core PMU %s: matched event %s\n", + alias->pmu_name, alias->name); + return 0; } /* Verify aliases are as expected */ -static int __test_core_pmu_event_aliases(char *pmu_name, int *count) +static int __test_core_pmu_event_aliases(const char *pmu_name, int *count) { struct perf_pmu_test_event const **test_event_table; struct perf_pmu *pmu; - LIST_HEAD(aliases); int res = 0; const struct pmu_events_table *table = find_core_events_table("testarch", "testcpu"); - struct perf_pmu_alias *a, *tmp; if (!table) return -1; @@ -526,37 +538,40 @@ static int __test_core_pmu_event_aliases(char *pmu_name, int *count) if (!pmu) return -1; - pmu->name = pmu_name; - - pmu_add_cpu_aliases_table(&aliases, pmu, table); - + INIT_LIST_HEAD(&pmu->format); + INIT_LIST_HEAD(&pmu->aliases); + INIT_LIST_HEAD(&pmu->caps); + INIT_LIST_HEAD(&pmu->list); + pmu->name = 
strdup(pmu_name); + pmu->is_core = true; + + pmu->events_table = table; + pmu_add_cpu_aliases_table(pmu, table); + pmu->cpu_aliases_added = true; + pmu->sysfs_aliases_loaded = true; + + res = pmu_events_table__find_event(table, pmu, "bp_l1_btb_correct", NULL, NULL); + if (res != 0) { + pr_debug("Missing test event in test architecture"); + return res; + } for (; *test_event_table; test_event_table++) { - struct perf_pmu_test_event const *test_event = *test_event_table; - struct pmu_event const *event = &test_event->event; - struct perf_pmu_alias *alias = find_alias(event->name, &aliases); - - if (!alias) { - pr_debug("testing aliases core PMU %s: no alias, alias_table->name=%s\n", - pmu_name, event->name); - res = -1; - break; - } - - if (compare_alias_to_test_event(alias, test_event, pmu_name)) { - res = -1; - break; - } - - (*count)++; - pr_debug2("testing aliases core PMU %s: matched event %s\n", - pmu_name, alias->name); + struct perf_pmu_test_event test_event = **test_event_table; + struct pmu_event const *event = &test_event.event; + struct test_core_pmu_event_aliases_cb_args args = { + .test_event = &test_event, + .count = count, + }; + int err; + + test_event.event.pmu = pmu_name; + err = perf_pmu__find_event(pmu, event->name, &args, + test_core_pmu_event_aliases_cb); + if (err) + res = err; } + perf_pmu__delete(pmu); - list_for_each_entry_safe(a, tmp, &aliases, list) { - list_del(&a->list); - perf_pmu_free_alias(a); - } - free(pmu); return res; } @@ -566,20 +581,20 @@ static int __test_uncore_pmu_event_aliases(struct perf_pmu_test_pmu *test_pmu) struct perf_pmu_test_event const **table; struct perf_pmu *pmu = &test_pmu->pmu; const char *pmu_name = pmu->name; - struct perf_pmu_alias *a, *tmp, *alias; const struct pmu_events_table *events_table; - LIST_HEAD(aliases); int res = 0; events_table = find_core_events_table("testarch", "testcpu"); if (!events_table) return -1; - pmu_add_cpu_aliases_table(&aliases, pmu, events_table); - pmu_add_sys_aliases(&aliases, pmu); + pmu->events_table = events_table; + pmu_add_cpu_aliases_table(pmu, events_table); + pmu->cpu_aliases_added = true; + pmu->sysfs_aliases_loaded = true; + pmu_add_sys_aliases(pmu); /* Count how many aliases we generated */ - list_for_each_entry(alias, &aliases, list) - alias_count++; + alias_count = perf_pmu__num_events(pmu); /* Count how many aliases we expect from the known table */ for (table = &test_pmu->aliases[0]; *table; table++) @@ -588,33 +603,25 @@ static int __test_uncore_pmu_event_aliases(struct perf_pmu_test_pmu *test_pmu) if (alias_count != to_match_count) { pr_debug("testing aliases uncore PMU %s: mismatch expected aliases (%d) vs found (%d)\n", pmu_name, to_match_count, alias_count); - res = -1; - goto out; + return -1; } - list_for_each_entry(alias, &aliases, list) { - bool matched = false; - - for (table = &test_pmu->aliases[0]; *table; table++) { - struct perf_pmu_test_event const *test_event = *table; - struct pmu_event const *event = &test_event->event; - - if (!strcmp(event->name, alias->name)) { - if (compare_alias_to_test_event(alias, - test_event, - pmu_name)) { - continue; - } - matched = true; - matched_count++; - } - } - - if (matched == false) { + for (table = &test_pmu->aliases[0]; *table; table++) { + struct perf_pmu_test_event test_event = **table; + struct pmu_event const *event = &test_event.event; + int err; + struct test_core_pmu_event_aliases_cb_args args = { + .test_event = &test_event, + .count = &matched_count, + }; + + err = perf_pmu__find_event(pmu, event->name, &args, + 
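/*
 * Both the core and the uncore variants of the test now resolve events
 * through the same callback interface: perf_pmu__num_events() reports the
 * alias count and perf_pmu__find_event() invokes a callback that receives a
 * struct pmu_event_info plus caller state.  A minimal sketch of that
 * pattern (count_cb() and count_event() are illustrative helpers, not part
 * of the tree):
 */
static int count_cb(void *state, struct pmu_event_info *info __maybe_unused)
{
	int *count = state;

	(*count)++;
	return 0;
}

static int count_event(struct perf_pmu *pmu, const char *event)
{
	int count = 0;

	/* a non-zero return is treated as failure, as the tests above do */
	if (perf_pmu__find_event(pmu, event, &count, count_cb))
		return -1;
	return count;
}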
test_core_pmu_event_aliases_cb); + if (err) { + res = err; pr_debug("testing aliases uncore PMU %s: could not match alias %s\n", - pmu_name, alias->name); - res = -1; - goto out; + pmu_name, event->name); + return -1; } } @@ -623,19 +630,13 @@ static int __test_uncore_pmu_event_aliases(struct perf_pmu_test_pmu *test_pmu) pmu_name, matched_count, alias_count); res = -1; } - -out: - list_for_each_entry_safe(a, tmp, &aliases, list) { - list_del(&a->list); - perf_pmu_free_alias(a); - } return res; } static struct perf_pmu_test_pmu test_pmus[] = { { .pmu = { - .name = (char *)"hisi_sccl1_ddrc2", + .name = "hisi_sccl1_ddrc2", .is_uncore = 1, }, .aliases = { @@ -644,7 +645,7 @@ static struct perf_pmu_test_pmu test_pmus[] = { }, { .pmu = { - .name = (char *)"uncore_cbox_0", + .name = "uncore_cbox_0", .is_uncore = 1, }, .aliases = { @@ -655,7 +656,7 @@ static struct perf_pmu_test_pmu test_pmus[] = { }, { .pmu = { - .name = (char *)"hisi_sccl3_l3c7", + .name = "hisi_sccl3_l3c7", .is_uncore = 1, }, .aliases = { @@ -664,7 +665,7 @@ static struct perf_pmu_test_pmu test_pmus[] = { }, { .pmu = { - .name = (char *)"uncore_imc_free_running_0", + .name = "uncore_imc_free_running_0", .is_uncore = 1, }, .aliases = { @@ -673,7 +674,7 @@ static struct perf_pmu_test_pmu test_pmus[] = { }, { .pmu = { - .name = (char *)"uncore_imc_0", + .name = "uncore_imc_0", .is_uncore = 1, }, .aliases = { @@ -682,9 +683,9 @@ static struct perf_pmu_test_pmu test_pmus[] = { }, { .pmu = { - .name = (char *)"uncore_sys_ddr_pmu0", + .name = "uncore_sys_ddr_pmu0", .is_uncore = 1, - .id = (char *)"v8", + .id = "v8", }, .aliases = { &sys_ddr_pmu_write_cycles, @@ -692,9 +693,9 @@ static struct perf_pmu_test_pmu test_pmus[] = { }, { .pmu = { - .name = (char *)"uncore_sys_ccn_pmu4", + .name = "uncore_sys_ccn_pmu4", .is_uncore = 1, - .id = (char *)"0x01", + .id = "0x01", }, .aliases = { &sys_ccn_pmu_read_cycles, @@ -732,8 +733,13 @@ static int test__aliases(struct test_suite *test __maybe_unused, } for (i = 0; i < ARRAY_SIZE(test_pmus); i++) { - int res = __test_uncore_pmu_event_aliases(&test_pmus[i]); + int res; + + INIT_LIST_HEAD(&test_pmus[i].pmu.format); + INIT_LIST_HEAD(&test_pmus[i].pmu.aliases); + INIT_LIST_HEAD(&test_pmus[i].pmu.caps); + res = __test_uncore_pmu_event_aliases(&test_pmus[i]); if (res) return res; } diff --git a/tools/perf/tests/pmu.c b/tools/perf/tests/pmu.c index a4452639a3d4..eb60e5f66859 100644 --- a/tools/perf/tests/pmu.c +++ b/tools/perf/tests/pmu.c @@ -7,6 +7,7 @@ #include <stdio.h> #include <linux/kernel.h> #include <linux/limits.h> +#include <linux/zalloc.h> /* Simulated format definitions. */ static struct test_format { @@ -27,55 +28,55 @@ static struct test_format { /* Simulated users input. 
*/ static struct parse_events_term test_terms[] = { { - .config = (char *) "krava01", + .config = "krava01", .val.num = 15, .type_val = PARSE_EVENTS__TERM_TYPE_NUM, .type_term = PARSE_EVENTS__TERM_TYPE_USER, }, { - .config = (char *) "krava02", + .config = "krava02", .val.num = 170, .type_val = PARSE_EVENTS__TERM_TYPE_NUM, .type_term = PARSE_EVENTS__TERM_TYPE_USER, }, { - .config = (char *) "krava03", + .config = "krava03", .val.num = 1, .type_val = PARSE_EVENTS__TERM_TYPE_NUM, .type_term = PARSE_EVENTS__TERM_TYPE_USER, }, { - .config = (char *) "krava11", + .config = "krava11", .val.num = 27, .type_val = PARSE_EVENTS__TERM_TYPE_NUM, .type_term = PARSE_EVENTS__TERM_TYPE_USER, }, { - .config = (char *) "krava12", + .config = "krava12", .val.num = 1, .type_val = PARSE_EVENTS__TERM_TYPE_NUM, .type_term = PARSE_EVENTS__TERM_TYPE_USER, }, { - .config = (char *) "krava13", + .config = "krava13", .val.num = 2, .type_val = PARSE_EVENTS__TERM_TYPE_NUM, .type_term = PARSE_EVENTS__TERM_TYPE_USER, }, { - .config = (char *) "krava21", + .config = "krava21", .val.num = 119, .type_val = PARSE_EVENTS__TERM_TYPE_NUM, .type_term = PARSE_EVENTS__TERM_TYPE_USER, }, { - .config = (char *) "krava22", + .config = "krava22", .val.num = 11, .type_val = PARSE_EVENTS__TERM_TYPE_NUM, .type_term = PARSE_EVENTS__TERM_TYPE_USER, }, { - .config = (char *) "krava23", + .config = "krava23", .val.num = 2, .type_val = PARSE_EVENTS__TERM_TYPE_NUM, .type_term = PARSE_EVENTS__TERM_TYPE_USER, @@ -141,48 +142,55 @@ static struct list_head *test_terms_list(void) static int test__pmu(struct test_suite *test __maybe_unused, int subtest __maybe_unused) { char dir[PATH_MAX]; - char *format = test_format_dir_get(dir, sizeof(dir)); - LIST_HEAD(formats); + char *format; struct list_head *terms = test_terms_list(); + struct perf_event_attr attr; + struct perf_pmu *pmu; + int fd; int ret; - if (!format) - return -EINVAL; - - do { - struct perf_event_attr attr; - int fd; - - memset(&attr, 0, sizeof(attr)); - - fd = open(format, O_DIRECTORY); - if (fd < 0) { - ret = fd; - break; - } - ret = perf_pmu__format_parse(fd, &formats); - if (ret) - break; - - ret = perf_pmu__config_terms("perf-pmu-test", &formats, &attr, - terms, false, NULL); - if (ret) - break; + pmu = zalloc(sizeof(*pmu)); + if (!pmu) + return -ENOMEM; - ret = -EINVAL; + INIT_LIST_HEAD(&pmu->format); + INIT_LIST_HEAD(&pmu->aliases); + INIT_LIST_HEAD(&pmu->caps); + format = test_format_dir_get(dir, sizeof(dir)); + if (!format) { + free(pmu); + return -EINVAL; + } - if (attr.config != 0xc00000000002a823) - break; - if (attr.config1 != 0x8000400000000145) - break; - if (attr.config2 != 0x0400000020041d07) - break; + memset(&attr, 0, sizeof(attr)); - ret = 0; - } while (0); + fd = open(format, O_DIRECTORY); + if (fd < 0) { + ret = fd; + goto out; + } - perf_pmu__del_formats(&formats); + pmu->name = strdup("perf-pmu-test"); + ret = perf_pmu__format_parse(pmu, fd, /*eager_load=*/true); + if (ret) + goto out; + + ret = perf_pmu__config_terms(pmu, &attr, terms, /*zero=*/false, /*err=*/NULL); + if (ret) + goto out; + + ret = -EINVAL; + if (attr.config != 0xc00000000002a823) + goto out; + if (attr.config1 != 0x8000400000000145) + goto out; + if (attr.config2 != 0x0400000020041d07) + goto out; + + ret = 0; +out: test_format_dir_put(format); + perf_pmu__delete(pmu); return ret; } diff --git a/tools/perf/tests/shell/coresight/asm_pure_loop.sh b/tools/perf/tests/shell/coresight/asm_pure_loop.sh index 569e9d46162b..779bc8608e1e 100755 --- a/tools/perf/tests/shell/coresight/asm_pure_loop.sh +++ 
b/tools/perf/tests/shell/coresight/asm_pure_loop.sh @@ -5,7 +5,7 @@ # Carsten Haitzler <carsten.haitzler@arm.com>, 2021 TEST="asm_pure_loop" -. $(dirname $0)/../lib/coresight.sh +. "$(dirname $0)"/../lib/coresight.sh ARGS="" DATV="out" DATA="$DATD/perf-$TEST-$DATV.data" diff --git a/tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh b/tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh index d21ba8545938..08a44e52ce9b 100755 --- a/tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh +++ b/tools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh @@ -5,7 +5,7 @@ # Carsten Haitzler <carsten.haitzler@arm.com>, 2021 TEST="memcpy_thread" -. $(dirname $0)/../lib/coresight.sh +. "$(dirname $0)"/../lib/coresight.sh ARGS="16 10 1" DATV="16k_10" DATA="$DATD/perf-$TEST-$DATV.data" diff --git a/tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh b/tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh index 7c13636fc778..c83a200dede4 100755 --- a/tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh +++ b/tools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh @@ -5,7 +5,7 @@ # Carsten Haitzler <carsten.haitzler@arm.com>, 2021 TEST="thread_loop" -. $(dirname $0)/../lib/coresight.sh +. "$(dirname $0)"/../lib/coresight.sh ARGS="10 1" DATV="check-tid-10th" DATA="$DATD/perf-$TEST-$DATV.data" diff --git a/tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh b/tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh index a067145af43c..6346fd5e87c8 100755 --- a/tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh +++ b/tools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh @@ -5,7 +5,7 @@ # Carsten Haitzler <carsten.haitzler@arm.com>, 2021 TEST="thread_loop" -. $(dirname $0)/../lib/coresight.sh +. "$(dirname $0)"/../lib/coresight.sh ARGS="2 20" DATV="check-tid-2th" DATA="$DATD/perf-$TEST-$DATV.data" diff --git a/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh b/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh index f48c85230b15..7304e3d3a6ff 100755 --- a/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh +++ b/tools/perf/tests/shell/coresight/unroll_loop_thread_10.sh @@ -5,7 +5,7 @@ # Carsten Haitzler <carsten.haitzler@arm.com>, 2021 TEST="unroll_loop_thread" -. $(dirname $0)/../lib/coresight.sh +. 
"$(dirname $0)"/../lib/coresight.sh ARGS="10" DATV="10" DATA="$DATD/perf-$TEST-$DATV.data" diff --git a/tools/perf/tests/shell/lib/probe.sh b/tools/perf/tests/shell/lib/probe.sh index 51e3f60baba0..5aa6e2ec5734 100644 --- a/tools/perf/tests/shell/lib/probe.sh +++ b/tools/perf/tests/shell/lib/probe.sh @@ -1,3 +1,4 @@ +#!/bin/bash # SPDX-License-Identifier: GPL-2.0 # Arnaldo Carvalho de Melo <acme@kernel.org>, 2017 diff --git a/tools/perf/tests/shell/lib/probe_vfs_getname.sh b/tools/perf/tests/shell/lib/probe_vfs_getname.sh index 60c5e34f90c4..bf4c1fb71c4b 100644 --- a/tools/perf/tests/shell/lib/probe_vfs_getname.sh +++ b/tools/perf/tests/shell/lib/probe_vfs_getname.sh @@ -1,3 +1,4 @@ +#!/bin/sh # Arnaldo Carvalho de Melo <acme@kernel.org>, 2017 perf probe -l 2>&1 | grep -q probe:vfs_getname @@ -10,11 +11,11 @@ cleanup_probe_vfs_getname() { } add_probe_vfs_getname() { - local verbose=$1 + add_probe_verbose=$1 if [ $had_vfs_getname -eq 1 ] ; then line=$(perf probe -L getname_flags 2>&1 | grep -E 'result.*=.*filename;' | sed -r 's/[[:space:]]+([[:digit:]]+)[[:space:]]+result->uptr.*/\1/') perf probe -q "vfs_getname=getname_flags:${line} pathname=result->name:string" || \ - perf probe $verbose "vfs_getname=getname_flags:${line} pathname=filename:ustring" + perf probe $add_probe_verbose "vfs_getname=getname_flags:${line} pathname=filename:ustring" fi } diff --git a/tools/perf/tests/shell/lib/stat_output.sh b/tools/perf/tests/shell/lib/stat_output.sh index 698343f0ecf9..3cc158a64326 100644 --- a/tools/perf/tests/shell/lib/stat_output.sh +++ b/tools/perf/tests/shell/lib/stat_output.sh @@ -1,3 +1,4 @@ +#!/bin/bash # SPDX-License-Identifier: GPL-2.0 # Return true if perf_event_paranoid is > $1 and not running as root. diff --git a/tools/perf/tests/shell/lib/waiting.sh b/tools/perf/tests/shell/lib/waiting.sh index e7a39134a68e..bdd5a7c71591 100644 --- a/tools/perf/tests/shell/lib/waiting.sh +++ b/tools/perf/tests/shell/lib/waiting.sh @@ -1,3 +1,4 @@ +#!/bin/sh # SPDX-License-Identifier: GPL-2.0 tenths=date\ +%s%1N diff --git a/tools/perf/tests/shell/lock_contention.sh b/tools/perf/tests/shell/lock_contention.sh index 4a194420416e..d120e83db7d9 100755 --- a/tools/perf/tests/shell/lock_contention.sh +++ b/tools/perf/tests/shell/lock_contention.sh @@ -21,7 +21,7 @@ trap_cleanup() { trap trap_cleanup EXIT TERM INT check() { - if [ `id -u` != 0 ]; then + if [ "$(id -u)" != 0 ]; then echo "[Skip] No root permission" err=2 exit @@ -157,10 +157,10 @@ test_lock_filter() perf lock contention -i ${perfdata} -L tasklist_lock -q 2> ${result} # find out the type of tasklist_lock - local type=$(head -1 "${result}" | awk '{ print $8 }' | sed -e 's/:.*//') + test_lock_filter_type=$(head -1 "${result}" | awk '{ print $8 }' | sed -e 's/:.*//') - if [ "$(grep -c -v "${type}" "${result}")" != "0" ]; then - echo "[Fail] Recorded result should not have non-${type} locks:" "$(cat "${result}")" + if [ "$(grep -c -v "${test_lock_filter_type}" "${result}")" != "0" ]; then + echo "[Fail] Recorded result should not have non-${test_lock_filter_type} locks:" "$(cat "${result}")" err=1 exit fi @@ -170,8 +170,8 @@ test_lock_filter() fi perf lock con -a -b -L tasklist_lock -q -- perf bench sched messaging > /dev/null 2> ${result} - if [ "$(grep -c -v "${type}" "${result}")" != "0" ]; then - echo "[Fail] BPF result should not have non-${type} locks:" "$(cat "${result}")" + if [ "$(grep -c -v "${test_lock_filter_type}" "${result}")" != "0" ]; then + echo "[Fail] BPF result should not have non-${test_lock_filter_type} locks:" "$(cat 
"${result}")" err=1 exit fi diff --git a/tools/perf/tests/shell/probe_vfs_getname.sh b/tools/perf/tests/shell/probe_vfs_getname.sh index 5d1b63d3f3e1..871243d6d03a 100755 --- a/tools/perf/tests/shell/probe_vfs_getname.sh +++ b/tools/perf/tests/shell/probe_vfs_getname.sh @@ -4,11 +4,11 @@ # SPDX-License-Identifier: GPL-2.0 # Arnaldo Carvalho de Melo <acme@kernel.org>, 2017 -. $(dirname $0)/lib/probe.sh +. "$(dirname $0)"/lib/probe.sh skip_if_no_perf_probe || exit 2 -. $(dirname $0)/lib/probe_vfs_getname.sh +. "$(dirname $0)"/lib/probe_vfs_getname.sh add_probe_vfs_getname || skip_if_no_debuginfo err=$? diff --git a/tools/perf/tests/shell/record+zstd_comp_decomp.sh b/tools/perf/tests/shell/record+zstd_comp_decomp.sh index 49bd875d5122..8929046e9057 100755 --- a/tools/perf/tests/shell/record+zstd_comp_decomp.sh +++ b/tools/perf/tests/shell/record+zstd_comp_decomp.sh @@ -13,25 +13,25 @@ skip_if_no_z_record() { collect_z_record() { echo "Collecting compressed record file:" [ "$(uname -m)" != s390x ] && gflag='-g' - $perf_tool record -o $trace_file $gflag -z -F 5000 -- \ + $perf_tool record -o "$trace_file" $gflag -z -F 5000 -- \ dd count=500 if=/dev/urandom of=/dev/null } check_compressed_stats() { echo "Checking compressed events stats:" - $perf_tool report -i $trace_file --header --stats | \ + $perf_tool report -i "$trace_file" --header --stats | \ grep -E "(# compressed : Zstd,)|(COMPRESSED events:)" } check_compressed_output() { - $perf_tool inject -i $trace_file -o $trace_file.decomp && - $perf_tool report -i $trace_file --stdio -F comm,dso,sym | head -n -3 > $trace_file.comp.output && - $perf_tool report -i $trace_file.decomp --stdio -F comm,dso,sym | head -n -3 > $trace_file.decomp.output && - diff $trace_file.comp.output $trace_file.decomp.output + $perf_tool inject -i "$trace_file" -o "$trace_file.decomp" && + $perf_tool report -i "$trace_file" --stdio -F comm,dso,sym | head -n -3 > "$trace_file.comp.output" && + $perf_tool report -i "$trace_file.decomp" --stdio -F comm,dso,sym | head -n -3 > "$trace_file.decomp.output" && + diff "$trace_file.comp.output" "$trace_file.decomp.output" } skip_if_no_z_record || exit 2 collect_z_record && check_compressed_stats && check_compressed_output err=$? -rm -f $trace_file* +rm -f "$trace_file*" exit $err diff --git a/tools/perf/tests/shell/record_bpf_filter.sh b/tools/perf/tests/shell/record_bpf_filter.sh new file mode 100755 index 000000000000..31c593966e8c --- /dev/null +++ b/tools/perf/tests/shell/record_bpf_filter.sh @@ -0,0 +1,134 @@ +#!/bin/sh +# perf record sample filtering (by BPF) tests +# SPDX-License-Identifier: GPL-2.0 + +set -e + +err=0 +perfdata=$(mktemp /tmp/__perf_test.perf.data.XXXXX) + +cleanup() { + rm -f "${perfdata}" + rm -f "${perfdata}".old + trap - EXIT TERM INT +} + +trap_cleanup() { + cleanup + exit 1 +} +trap trap_cleanup EXIT TERM INT + +test_bpf_filter_priv() { + echo "Checking BPF-filter privilege" + + if [ "$(id -u)" != 0 ] + then + echo "bpf-filter test [Skipped permission]" + err=2 + return + fi + if ! perf record -e task-clock --filter 'period > 1' \ + -o /dev/null --quiet true 2>&1 + then + echo "bpf-filter test [Skipped missing BPF support]" + err=2 + return + fi +} + +test_bpf_filter_basic() { + echo "Basic bpf-filter test" + + if ! 
perf record -e task-clock -c 10000 --filter 'ip < 0xffffffff00000000' \ + -o "${perfdata}" true 2> /dev/null + then + echo "Basic bpf-filter test [Failed record]" + err=1 + return + fi + if perf script -i "${perfdata}" -F ip | grep 'ffffffff[0-9a-f]*' + then + if uname -r | grep -q ^6.2 + then + echo "Basic bpf-filter test [Skipped unsupported kernel]" + err=2 + return + fi + echo "Basic bpf-filter test [Failed invalid output]" + err=1 + return + fi + echo "Basic bpf-filter test [Success]" +} + +test_bpf_filter_fail() { + echo "Failing bpf-filter test" + + # 'cpu' requires PERF_SAMPLE_CPU flag + if ! perf record -e task-clock --filter 'cpu > 0' \ + -o /dev/null true 2>&1 | grep PERF_SAMPLE_CPU + then + echo "Failing bpf-filter test [Failed forbidden CPU]" + err=1 + return + fi + + if ! perf record --sample-cpu -e task-clock --filter 'cpu > 0' \ + -o /dev/null true 2>/dev/null + then + echo "Failing bpf-filter test [Failed should succeed]" + err=1 + return + fi + + echo "Failing bpf-filter test [Success]" +} + +test_bpf_filter_group() { + echo "Group bpf-filter test" + + if ! perf record -e task-clock --filter 'period > 1000 || ip > 0' \ + -o /dev/null true 2>/dev/null + then + echo "Group bpf-filter test [Failed should succeed]" + err=1 + return + fi + + if ! perf record -e task-clock --filter 'cpu > 0 || ip > 0' \ + -o /dev/null true 2>&1 | grep PERF_SAMPLE_CPU + then + echo "Group bpf-filter test [Failed forbidden CPU]" + err=1 + return + fi + + if ! perf record -e task-clock --filter 'period > 0 || code_pgsz > 4096' \ + -o /dev/null true 2>&1 | grep PERF_SAMPLE_CODE_PAGE_SIZE + then + echo "Group bpf-filter test [Failed forbidden CODE_PAGE_SIZE]" + err=1 + return + fi + + echo "Group bpf-filter test [Success]" +} + + +test_bpf_filter_priv + +if [ $err = 0 ]; then + test_bpf_filter_basic +fi + +if [ $err = 0 ]; then + test_bpf_filter_fail +fi + +if [ $err = 0 ]; then + test_bpf_filter_group +fi + +cleanup +exit $err diff --git a/tools/perf/tests/shell/record_offcpu.sh b/tools/perf/tests/shell/record_offcpu.sh index f062ae9a95e1..a0d14cd0aa79 100755 --- a/tools/perf/tests/shell/record_offcpu.sh +++ b/tools/perf/tests/shell/record_offcpu.sh @@ -10,19 +10,19 @@ perfdata=$(mktemp /tmp/__perf_test.perf.data.XXXXX) cleanup() { rm -f ${perfdata} rm -f ${perfdata}.old - trap - exit term int + trap - EXIT TERM INT } trap_cleanup() { cleanup exit 1 } -trap trap_cleanup exit term int +trap trap_cleanup EXIT TERM INT test_offcpu_priv() { echo "Checking off-cpu privilege" - if [ `id -u` != 0 ] + if [ "$(id -u)" != 0 ] then echo "off-cpu test [Skipped permission]" err=2 diff --git a/tools/perf/tests/shell/stat+csv_output.sh b/tools/perf/tests/shell/stat+csv_output.sh index 34a0701fee05..d890eb26e914 100755 --- a/tools/perf/tests/shell/stat+csv_output.sh +++ b/tools/perf/tests/shell/stat+csv_output.sh @@ -6,7 +6,7 @@ set -e -. $(dirname $0)/lib/stat_output.sh +. 
"$(dirname $0)"/lib/stat_output.sh csv_sep=@ diff --git a/tools/perf/tests/shell/stat+csv_summary.sh b/tools/perf/tests/shell/stat+csv_summary.sh index 5571ff75eb42..8bae9c8a835e 100755 --- a/tools/perf/tests/shell/stat+csv_summary.sh +++ b/tools/perf/tests/shell/stat+csv_summary.sh @@ -10,7 +10,7 @@ set -e # perf stat -e cycles -x' ' -I1000 --interval-count 1 --summary 2>&1 | \ grep -e summary | \ -while read summary num event run pct +while read summary _num _event _run _pct do if [ $summary != "summary" ]; then exit 1 @@ -23,7 +23,7 @@ done # perf stat -e cycles -x' ' -I1000 --interval-count 1 --summary --no-csv-summary 2>&1 | \ grep -e summary | \ -while read num event run pct +while read _num _event _run _pct do exit 1 done diff --git a/tools/perf/tests/shell/stat+shadow_stat.sh b/tools/perf/tests/shell/stat+shadow_stat.sh index 0e9cba84e757..a1918a15e36a 100755 --- a/tools/perf/tests/shell/stat+shadow_stat.sh +++ b/tools/perf/tests/shell/stat+shadow_stat.sh @@ -14,7 +14,7 @@ test_global_aggr() { perf stat -a --no-big-num -e cycles,instructions sleep 1 2>&1 | \ grep -e cycles -e instructions | \ - while read num evt hash ipc rest + while read num evt _hash ipc rest do # skip not counted events if [ "$num" = "<not" ]; then @@ -45,7 +45,7 @@ test_no_aggr() { perf stat -a -A --no-big-num -e cycles,instructions sleep 1 2>&1 | \ grep ^CPU | \ - while read cpu num evt hash ipc rest + while read cpu num evt _hash ipc rest do # skip not counted events if [ "$num" = "<not" ]; then diff --git a/tools/perf/tests/shell/stat+std_output.sh b/tools/perf/tests/shell/stat+std_output.sh index f972b31fa0c2..fb2b10547a11 100755 --- a/tools/perf/tests/shell/stat+std_output.sh +++ b/tools/perf/tests/shell/stat+std_output.sh @@ -6,7 +6,7 @@ set -e -. $(dirname $0)/lib/stat_output.sh +. "$(dirname $0)"/lib/stat_output.sh stat_output=$(mktemp /tmp/__perf_test.stat_output.std.XXXXX) @@ -28,7 +28,6 @@ trap trap_cleanup EXIT TERM INT function commachecker() { - local -i cnt=0 local prefix=1 case "$1" diff --git a/tools/perf/tests/shell/stat_bpf_counters.sh b/tools/perf/tests/shell/stat_bpf_counters.sh index 13473aeba489..a87bb2814b4c 100755 --- a/tools/perf/tests/shell/stat_bpf_counters.sh +++ b/tools/perf/tests/shell/stat_bpf_counters.sh @@ -22,21 +22,21 @@ compare_number() } # skip if --bpf-counters is not supported -if ! perf stat --bpf-counters true > /dev/null 2>&1; then +if ! 
perf stat -e cycles --bpf-counters true > /dev/null 2>&1; then if [ "$1" = "-v" ]; then echo "Skipping: --bpf-counters not supported" - perf --no-pager stat --bpf-counters true || true + perf --no-pager stat -e cycles --bpf-counters true || true fi exit 2 fi base_cycles=$(perf stat --no-big-num -e cycles -- perf bench sched messaging -g 1 -l 100 -t 2>&1 | awk '/cycles/ {print $1}') -if [ "$base_cycles" == "<not" ]; then +if [ "$base_cycles" = "<not" ]; then echo "Skipping: cycles event not counted" exit 2 fi bpf_cycles=$(perf stat --no-big-num --bpf-counters -e cycles -- perf bench sched messaging -g 1 -l 100 -t 2>&1 | awk '/cycles/ {print $1}') -if [ "$bpf_cycles" == "<not" ]; then +if [ "$bpf_cycles" = "<not" ]; then echo "Failed: cycles not counted with --bpf-counters" exit 1 fi diff --git a/tools/perf/tests/shell/stat_bpf_counters_cgrp.sh b/tools/perf/tests/shell/stat_bpf_counters_cgrp.sh index d724855d097c..e75d0780dc78 100755 --- a/tools/perf/tests/shell/stat_bpf_counters_cgrp.sh +++ b/tools/perf/tests/shell/stat_bpf_counters_cgrp.sh @@ -25,22 +25,22 @@ check_bpf_counter() find_cgroups() { # try usual systemd slices first - if [ -d /sys/fs/cgroup/system.slice -a -d /sys/fs/cgroup/user.slice ]; then + if [ -d /sys/fs/cgroup/system.slice ] && [ -d /sys/fs/cgroup/user.slice ]; then test_cgroups="system.slice,user.slice" return fi # try root and self cgroups - local self_cgrp=$(grep perf_event /proc/self/cgroup | cut -d: -f3) - if [ -z ${self_cgrp} ]; then + find_cgroups_self_cgrp=$(grep perf_event /proc/self/cgroup | cut -d: -f3) + if [ -z ${find_cgroups_self_cgrp} ]; then # cgroup v2 doesn't specify perf_event - self_cgrp=$(grep ^0: /proc/self/cgroup | cut -d: -f3) + find_cgroups_self_cgrp=$(grep ^0: /proc/self/cgroup | cut -d: -f3) fi - if [ -z ${self_cgrp} ]; then + if [ -z ${find_cgroups_self_cgrp} ]; then test_cgroups="/" else - test_cgroups="/,${self_cgrp}" + test_cgroups="/,${find_cgroups_self_cgrp}" fi } @@ -48,13 +48,11 @@ find_cgroups() # Just check if it runs without failure and has non-zero results. 
check_system_wide_counted() { - local output - - output=$(perf stat -a --bpf-counters --for-each-cgroup ${test_cgroups} -e cpu-clock -x, sleep 1 2>&1) - if echo ${output} | grep -q -F "<not "; then + check_system_wide_counted_output=$(perf stat -a --bpf-counters --for-each-cgroup ${test_cgroups} -e cpu-clock -x, sleep 1 2>&1) + if echo ${check_system_wide_counted_output} | grep -q -F "<not "; then echo "Some system-wide events are not counted" if [ "${verbose}" = "1" ]; then - echo ${output} + echo ${check_system_wide_counted_output} fi exit 1 fi @@ -62,13 +60,11 @@ check_system_wide_counted() check_cpu_list_counted() { - local output - - output=$(perf stat -C 1 --bpf-counters --for-each-cgroup ${test_cgroups} -e cpu-clock -x, taskset -c 1 sleep 1 2>&1) - if echo ${output} | grep -q -F "<not "; then + check_cpu_list_counted_output=$(perf stat -C 0,1 --bpf-counters --for-each-cgroup ${test_cgroups} -e cpu-clock -x, taskset -c 1 sleep 1 2>&1) + if echo ${check_cpu_list_counted_output} | grep -q -F "<not "; then echo "Some CPU events are not counted" if [ "${verbose}" = "1" ]; then - echo ${output} + echo ${check_cpu_list_counted_output} fi exit 1 fi diff --git a/tools/perf/tests/shell/test_arm_spe_fork.sh b/tools/perf/tests/shell/test_arm_spe_fork.sh index fad361675a1d..1a7e6a82d0e3 100755 --- a/tools/perf/tests/shell/test_arm_spe_fork.sh +++ b/tools/perf/tests/shell/test_arm_spe_fork.sh @@ -22,7 +22,7 @@ cleanup_files() rm -f ${PERF_DATA} } -trap cleanup_files exit term int +trap cleanup_files EXIT TERM INT echo "Recording workload..." perf record -o ${PERF_DATA} -e arm_spe/period=65536/ -vvv -- $TEST_PROGRAM > ${PERF_RECORD_LOG} 2>&1 & diff --git a/tools/perf/tests/shell/test_perf_data_converter_json.sh b/tools/perf/tests/shell/test_perf_data_converter_json.sh index 72ac6c83231c..6ded58f98f55 100755 --- a/tools/perf/tests/shell/test_perf_data_converter_json.sh +++ b/tools/perf/tests/shell/test_perf_data_converter_json.sh @@ -39,7 +39,7 @@ test_json_converter_command() echo "Testing Perf Data Convertion Command to JSON" perf record -o "$perfdata" -F 99 -g -- perf test -w noploop > /dev/null 2>&1 perf data convert --to-json "$result" --force -i "$perfdata" >/dev/null 2>&1 - if [ $(cat "${result}" | wc -l) -gt "0" ] ; then + if [ "$(cat ${result} | wc -l)" -gt "0" ] ; then echo "Perf Data Converter Command to JSON [SUCCESS]" else echo "Perf Data Converter Command to JSON [FAILED]" diff --git a/tools/perf/tests/shell/test_task_analyzer.sh b/tools/perf/tests/shell/test_task_analyzer.sh index 0095abbe20ca..92d15154ba79 100755 --- a/tools/perf/tests/shell/test_task_analyzer.sh +++ b/tools/perf/tests/shell/test_task_analyzer.sh @@ -52,7 +52,7 @@ find_str_or_fail() { # check if perf is compiled with libtraceevent support skip_no_probe_record_support() { - perf record -e "sched:sched_switch" -a -- sleep 1 2>&1 | grep "libtraceevent is necessary for tracepoint support" && return 2 + perf version --build-options | grep -q " OFF .* HAVE_LIBTRACEEVENT" && return 2 return 0 } diff --git a/tools/perf/tests/shell/trace+probe_vfs_getname.sh b/tools/perf/tests/shell/trace+probe_vfs_getname.sh index 0a4bac3dd77e..4014487cf4d9 100755 --- a/tools/perf/tests/shell/trace+probe_vfs_getname.sh +++ b/tools/perf/tests/shell/trace+probe_vfs_getname.sh @@ -10,17 +10,17 @@ # SPDX-License-Identifier: GPL-2.0 # Arnaldo Carvalho de Melo <acme@kernel.org>, 2017 -. $(dirname $0)/lib/probe.sh +. "$(dirname $0)"/lib/probe.sh skip_if_no_perf_probe || exit 2 skip_if_no_perf_trace || exit 2 -. 
$(dirname $0)/lib/probe_vfs_getname.sh +. "$(dirname $0)"/lib/probe_vfs_getname.sh trace_open_vfs_getname() { - evts=$(echo $(perf list syscalls:sys_enter_open* 2>/dev/null | grep -E 'open(at)? ' | sed -r 's/.*sys_enter_([a-z]+) +\[.*$/\1/') | sed 's/ /,/') + evts="$(echo "$(perf list syscalls:sys_enter_open* 2>/dev/null | grep -E 'open(at)? ' | sed -r 's/.*sys_enter_([a-z]+) +\[.*$/\1/')" | sed ':a;N;s:\n:,:g')" perf trace -e $evts touch $file 2>&1 | \ - grep -E " +[0-9]+\.[0-9]+ +\( +[0-9]+\.[0-9]+ ms\): +touch\/[0-9]+ open(at)?\((dfd: +CWD, +)?filename: +${file}, +flags: CREAT\|NOCTTY\|NONBLOCK\|WRONLY, +mode: +IRUGO\|IWUGO\) += +[0-9]+$" + grep -E " +[0-9]+\.[0-9]+ +\( +[0-9]+\.[0-9]+ ms\): +touch/[0-9]+ open(at)?\((dfd: +CWD, +)?filename: +\"?${file}\"?, +flags: CREAT\|NOCTTY\|NONBLOCK\|WRONLY, +mode: +IRUGO\|IWUGO\) += +[0-9]+$" } diff --git a/tools/perf/tests/stat.c b/tools/perf/tests/stat.c index 500974040fe3..706780fb5695 100644 --- a/tools/perf/tests/stat.c +++ b/tools/perf/tests/stat.c @@ -27,7 +27,7 @@ static int process_stat_config_event(struct perf_tool *tool __maybe_unused, struct machine *machine __maybe_unused) { struct perf_record_stat_config *config = &event->stat_config; - struct perf_stat_config stat_config; + struct perf_stat_config stat_config = {}; #define HAS(term, val) \ has_term(config, PERF_STAT_CONFIG_TERM__##term, val) diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h index f424c0b7f43f..f33cfc3c19a4 100644 --- a/tools/perf/tests/tests.h +++ b/tools/perf/tests/tests.h @@ -113,7 +113,6 @@ DECLARE_SUITE(fdarray__filter); DECLARE_SUITE(fdarray__add); DECLARE_SUITE(kmod_path__parse); DECLARE_SUITE(thread_map); -DECLARE_SUITE(llvm); DECLARE_SUITE(bpf); DECLARE_SUITE(session_topology); DECLARE_SUITE(thread_map_synthesize); @@ -129,7 +128,6 @@ DECLARE_SUITE(sdt_event); DECLARE_SUITE(is_printable_array); DECLARE_SUITE(bitmap_print); DECLARE_SUITE(perf_hooks); -DECLARE_SUITE(clang); DECLARE_SUITE(unit_number__scnprint); DECLARE_SUITE(mem2node); DECLARE_SUITE(maps__merge_in); diff --git a/tools/perf/trace/beauty/arch_errno_names.sh b/tools/perf/trace/beauty/arch_errno_names.sh index 37c53bac5f56..cc09dcaa891e 100755 --- a/tools/perf/trace/beauty/arch_errno_names.sh +++ b/tools/perf/trace/beauty/arch_errno_names.sh @@ -17,8 +17,7 @@ arch_string() asm_errno_file() { - local arch="$1" - local header + arch="$1" header="$toolsdir/arch/$arch/include/uapi/asm/errno.h" if test -r "$header"; then @@ -30,8 +29,7 @@ asm_errno_file() create_errno_lookup_func() { - local arch=$(arch_string "$1") - local nr name + arch=$(arch_string "$1") printf "static const char *errno_to_name__%s(int err)\n{\n\tswitch (err) {\n" $arch @@ -44,8 +42,8 @@ create_errno_lookup_func() process_arch() { - local arch="$1" - local asm_errno=$(asm_errno_file "$arch") + arch="$1" + asm_errno=$(asm_errno_file "$arch") $gcc $CFLAGS $include_path -E -dM -x c $asm_errno \ |grep -hE '^#define[[:blank:]]+(E[^[:blank:]]+)[[:blank:]]+([[:digit:]]+).*' \ @@ -56,9 +54,8 @@ process_arch() create_arch_errno_table_func() { - local archlist="$1" - local default="$2" - local arch + archlist="$1" + default="$2" printf 'const char *arch_syscalls__strerrno(const char *arch, int err)\n' printf '{\n' diff --git a/tools/perf/trace/beauty/beauty.h b/tools/perf/trace/beauty/beauty.h index 3d12bf0f6d07..788e8f6bd90e 100644 --- a/tools/perf/trace/beauty/beauty.h +++ b/tools/perf/trace/beauty/beauty.h @@ -67,15 +67,14 @@ extern struct strarray strarray__socket_level; /** * augmented_arg: extra payload for syscall 
pointer arguments - * If perf_sample->raw_size is more than what a syscall sys_enter_FOO puts, - * then its the arguments contents, so that we can show more than just a + * If perf_sample->raw_size is more than what a syscall sys_enter_FOO puts, then + * its the arguments contents, so that we can show more than just a * pointer. This will be done initially with eBPF, the start of that is at the - * tools/perf/examples/bpf/augmented_syscalls.c example for the openat, but - * will eventually be done automagically caching the running kernel tracefs - * events data into an eBPF C script, that then gets compiled and its .o file - * cached for subsequent use. For char pointers like the ones for 'open' like - * syscalls its easy, for the rest we should use DWARF or better, BTF, much - * more compact. + * tools/perf/util/bpf_skel/augmented_syscalls.bpf.c that will eventually be + * done automagically caching the running kernel tracefs events data into an + * eBPF C script, that then gets compiled and its .o file cached for subsequent + * use. For char pointers like the ones for 'open' like syscalls its easy, for + * the rest we should use DWARF or better, BTF, much more compact. * * @size: 8 if all we need is an integer, otherwise all of the augmented arg. * @int_arg: will be used for integer like pointer contents, like 'accept's 'upeer_addrlen' diff --git a/tools/perf/trace/beauty/mmap_flags.sh b/tools/perf/trace/beauty/mmap_flags.sh index 3022597c8c17..6ecdb3c5a99e 100755 --- a/tools/perf/trace/beauty/mmap_flags.sh +++ b/tools/perf/trace/beauty/mmap_flags.sh @@ -19,6 +19,7 @@ arch_mman=${arch_header_dir}/mman.h printf "static const char *mmap_flags[] = {\n" regex='^[[:space:]]*#[[:space:]]*define[[:space:]]+MAP_([[:alnum:]_]+)[[:space:]]+(0x[[:xdigit:]]+)[[:space:]]*.*' +test -f ${arch_mman} && \ grep -E -q $regex ${arch_mman} && \ (grep -E $regex ${arch_mman} | \ sed -r "s/$regex/\2 \1 \1 \1 \2/g" | \ @@ -28,12 +29,14 @@ grep -E -q $regex ${linux_mman} && \ grep -E -vw 'MAP_(UNINITIALIZED|TYPE|SHARED_VALIDATE)' | \ sed -r "s/$regex/\2 \1 \1 \1 \2/g" | \ xargs printf "\t[ilog2(%s) + 1] = \"%s\",\n#ifndef MAP_%s\n#define MAP_%s %s\n#endif\n") -([ ! -f ${arch_mman} ] || grep -E -q '#[[:space:]]*include[[:space:]]+.*uapi/asm-generic/mman.*' ${arch_mman}) && +( ! test -f ${arch_mman} || \ +grep -E -q '#[[:space:]]*include[[:space:]]+.*uapi/asm-generic/mman.*' ${arch_mman}) && (grep -E $regex ${header_dir}/mman-common.h | \ grep -E -vw 'MAP_(UNINITIALIZED|TYPE|SHARED_VALIDATE)' | \ sed -r "s/$regex/\2 \1 \1 \1 \2/g" | \ xargs printf "\t[ilog2(%s) + 1] = \"%s\",\n#ifndef MAP_%s\n#define MAP_%s %s\n#endif\n") -([ ! -f ${arch_mman} ] || grep -E -q '#[[:space:]]*include[[:space:]]+.*uapi/asm-generic/mman.h>.*' ${arch_mman}) && +( ! test -f ${arch_mman} || \ +grep -E -q '#[[:space:]]*include[[:space:]]+.*uapi/asm-generic/mman.h>.*' ${arch_mman}) && (grep -E $regex ${header_dir}/mman.h | \ sed -r "s/$regex/\2 \1 \1 \1 \2/g" | \ xargs printf "\t[ilog2(%s) + 1] = \"%s\",\n#ifndef MAP_%s\n#define MAP_%s %s\n#endif\n") diff --git a/tools/perf/trace/beauty/mmap_prot.sh b/tools/perf/trace/beauty/mmap_prot.sh index 49e8c865214b..4436fcd6e861 100755 --- a/tools/perf/trace/beauty/mmap_prot.sh +++ b/tools/perf/trace/beauty/mmap_prot.sh @@ -17,12 +17,13 @@ prefix="PROT" printf "static const char *mmap_prot[] = {\n" regex=`printf '^[[:space:]]*#[[:space:]]*define[[:space:]]+%s_([[:alnum:]_]+)[[:space:]]+(0x[[:xdigit:]]+)[[:space:]]*.*' ${prefix}` -([ ! 
-f ${arch_mman} ] || grep -E -q '#[[:space:]]*include[[:space:]]+.*uapi/asm-generic/mman.*' ${arch_mman}) && +( ! test -f ${arch_mman} \ +|| grep -E -q '#[[:space:]]*include[[:space:]]+.*uapi/asm-generic/mman.*' ${arch_mman}) && (grep -E $regex ${common_mman} | \ grep -E -vw PROT_NONE | \ sed -r "s/$regex/\2 \1 \1 \1 \2/g" | \ xargs printf "\t[ilog2(%s) + 1] = \"%s\",\n#ifndef ${prefix}_%s\n#define ${prefix}_%s %s\n#endif\n") -[ -f ${arch_mman} ] && grep -E -q $regex ${arch_mman} && +test -f ${arch_mman} && grep -E -q $regex ${arch_mman} && (grep -E $regex ${arch_mman} | \ grep -E -vw PROT_NONE | \ sed -r "s/$regex/\2 \1 \1 \1 \2/g" | \ diff --git a/tools/perf/trace/beauty/x86_arch_prctl.sh b/tools/perf/trace/beauty/x86_arch_prctl.sh index fd5c740512c5..b1596df251f0 100755 --- a/tools/perf/trace/beauty/x86_arch_prctl.sh +++ b/tools/perf/trace/beauty/x86_arch_prctl.sh @@ -7,9 +7,9 @@ prctl_arch_header=${x86_header_dir}/prctl.h print_range () { - local idx=$1 - local prefix=$2 - local first_entry=$3 + idx=$1 + prefix=$2 + first_entry=$3 printf "#define x86_arch_prctl_codes_%d_offset %s\n" $idx $first_entry printf "static const char *x86_arch_prctl_codes_%d[] = {\n" $idx diff --git a/tools/perf/ui/Build b/tools/perf/ui/Build index 3aff83c3275f..6b6d7143a37b 100644 --- a/tools/perf/ui/Build +++ b/tools/perf/ui/Build @@ -10,5 +10,3 @@ CFLAGS_setup.o += -DLIBDIR="BUILD_STR($(LIBDIR))" perf-$(CONFIG_SLANG) += browser.o perf-$(CONFIG_SLANG) += browsers/ perf-$(CONFIG_SLANG) += tui/ - -CFLAGS_browser.o += -DENABLE_SLFUTURE_CONST diff --git a/tools/perf/ui/browser.c b/tools/perf/ui/browser.c index 78fb01d6ad63..603d11283cbd 100644 --- a/tools/perf/ui/browser.c +++ b/tools/perf/ui/browser.c @@ -57,12 +57,12 @@ void ui_browser__gotorc(struct ui_browser *browser, int y, int x) void ui_browser__write_nstring(struct ui_browser *browser __maybe_unused, const char *msg, unsigned int width) { - slsmg_write_nstring(msg, width); + SLsmg_write_nstring(msg, width); } void ui_browser__vprintf(struct ui_browser *browser __maybe_unused, const char *fmt, va_list args) { - slsmg_vprintf(fmt, args); + SLsmg_vprintf(fmt, args); } void ui_browser__printf(struct ui_browser *browser __maybe_unused, const char *fmt, ...) @@ -808,6 +808,6 @@ void ui_browser__init(void) while (ui_browser__colorsets[i].name) { struct ui_browser_colorset *c = &ui_browser__colorsets[i++]; - sltt_set_color(c->colorset, c->name, c->fg, c->bg); + SLtt_set_color(c->colorset, c->name, c->fg, c->bg); } } diff --git a/tools/perf/ui/browsers/Build b/tools/perf/ui/browsers/Build index fdf86f7981ca..7a1d5ddaf688 100644 --- a/tools/perf/ui/browsers/Build +++ b/tools/perf/ui/browsers/Build @@ -4,8 +4,3 @@ perf-y += map.o perf-y += scripts.o perf-y += header.o perf-y += res_sample.o - -CFLAGS_annotate.o += -DENABLE_SLFUTURE_CONST -CFLAGS_hists.o += -DENABLE_SLFUTURE_CONST -CFLAGS_map.o += -DENABLE_SLFUTURE_CONST -CFLAGS_scripts.o += -DENABLE_SLFUTURE_CONST diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c index c7ad9e003080..70db5a717905 100644 --- a/tools/perf/ui/browsers/hists.c +++ b/tools/perf/ui/browsers/hists.c @@ -407,11 +407,6 @@ static bool hist_browser__selection_has_children(struct hist_browser *browser) return container_of(ms, struct callchain_list, ms)->has_children; } -static bool hist_browser__he_selection_unfolded(struct hist_browser *browser) -{ - return browser->he_selection ? 
browser->he_selection->unfolded : false; -} - static bool hist_browser__selection_unfolded(struct hist_browser *browser) { struct hist_entry *he = browser->he_selection; @@ -584,8 +579,8 @@ static int hierarchy_set_folding(struct hist_browser *hb, struct hist_entry *he, return n; } -static void __hist_entry__set_folding(struct hist_entry *he, - struct hist_browser *hb, bool unfold) +static void hist_entry__set_folding(struct hist_entry *he, + struct hist_browser *hb, bool unfold) { hist_entry__init_have_children(he); he->unfolded = unfold ? he->has_children : false; @@ -603,34 +598,12 @@ static void __hist_entry__set_folding(struct hist_entry *he, he->nr_rows = 0; } -static void hist_entry__set_folding(struct hist_entry *he, - struct hist_browser *browser, bool unfold) -{ - double percent; - - percent = hist_entry__get_percent_limit(he); - if (he->filtered || percent < browser->min_pcnt) - return; - - __hist_entry__set_folding(he, browser, unfold); - - if (!he->depth || unfold) - browser->nr_hierarchy_entries++; - if (he->leaf) - browser->nr_callchain_rows += he->nr_rows; - else if (unfold && !hist_entry__has_hierarchy_children(he, browser->min_pcnt)) { - browser->nr_hierarchy_entries++; - he->has_no_entry = true; - he->nr_rows = 1; - } else - he->has_no_entry = false; -} - static void __hist_browser__set_folding(struct hist_browser *browser, bool unfold) { struct rb_node *nd; struct hist_entry *he; + double percent; nd = rb_first_cached(&browser->hists->entries); while (nd) { @@ -640,6 +613,21 @@ __hist_browser__set_folding(struct hist_browser *browser, bool unfold) nd = __rb_hierarchy_next(nd, HMD_FORCE_CHILD); hist_entry__set_folding(he, browser, unfold); + + percent = hist_entry__get_percent_limit(he); + if (he->filtered || percent < browser->min_pcnt) + continue; + + if (!he->depth || unfold) + browser->nr_hierarchy_entries++; + if (he->leaf) + browser->nr_callchain_rows += he->nr_rows; + else if (unfold && !hist_entry__has_hierarchy_children(he, browser->min_pcnt)) { + browser->nr_hierarchy_entries++; + he->has_no_entry = true; + he->nr_rows = 1; + } else + he->has_no_entry = false; } } @@ -659,8 +647,10 @@ static void hist_browser__set_folding_selected(struct hist_browser *browser, boo if (!browser->he_selection) return; - hist_entry__set_folding(browser->he_selection, browser, unfold); - browser->b.nr_entries = hist_browser__nr_entries(browser); + if (unfold == browser->he_selection->unfolded) + return; + + hist_browser__toggle_fold(browser); } static void ui_browser__warn_lost_events(struct ui_browser *browser) @@ -732,8 +722,8 @@ static int hist_browser__handle_hotkey(struct hist_browser *browser, bool warn_l hist_browser__set_folding(browser, true); break; case 'e': - /* Expand the selected entry. */ - hist_browser__set_folding_selected(browser, !hist_browser__he_selection_unfolded(browser)); + /* Toggle expand/collapse the selected entry. 
*/ + hist_browser__toggle_fold(browser); break; case 'H': browser->show_headers = !browser->show_headers; @@ -1779,7 +1769,7 @@ static void hists_browser__hierarchy_headers(struct hist_browser *browser) hists_browser__scnprintf_hierarchy_headers(browser, headers, sizeof(headers)); - ui_browser__gotorc(&browser->b, 0, 0); + ui_browser__gotorc_title(&browser->b, 0, 0); ui_browser__set_color(&browser->b, HE_COLORSET_ROOT); ui_browser__write_nstring(&browser->b, headers, browser->b.width + 1); } diff --git a/tools/perf/ui/libslang.h b/tools/perf/ui/libslang.h index 991e692b9b46..1dff3020e9d5 100644 --- a/tools/perf/ui/libslang.h +++ b/tools/perf/ui/libslang.h @@ -11,28 +11,16 @@ #define HAVE_LONG_LONG __GLIBC_HAVE_LONG_LONG #endif +/* Enable future slang's corrected function prototypes. */ +#define ENABLE_SLFUTURE_CONST 1 +#define ENABLE_SLFUTURE_VOID 1 + #ifdef HAVE_SLANG_INCLUDE_SUBDIR #include <slang/slang.h> #else #include <slang.h> #endif -#if SLANG_VERSION < 20104 -#define slsmg_printf(msg, args...) \ - SLsmg_printf((char *)(msg), ##args) -#define slsmg_vprintf(msg, vargs) \ - SLsmg_vprintf((char *)(msg), vargs) -#define slsmg_write_nstring(msg, len) \ - SLsmg_write_nstring((char *)(msg), len) -#define sltt_set_color(obj, name, fg, bg) \ - SLtt_set_color(obj,(char *)(name), (char *)(fg), (char *)(bg)) -#else -#define slsmg_printf SLsmg_printf -#define slsmg_vprintf SLsmg_vprintf -#define slsmg_write_nstring SLsmg_write_nstring -#define sltt_set_color SLtt_set_color -#endif - #define SL_KEY_UNTAB 0x1000 #endif /* _PERF_UI_SLANG_H_ */ diff --git a/tools/perf/ui/tui/helpline.c b/tools/perf/ui/tui/helpline.c index db4952f5990b..b39451314f43 100644 --- a/tools/perf/ui/tui/helpline.c +++ b/tools/perf/ui/tui/helpline.c @@ -22,7 +22,7 @@ static void tui_helpline__push(const char *msg) SLsmg_gotorc(SLtt_Screen_Rows - 1, 0); SLsmg_set_color(0); - SLsmg_write_nstring((char *)msg, SLtt_Screen_Cols); + SLsmg_write_nstring(msg, SLtt_Screen_Cols); SLsmg_refresh(); strlcpy(ui_helpline__current, msg, sz); } diff --git a/tools/perf/ui/tui/setup.c b/tools/perf/ui/tui/setup.c index c1886aa184b3..605d9e175ea7 100644 --- a/tools/perf/ui/tui/setup.c +++ b/tools/perf/ui/tui/setup.c @@ -142,7 +142,7 @@ int ui__init(void) goto out; } - SLkp_define_keysym((char *)"^(kB)", SL_KEY_UNTAB); + SLkp_define_keysym("^(kB)", SL_KEY_UNTAB); signal(SIGSEGV, ui__signal_backtrace); signal(SIGFPE, ui__signal_backtrace); diff --git a/tools/perf/ui/tui/util.c b/tools/perf/ui/tui/util.c index 3c5174854ac8..e4d322ce0b54 100644 --- a/tools/perf/ui/tui/util.c +++ b/tools/perf/ui/tui/util.c @@ -106,7 +106,7 @@ int ui_browser__input_window(const char *title, const char *text, char *input, SLsmg_draw_box(y, x++, nr_lines, max_len); if (title) { SLsmg_gotorc(y, x + 1); - SLsmg_write_string((char *)title); + SLsmg_write_string(title); } SLsmg_gotorc(++y, x); nr_lines -= 7; @@ -117,12 +117,12 @@ int ui_browser__input_window(const char *title, const char *text, char *input, len = 5; while (len--) { SLsmg_gotorc(y + len - 1, x); - SLsmg_write_nstring((char *)" ", max_len); + SLsmg_write_nstring(" ", max_len); } SLsmg_draw_box(y++, x + 1, 3, max_len - 2); SLsmg_gotorc(y + 3, x); - SLsmg_write_nstring((char *)exit_msg, max_len); + SLsmg_write_nstring(exit_msg, max_len); SLsmg_refresh(); mutex_unlock(&ui__lock); @@ -197,7 +197,7 @@ void __ui__info_window(const char *title, const char *text, const char *exit_msg SLsmg_draw_box(y, x++, nr_lines, max_len); if (title) { SLsmg_gotorc(y, x + 1); - SLsmg_write_string((char *)title); + 
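/*
 * Defining ENABLE_SLFUTURE_CONST and ENABLE_SLFUTURE_VOID before including
 * <slang.h> asks slang for its const-correct prototypes, which is why the
 * (char *) casts and the old slsmg_*() wrapper macros can be dropped here.
 * Minimal illustration, assuming a slang version that honours these macros
 * (write_title() is just an example name):
 */
#define ENABLE_SLFUTURE_CONST 1
#define ENABLE_SLFUTURE_VOID 1
#include <slang.h>

static void write_title(const char *title)
{
	SLsmg_write_string(title);	/* const char * is accepted directly */
}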
SLsmg_write_string(title); } SLsmg_gotorc(++y, x); if (exit_msg) @@ -207,9 +207,9 @@ void __ui__info_window(const char *title, const char *text, const char *exit_msg nr_lines, max_len, 1); if (exit_msg) { SLsmg_gotorc(y + nr_lines - 2, x); - SLsmg_write_nstring((char *)" ", max_len); + SLsmg_write_nstring(" ", max_len); SLsmg_gotorc(y + nr_lines - 1, x); - SLsmg_write_nstring((char *)exit_msg, max_len); + SLsmg_write_nstring(exit_msg, max_len); } } diff --git a/tools/perf/util/Build b/tools/perf/util/Build index 96f4ea1d45c5..6d657c9927f7 100644 --- a/tools/perf/util/Build +++ b/tools/perf/util/Build @@ -1,3 +1,6 @@ +include $(srctree)/tools/scripts/Makefile.include +include $(srctree)/tools/scripts/utilities.mak + perf-y += arm64-frame-pointer-unwind-support.o perf-y += addr_location.o perf-y += annotate.o @@ -20,13 +23,13 @@ perf-y += evswitch.o perf-y += find_bit.o perf-y += get_current_dir_name.o perf-y += levenshtein.o -perf-y += llvm-utils.o perf-y += mmap.o perf-y += memswap.o perf-y += parse-events.o perf-y += print-events.o perf-y += tracepoint.o perf-y += perf_regs.o +perf-y += perf-regs-arch/ perf-y += path.o perf-y += print_binary.o perf-y += rlimit.o @@ -147,7 +150,6 @@ perf-y += list_sort.o perf-y += mutex.o perf-y += sharded_mutex.o -perf-$(CONFIG_LIBBPF) += bpf-loader.o perf-$(CONFIG_LIBBPF) += bpf_map.o perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter.o perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter_cgroup.o @@ -165,7 +167,6 @@ ifeq ($(CONFIG_LIBTRACEEVENT),y) perf-$(CONFIG_PERF_BPF_SKEL) += bpf_kwork.o endif -perf-$(CONFIG_BPF_PROLOGUE) += bpf-prologue.o perf-$(CONFIG_LIBELF) += symbol-elf.o perf-$(CONFIG_LIBELF) += probe-file.o perf-$(CONFIG_LIBELF) += probe-event.o @@ -229,12 +230,9 @@ perf-y += perf-hooks.o perf-$(CONFIG_LIBBPF) += bpf-event.o perf-$(CONFIG_LIBBPF) += bpf-utils.o -perf-$(CONFIG_CXX) += c++/ - perf-$(CONFIG_LIBPFM4) += pfm.o CFLAGS_config.o += -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))" -CFLAGS_llvm-utils.o += -DLIBBPF_INCLUDE_DIR="BUILD_STR($(libbpf_include_dir_SQ))" # avoid compiler warnings in 32-bit mode CFLAGS_genelf_debug.o += -Wno-packed @@ -246,7 +244,7 @@ $(OUTPUT)util/parse-events-flex.c $(OUTPUT)util/parse-events-flex.h: util/parse- $(OUTPUT)util/parse-events-bison.c $(OUTPUT)util/parse-events-bison.h: util/parse-events.y $(call rule_mkdir) - $(Q)$(call echo-cmd,bison)$(BISON) -v $< -d $(PARSER_DEBUG_BISON) $(BISON_FILE_PREFIX_MAP) \ + $(Q)$(call echo-cmd,bison)$(BISON) -v $< -d $(PARSER_DEBUG_BISON) $(BISON_FILE_PREFIX_MAP) $(BISON_FALLBACK_FLAGS) \ -o $(OUTPUT)util/parse-events-bison.c -p parse_events_ $(OUTPUT)util/expr-flex.c $(OUTPUT)util/expr-flex.h: util/expr.l $(OUTPUT)util/expr-bison.c @@ -279,28 +277,58 @@ $(OUTPUT)util/bpf-filter-bison.c $(OUTPUT)util/bpf-filter-bison.h: util/bpf-filt $(Q)$(call echo-cmd,bison)$(BISON) -v $< -d $(PARSER_DEBUG_BISON) $(BISON_FILE_PREFIX_MAP) \ -o $(OUTPUT)util/bpf-filter-bison.c -p perf_bpf_filter_ -FLEX_GE_26 := $(shell expr $(shell $(FLEX) --version | sed -e 's/flex \([0-9]\+\).\([0-9]\+\)/\1\2/g') \>\= 26) -ifeq ($(FLEX_GE_26),1) - flex_flags := -Wno-switch-enum -Wno-switch-default -Wno-unused-function -Wno-redundant-decls -Wno-sign-compare -Wno-unused-parameter -Wno-missing-prototypes -Wno-missing-declarations - CC_HASNT_MISLEADING_INDENTATION := $(shell echo "int main(void) { return 0 }" | $(CC) -Werror -Wno-misleading-indentation -o /dev/null -xc - 2>&1 | grep -q -- -Wno-misleading-indentation ; echo $$?) 
- ifeq ($(CC_HASNT_MISLEADING_INDENTATION), 1) - flex_flags += -Wno-misleading-indentation +FLEX_VERSION := $(shell $(FLEX) --version | cut -d' ' -f2) + +FLEX_GE_260 := $(call version-ge3,$(FLEX_VERSION),2.6.0) +ifeq ($(FLEX_GE_260),1) + flex_flags := -Wno-redundant-decls -Wno-switch-default -Wno-unused-function -Wno-misleading-indentation + + # Some newer clang and gcc version complain about this + # util/parse-events-bison.c:1317:9: error: variable 'parse_events_nerrs' set but not used [-Werror,-Wunused-but-set-variable] + # int yynerrs = 0; + + flex_flags += -Wno-unused-but-set-variable + + FLEX_LT_262 := $(call version-lt3,$(FLEX_VERSION),2.6.2) + ifeq ($(FLEX_LT_262),1) + flex_flags += -Wno-sign-compare endif else flex_flags := -w endif -CFLAGS_parse-events-flex.o += $(flex_flags) -CFLAGS_pmu-flex.o += $(flex_flags) -CFLAGS_expr-flex.o += $(flex_flags) -CFLAGS_bpf-filter-flex.o += $(flex_flags) -bison_flags := -DYYENABLE_NLS=0 -BISON_GE_35 := $(shell expr $(shell $(BISON) --version | grep bison | sed -e 's/.\+ \([0-9]\+\).\([0-9]\+\)/\1\2/g') \>\= 35) -ifeq ($(BISON_GE_35),1) - bison_flags += -Wno-unused-parameter -Wno-nested-externs -Wno-implicit-function-declaration -Wno-switch-enum -Wno-unused-but-set-variable -Wno-unknown-warning-option +# Some newer clang and gcc version complain about this +# util/parse-events-bison.c:1317:9: error: variable 'parse_events_nerrs' set but not used [-Werror,-Wunused-but-set-variable] +# int yynerrs = 0; + +bison_flags := -DYYENABLE_NLS=0 -Wno-unused-but-set-variable + +# Old clangs don't grok -Wno-unused-but-set-variable, remove it +ifeq ($(CC_NO_CLANG), 0) + CLANG_VERSION := $(shell $(CLANG) --version | head -1 | sed 's/.*clang version \([[:digit:]]\+.[[:digit:]]\+.[[:digit:]]\+\).*/\1/g') + ifeq ($(call version-lt3,$(CLANG_VERSION),13.0.0),1) + bison_flags := $(subst -Wno-unused-but-set-variable,,$(bison_flags)) + flex_flags := $(subst -Wno-unused-but-set-variable,,$(flex_flags)) + endif +endif + +BISON_GE_382 := $(shell expr $(shell $(BISON) --version | grep bison | sed -e 's/.\+ \([0-9]\+\).\([0-9]\+\).\([0-9]\+\)/\1\2\3/g') \>\= 382) +ifeq ($(BISON_GE_382),1) + bison_flags += -Wno-switch-enum else bison_flags += -w endif + +BISON_LT_381 := $(shell expr $(shell $(BISON) --version | grep bison | sed -e 's/.\+ \([0-9]\+\).\([0-9]\+\).\([0-9]\+\)/\1\2\3/g') \< 381) +ifeq ($(BISON_LT_381),1) + bison_flags += -DYYNOMEM=YYABORT +endif + +CFLAGS_parse-events-flex.o += $(flex_flags) -Wno-unused-label +CFLAGS_pmu-flex.o += $(flex_flags) +CFLAGS_expr-flex.o += $(flex_flags) +CFLAGS_bpf-filter-flex.o += $(flex_flags) + CFLAGS_parse-events-bison.o += $(bison_flags) CFLAGS_pmu-bison.o += -DYYLTYPE_IS_TRIVIAL=0 $(bison_flags) CFLAGS_expr-bison.o += -DYYLTYPE_IS_TRIVIAL=0 $(bison_flags) @@ -316,8 +344,6 @@ CFLAGS_find_bit.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ET CFLAGS_rbtree.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))" CFLAGS_libstring.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))" CFLAGS_hweight.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))" -CFLAGS_parse-events.o += -Wno-redundant-decls -CFLAGS_expr.o += -Wno-redundant-decls CFLAGS_header.o += -include $(OUTPUT)PERF-VERSION-FILE CFLAGS_arm-spe.o += -I$(srctree)/tools/arch/arm64/include/ diff --git a/tools/perf/util/amd-sample-raw.c b/tools/perf/util/amd-sample-raw.c index 6a6ddba76c75..9d0ce88e90e4 100644 --- a/tools/perf/util/amd-sample-raw.c +++ b/tools/perf/util/amd-sample-raw.c @@ -15,7 
+15,6 @@ #include "session.h" #include "evlist.h" #include "sample-raw.h" -#include "pmu-events/pmu-events.h" #include "util/sample.h" static u32 cpu_family, cpu_model, ibs_fetch_type, ibs_op_type; diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c index ba988a13dacb..82956adf9963 100644 --- a/tools/perf/util/annotate.c +++ b/tools/perf/util/annotate.c @@ -1846,8 +1846,11 @@ static int symbol__disassemble_bpf(struct symbol *sym, perf_exe(tpath, sizeof(tpath)); bfdf = bfd_openr(tpath, NULL); - assert(bfdf); - assert(bfd_check_format(bfdf, bfd_object)); + if (bfdf == NULL) + abort(); + + if (!bfd_check_format(bfdf, bfd_object)) + abort(); s = open_memstream(&buf, &buf_size); if (!s) { @@ -1895,7 +1898,8 @@ static int symbol__disassemble_bpf(struct symbol *sym, #else disassemble = disassembler(bfdf); #endif - assert(disassemble); + if (disassemble == NULL) + abort(); fflush(s); do { diff --git a/tools/perf/util/bpf-filter.c b/tools/perf/util/bpf-filter.c index 0b30688d78a7..b51544996046 100644 --- a/tools/perf/util/bpf-filter.c +++ b/tools/perf/util/bpf-filter.c @@ -9,8 +9,8 @@ #include "util/evsel.h" #include "util/bpf-filter.h" -#include "util/bpf-filter-flex.h" -#include "util/bpf-filter-bison.h" +#include <util/bpf-filter-flex.h> +#include <util/bpf-filter-bison.h> #include "bpf_skel/sample-filter.h" #include "bpf_skel/sample_filter.skel.h" @@ -62,6 +62,16 @@ static int check_sample_flags(struct evsel *evsel, struct perf_bpf_filter_expr * if (evsel->core.attr.sample_type & expr->sample_flags) return 0; + if (expr->op == PBF_OP_GROUP_BEGIN) { + struct perf_bpf_filter_expr *group; + + list_for_each_entry(group, &expr->groups, list) { + if (check_sample_flags(evsel, group) < 0) + return -1; + } + return 0; + } + info = get_sample_info(expr->sample_flags); if (info == NULL) { pr_err("Error: %s event does not have sample flags %lx\n", diff --git a/tools/perf/util/bpf-filter.y b/tools/perf/util/bpf-filter.y index 07d6c7926c13..5dfa948fc986 100644 --- a/tools/perf/util/bpf-filter.y +++ b/tools/perf/util/bpf-filter.y @@ -9,6 +9,8 @@ #include <linux/list.h> #include "bpf-filter.h" +int perf_bpf_filter_lex(void); + static void perf_bpf_filter_error(struct list_head *expr __maybe_unused, char const *msg) { diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c deleted file mode 100644 index 44cde27d6389..000000000000 --- a/tools/perf/util/bpf-loader.c +++ /dev/null @@ -1,2110 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * bpf-loader.c - * - * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com> - * Copyright (C) 2015 Huawei Inc. 
- */ - -#include <linux/bpf.h> -#include <bpf/libbpf.h> -#include <bpf/bpf.h> -#include <linux/filter.h> -#include <linux/err.h> -#include <linux/kernel.h> -#include <linux/string.h> -#include <linux/zalloc.h> -#include <errno.h> -#include <stdlib.h> -#include "debug.h" -#include "evlist.h" -#include "bpf-loader.h" -#include "bpf-prologue.h" -#include "probe-event.h" -#include "probe-finder.h" // for MAX_PROBES -#include "parse-events.h" -#include "strfilter.h" -#include "util.h" -#include "llvm-utils.h" -#include "c++/clang-c.h" -#include "util/hashmap.h" -#include "asm/bug.h" - -#include <internal/xyarray.h> - -/* temporarily disable libbpf deprecation warnings */ -#pragma GCC diagnostic ignored "-Wdeprecated-declarations" - -static int libbpf_perf_print(enum libbpf_print_level level __attribute__((unused)), - const char *fmt, va_list args) -{ - return veprintf(1, verbose, pr_fmt(fmt), args); -} - -struct bpf_prog_priv { - bool is_tp; - char *sys_name; - char *evt_name; - struct perf_probe_event pev; - bool need_prologue; - struct bpf_insn *insns_buf; - int nr_types; - int *type_mapping; - int *prologue_fds; -}; - -struct bpf_perf_object { - struct list_head list; - struct bpf_object *obj; -}; - -struct bpf_preproc_result { - struct bpf_insn *new_insn_ptr; - int new_insn_cnt; -}; - -static LIST_HEAD(bpf_objects_list); -static struct hashmap *bpf_program_hash; -static struct hashmap *bpf_map_hash; - -static struct bpf_perf_object * -bpf_perf_object__next(struct bpf_perf_object *prev) -{ - if (!prev) { - if (list_empty(&bpf_objects_list)) - return NULL; - - return list_first_entry(&bpf_objects_list, struct bpf_perf_object, list); - } - if (list_is_last(&prev->list, &bpf_objects_list)) - return NULL; - - return list_next_entry(prev, list); -} - -#define bpf_perf_object__for_each(perf_obj, tmp) \ - for ((perf_obj) = bpf_perf_object__next(NULL), \ - (tmp) = bpf_perf_object__next(perf_obj); \ - (perf_obj) != NULL; \ - (perf_obj) = (tmp), (tmp) = bpf_perf_object__next(tmp)) - -static bool libbpf_initialized; -static int libbpf_sec_handler; - -static int bpf_perf_object__add(struct bpf_object *obj) -{ - struct bpf_perf_object *perf_obj = zalloc(sizeof(*perf_obj)); - - if (perf_obj) { - INIT_LIST_HEAD(&perf_obj->list); - perf_obj->obj = obj; - list_add_tail(&perf_obj->list, &bpf_objects_list); - } - return perf_obj ? 
0 : -ENOMEM; -} - -static void *program_priv(const struct bpf_program *prog) -{ - void *priv; - - if (IS_ERR_OR_NULL(bpf_program_hash)) - return NULL; - if (!hashmap__find(bpf_program_hash, prog, &priv)) - return NULL; - return priv; -} - -static struct bpf_insn prologue_init_insn[] = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), -}; - -static int libbpf_prog_prepare_load_fn(struct bpf_program *prog, - struct bpf_prog_load_opts *opts __maybe_unused, - long cookie __maybe_unused) -{ - size_t init_size_cnt = ARRAY_SIZE(prologue_init_insn); - size_t orig_insn_cnt, insn_cnt, init_size, orig_size; - struct bpf_prog_priv *priv = program_priv(prog); - const struct bpf_insn *orig_insn; - struct bpf_insn *insn; - - if (IS_ERR_OR_NULL(priv)) { - pr_debug("bpf: failed to get private field\n"); - return -BPF_LOADER_ERRNO__INTERNAL; - } - - if (!priv->need_prologue) - return 0; - - /* prepend initialization code to program instructions */ - orig_insn = bpf_program__insns(prog); - orig_insn_cnt = bpf_program__insn_cnt(prog); - init_size = init_size_cnt * sizeof(*insn); - orig_size = orig_insn_cnt * sizeof(*insn); - - insn_cnt = orig_insn_cnt + init_size_cnt; - insn = malloc(insn_cnt * sizeof(*insn)); - if (!insn) - return -ENOMEM; - - memcpy(insn, prologue_init_insn, init_size); - memcpy((char *) insn + init_size, orig_insn, orig_size); - bpf_program__set_insns(prog, insn, insn_cnt); - return 0; -} - -static int libbpf_init(void) -{ - LIBBPF_OPTS(libbpf_prog_handler_opts, handler_opts, - .prog_prepare_load_fn = libbpf_prog_prepare_load_fn, - ); - - if (libbpf_initialized) - return 0; - - libbpf_set_print(libbpf_perf_print); - libbpf_sec_handler = libbpf_register_prog_handler(NULL, BPF_PROG_TYPE_KPROBE, - 0, &handler_opts); - if (libbpf_sec_handler < 0) { - pr_debug("bpf: failed to register libbpf section handler: %d\n", - libbpf_sec_handler); - return -BPF_LOADER_ERRNO__INTERNAL; - } - libbpf_initialized = true; - return 0; -} - -struct bpf_object * -bpf__prepare_load_buffer(void *obj_buf, size_t obj_buf_sz, const char *name) -{ - LIBBPF_OPTS(bpf_object_open_opts, opts, .object_name = name); - struct bpf_object *obj; - int err; - - err = libbpf_init(); - if (err) - return ERR_PTR(err); - - obj = bpf_object__open_mem(obj_buf, obj_buf_sz, &opts); - if (IS_ERR_OR_NULL(obj)) { - pr_debug("bpf: failed to load buffer\n"); - return ERR_PTR(-EINVAL); - } - - if (bpf_perf_object__add(obj)) { - bpf_object__close(obj); - return ERR_PTR(-ENOMEM); - } - - return obj; -} - -static void bpf_perf_object__close(struct bpf_perf_object *perf_obj) -{ - list_del(&perf_obj->list); - bpf_object__close(perf_obj->obj); - free(perf_obj); -} - -struct bpf_object *bpf__prepare_load(const char *filename, bool source) -{ - LIBBPF_OPTS(bpf_object_open_opts, opts, .object_name = filename); - struct bpf_object *obj; - int err; - - err = libbpf_init(); - if (err) - return ERR_PTR(err); - - if (source) { - void *obj_buf; - size_t obj_buf_sz; - - perf_clang__init(); - err = perf_clang__compile_bpf(filename, &obj_buf, &obj_buf_sz); - perf_clang__cleanup(); - if (err) { - pr_debug("bpf: builtin compilation failed: %d, try external compiler\n", err); - err = llvm__compile_bpf(filename, &obj_buf, &obj_buf_sz); - if (err) - return ERR_PTR(-BPF_LOADER_ERRNO__COMPILE); - } else - pr_debug("bpf: successful builtin compilation\n"); - obj = bpf_object__open_mem(obj_buf, obj_buf_sz, &opts); - - if (!IS_ERR_OR_NULL(obj) && llvm_param.dump_obj) - llvm__dump_obj(filename, 
obj_buf, obj_buf_sz); - - free(obj_buf); - } else { - obj = bpf_object__open(filename); - } - - if (IS_ERR_OR_NULL(obj)) { - pr_debug("bpf: failed to load %s\n", filename); - return obj; - } - - if (bpf_perf_object__add(obj)) { - bpf_object__close(obj); - return ERR_PTR(-BPF_LOADER_ERRNO__COMPILE); - } - - return obj; -} - -static void close_prologue_programs(struct bpf_prog_priv *priv) -{ - struct perf_probe_event *pev; - int i, fd; - - if (!priv->need_prologue) - return; - pev = &priv->pev; - for (i = 0; i < pev->ntevs; i++) { - fd = priv->prologue_fds[i]; - if (fd != -1) - close(fd); - } -} - -static void -clear_prog_priv(const struct bpf_program *prog __maybe_unused, - void *_priv) -{ - struct bpf_prog_priv *priv = _priv; - - close_prologue_programs(priv); - cleanup_perf_probe_events(&priv->pev, 1); - zfree(&priv->insns_buf); - zfree(&priv->prologue_fds); - zfree(&priv->type_mapping); - zfree(&priv->sys_name); - zfree(&priv->evt_name); - free(priv); -} - -static void bpf_program_hash_free(void) -{ - struct hashmap_entry *cur; - size_t bkt; - - if (IS_ERR_OR_NULL(bpf_program_hash)) - return; - - hashmap__for_each_entry(bpf_program_hash, cur, bkt) - clear_prog_priv(cur->pkey, cur->pvalue); - - hashmap__free(bpf_program_hash); - bpf_program_hash = NULL; -} - -static void bpf_map_hash_free(void); - -void bpf__clear(void) -{ - struct bpf_perf_object *perf_obj, *tmp; - - bpf_perf_object__for_each(perf_obj, tmp) { - bpf__unprobe(perf_obj->obj); - bpf_perf_object__close(perf_obj); - } - - bpf_program_hash_free(); - bpf_map_hash_free(); -} - -static size_t ptr_hash(const long __key, void *ctx __maybe_unused) -{ - return __key; -} - -static bool ptr_equal(long key1, long key2, void *ctx __maybe_unused) -{ - return key1 == key2; -} - -static int program_set_priv(struct bpf_program *prog, void *priv) -{ - void *old_priv; - - /* - * Should not happen, we warn about it in the - * caller function - config_bpf_program - */ - if (IS_ERR(bpf_program_hash)) - return PTR_ERR(bpf_program_hash); - - if (!bpf_program_hash) { - bpf_program_hash = hashmap__new(ptr_hash, ptr_equal, NULL); - if (IS_ERR(bpf_program_hash)) - return PTR_ERR(bpf_program_hash); - } - - old_priv = program_priv(prog); - if (old_priv) { - clear_prog_priv(prog, old_priv); - return hashmap__set(bpf_program_hash, prog, priv, NULL, NULL); - } - return hashmap__add(bpf_program_hash, prog, priv); -} - -static int -prog_config__exec(const char *value, struct perf_probe_event *pev) -{ - pev->uprobes = true; - pev->target = strdup(value); - if (!pev->target) - return -ENOMEM; - return 0; -} - -static int -prog_config__module(const char *value, struct perf_probe_event *pev) -{ - pev->uprobes = false; - pev->target = strdup(value); - if (!pev->target) - return -ENOMEM; - return 0; -} - -static int -prog_config__bool(const char *value, bool *pbool, bool invert) -{ - int err; - bool bool_value; - - if (!pbool) - return -EINVAL; - - err = strtobool(value, &bool_value); - if (err) - return err; - - *pbool = invert ? 
!bool_value : bool_value; - return 0; -} - -static int -prog_config__inlines(const char *value, - struct perf_probe_event *pev __maybe_unused) -{ - return prog_config__bool(value, &probe_conf.no_inlines, true); -} - -static int -prog_config__force(const char *value, - struct perf_probe_event *pev __maybe_unused) -{ - return prog_config__bool(value, &probe_conf.force_add, false); -} - -static struct { - const char *key; - const char *usage; - const char *desc; - int (*func)(const char *, struct perf_probe_event *); -} bpf_prog_config_terms[] = { - { - .key = "exec", - .usage = "exec=<full path of file>", - .desc = "Set uprobe target", - .func = prog_config__exec, - }, - { - .key = "module", - .usage = "module=<module name> ", - .desc = "Set kprobe module", - .func = prog_config__module, - }, - { - .key = "inlines", - .usage = "inlines=[yes|no] ", - .desc = "Probe at inline symbol", - .func = prog_config__inlines, - }, - { - .key = "force", - .usage = "force=[yes|no] ", - .desc = "Forcibly add events with existing name", - .func = prog_config__force, - }, -}; - -static int -do_prog_config(const char *key, const char *value, - struct perf_probe_event *pev) -{ - unsigned int i; - - pr_debug("config bpf program: %s=%s\n", key, value); - for (i = 0; i < ARRAY_SIZE(bpf_prog_config_terms); i++) - if (strcmp(key, bpf_prog_config_terms[i].key) == 0) - return bpf_prog_config_terms[i].func(value, pev); - - pr_debug("BPF: ERROR: invalid program config option: %s=%s\n", - key, value); - - pr_debug("\nHint: Valid options are:\n"); - for (i = 0; i < ARRAY_SIZE(bpf_prog_config_terms); i++) - pr_debug("\t%s:\t%s\n", bpf_prog_config_terms[i].usage, - bpf_prog_config_terms[i].desc); - pr_debug("\n"); - - return -BPF_LOADER_ERRNO__PROGCONF_TERM; -} - -static const char * -parse_prog_config_kvpair(const char *config_str, struct perf_probe_event *pev) -{ - char *text = strdup(config_str); - char *sep, *line; - const char *main_str = NULL; - int err = 0; - - if (!text) { - pr_debug("Not enough memory: dup config_str failed\n"); - return ERR_PTR(-ENOMEM); - } - - line = text; - while ((sep = strchr(line, ';'))) { - char *equ; - - *sep = '\0'; - equ = strchr(line, '='); - if (!equ) { - pr_warning("WARNING: invalid config in BPF object: %s\n", - line); - pr_warning("\tShould be 'key=value'.\n"); - goto nextline; - } - *equ = '\0'; - - err = do_prog_config(line, equ + 1, pev); - if (err) - break; -nextline: - line = sep + 1; - } - - if (!err) - main_str = config_str + (line - text); - free(text); - - return err ? ERR_PTR(err) : main_str; -} - -static int -parse_prog_config(const char *config_str, const char **p_main_str, - bool *is_tp, struct perf_probe_event *pev) -{ - int err; - const char *main_str = parse_prog_config_kvpair(config_str, pev); - - if (IS_ERR(main_str)) - return PTR_ERR(main_str); - - *p_main_str = main_str; - if (!strchr(main_str, '=')) { - /* Is a tracepoint event? */ - const char *s = strchr(main_str, ':'); - - if (!s) { - pr_debug("bpf: '%s' is not a valid tracepoint\n", - config_str); - return -BPF_LOADER_ERRNO__CONFIG; - } - - *is_tp = true; - return 0; - } - - *is_tp = false; - err = parse_perf_probe_command(main_str, pev); - if (err < 0) { - pr_debug("bpf: '%s' is not a valid config string\n", - config_str); - /* parse failed, don't need clear pev. 
*/ - return -BPF_LOADER_ERRNO__CONFIG; - } - return 0; -} - -static int -config_bpf_program(struct bpf_program *prog) -{ - struct perf_probe_event *pev = NULL; - struct bpf_prog_priv *priv = NULL; - const char *config_str, *main_str; - bool is_tp = false; - int err; - - /* Initialize per-program probing setting */ - probe_conf.no_inlines = false; - probe_conf.force_add = false; - - priv = calloc(sizeof(*priv), 1); - if (!priv) { - pr_debug("bpf: failed to alloc priv\n"); - return -ENOMEM; - } - pev = &priv->pev; - - config_str = bpf_program__section_name(prog); - pr_debug("bpf: config program '%s'\n", config_str); - err = parse_prog_config(config_str, &main_str, &is_tp, pev); - if (err) - goto errout; - - if (is_tp) { - char *s = strchr(main_str, ':'); - - priv->is_tp = true; - priv->sys_name = strndup(main_str, s - main_str); - priv->evt_name = strdup(s + 1); - goto set_priv; - } - - if (pev->group && strcmp(pev->group, PERF_BPF_PROBE_GROUP)) { - pr_debug("bpf: '%s': group for event is set and not '%s'.\n", - config_str, PERF_BPF_PROBE_GROUP); - err = -BPF_LOADER_ERRNO__GROUP; - goto errout; - } else if (!pev->group) - pev->group = strdup(PERF_BPF_PROBE_GROUP); - - if (!pev->group) { - pr_debug("bpf: strdup failed\n"); - err = -ENOMEM; - goto errout; - } - - if (!pev->event) { - pr_debug("bpf: '%s': event name is missing. Section name should be 'key=value'\n", - config_str); - err = -BPF_LOADER_ERRNO__EVENTNAME; - goto errout; - } - pr_debug("bpf: config '%s' is ok\n", config_str); - -set_priv: - err = program_set_priv(prog, priv); - if (err) { - pr_debug("Failed to set priv for program '%s'\n", config_str); - goto errout; - } - - return 0; - -errout: - if (pev) - clear_perf_probe_event(pev); - free(priv); - return err; -} - -static int bpf__prepare_probe(void) -{ - static int err = 0; - static bool initialized = false; - - /* - * Make err static, so if init failed the first, bpf__prepare_probe() - * fails each time without calling init_probe_symbol_maps multiple - * times. 
- */ - if (initialized) - return err; - - initialized = true; - err = init_probe_symbol_maps(false); - if (err < 0) - pr_debug("Failed to init_probe_symbol_maps\n"); - probe_conf.max_probes = MAX_PROBES; - return err; -} - -static int -preproc_gen_prologue(struct bpf_program *prog, int n, - const struct bpf_insn *orig_insns, int orig_insns_cnt, - struct bpf_preproc_result *res) -{ - struct bpf_prog_priv *priv = program_priv(prog); - struct probe_trace_event *tev; - struct perf_probe_event *pev; - struct bpf_insn *buf; - size_t prologue_cnt = 0; - int i, err; - - if (IS_ERR_OR_NULL(priv) || priv->is_tp) - goto errout; - - pev = &priv->pev; - - if (n < 0 || n >= priv->nr_types) - goto errout; - - /* Find a tev belongs to that type */ - for (i = 0; i < pev->ntevs; i++) { - if (priv->type_mapping[i] == n) - break; - } - - if (i >= pev->ntevs) { - pr_debug("Internal error: prologue type %d not found\n", n); - return -BPF_LOADER_ERRNO__PROLOGUE; - } - - tev = &pev->tevs[i]; - - buf = priv->insns_buf; - err = bpf__gen_prologue(tev->args, tev->nargs, - buf, &prologue_cnt, - BPF_MAXINSNS - orig_insns_cnt); - if (err) { - const char *title; - - title = bpf_program__section_name(prog); - pr_debug("Failed to generate prologue for program %s\n", - title); - return err; - } - - memcpy(&buf[prologue_cnt], orig_insns, - sizeof(struct bpf_insn) * orig_insns_cnt); - - res->new_insn_ptr = buf; - res->new_insn_cnt = prologue_cnt + orig_insns_cnt; - return 0; - -errout: - pr_debug("Internal error in preproc_gen_prologue\n"); - return -BPF_LOADER_ERRNO__PROLOGUE; -} - -/* - * compare_tev_args is reflexive, transitive and antisymmetric. - * I can proof it but this margin is too narrow to contain. - */ -static int compare_tev_args(const void *ptev1, const void *ptev2) -{ - int i, ret; - const struct probe_trace_event *tev1 = - *(const struct probe_trace_event **)ptev1; - const struct probe_trace_event *tev2 = - *(const struct probe_trace_event **)ptev2; - - ret = tev2->nargs - tev1->nargs; - if (ret) - return ret; - - for (i = 0; i < tev1->nargs; i++) { - struct probe_trace_arg *arg1, *arg2; - struct probe_trace_arg_ref *ref1, *ref2; - - arg1 = &tev1->args[i]; - arg2 = &tev2->args[i]; - - ret = strcmp(arg1->value, arg2->value); - if (ret) - return ret; - - ref1 = arg1->ref; - ref2 = arg2->ref; - - while (ref1 && ref2) { - ret = ref2->offset - ref1->offset; - if (ret) - return ret; - - ref1 = ref1->next; - ref2 = ref2->next; - } - - if (ref1 || ref2) - return ref2 ? 1 : -1; - } - - return 0; -} - -/* - * Assign a type number to each tevs in a pev. - * mapping is an array with same slots as tevs in that pev. - * nr_types will be set to number of types. 
- */ -static int map_prologue(struct perf_probe_event *pev, int *mapping, - int *nr_types) -{ - int i, type = 0; - struct probe_trace_event **ptevs; - - size_t array_sz = sizeof(*ptevs) * pev->ntevs; - - ptevs = malloc(array_sz); - if (!ptevs) { - pr_debug("Not enough memory: alloc ptevs failed\n"); - return -ENOMEM; - } - - pr_debug("In map_prologue, ntevs=%d\n", pev->ntevs); - for (i = 0; i < pev->ntevs; i++) - ptevs[i] = &pev->tevs[i]; - - qsort(ptevs, pev->ntevs, sizeof(*ptevs), - compare_tev_args); - - for (i = 0; i < pev->ntevs; i++) { - int n; - - n = ptevs[i] - pev->tevs; - if (i == 0) { - mapping[n] = type; - pr_debug("mapping[%d]=%d\n", n, type); - continue; - } - - if (compare_tev_args(ptevs + i, ptevs + i - 1) == 0) - mapping[n] = type; - else - mapping[n] = ++type; - - pr_debug("mapping[%d]=%d\n", n, mapping[n]); - } - free(ptevs); - *nr_types = type + 1; - - return 0; -} - -static int hook_load_preprocessor(struct bpf_program *prog) -{ - struct bpf_prog_priv *priv = program_priv(prog); - struct perf_probe_event *pev; - bool need_prologue = false; - int i; - - if (IS_ERR_OR_NULL(priv)) { - pr_debug("Internal error when hook preprocessor\n"); - return -BPF_LOADER_ERRNO__INTERNAL; - } - - if (priv->is_tp) { - priv->need_prologue = false; - return 0; - } - - pev = &priv->pev; - for (i = 0; i < pev->ntevs; i++) { - struct probe_trace_event *tev = &pev->tevs[i]; - - if (tev->nargs > 0) { - need_prologue = true; - break; - } - } - - /* - * Since all tevs don't have argument, we don't need generate - * prologue. - */ - if (!need_prologue) { - priv->need_prologue = false; - return 0; - } - - priv->need_prologue = true; - priv->insns_buf = malloc(sizeof(struct bpf_insn) * BPF_MAXINSNS); - if (!priv->insns_buf) { - pr_debug("Not enough memory: alloc insns_buf failed\n"); - return -ENOMEM; - } - - priv->prologue_fds = malloc(sizeof(int) * pev->ntevs); - if (!priv->prologue_fds) { - pr_debug("Not enough memory: alloc prologue fds failed\n"); - return -ENOMEM; - } - memset(priv->prologue_fds, -1, sizeof(int) * pev->ntevs); - - priv->type_mapping = malloc(sizeof(int) * pev->ntevs); - if (!priv->type_mapping) { - pr_debug("Not enough memory: alloc type_mapping failed\n"); - return -ENOMEM; - } - memset(priv->type_mapping, -1, - sizeof(int) * pev->ntevs); - - return map_prologue(pev, priv->type_mapping, &priv->nr_types); -} - -int bpf__probe(struct bpf_object *obj) -{ - int err = 0; - struct bpf_program *prog; - struct bpf_prog_priv *priv; - struct perf_probe_event *pev; - - err = bpf__prepare_probe(); - if (err) { - pr_debug("bpf__prepare_probe failed\n"); - return err; - } - - bpf_object__for_each_program(prog, obj) { - err = config_bpf_program(prog); - if (err) - goto out; - - priv = program_priv(prog); - if (IS_ERR_OR_NULL(priv)) { - if (!priv) - err = -BPF_LOADER_ERRNO__INTERNAL; - else - err = PTR_ERR(priv); - goto out; - } - - if (priv->is_tp) { - bpf_program__set_type(prog, BPF_PROG_TYPE_TRACEPOINT); - continue; - } - - bpf_program__set_type(prog, BPF_PROG_TYPE_KPROBE); - pev = &priv->pev; - - err = convert_perf_probe_events(pev, 1); - if (err < 0) { - pr_debug("bpf_probe: failed to convert perf probe events\n"); - goto out; - } - - err = apply_perf_probe_events(pev, 1); - if (err < 0) { - pr_debug("bpf_probe: failed to apply perf probe events\n"); - goto out; - } - - /* - * After probing, let's consider prologue, which - * adds program fetcher to BPF programs. 
- * - * hook_load_preprocessor() hooks pre-processor - * to bpf_program, let it generate prologue - * dynamically during loading. - */ - err = hook_load_preprocessor(prog); - if (err) - goto out; - } -out: - return err < 0 ? err : 0; -} - -#define EVENTS_WRITE_BUFSIZE 4096 -int bpf__unprobe(struct bpf_object *obj) -{ - int err, ret = 0; - struct bpf_program *prog; - - bpf_object__for_each_program(prog, obj) { - struct bpf_prog_priv *priv = program_priv(prog); - int i; - - if (IS_ERR_OR_NULL(priv) || priv->is_tp) - continue; - - for (i = 0; i < priv->pev.ntevs; i++) { - struct probe_trace_event *tev = &priv->pev.tevs[i]; - char name_buf[EVENTS_WRITE_BUFSIZE]; - struct strfilter *delfilter; - - snprintf(name_buf, EVENTS_WRITE_BUFSIZE, - "%s:%s", tev->group, tev->event); - name_buf[EVENTS_WRITE_BUFSIZE - 1] = '\0'; - - delfilter = strfilter__new(name_buf, NULL); - if (!delfilter) { - pr_debug("Failed to create filter for unprobing\n"); - ret = -ENOMEM; - continue; - } - - err = del_perf_probe_events(delfilter); - strfilter__delete(delfilter); - if (err) { - pr_debug("Failed to delete %s\n", name_buf); - ret = err; - continue; - } - } - } - return ret; -} - -static int bpf_object__load_prologue(struct bpf_object *obj) -{ - int init_cnt = ARRAY_SIZE(prologue_init_insn); - const struct bpf_insn *orig_insns; - struct bpf_preproc_result res; - struct perf_probe_event *pev; - struct bpf_program *prog; - int orig_insns_cnt; - - bpf_object__for_each_program(prog, obj) { - struct bpf_prog_priv *priv = program_priv(prog); - int err, i, fd; - - if (IS_ERR_OR_NULL(priv)) { - pr_debug("bpf: failed to get private field\n"); - return -BPF_LOADER_ERRNO__INTERNAL; - } - - if (!priv->need_prologue) - continue; - - /* - * For each program that needs prologue we do following: - * - * - take its current instructions and use them - * to generate the new code with prologue - * - load new instructions with bpf_prog_load - * and keep the fd in prologue_fds - * - new fd will be used in bpf__foreach_event - * to connect this program with perf evsel - */ - orig_insns = bpf_program__insns(prog); - orig_insns_cnt = bpf_program__insn_cnt(prog); - - pev = &priv->pev; - for (i = 0; i < pev->ntevs; i++) { - /* - * Skipping artificall prologue_init_insn instructions - * (init_cnt), so the prologue can be generated instead - * of them. - */ - err = preproc_gen_prologue(prog, i, - orig_insns + init_cnt, - orig_insns_cnt - init_cnt, - &res); - if (err) - return err; - - fd = bpf_prog_load(bpf_program__get_type(prog), - bpf_program__name(prog), "GPL", - res.new_insn_ptr, - res.new_insn_cnt, NULL); - if (fd < 0) { - char bf[128]; - - libbpf_strerror(-errno, bf, sizeof(bf)); - pr_debug("bpf: load objects with prologue failed: err=%d: (%s)\n", - -errno, bf); - return -errno; - } - priv->prologue_fds[i] = fd; - } - /* - * We no longer need the original program, - * we can unload it. 
- */ - bpf_program__unload(prog); - } - return 0; -} - -int bpf__load(struct bpf_object *obj) -{ - int err; - - err = bpf_object__load(obj); - if (err) { - char bf[128]; - libbpf_strerror(err, bf, sizeof(bf)); - pr_debug("bpf: load objects failed: err=%d: (%s)\n", err, bf); - return err; - } - return bpf_object__load_prologue(obj); -} - -int bpf__foreach_event(struct bpf_object *obj, - bpf_prog_iter_callback_t func, - void *arg) -{ - struct bpf_program *prog; - int err; - - bpf_object__for_each_program(prog, obj) { - struct bpf_prog_priv *priv = program_priv(prog); - struct probe_trace_event *tev; - struct perf_probe_event *pev; - int i, fd; - - if (IS_ERR_OR_NULL(priv)) { - pr_debug("bpf: failed to get private field\n"); - return -BPF_LOADER_ERRNO__INTERNAL; - } - - if (priv->is_tp) { - fd = bpf_program__fd(prog); - err = (*func)(priv->sys_name, priv->evt_name, fd, obj, arg); - if (err) { - pr_debug("bpf: tracepoint call back failed, stop iterate\n"); - return err; - } - continue; - } - - pev = &priv->pev; - for (i = 0; i < pev->ntevs; i++) { - tev = &pev->tevs[i]; - - if (priv->need_prologue) - fd = priv->prologue_fds[i]; - else - fd = bpf_program__fd(prog); - - if (fd < 0) { - pr_debug("bpf: failed to get file descriptor\n"); - return fd; - } - - err = (*func)(tev->group, tev->event, fd, obj, arg); - if (err) { - pr_debug("bpf: call back failed, stop iterate\n"); - return err; - } - } - } - return 0; -} - -enum bpf_map_op_type { - BPF_MAP_OP_SET_VALUE, - BPF_MAP_OP_SET_EVSEL, -}; - -enum bpf_map_key_type { - BPF_MAP_KEY_ALL, - BPF_MAP_KEY_RANGES, -}; - -struct bpf_map_op { - struct list_head list; - enum bpf_map_op_type op_type; - enum bpf_map_key_type key_type; - union { - struct parse_events_array array; - } k; - union { - u64 value; - struct evsel *evsel; - } v; -}; - -struct bpf_map_priv { - struct list_head ops_list; -}; - -static void -bpf_map_op__delete(struct bpf_map_op *op) -{ - if (!list_empty(&op->list)) - list_del_init(&op->list); - if (op->key_type == BPF_MAP_KEY_RANGES) - parse_events__clear_array(&op->k.array); - free(op); -} - -static void -bpf_map_priv__purge(struct bpf_map_priv *priv) -{ - struct bpf_map_op *pos, *n; - - list_for_each_entry_safe(pos, n, &priv->ops_list, list) { - list_del_init(&pos->list); - bpf_map_op__delete(pos); - } -} - -static void -bpf_map_priv__clear(const struct bpf_map *map __maybe_unused, - void *_priv) -{ - struct bpf_map_priv *priv = _priv; - - bpf_map_priv__purge(priv); - free(priv); -} - -static void *map_priv(const struct bpf_map *map) -{ - void *priv; - - if (IS_ERR_OR_NULL(bpf_map_hash)) - return NULL; - if (!hashmap__find(bpf_map_hash, map, &priv)) - return NULL; - return priv; -} - -static void bpf_map_hash_free(void) -{ - struct hashmap_entry *cur; - size_t bkt; - - if (IS_ERR_OR_NULL(bpf_map_hash)) - return; - - hashmap__for_each_entry(bpf_map_hash, cur, bkt) - bpf_map_priv__clear(cur->pkey, cur->pvalue); - - hashmap__free(bpf_map_hash); - bpf_map_hash = NULL; -} - -static int map_set_priv(struct bpf_map *map, void *priv) -{ - void *old_priv; - - if (WARN_ON_ONCE(IS_ERR(bpf_map_hash))) - return PTR_ERR(bpf_program_hash); - - if (!bpf_map_hash) { - bpf_map_hash = hashmap__new(ptr_hash, ptr_equal, NULL); - if (IS_ERR(bpf_map_hash)) - return PTR_ERR(bpf_map_hash); - } - - old_priv = map_priv(map); - if (old_priv) { - bpf_map_priv__clear(map, old_priv); - return hashmap__set(bpf_map_hash, map, priv, NULL, NULL); - } - return hashmap__add(bpf_map_hash, map, priv); -} - -static int -bpf_map_op_setkey(struct bpf_map_op *op, struct 
parse_events_term *term) -{ - op->key_type = BPF_MAP_KEY_ALL; - if (!term) - return 0; - - if (term->array.nr_ranges) { - size_t memsz = term->array.nr_ranges * - sizeof(op->k.array.ranges[0]); - - op->k.array.ranges = memdup(term->array.ranges, memsz); - if (!op->k.array.ranges) { - pr_debug("Not enough memory to alloc indices for map\n"); - return -ENOMEM; - } - op->key_type = BPF_MAP_KEY_RANGES; - op->k.array.nr_ranges = term->array.nr_ranges; - } - return 0; -} - -static struct bpf_map_op * -bpf_map_op__new(struct parse_events_term *term) -{ - struct bpf_map_op *op; - int err; - - op = zalloc(sizeof(*op)); - if (!op) { - pr_debug("Failed to alloc bpf_map_op\n"); - return ERR_PTR(-ENOMEM); - } - INIT_LIST_HEAD(&op->list); - - err = bpf_map_op_setkey(op, term); - if (err) { - free(op); - return ERR_PTR(err); - } - return op; -} - -static struct bpf_map_op * -bpf_map_op__clone(struct bpf_map_op *op) -{ - struct bpf_map_op *newop; - - newop = memdup(op, sizeof(*op)); - if (!newop) { - pr_debug("Failed to alloc bpf_map_op\n"); - return NULL; - } - - INIT_LIST_HEAD(&newop->list); - if (op->key_type == BPF_MAP_KEY_RANGES) { - size_t memsz = op->k.array.nr_ranges * - sizeof(op->k.array.ranges[0]); - - newop->k.array.ranges = memdup(op->k.array.ranges, memsz); - if (!newop->k.array.ranges) { - pr_debug("Failed to alloc indices for map\n"); - free(newop); - return NULL; - } - } - - return newop; -} - -static struct bpf_map_priv * -bpf_map_priv__clone(struct bpf_map_priv *priv) -{ - struct bpf_map_priv *newpriv; - struct bpf_map_op *pos, *newop; - - newpriv = zalloc(sizeof(*newpriv)); - if (!newpriv) { - pr_debug("Not enough memory to alloc map private\n"); - return NULL; - } - INIT_LIST_HEAD(&newpriv->ops_list); - - list_for_each_entry(pos, &priv->ops_list, list) { - newop = bpf_map_op__clone(pos); - if (!newop) { - bpf_map_priv__purge(newpriv); - return NULL; - } - list_add_tail(&newop->list, &newpriv->ops_list); - } - - return newpriv; -} - -static int -bpf_map__add_op(struct bpf_map *map, struct bpf_map_op *op) -{ - const char *map_name = bpf_map__name(map); - struct bpf_map_priv *priv = map_priv(map); - - if (IS_ERR(priv)) { - pr_debug("Failed to get private from map %s\n", map_name); - return PTR_ERR(priv); - } - - if (!priv) { - priv = zalloc(sizeof(*priv)); - if (!priv) { - pr_debug("Not enough memory to alloc map private\n"); - return -ENOMEM; - } - INIT_LIST_HEAD(&priv->ops_list); - - if (map_set_priv(map, priv)) { - free(priv); - return -BPF_LOADER_ERRNO__INTERNAL; - } - } - - list_add_tail(&op->list, &priv->ops_list); - return 0; -} - -static struct bpf_map_op * -bpf_map__add_newop(struct bpf_map *map, struct parse_events_term *term) -{ - struct bpf_map_op *op; - int err; - - op = bpf_map_op__new(term); - if (IS_ERR(op)) - return op; - - err = bpf_map__add_op(map, op); - if (err) { - bpf_map_op__delete(op); - return ERR_PTR(err); - } - return op; -} - -static int -__bpf_map__config_value(struct bpf_map *map, - struct parse_events_term *term) -{ - struct bpf_map_op *op; - const char *map_name = bpf_map__name(map); - - if (!map) { - pr_debug("Map '%s' is invalid\n", map_name); - return -BPF_LOADER_ERRNO__INTERNAL; - } - - if (bpf_map__type(map) != BPF_MAP_TYPE_ARRAY) { - pr_debug("Map %s type is not BPF_MAP_TYPE_ARRAY\n", - map_name); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_TYPE; - } - if (bpf_map__key_size(map) < sizeof(unsigned int)) { - pr_debug("Map %s has incorrect key size\n", map_name); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_KEYSIZE; - } - switch (bpf_map__value_size(map)) { - 
case 1: - case 2: - case 4: - case 8: - break; - default: - pr_debug("Map %s has incorrect value size\n", map_name); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_VALUESIZE; - } - - op = bpf_map__add_newop(map, term); - if (IS_ERR(op)) - return PTR_ERR(op); - op->op_type = BPF_MAP_OP_SET_VALUE; - op->v.value = term->val.num; - return 0; -} - -static int -bpf_map__config_value(struct bpf_map *map, - struct parse_events_term *term, - struct evlist *evlist __maybe_unused) -{ - if (!term->err_val) { - pr_debug("Config value not set\n"); - return -BPF_LOADER_ERRNO__OBJCONF_CONF; - } - - if (term->type_val != PARSE_EVENTS__TERM_TYPE_NUM) { - pr_debug("ERROR: wrong value type for 'value'\n"); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_VALUE; - } - - return __bpf_map__config_value(map, term); -} - -static int -__bpf_map__config_event(struct bpf_map *map, - struct parse_events_term *term, - struct evlist *evlist) -{ - struct bpf_map_op *op; - const char *map_name = bpf_map__name(map); - struct evsel *evsel = evlist__find_evsel_by_str(evlist, term->val.str); - - if (!evsel) { - pr_debug("Event (for '%s') '%s' doesn't exist\n", - map_name, term->val.str); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_NOEVT; - } - - if (!map) { - pr_debug("Map '%s' is invalid\n", map_name); - return PTR_ERR(map); - } - - /* - * No need to check key_size and value_size: - * kernel has already checked them. - */ - if (bpf_map__type(map) != BPF_MAP_TYPE_PERF_EVENT_ARRAY) { - pr_debug("Map %s type is not BPF_MAP_TYPE_PERF_EVENT_ARRAY\n", - map_name); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_TYPE; - } - - op = bpf_map__add_newop(map, term); - if (IS_ERR(op)) - return PTR_ERR(op); - op->op_type = BPF_MAP_OP_SET_EVSEL; - op->v.evsel = evsel; - return 0; -} - -static int -bpf_map__config_event(struct bpf_map *map, - struct parse_events_term *term, - struct evlist *evlist) -{ - if (!term->err_val) { - pr_debug("Config value not set\n"); - return -BPF_LOADER_ERRNO__OBJCONF_CONF; - } - - if (term->type_val != PARSE_EVENTS__TERM_TYPE_STR) { - pr_debug("ERROR: wrong value type for 'event'\n"); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_VALUE; - } - - return __bpf_map__config_event(map, term, evlist); -} - -struct bpf_obj_config__map_func { - const char *config_opt; - int (*config_func)(struct bpf_map *, struct parse_events_term *, - struct evlist *); -}; - -struct bpf_obj_config__map_func bpf_obj_config__map_funcs[] = { - {"value", bpf_map__config_value}, - {"event", bpf_map__config_event}, -}; - -static int -config_map_indices_range_check(struct parse_events_term *term, - struct bpf_map *map, - const char *map_name) -{ - struct parse_events_array *array = &term->array; - unsigned int i; - - if (!array->nr_ranges) - return 0; - if (!array->ranges) { - pr_debug("ERROR: map %s: array->nr_ranges is %d but range array is NULL\n", - map_name, (int)array->nr_ranges); - return -BPF_LOADER_ERRNO__INTERNAL; - } - - if (!map) { - pr_debug("Map '%s' is invalid\n", map_name); - return -BPF_LOADER_ERRNO__INTERNAL; - } - - for (i = 0; i < array->nr_ranges; i++) { - unsigned int start = array->ranges[i].start; - size_t length = array->ranges[i].length; - unsigned int idx = start + length - 1; - - if (idx >= bpf_map__max_entries(map)) { - pr_debug("ERROR: index %d too large\n", idx); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_IDX2BIG; - } - } - return 0; -} - -static int -bpf__obj_config_map(struct bpf_object *obj, - struct parse_events_term *term, - struct evlist *evlist, - int *key_scan_pos) -{ - /* key is "map:<mapname>.<config opt>" */ - char *map_name = 
strdup(term->config + sizeof("map:") - 1); - struct bpf_map *map; - int err = -BPF_LOADER_ERRNO__OBJCONF_OPT; - char *map_opt; - size_t i; - - if (!map_name) - return -ENOMEM; - - map_opt = strchr(map_name, '.'); - if (!map_opt) { - pr_debug("ERROR: Invalid map config: %s\n", map_name); - goto out; - } - - *map_opt++ = '\0'; - if (*map_opt == '\0') { - pr_debug("ERROR: Invalid map option: %s\n", term->config); - goto out; - } - - map = bpf_object__find_map_by_name(obj, map_name); - if (!map) { - pr_debug("ERROR: Map %s doesn't exist\n", map_name); - err = -BPF_LOADER_ERRNO__OBJCONF_MAP_NOTEXIST; - goto out; - } - - *key_scan_pos += strlen(map_opt); - err = config_map_indices_range_check(term, map, map_name); - if (err) - goto out; - *key_scan_pos -= strlen(map_opt); - - for (i = 0; i < ARRAY_SIZE(bpf_obj_config__map_funcs); i++) { - struct bpf_obj_config__map_func *func = - &bpf_obj_config__map_funcs[i]; - - if (strcmp(map_opt, func->config_opt) == 0) { - err = func->config_func(map, term, evlist); - goto out; - } - } - - pr_debug("ERROR: Invalid map config option '%s'\n", map_opt); - err = -BPF_LOADER_ERRNO__OBJCONF_MAP_OPT; -out: - if (!err) - *key_scan_pos += strlen(map_opt); - - free(map_name); - return err; -} - -int bpf__config_obj(struct bpf_object *obj, - struct parse_events_term *term, - struct evlist *evlist, - int *error_pos) -{ - int key_scan_pos = 0; - int err; - - if (!obj || !term || !term->config) - return -EINVAL; - - if (strstarts(term->config, "map:")) { - key_scan_pos = sizeof("map:") - 1; - err = bpf__obj_config_map(obj, term, evlist, &key_scan_pos); - goto out; - } - err = -BPF_LOADER_ERRNO__OBJCONF_OPT; -out: - if (error_pos) - *error_pos = key_scan_pos; - return err; - -} - -typedef int (*map_config_func_t)(const char *name, int map_fd, - const struct bpf_map *map, - struct bpf_map_op *op, - void *pkey, void *arg); - -static int -foreach_key_array_all(map_config_func_t func, - void *arg, const char *name, - int map_fd, const struct bpf_map *map, - struct bpf_map_op *op) -{ - unsigned int i; - int err; - - for (i = 0; i < bpf_map__max_entries(map); i++) { - err = func(name, map_fd, map, op, &i, arg); - if (err) { - pr_debug("ERROR: failed to insert value to %s[%u]\n", - name, i); - return err; - } - } - return 0; -} - -static int -foreach_key_array_ranges(map_config_func_t func, void *arg, - const char *name, int map_fd, - const struct bpf_map *map, - struct bpf_map_op *op) -{ - unsigned int i, j; - int err; - - for (i = 0; i < op->k.array.nr_ranges; i++) { - unsigned int start = op->k.array.ranges[i].start; - size_t length = op->k.array.ranges[i].length; - - for (j = 0; j < length; j++) { - unsigned int idx = start + j; - - err = func(name, map_fd, map, op, &idx, arg); - if (err) { - pr_debug("ERROR: failed to insert value to %s[%u]\n", - name, idx); - return err; - } - } - } - return 0; -} - -static int -bpf_map_config_foreach_key(struct bpf_map *map, - map_config_func_t func, - void *arg) -{ - int err, map_fd, type; - struct bpf_map_op *op; - const char *name = bpf_map__name(map); - struct bpf_map_priv *priv = map_priv(map); - - if (IS_ERR(priv)) { - pr_debug("ERROR: failed to get private from map %s\n", name); - return -BPF_LOADER_ERRNO__INTERNAL; - } - if (!priv || list_empty(&priv->ops_list)) { - pr_debug("INFO: nothing to config for map %s\n", name); - return 0; - } - - if (!map) { - pr_debug("Map '%s' is invalid\n", name); - return -BPF_LOADER_ERRNO__INTERNAL; - } - map_fd = bpf_map__fd(map); - if (map_fd < 0) { - pr_debug("ERROR: failed to get fd from map 
%s\n", name); - return map_fd; - } - - type = bpf_map__type(map); - list_for_each_entry(op, &priv->ops_list, list) { - switch (type) { - case BPF_MAP_TYPE_ARRAY: - case BPF_MAP_TYPE_PERF_EVENT_ARRAY: - switch (op->key_type) { - case BPF_MAP_KEY_ALL: - err = foreach_key_array_all(func, arg, name, - map_fd, map, op); - break; - case BPF_MAP_KEY_RANGES: - err = foreach_key_array_ranges(func, arg, name, - map_fd, map, op); - break; - default: - pr_debug("ERROR: keytype for map '%s' invalid\n", - name); - return -BPF_LOADER_ERRNO__INTERNAL; - } - if (err) - return err; - break; - default: - pr_debug("ERROR: type of '%s' incorrect\n", name); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_TYPE; - } - } - - return 0; -} - -static int -apply_config_value_for_key(int map_fd, void *pkey, - size_t val_size, u64 val) -{ - int err = 0; - - switch (val_size) { - case 1: { - u8 _val = (u8)(val); - err = bpf_map_update_elem(map_fd, pkey, &_val, BPF_ANY); - break; - } - case 2: { - u16 _val = (u16)(val); - err = bpf_map_update_elem(map_fd, pkey, &_val, BPF_ANY); - break; - } - case 4: { - u32 _val = (u32)(val); - err = bpf_map_update_elem(map_fd, pkey, &_val, BPF_ANY); - break; - } - case 8: { - err = bpf_map_update_elem(map_fd, pkey, &val, BPF_ANY); - break; - } - default: - pr_debug("ERROR: invalid value size\n"); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_VALUESIZE; - } - if (err && errno) - err = -errno; - return err; -} - -static int -apply_config_evsel_for_key(const char *name, int map_fd, void *pkey, - struct evsel *evsel) -{ - struct xyarray *xy = evsel->core.fd; - struct perf_event_attr *attr; - unsigned int key, events; - bool check_pass = false; - int *evt_fd; - int err; - - if (!xy) { - pr_debug("ERROR: evsel not ready for map %s\n", name); - return -BPF_LOADER_ERRNO__INTERNAL; - } - - if (xy->row_size / xy->entry_size != 1) { - pr_debug("ERROR: Dimension of target event is incorrect for map %s\n", - name); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_EVTDIM; - } - - attr = &evsel->core.attr; - if (attr->inherit) { - pr_debug("ERROR: Can't put inherit event into map %s\n", name); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_EVTINH; - } - - if (evsel__is_bpf_output(evsel)) - check_pass = true; - if (attr->type == PERF_TYPE_RAW) - check_pass = true; - if (attr->type == PERF_TYPE_HARDWARE) - check_pass = true; - if (!check_pass) { - pr_debug("ERROR: Event type is wrong for map %s\n", name); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_EVTTYPE; - } - - events = xy->entries / (xy->row_size / xy->entry_size); - key = *((unsigned int *)pkey); - if (key >= events) { - pr_debug("ERROR: there is no event %d for map %s\n", - key, name); - return -BPF_LOADER_ERRNO__OBJCONF_MAP_MAPSIZE; - } - evt_fd = xyarray__entry(xy, key, 0); - err = bpf_map_update_elem(map_fd, pkey, evt_fd, BPF_ANY); - if (err && errno) - err = -errno; - return err; -} - -static int -apply_obj_config_map_for_key(const char *name, int map_fd, - const struct bpf_map *map, - struct bpf_map_op *op, - void *pkey, void *arg __maybe_unused) -{ - int err; - - switch (op->op_type) { - case BPF_MAP_OP_SET_VALUE: - err = apply_config_value_for_key(map_fd, pkey, - bpf_map__value_size(map), - op->v.value); - break; - case BPF_MAP_OP_SET_EVSEL: - err = apply_config_evsel_for_key(name, map_fd, pkey, - op->v.evsel); - break; - default: - pr_debug("ERROR: unknown value type for '%s'\n", name); - err = -BPF_LOADER_ERRNO__INTERNAL; - } - return err; -} - -static int -apply_obj_config_map(struct bpf_map *map) -{ - return bpf_map_config_foreach_key(map, - 
apply_obj_config_map_for_key, - NULL); -} - -static int -apply_obj_config_object(struct bpf_object *obj) -{ - struct bpf_map *map; - int err; - - bpf_object__for_each_map(map, obj) { - err = apply_obj_config_map(map); - if (err) - return err; - } - return 0; -} - -int bpf__apply_obj_config(void) -{ - struct bpf_perf_object *perf_obj, *tmp; - int err; - - bpf_perf_object__for_each(perf_obj, tmp) { - err = apply_obj_config_object(perf_obj->obj); - if (err) - return err; - } - - return 0; -} - -#define bpf__perf_for_each_map(map, pobj, tmp) \ - bpf_perf_object__for_each(pobj, tmp) \ - bpf_object__for_each_map(map, pobj->obj) - -#define bpf__perf_for_each_map_named(map, pobj, pobjtmp, name) \ - bpf__perf_for_each_map(map, pobj, pobjtmp) \ - if (bpf_map__name(map) && (strcmp(name, bpf_map__name(map)) == 0)) - -struct evsel *bpf__setup_output_event(struct evlist *evlist, const char *name) -{ - struct bpf_map_priv *tmpl_priv = NULL; - struct bpf_perf_object *perf_obj, *tmp; - struct evsel *evsel = NULL; - struct bpf_map *map; - int err; - bool need_init = false; - - bpf__perf_for_each_map_named(map, perf_obj, tmp, name) { - struct bpf_map_priv *priv = map_priv(map); - - if (IS_ERR(priv)) - return ERR_PTR(-BPF_LOADER_ERRNO__INTERNAL); - - /* - * No need to check map type: type should have been - * verified by kernel. - */ - if (!need_init && !priv) - need_init = !priv; - if (!tmpl_priv && priv) - tmpl_priv = priv; - } - - if (!need_init) - return NULL; - - if (!tmpl_priv) { - char *event_definition = NULL; - - if (asprintf(&event_definition, "bpf-output/no-inherit=1,name=%s/", name) < 0) - return ERR_PTR(-ENOMEM); - - err = parse_event(evlist, event_definition); - free(event_definition); - - if (err) { - pr_debug("ERROR: failed to create the \"%s\" bpf-output event\n", name); - return ERR_PTR(-err); - } - - evsel = evlist__last(evlist); - } - - bpf__perf_for_each_map_named(map, perf_obj, tmp, name) { - struct bpf_map_priv *priv = map_priv(map); - - if (IS_ERR(priv)) - return ERR_PTR(-BPF_LOADER_ERRNO__INTERNAL); - if (priv) - continue; - - if (tmpl_priv) { - priv = bpf_map_priv__clone(tmpl_priv); - if (!priv) - return ERR_PTR(-ENOMEM); - - err = map_set_priv(map, priv); - if (err) { - bpf_map_priv__clear(map, priv); - return ERR_PTR(err); - } - } else if (evsel) { - struct bpf_map_op *op; - - op = bpf_map__add_newop(map, NULL); - if (IS_ERR(op)) - return ERR_CAST(op); - op->op_type = BPF_MAP_OP_SET_EVSEL; - op->v.evsel = evsel; - } - } - - return evsel; -} - -int bpf__setup_stdout(struct evlist *evlist) -{ - struct evsel *evsel = bpf__setup_output_event(evlist, "__bpf_stdout__"); - return PTR_ERR_OR_ZERO(evsel); -} - -#define ERRNO_OFFSET(e) ((e) - __BPF_LOADER_ERRNO__START) -#define ERRCODE_OFFSET(c) ERRNO_OFFSET(BPF_LOADER_ERRNO__##c) -#define NR_ERRNO (__BPF_LOADER_ERRNO__END - __BPF_LOADER_ERRNO__START) - -static const char *bpf_loader_strerror_table[NR_ERRNO] = { - [ERRCODE_OFFSET(CONFIG)] = "Invalid config string", - [ERRCODE_OFFSET(GROUP)] = "Invalid group name", - [ERRCODE_OFFSET(EVENTNAME)] = "No event name found in config string", - [ERRCODE_OFFSET(INTERNAL)] = "BPF loader internal error", - [ERRCODE_OFFSET(COMPILE)] = "Error when compiling BPF scriptlet", - [ERRCODE_OFFSET(PROGCONF_TERM)] = "Invalid program config term in config string", - [ERRCODE_OFFSET(PROLOGUE)] = "Failed to generate prologue", - [ERRCODE_OFFSET(PROLOGUE2BIG)] = "Prologue too big for program", - [ERRCODE_OFFSET(PROLOGUEOOB)] = "Offset out of bound for prologue", - [ERRCODE_OFFSET(OBJCONF_OPT)] = "Invalid object 
config option", - [ERRCODE_OFFSET(OBJCONF_CONF)] = "Config value not set (missing '=')", - [ERRCODE_OFFSET(OBJCONF_MAP_OPT)] = "Invalid object map config option", - [ERRCODE_OFFSET(OBJCONF_MAP_NOTEXIST)] = "Target map doesn't exist", - [ERRCODE_OFFSET(OBJCONF_MAP_VALUE)] = "Incorrect value type for map", - [ERRCODE_OFFSET(OBJCONF_MAP_TYPE)] = "Incorrect map type", - [ERRCODE_OFFSET(OBJCONF_MAP_KEYSIZE)] = "Incorrect map key size", - [ERRCODE_OFFSET(OBJCONF_MAP_VALUESIZE)] = "Incorrect map value size", - [ERRCODE_OFFSET(OBJCONF_MAP_NOEVT)] = "Event not found for map setting", - [ERRCODE_OFFSET(OBJCONF_MAP_MAPSIZE)] = "Invalid map size for event setting", - [ERRCODE_OFFSET(OBJCONF_MAP_EVTDIM)] = "Event dimension too large", - [ERRCODE_OFFSET(OBJCONF_MAP_EVTINH)] = "Doesn't support inherit event", - [ERRCODE_OFFSET(OBJCONF_MAP_EVTTYPE)] = "Wrong event type for map", - [ERRCODE_OFFSET(OBJCONF_MAP_IDX2BIG)] = "Index too large", -}; - -static int -bpf_loader_strerror(int err, char *buf, size_t size) -{ - char sbuf[STRERR_BUFSIZE]; - const char *msg; - - if (!buf || !size) - return -1; - - err = err > 0 ? err : -err; - - if (err >= __LIBBPF_ERRNO__START) - return libbpf_strerror(err, buf, size); - - if (err >= __BPF_LOADER_ERRNO__START && err < __BPF_LOADER_ERRNO__END) { - msg = bpf_loader_strerror_table[ERRNO_OFFSET(err)]; - snprintf(buf, size, "%s", msg); - buf[size - 1] = '\0'; - return 0; - } - - if (err >= __BPF_LOADER_ERRNO__END) - snprintf(buf, size, "Unknown bpf loader error %d", err); - else - snprintf(buf, size, "%s", - str_error_r(err, sbuf, sizeof(sbuf))); - - buf[size - 1] = '\0'; - return -1; -} - -#define bpf__strerror_head(err, buf, size) \ - char sbuf[STRERR_BUFSIZE], *emsg;\ - if (!size)\ - return 0;\ - if (err < 0)\ - err = -err;\ - bpf_loader_strerror(err, sbuf, sizeof(sbuf));\ - emsg = sbuf;\ - switch (err) {\ - default:\ - scnprintf(buf, size, "%s", emsg);\ - break; - -#define bpf__strerror_entry(val, fmt...)\ - case val: {\ - scnprintf(buf, size, fmt);\ - break;\ - } - -#define bpf__strerror_end(buf, size)\ - }\ - buf[size - 1] = '\0'; - -int bpf__strerror_prepare_load(const char *filename, bool source, - int err, char *buf, size_t size) -{ - size_t n; - int ret; - - n = snprintf(buf, size, "Failed to load %s%s: ", - filename, source ? " from source" : ""); - if (n >= size) { - buf[size - 1] = '\0'; - return 0; - } - buf += n; - size -= n; - - ret = bpf_loader_strerror(err, buf, size); - buf[size - 1] = '\0'; - return ret; -} - -int bpf__strerror_probe(struct bpf_object *obj __maybe_unused, - int err, char *buf, size_t size) -{ - bpf__strerror_head(err, buf, size); - case BPF_LOADER_ERRNO__PROGCONF_TERM: { - scnprintf(buf, size, "%s (add -v to see detail)", emsg); - break; - } - bpf__strerror_entry(EEXIST, "Probe point exist. 
Try 'perf probe -d \"*\"' and set 'force=yes'"); - bpf__strerror_entry(EACCES, "You need to be root"); - bpf__strerror_entry(EPERM, "You need to be root, and /proc/sys/kernel/kptr_restrict should be 0"); - bpf__strerror_entry(ENOENT, "You need to check probing points in BPF file"); - bpf__strerror_end(buf, size); - return 0; -} - -int bpf__strerror_load(struct bpf_object *obj, - int err, char *buf, size_t size) -{ - bpf__strerror_head(err, buf, size); - case LIBBPF_ERRNO__KVER: { - unsigned int obj_kver = bpf_object__kversion(obj); - unsigned int real_kver; - - if (fetch_kernel_version(&real_kver, NULL, 0)) { - scnprintf(buf, size, "Unable to fetch kernel version"); - break; - } - - if (obj_kver != real_kver) { - scnprintf(buf, size, - "'version' ("KVER_FMT") doesn't match running kernel ("KVER_FMT")", - KVER_PARAM(obj_kver), - KVER_PARAM(real_kver)); - break; - } - - scnprintf(buf, size, "Failed to load program for unknown reason"); - break; - } - bpf__strerror_end(buf, size); - return 0; -} - -int bpf__strerror_config_obj(struct bpf_object *obj __maybe_unused, - struct parse_events_term *term __maybe_unused, - struct evlist *evlist __maybe_unused, - int *error_pos __maybe_unused, int err, - char *buf, size_t size) -{ - bpf__strerror_head(err, buf, size); - bpf__strerror_entry(BPF_LOADER_ERRNO__OBJCONF_MAP_TYPE, - "Can't use this config term with this map type"); - bpf__strerror_end(buf, size); - return 0; -} - -int bpf__strerror_apply_obj_config(int err, char *buf, size_t size) -{ - bpf__strerror_head(err, buf, size); - bpf__strerror_entry(BPF_LOADER_ERRNO__OBJCONF_MAP_EVTDIM, - "Cannot set event to BPF map in multi-thread tracing"); - bpf__strerror_entry(BPF_LOADER_ERRNO__OBJCONF_MAP_EVTINH, - "%s (Hint: use -i to turn off inherit)", emsg); - bpf__strerror_entry(BPF_LOADER_ERRNO__OBJCONF_MAP_EVTTYPE, - "Can only put raw, hardware and BPF output event into a BPF map"); - bpf__strerror_end(buf, size); - return 0; -} - -int bpf__strerror_setup_output_event(struct evlist *evlist __maybe_unused, - int err, char *buf, size_t size) -{ - bpf__strerror_head(err, buf, size); - bpf__strerror_end(buf, size); - return 0; -} diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h deleted file mode 100644 index 5d1c725cea29..000000000000 --- a/tools/perf/util/bpf-loader.h +++ /dev/null @@ -1,216 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * Copyright (C) 2015, Wang Nan <wangnan0@huawei.com> - * Copyright (C) 2015, Huawei Inc. 
- */ -#ifndef __BPF_LOADER_H -#define __BPF_LOADER_H - -#include <linux/compiler.h> -#include <linux/err.h> - -#ifdef HAVE_LIBBPF_SUPPORT -#include <bpf/libbpf.h> - -enum bpf_loader_errno { - __BPF_LOADER_ERRNO__START = __LIBBPF_ERRNO__START - 100, - /* Invalid config string */ - BPF_LOADER_ERRNO__CONFIG = __BPF_LOADER_ERRNO__START, - BPF_LOADER_ERRNO__GROUP, /* Invalid group name */ - BPF_LOADER_ERRNO__EVENTNAME, /* Event name is missing */ - BPF_LOADER_ERRNO__INTERNAL, /* BPF loader internal error */ - BPF_LOADER_ERRNO__COMPILE, /* Error when compiling BPF scriptlet */ - BPF_LOADER_ERRNO__PROGCONF_TERM,/* Invalid program config term in config string */ - BPF_LOADER_ERRNO__PROLOGUE, /* Failed to generate prologue */ - BPF_LOADER_ERRNO__PROLOGUE2BIG, /* Prologue too big for program */ - BPF_LOADER_ERRNO__PROLOGUEOOB, /* Offset out of bound for prologue */ - BPF_LOADER_ERRNO__OBJCONF_OPT, /* Invalid object config option */ - BPF_LOADER_ERRNO__OBJCONF_CONF, /* Config value not set (lost '=')) */ - BPF_LOADER_ERRNO__OBJCONF_MAP_OPT, /* Invalid object map config option */ - BPF_LOADER_ERRNO__OBJCONF_MAP_NOTEXIST, /* Target map not exist */ - BPF_LOADER_ERRNO__OBJCONF_MAP_VALUE, /* Incorrect value type for map */ - BPF_LOADER_ERRNO__OBJCONF_MAP_TYPE, /* Incorrect map type */ - BPF_LOADER_ERRNO__OBJCONF_MAP_KEYSIZE, /* Incorrect map key size */ - BPF_LOADER_ERRNO__OBJCONF_MAP_VALUESIZE,/* Incorrect map value size */ - BPF_LOADER_ERRNO__OBJCONF_MAP_NOEVT, /* Event not found for map setting */ - BPF_LOADER_ERRNO__OBJCONF_MAP_MAPSIZE, /* Invalid map size for event setting */ - BPF_LOADER_ERRNO__OBJCONF_MAP_EVTDIM, /* Event dimension too large */ - BPF_LOADER_ERRNO__OBJCONF_MAP_EVTINH, /* Doesn't support inherit event */ - BPF_LOADER_ERRNO__OBJCONF_MAP_EVTTYPE, /* Wrong event type for map */ - BPF_LOADER_ERRNO__OBJCONF_MAP_IDX2BIG, /* Index too large */ - __BPF_LOADER_ERRNO__END, -}; -#endif // HAVE_LIBBPF_SUPPORT - -struct evsel; -struct evlist; -struct bpf_object; -struct parse_events_term; -#define PERF_BPF_PROBE_GROUP "perf_bpf_probe" - -typedef int (*bpf_prog_iter_callback_t)(const char *group, const char *event, - int fd, struct bpf_object *obj, void *arg); - -#ifdef HAVE_LIBBPF_SUPPORT -struct bpf_object *bpf__prepare_load(const char *filename, bool source); -int bpf__strerror_prepare_load(const char *filename, bool source, - int err, char *buf, size_t size); - -struct bpf_object *bpf__prepare_load_buffer(void *obj_buf, size_t obj_buf_sz, - const char *name); - -void bpf__clear(void); - -int bpf__probe(struct bpf_object *obj); -int bpf__unprobe(struct bpf_object *obj); -int bpf__strerror_probe(struct bpf_object *obj, int err, - char *buf, size_t size); - -int bpf__load(struct bpf_object *obj); -int bpf__strerror_load(struct bpf_object *obj, int err, - char *buf, size_t size); -int bpf__foreach_event(struct bpf_object *obj, - bpf_prog_iter_callback_t func, void *arg); - -int bpf__config_obj(struct bpf_object *obj, struct parse_events_term *term, - struct evlist *evlist, int *error_pos); -int bpf__strerror_config_obj(struct bpf_object *obj, - struct parse_events_term *term, - struct evlist *evlist, - int *error_pos, int err, char *buf, - size_t size); -int bpf__apply_obj_config(void); -int bpf__strerror_apply_obj_config(int err, char *buf, size_t size); - -int bpf__setup_stdout(struct evlist *evlist); -struct evsel *bpf__setup_output_event(struct evlist *evlist, const char *name); -int bpf__strerror_setup_output_event(struct evlist *evlist, int err, char *buf, size_t size); -#else -#include 
<errno.h> -#include <string.h> -#include "debug.h" - -static inline struct bpf_object * -bpf__prepare_load(const char *filename __maybe_unused, - bool source __maybe_unused) -{ - pr_debug("ERROR: eBPF object loading is disabled during compiling.\n"); - return ERR_PTR(-ENOTSUP); -} - -static inline struct bpf_object * -bpf__prepare_load_buffer(void *obj_buf __maybe_unused, - size_t obj_buf_sz __maybe_unused) -{ - return ERR_PTR(-ENOTSUP); -} - -static inline void bpf__clear(void) { } - -static inline int bpf__probe(struct bpf_object *obj __maybe_unused) { return 0;} -static inline int bpf__unprobe(struct bpf_object *obj __maybe_unused) { return 0;} -static inline int bpf__load(struct bpf_object *obj __maybe_unused) { return 0; } - -static inline int -bpf__foreach_event(struct bpf_object *obj __maybe_unused, - bpf_prog_iter_callback_t func __maybe_unused, - void *arg __maybe_unused) -{ - return 0; -} - -static inline int -bpf__config_obj(struct bpf_object *obj __maybe_unused, - struct parse_events_term *term __maybe_unused, - struct evlist *evlist __maybe_unused, - int *error_pos __maybe_unused) -{ - return 0; -} - -static inline int -bpf__apply_obj_config(void) -{ - return 0; -} - -static inline int -bpf__setup_stdout(struct evlist *evlist __maybe_unused) -{ - return 0; -} - -static inline struct evsel * -bpf__setup_output_event(struct evlist *evlist __maybe_unused, const char *name __maybe_unused) -{ - return NULL; -} - -static inline int -__bpf_strerror(char *buf, size_t size) -{ - if (!size) - return 0; - strncpy(buf, - "ERROR: eBPF object loading is disabled during compiling.\n", - size); - buf[size - 1] = '\0'; - return 0; -} - -static inline -int bpf__strerror_prepare_load(const char *filename __maybe_unused, - bool source __maybe_unused, - int err __maybe_unused, - char *buf, size_t size) -{ - return __bpf_strerror(buf, size); -} - -static inline int -bpf__strerror_probe(struct bpf_object *obj __maybe_unused, - int err __maybe_unused, - char *buf, size_t size) -{ - return __bpf_strerror(buf, size); -} - -static inline int bpf__strerror_load(struct bpf_object *obj __maybe_unused, - int err __maybe_unused, - char *buf, size_t size) -{ - return __bpf_strerror(buf, size); -} - -static inline int -bpf__strerror_config_obj(struct bpf_object *obj __maybe_unused, - struct parse_events_term *term __maybe_unused, - struct evlist *evlist __maybe_unused, - int *error_pos __maybe_unused, - int err __maybe_unused, - char *buf, size_t size) -{ - return __bpf_strerror(buf, size); -} - -static inline int -bpf__strerror_apply_obj_config(int err __maybe_unused, - char *buf, size_t size) -{ - return __bpf_strerror(buf, size); -} - -static inline int -bpf__strerror_setup_output_event(struct evlist *evlist __maybe_unused, - int err __maybe_unused, char *buf, size_t size) -{ - return __bpf_strerror(buf, size); -} - -#endif - -static inline int bpf__strerror_setup_stdout(struct evlist *evlist, int err, char *buf, size_t size) -{ - return bpf__strerror_setup_output_event(evlist, err, buf, size); -} -#endif diff --git a/tools/perf/examples/bpf/augmented_raw_syscalls.c b/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c index 9a03189d33d3..90ce22f9c1a9 100644 --- a/tools/perf/examples/bpf/augmented_raw_syscalls.c +++ b/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c @@ -2,22 +2,26 @@ /* * Augment the raw_syscalls tracepoints with the contents of the pointer arguments. 
* - * Test it with: - * - * perf trace -e tools/perf/examples/bpf/augmented_raw_syscalls.c cat /etc/passwd > /dev/null - * * This exactly matches what is marshalled into the raw_syscall:sys_enter * payload expected by the 'perf trace' beautifiers. - * - * For now it just uses the existing tracepoint augmentation code in 'perf - * trace', in the next csets we'll hook up these with the sys_enter/sys_exit - * code that will combine entry/exit in a strace like way. */ #include <linux/bpf.h> #include <bpf/bpf_helpers.h> #include <linux/limits.h> +/** + * is_power_of_2() - check if a value is a power of two + * @n: the value to check + * + * Determine whether some value is a power of two, where zero is *not* + * considered a power of two. Return: true if @n is a power of 2, otherwise + * false. + */ +#define is_power_of_2(n) (n != 0 && ((n & (n - 1)) == 0)) + +#define MAX_CPUS 4096 + // FIXME: These should come from system headers typedef char bool; typedef int pid_t; @@ -34,7 +38,7 @@ struct __augmented_syscalls__ { __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); __type(key, int); __type(value, __u32); - __uint(max_entries, __NR_CPUS__); + __uint(max_entries, MAX_CPUS); } __augmented_syscalls__ SEC(".maps"); /* @@ -156,6 +160,7 @@ unsigned int augmented_arg__read_str(struct augmented_arg *augmented_arg, const */ if (string_len > 0) { augmented_len -= sizeof(augmented_arg->value) - string_len; + _Static_assert(is_power_of_2(sizeof(augmented_arg->value)), "sizeof(augmented_arg->value) needs to be a power of two"); augmented_len &= sizeof(augmented_arg->value) - 1; augmented_arg->size = string_len; } else { @@ -170,7 +175,7 @@ unsigned int augmented_arg__read_str(struct augmented_arg *augmented_arg, const return augmented_len; } -SEC("!raw_syscalls:unaugmented") +SEC("tp/raw_syscalls/sys_enter") int syscall_unaugmented(struct syscall_enter_args *args) { return 1; @@ -182,7 +187,7 @@ int syscall_unaugmented(struct syscall_enter_args *args) * on from there, reading the first syscall arg as a string, i.e. open's * filename. 
*/ -SEC("!syscalls:sys_enter_connect") +SEC("tp/syscalls/sys_enter_connect") int sys_enter_connect(struct syscall_enter_args *args) { struct augmented_args_payload *augmented_args = augmented_args_payload(); @@ -193,15 +198,15 @@ int sys_enter_connect(struct syscall_enter_args *args) if (augmented_args == NULL) return 1; /* Failure: don't filter */ - if (socklen > sizeof(augmented_args->saddr)) - socklen = sizeof(augmented_args->saddr); + _Static_assert(is_power_of_2(sizeof(augmented_args->saddr)), "sizeof(augmented_args->saddr) needs to be a power of two"); + socklen &= sizeof(augmented_args->saddr) - 1; bpf_probe_read(&augmented_args->saddr, socklen, sockaddr_arg); return augmented__output(args, augmented_args, len + socklen); } -SEC("!syscalls:sys_enter_sendto") +SEC("tp/syscalls/sys_enter_sendto") int sys_enter_sendto(struct syscall_enter_args *args) { struct augmented_args_payload *augmented_args = augmented_args_payload(); @@ -212,15 +217,14 @@ int sys_enter_sendto(struct syscall_enter_args *args) if (augmented_args == NULL) return 1; /* Failure: don't filter */ - if (socklen > sizeof(augmented_args->saddr)) - socklen = sizeof(augmented_args->saddr); + socklen &= sizeof(augmented_args->saddr) - 1; bpf_probe_read(&augmented_args->saddr, socklen, sockaddr_arg); return augmented__output(args, augmented_args, len + socklen); } -SEC("!syscalls:sys_enter_open") +SEC("tp/syscalls/sys_enter_open") int sys_enter_open(struct syscall_enter_args *args) { struct augmented_args_payload *augmented_args = augmented_args_payload(); @@ -235,7 +239,7 @@ int sys_enter_open(struct syscall_enter_args *args) return augmented__output(args, augmented_args, len); } -SEC("!syscalls:sys_enter_openat") +SEC("tp/syscalls/sys_enter_openat") int sys_enter_openat(struct syscall_enter_args *args) { struct augmented_args_payload *augmented_args = augmented_args_payload(); @@ -250,7 +254,7 @@ int sys_enter_openat(struct syscall_enter_args *args) return augmented__output(args, augmented_args, len); } -SEC("!syscalls:sys_enter_rename") +SEC("tp/syscalls/sys_enter_rename") int sys_enter_rename(struct syscall_enter_args *args) { struct augmented_args_payload *augmented_args = augmented_args_payload(); @@ -267,7 +271,7 @@ int sys_enter_rename(struct syscall_enter_args *args) return augmented__output(args, augmented_args, len); } -SEC("!syscalls:sys_enter_renameat") +SEC("tp/syscalls/sys_enter_renameat") int sys_enter_renameat(struct syscall_enter_args *args) { struct augmented_args_payload *augmented_args = augmented_args_payload(); @@ -295,7 +299,7 @@ struct perf_event_attr_size { __u32 size; }; -SEC("!syscalls:sys_enter_perf_event_open") +SEC("tp/syscalls/sys_enter_perf_event_open") int sys_enter_perf_event_open(struct syscall_enter_args *args) { struct augmented_args_payload *augmented_args = augmented_args_payload(); @@ -327,7 +331,7 @@ failure: return 1; /* Failure: don't filter */ } -SEC("!syscalls:sys_enter_clock_nanosleep") +SEC("tp/syscalls/sys_enter_clock_nanosleep") int sys_enter_clock_nanosleep(struct syscall_enter_args *args) { struct augmented_args_payload *augmented_args = augmented_args_payload(); @@ -358,7 +362,7 @@ static bool pid_filter__has(struct pids_filtered *pids, pid_t pid) return bpf_map_lookup_elem(pids, &pid) != NULL; } -SEC("raw_syscalls:sys_enter") +SEC("tp/raw_syscalls/sys_enter") int sys_enter(struct syscall_enter_args *args) { struct augmented_args_payload *augmented_args; @@ -371,7 +375,6 @@ int sys_enter(struct syscall_enter_args *args) * We'll add to this as we add augmented syscalls 
right after that * initial, non-augmented raw_syscalls:sys_enter payload. */ - unsigned int len = sizeof(augmented_args->args); if (pid_filter__has(&pids_filtered, getpid())) return 0; @@ -393,7 +396,7 @@ int sys_enter(struct syscall_enter_args *args) return 0; } -SEC("raw_syscalls:sys_exit") +SEC("tp/raw_syscalls/sys_exit") int sys_exit(struct syscall_exit_args *args) { struct syscall_exit_args exit_args; diff --git a/tools/perf/util/bpf_skel/bench_uprobe.bpf.c b/tools/perf/util/bpf_skel/bench_uprobe.bpf.c new file mode 100644 index 000000000000..2c55896bb33c --- /dev/null +++ b/tools/perf/util/bpf_skel/bench_uprobe.bpf.c @@ -0,0 +1,23 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +// Copyright (c) 2023 Red Hat +#include "vmlinux.h" +#include <bpf/bpf_tracing.h> + +unsigned int nr_uprobes; + +SEC("uprobe") +int BPF_UPROBE(empty) +{ + return 0; +} + +SEC("uprobe") +int BPF_UPROBE(trace_printk) +{ + char fmt[] = "perf bench uprobe %u"; + + bpf_trace_printk(fmt, sizeof(fmt), ++nr_uprobes); + return 0; +} + +char LICENSE[] SEC("license") = "Dual BSD/GPL"; diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c index 36728222a5b4..03c64b85383b 100644 --- a/tools/perf/util/build-id.c +++ b/tools/perf/util/build-id.c @@ -560,7 +560,7 @@ char *build_id_cache__cachedir(const char *sbuild_id, const char *name, struct nsinfo *nsi, bool is_kallsyms, bool is_vdso) { - char *realname = (char *)name, *filename; + char *realname = NULL, *filename; bool slash = is_kallsyms || is_vdso; if (!slash) @@ -571,9 +571,7 @@ char *build_id_cache__cachedir(const char *sbuild_id, const char *name, sbuild_id ? "/" : "", sbuild_id ?: "") < 0) filename = NULL; - if (!slash) - free(realname); - + free(realname); return filename; } diff --git a/tools/perf/util/c++/Build b/tools/perf/util/c++/Build deleted file mode 100644 index 613ecfd76527..000000000000 --- a/tools/perf/util/c++/Build +++ /dev/null @@ -1,2 +0,0 @@ -perf-$(CONFIG_CLANGLLVM) += clang.o -perf-$(CONFIG_CLANGLLVM) += clang-test.o diff --git a/tools/perf/util/c++/clang-c.h b/tools/perf/util/c++/clang-c.h deleted file mode 100644 index d3731a876b6c..000000000000 --- a/tools/perf/util/c++/clang-c.h +++ /dev/null @@ -1,43 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef PERF_UTIL_CLANG_C_H -#define PERF_UTIL_CLANG_C_H - -#include <stddef.h> /* for size_t */ - -#ifdef __cplusplus -extern "C" { -#endif - -#ifdef HAVE_LIBCLANGLLVM_SUPPORT -extern void perf_clang__init(void); -extern void perf_clang__cleanup(void); - -struct test_suite; -extern int test__clang_to_IR(struct test_suite *test, int subtest); -extern int test__clang_to_obj(struct test_suite *test, int subtest); - -extern int perf_clang__compile_bpf(const char *filename, - void **p_obj_buf, - size_t *p_obj_buf_sz); -#else - -#include <errno.h> -#include <linux/compiler.h> /* for __maybe_unused */ - -static inline void perf_clang__init(void) { } -static inline void perf_clang__cleanup(void) { } - -static inline int -perf_clang__compile_bpf(const char *filename __maybe_unused, - void **p_obj_buf __maybe_unused, - size_t *p_obj_buf_sz __maybe_unused) -{ - return -ENOTSUP; -} - -#endif - -#ifdef __cplusplus -} -#endif -#endif diff --git a/tools/perf/util/c++/clang-test.cpp b/tools/perf/util/c++/clang-test.cpp deleted file mode 100644 index a4683ca53697..000000000000 --- a/tools/perf/util/c++/clang-test.cpp +++ /dev/null @@ -1,67 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -#include "clang.h" -#include "clang-c.h" -extern "C" { -#include "../util.h" -} -#include 
"llvm/IR/Function.h" -#include "llvm/IR/LLVMContext.h" - -#include <tests/llvm.h> -#include <string> - -class perf_clang_scope { -public: - explicit perf_clang_scope() {perf_clang__init();} - ~perf_clang_scope() {perf_clang__cleanup();} -}; - -static std::unique_ptr<llvm::Module> -__test__clang_to_IR(void) -{ - unsigned int kernel_version; - - if (fetch_kernel_version(&kernel_version, NULL, 0)) - return std::unique_ptr<llvm::Module>(nullptr); - - std::string cflag_kver("-DLINUX_VERSION_CODE=" + - std::to_string(kernel_version)); - - std::unique_ptr<llvm::Module> M = - perf::getModuleFromSource({cflag_kver.c_str()}, - "perf-test.c", - test_llvm__bpf_base_prog); - return M; -} - -extern "C" { -int test__clang_to_IR(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ - perf_clang_scope _scope; - - auto M = __test__clang_to_IR(); - if (!M) - return -1; - for (llvm::Function& F : *M) - if (F.getName() == "bpf_func__SyS_epoll_pwait") - return 0; - return -1; -} - -int test__clang_to_obj(struct test_suite *test __maybe_unused, - int subtest __maybe_unused) -{ - perf_clang_scope _scope; - - auto M = __test__clang_to_IR(); - if (!M) - return -1; - - auto Buffer = perf::getBPFObjectFromModule(&*M); - if (!Buffer) - return -1; - return 0; -} - -} diff --git a/tools/perf/util/c++/clang.cpp b/tools/perf/util/c++/clang.cpp deleted file mode 100644 index 1aad7d6d34aa..000000000000 --- a/tools/perf/util/c++/clang.cpp +++ /dev/null @@ -1,225 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * llvm C frontend for perf. Support dynamically compile C file - * - * Inspired by clang example code: - * http://llvm.org/svn/llvm-project/cfe/trunk/examples/clang-interpreter/main.cpp - * - * Copyright (C) 2016 Wang Nan <wangnan0@huawei.com> - * Copyright (C) 2016 Huawei Inc. 
- */ - -#include "clang/Basic/Version.h" -#include "clang/CodeGen/CodeGenAction.h" -#include "clang/Frontend/CompilerInvocation.h" -#include "clang/Frontend/CompilerInstance.h" -#include "clang/Frontend/TextDiagnosticPrinter.h" -#include "clang/Tooling/Tooling.h" -#include "llvm/IR/LegacyPassManager.h" -#include "llvm/IR/Module.h" -#include "llvm/Option/Option.h" -#include "llvm/Support/FileSystem.h" -#include "llvm/Support/ManagedStatic.h" -#if CLANG_VERSION_MAJOR >= 14 -#include "llvm/MC/TargetRegistry.h" -#else -#include "llvm/Support/TargetRegistry.h" -#endif -#include "llvm/Support/TargetSelect.h" -#include "llvm/Target/TargetMachine.h" -#include "llvm/Target/TargetOptions.h" -#include <memory> - -#include "clang.h" -#include "clang-c.h" - -namespace perf { - -static std::unique_ptr<llvm::LLVMContext> LLVMCtx; - -using namespace clang; - -static CompilerInvocation * -createCompilerInvocation(llvm::opt::ArgStringList CFlags, StringRef& Path, - DiagnosticsEngine& Diags) -{ - llvm::opt::ArgStringList CCArgs { - "-cc1", - "-triple", "bpf-pc-linux", - "-fsyntax-only", - "-O2", - "-nostdsysteminc", - "-nobuiltininc", - "-vectorize-loops", - "-vectorize-slp", - "-Wno-unused-value", - "-Wno-pointer-sign", - "-x", "c"}; - - CCArgs.append(CFlags.begin(), CFlags.end()); - CompilerInvocation *CI = tooling::newInvocation(&Diags, CCArgs -#if CLANG_VERSION_MAJOR >= 11 - ,/*BinaryName=*/nullptr -#endif - ); - - FrontendOptions& Opts = CI->getFrontendOpts(); - Opts.Inputs.clear(); - Opts.Inputs.emplace_back(Path, - FrontendOptions::getInputKindForExtension("c")); - return CI; -} - -static std::unique_ptr<llvm::Module> -getModuleFromSource(llvm::opt::ArgStringList CFlags, - StringRef Path, IntrusiveRefCntPtr<vfs::FileSystem> VFS) -{ - CompilerInstance Clang; - Clang.createDiagnostics(); - -#if CLANG_VERSION_MAJOR < 9 - Clang.setVirtualFileSystem(&*VFS); -#else - Clang.createFileManager(&*VFS); -#endif - -#if CLANG_VERSION_MAJOR < 4 - IntrusiveRefCntPtr<CompilerInvocation> CI = - createCompilerInvocation(std::move(CFlags), Path, - Clang.getDiagnostics()); - Clang.setInvocation(&*CI); -#else - std::shared_ptr<CompilerInvocation> CI( - createCompilerInvocation(std::move(CFlags), Path, - Clang.getDiagnostics())); - Clang.setInvocation(CI); -#endif - - std::unique_ptr<CodeGenAction> Act(new EmitLLVMOnlyAction(&*LLVMCtx)); - if (!Clang.ExecuteAction(*Act)) - return std::unique_ptr<llvm::Module>(nullptr); - - return Act->takeModule(); -} - -std::unique_ptr<llvm::Module> -getModuleFromSource(llvm::opt::ArgStringList CFlags, - StringRef Name, StringRef Content) -{ - using namespace vfs; - - llvm::IntrusiveRefCntPtr<OverlayFileSystem> OverlayFS( - new OverlayFileSystem(getRealFileSystem())); - llvm::IntrusiveRefCntPtr<InMemoryFileSystem> MemFS( - new InMemoryFileSystem(true)); - - /* - * pushOverlay helps setting working dir for MemFS. Must call - * before addFile. 
- */ - OverlayFS->pushOverlay(MemFS); - MemFS->addFile(Twine(Name), 0, llvm::MemoryBuffer::getMemBuffer(Content)); - - return getModuleFromSource(std::move(CFlags), Name, OverlayFS); -} - -std::unique_ptr<llvm::Module> -getModuleFromSource(llvm::opt::ArgStringList CFlags, StringRef Path) -{ - IntrusiveRefCntPtr<vfs::FileSystem> VFS(vfs::getRealFileSystem()); - return getModuleFromSource(std::move(CFlags), Path, VFS); -} - -std::unique_ptr<llvm::SmallVectorImpl<char>> -getBPFObjectFromModule(llvm::Module *Module) -{ - using namespace llvm; - - std::string TargetTriple("bpf-pc-linux"); - std::string Error; - const Target* Target = TargetRegistry::lookupTarget(TargetTriple, Error); - if (!Target) { - llvm::errs() << Error; - return std::unique_ptr<llvm::SmallVectorImpl<char>>(nullptr); - } - - llvm::TargetOptions Opt; - TargetMachine *TargetMachine = - Target->createTargetMachine(TargetTriple, - "generic", "", - Opt, Reloc::Static); - - Module->setDataLayout(TargetMachine->createDataLayout()); - Module->setTargetTriple(TargetTriple); - - std::unique_ptr<SmallVectorImpl<char>> Buffer(new SmallVector<char, 0>()); - raw_svector_ostream ostream(*Buffer); - - legacy::PassManager PM; - bool NotAdded; - NotAdded = TargetMachine->addPassesToEmitFile(PM, ostream -#if CLANG_VERSION_MAJOR >= 7 - , /*DwoOut=*/nullptr -#endif -#if CLANG_VERSION_MAJOR < 10 - , TargetMachine::CGFT_ObjectFile -#else - , llvm::CGFT_ObjectFile -#endif - ); - if (NotAdded) { - llvm::errs() << "TargetMachine can't emit a file of this type\n"; - return std::unique_ptr<llvm::SmallVectorImpl<char>>(nullptr); - } - PM.run(*Module); - - return Buffer; -} - -} - -extern "C" { -void perf_clang__init(void) -{ - perf::LLVMCtx.reset(new llvm::LLVMContext()); - LLVMInitializeBPFTargetInfo(); - LLVMInitializeBPFTarget(); - LLVMInitializeBPFTargetMC(); - LLVMInitializeBPFAsmPrinter(); -} - -void perf_clang__cleanup(void) -{ - perf::LLVMCtx.reset(nullptr); - llvm::llvm_shutdown(); -} - -int perf_clang__compile_bpf(const char *filename, - void **p_obj_buf, - size_t *p_obj_buf_sz) -{ - using namespace perf; - - if (!p_obj_buf || !p_obj_buf_sz) - return -EINVAL; - - llvm::opt::ArgStringList CFlags; - auto M = getModuleFromSource(std::move(CFlags), filename); - if (!M) - return -EINVAL; - auto O = getBPFObjectFromModule(&*M); - if (!O) - return -EINVAL; - - size_t size = O->size_in_bytes(); - void *buffer; - - buffer = malloc(size); - if (!buffer) - return -ENOMEM; - memcpy(buffer, O->data(), size); - *p_obj_buf = buffer; - *p_obj_buf_sz = size; - return 0; -} -} diff --git a/tools/perf/util/c++/clang.h b/tools/perf/util/c++/clang.h deleted file mode 100644 index 6ce33e22f23c..000000000000 --- a/tools/perf/util/c++/clang.h +++ /dev/null @@ -1,27 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef PERF_UTIL_CLANG_H -#define PERF_UTIL_CLANG_H - -#include "llvm/ADT/StringRef.h" -#include "llvm/IR/LLVMContext.h" -#include "llvm/IR/Module.h" -#include "llvm/Option/Option.h" -#include <memory> - -namespace perf { - -using namespace llvm; - -std::unique_ptr<Module> -getModuleFromSource(opt::ArgStringList CFlags, - StringRef Name, StringRef Content); - -std::unique_ptr<Module> -getModuleFromSource(opt::ArgStringList CFlags, - StringRef Path); - -std::unique_ptr<llvm::SmallVectorImpl<char>> -getBPFObjectFromModule(llvm::Module *Module); - -} -#endif diff --git a/tools/perf/util/config.c b/tools/perf/util/config.c index 46f144c46827..7a650de0db83 100644 --- a/tools/perf/util/config.c +++ b/tools/perf/util/config.c @@ -16,7 +16,6 @@ #include 
<subcmd/exec-cmd.h> #include "util/event.h" /* proc_map_timeout */ #include "util/hist.h" /* perf_hist_config */ -#include "util/llvm-utils.h" /* perf_llvm_config */ #include "util/stat.h" /* perf_stat__set_big_num */ #include "util/evsel.h" /* evsel__hw_names, evsel__use_bpf_counters */ #include "util/srcline.h" /* addr2line_timeout_ms */ @@ -486,9 +485,6 @@ int perf_default_config(const char *var, const char *value, if (strstarts(var, "call-graph.")) return perf_callchain_config(var, value); - if (strstarts(var, "llvm.")) - return perf_llvm_config(var, value); - if (strstarts(var, "buildid.")) return perf_buildid_config(var, value); diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c index 1419b40dfbe8..9729d006550d 100644 --- a/tools/perf/util/cs-etm.c +++ b/tools/perf/util/cs-etm.c @@ -6,10 +6,11 @@ * Author: Mathieu Poirier <mathieu.poirier@linaro.org> */ +#include <linux/kernel.h> +#include <linux/bitfield.h> #include <linux/bitops.h> #include <linux/coresight-pmu.h> #include <linux/err.h> -#include <linux/kernel.h> #include <linux/log2.h> #include <linux/types.h> #include <linux/zalloc.h> @@ -282,17 +283,6 @@ static int cs_etm__metadata_set_trace_id(u8 trace_chan_id, u64 *cpu_metadata) } /* - * FIELD_GET (linux/bitfield.h) not available outside kernel code, - * and the header contains too many dependencies to just copy over, - * so roll our own based on the original - */ -#define __bf_shf(x) (__builtin_ffsll(x) - 1) -#define FIELD_GET(_mask, _reg) \ - ({ \ - (typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)); \ - }) - -/* * Get a metadata for a specific cpu from an array. * */ diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c index 46f74b2344db..1dbf27822ee2 100644 --- a/tools/perf/util/dlfilter.c +++ b/tools/perf/util/dlfilter.c @@ -10,6 +10,8 @@ #include <subcmd/exec-cmd.h> #include <linux/zalloc.h> #include <linux/build_bug.h> +#include <linux/kernel.h> +#include <linux/string.h> #include "debug.h" #include "event.h" @@ -63,6 +65,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al) d_al->addr = al->addr; d_al->comm = NULL; d_al->filtered = 0; + d_al->priv = NULL; } static struct addr_location *get_al(struct dlfilter *d) @@ -151,6 +154,11 @@ static char **dlfilter__args(void *ctx, int *dlargc) return d->dlargv; } +static bool has_priv(struct perf_dlfilter_al *d_al_p) +{ + return d_al_p->size >= offsetof(struct perf_dlfilter_al, priv) + sizeof(d_al_p->priv); +} + static __s32 dlfilter__resolve_address(void *ctx, __u64 address, struct perf_dlfilter_al *d_al_p) { struct dlfilter *d = (struct dlfilter *)ctx; @@ -166,6 +174,7 @@ static __s32 dlfilter__resolve_address(void *ctx, __u64 address, struct perf_dlf if (!thread) return -1; + addr_location__init(&al); thread__find_symbol_fb(thread, d->sample->cpumode, address, &al); al_to_d_al(&al, &d_al); @@ -176,9 +185,31 @@ static __s32 dlfilter__resolve_address(void *ctx, __u64 address, struct perf_dlf memcpy(d_al_p, &d_al, min((size_t)sz, sizeof(d_al))); d_al_p->size = sz; + if (has_priv(d_al_p)) + d_al_p->priv = memdup(&al, sizeof(al)); + else /* Avoid leak for v0 API */ + addr_location__exit(&al); + return 0; } +static void dlfilter__al_cleanup(void *ctx __maybe_unused, struct perf_dlfilter_al *d_al_p) +{ + struct addr_location *al; + + /* Ensure backward compatibility */ + if (!has_priv(d_al_p) || !d_al_p->priv) + return; + + al = d_al_p->priv; + + d_al_p->priv = NULL; + + addr_location__exit(al); + + free(al); +} + static const __u8 *dlfilter__insn(void *ctx, __u32 
*len) { struct dlfilter *d = (struct dlfilter *)ctx; @@ -296,6 +327,7 @@ static const struct perf_dlfilter_fns perf_dlfilter_fns = { .resolve_addr = dlfilter__resolve_addr, .args = dlfilter__args, .resolve_address = dlfilter__resolve_address, + .al_cleanup = dlfilter__al_cleanup, .insn = dlfilter__insn, .srcline = dlfilter__srcline, .attr = dlfilter__attr, diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c index 9eabf3ec56e9..a164164001fb 100644 --- a/tools/perf/util/env.c +++ b/tools/perf/util/env.c @@ -324,11 +324,9 @@ int perf_env__read_pmu_mappings(struct perf_env *env) u32 pmu_num = 0; struct strbuf sb; - while ((pmu = perf_pmus__scan(pmu))) { - if (!pmu->name) - continue; + while ((pmu = perf_pmus__scan(pmu))) pmu_num++; - } + if (!pmu_num) { pr_debug("pmu mappings not available\n"); return -ENOENT; @@ -339,8 +337,6 @@ int perf_env__read_pmu_mappings(struct perf_env *env) return -ENOMEM; while ((pmu = perf_pmus__scan(pmu))) { - if (!pmu->name) - continue; if (strbuf_addf(&sb, "%u:%s", pmu->type, pmu->name) < 0) goto error; /* include a NULL character at the end */ diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c index 4cbb092e0684..923c0fb15122 100644 --- a/tools/perf/util/event.c +++ b/tools/perf/util/event.c @@ -93,8 +93,8 @@ struct process_symbol_args { u64 start; }; -static int find_symbol_cb(void *arg, const char *name, char type, - u64 start) +static int find_func_symbol_cb(void *arg, const char *name, char type, + u64 start) { struct process_symbol_args *args = arg; @@ -110,12 +110,36 @@ static int find_symbol_cb(void *arg, const char *name, char type, return 1; } +static int find_any_symbol_cb(void *arg, const char *name, + char type __maybe_unused, u64 start) +{ + struct process_symbol_args *args = arg; + + if (strcmp(name, args->name)) + return 0; + + args->start = start; + return 1; +} + int kallsyms__get_function_start(const char *kallsyms_filename, const char *symbol_name, u64 *addr) { struct process_symbol_args args = { .name = symbol_name, }; - if (kallsyms__parse(kallsyms_filename, &args, find_symbol_cb) <= 0) + if (kallsyms__parse(kallsyms_filename, &args, find_func_symbol_cb) <= 0) + return -1; + + *addr = args.start; + return 0; +} + +int kallsyms__get_symbol_start(const char *kallsyms_filename, + const char *symbol_name, u64 *addr) +{ + struct process_symbol_args args = { .name = symbol_name, }; + + if (kallsyms__parse(kallsyms_filename, &args, find_any_symbol_cb) <= 0) return -1; *addr = args.start; diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h index de20e01c9d72..d8bcee2e9b93 100644 --- a/tools/perf/util/event.h +++ b/tools/perf/util/event.h @@ -360,6 +360,8 @@ size_t perf_event__fprintf(union perf_event *event, struct machine *machine, FIL int kallsyms__get_function_start(const char *kallsyms_filename, const char *symbol_name, u64 *addr); +int kallsyms__get_symbol_start(const char *kallsyms_filename, + const char *symbol_name, u64 *addr); void event_attr_init(struct perf_event_attr *attr); diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c index 762e2b2634a5..a8a5ff87cc1f 100644 --- a/tools/perf/util/evsel.c +++ b/tools/perf/util/evsel.c @@ -845,6 +845,7 @@ static void __evsel__config_callchain(struct evsel *evsel, struct record_opts *o { bool function = evsel__is_function_event(evsel); struct perf_event_attr *attr = &evsel->core.attr; + const char *arch = perf_env__arch(evsel__env(evsel)); evsel__set_sample_bit(evsel, CALLCHAIN); @@ -877,8 +878,9 @@ static void __evsel__config_callchain(struct evsel *evsel, 
struct record_opts *o if (!function) { evsel__set_sample_bit(evsel, REGS_USER); evsel__set_sample_bit(evsel, STACK_USER); - if (opts->sample_user_regs && DWARF_MINIMAL_REGS != PERF_REGS_MASK) { - attr->sample_regs_user |= DWARF_MINIMAL_REGS; + if (opts->sample_user_regs && + DWARF_MINIMAL_REGS(arch) != arch__user_reg_mask()) { + attr->sample_regs_user |= DWARF_MINIMAL_REGS(arch); pr_warning("WARNING: The use of --call-graph=dwarf may require all the user registers, " "specifying a subset with --user-regs may render DWARF unwinding unreliable, " "so the minimal registers set (IP, SP) is explicitly forced.\n"); @@ -1474,6 +1476,7 @@ void evsel__exit(struct evsel *evsel) perf_thread_map__put(evsel->core.threads); zfree(&evsel->group_name); zfree(&evsel->name); + zfree(&evsel->filter); zfree(&evsel->pmu_name); zfree(&evsel->group_pmu_name); zfree(&evsel->unit); @@ -2826,9 +2829,6 @@ u64 evsel__intval(struct evsel *evsel, struct perf_sample *sample, const char *n { struct tep_format_field *field = evsel__field(evsel, name); - if (!field) - return 0; - return field ? format_field__intval(field, sample, evsel->needs_swap) : 0; } #endif diff --git a/tools/perf/util/expr.c b/tools/perf/util/expr.c index 4814262e3805..4488f306de78 100644 --- a/tools/perf/util/expr.c +++ b/tools/perf/util/expr.c @@ -10,9 +10,11 @@ #include "debug.h" #include "evlist.h" #include "expr.h" -#include "expr-bison.h" -#include "expr-flex.h" +#include <util/expr-bison.h> +#include <util/expr-flex.h> #include "util/hashmap.h" +#include "util/header.h" +#include "util/pmu.h" #include "smt.h" #include "tsc.h" #include <api/fs/fs.h> @@ -425,6 +427,13 @@ double expr__get_literal(const char *literal, const struct expr_scanner_ctx *ctx result = cpu__max_present_cpu().cpu; goto out; } + if (!strcmp("#num_cpus_online", literal)) { + struct perf_cpu_map *online = cpu_map__online(); + + if (online) + result = perf_cpu_map__nr(online); + goto out; + } if (!strcasecmp("#system_tsc_freq", literal)) { result = arch_get_tsc_freq(); @@ -495,3 +504,19 @@ double expr__has_event(const struct expr_parse_ctx *ctx, bool compute_ids, const evlist__delete(tmp); return ret; } + +double expr__strcmp_cpuid_str(const struct expr_parse_ctx *ctx __maybe_unused, + bool compute_ids __maybe_unused, const char *test_id) +{ + double ret; + struct perf_pmu *pmu = pmu__find_core_pmu(); + char *cpuid = perf_pmu__getcpuid(pmu); + + if (!cpuid) + return NAN; + + ret = !strcmp_cpuid_str(test_id, cpuid); + + free(cpuid); + return ret; +} diff --git a/tools/perf/util/expr.h b/tools/perf/util/expr.h index 3c1e49b3e35d..c0cec29ddc29 100644 --- a/tools/perf/util/expr.h +++ b/tools/perf/util/expr.h @@ -55,5 +55,6 @@ double expr_id_data__value(const struct expr_id_data *data); double expr_id_data__source_count(const struct expr_id_data *data); double expr__get_literal(const char *literal, const struct expr_scanner_ctx *ctx); double expr__has_event(const struct expr_parse_ctx *ctx, bool compute_ids, const char *id); +double expr__strcmp_cpuid_str(const struct expr_parse_ctx *ctx, bool compute_ids, const char *id); #endif diff --git a/tools/perf/util/expr.l b/tools/perf/util/expr.l index dbb117414710..0feef0726c48 100644 --- a/tools/perf/util/expr.l +++ b/tools/perf/util/expr.l @@ -114,6 +114,7 @@ if { return IF; } else { return ELSE; } source_count { return SOURCE_COUNT; } has_event { return HAS_EVENT; } +strcmp_cpuid_str { return STRCMP_CPUID_STR; } {literal} { return literal(yyscanner, sctx); } {number} { return value(yyscanner); } {symbol} { return str(yyscanner, ID, 
sctx->runtime); } diff --git a/tools/perf/util/expr.y b/tools/perf/util/expr.y index dd504afd8f36..6c93b358cc2d 100644 --- a/tools/perf/util/expr.y +++ b/tools/perf/util/expr.y @@ -7,6 +7,8 @@ #include "util/debug.h" #define IN_EXPR_Y 1 #include "expr.h" +#include "expr-bison.h" +int expr_lex(YYSTYPE * yylval_param , void *yyscanner); %} %define api.pure full @@ -37,7 +39,7 @@ } ids; } -%token ID NUMBER MIN MAX IF ELSE LITERAL D_RATIO SOURCE_COUNT HAS_EVENT EXPR_ERROR +%token ID NUMBER MIN MAX IF ELSE LITERAL D_RATIO SOURCE_COUNT HAS_EVENT STRCMP_CPUID_STR EXPR_ERROR %left MIN MAX IF %left '|' %left '^' @@ -56,7 +58,7 @@ static void expr_error(double *final_val __maybe_unused, struct expr_parse_ctx *ctx __maybe_unused, bool compute_ids __maybe_unused, - void *scanner, + void *scanner __maybe_unused, const char *s) { pr_debug("%s\n", s); @@ -205,6 +207,12 @@ expr: NUMBER $$.ids = NULL; free($3); } +| STRCMP_CPUID_STR '(' ID ')' +{ + $$.val = expr__strcmp_cpuid_str(ctx, compute_ids, $3); + $$.ids = NULL; + free($3); +} | expr '|' expr { if (is_const($1.val) && is_const($3.val)) { diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c index 52fbf526fe74..d812e1e371a7 100644 --- a/tools/perf/util/header.c +++ b/tools/perf/util/header.c @@ -456,6 +456,8 @@ static int write_cpudesc(struct feat_fd *ff, #define CPUINFO_PROC { "Processor", } #elif defined(__xtensa__) #define CPUINFO_PROC { "core ID", } +#elif defined(__loongarch__) +#define CPUINFO_PROC { "Model Name", } #else #define CPUINFO_PROC { "model name", } #endif @@ -746,20 +748,14 @@ static int write_pmu_mappings(struct feat_fd *ff, * Do a first pass to count number of pmu to avoid lseek so this * works in pipe mode as well. */ - while ((pmu = perf_pmus__scan(pmu))) { - if (!pmu->name) - continue; + while ((pmu = perf_pmus__scan(pmu))) pmu_num++; - } ret = do_write(ff, &pmu_num, sizeof(pmu_num)); if (ret < 0) return ret; while ((pmu = perf_pmus__scan(pmu))) { - if (!pmu->name) - continue; - ret = do_write(ff, &pmu->type, sizeof(pmu->type)); if (ret < 0) return ret; @@ -1605,8 +1601,15 @@ static int write_pmu_caps(struct feat_fd *ff, int ret; while ((pmu = perf_pmus__scan(pmu))) { - if (!pmu->name || !strcmp(pmu->name, "cpu") || - perf_pmu__caps_parse(pmu) <= 0) + if (!strcmp(pmu->name, "cpu")) { + /* + * The "cpu" PMU is special and covered by + * HEADER_CPU_PMU_CAPS. Note, core PMUs are + * counted/written here for ARM, s390 and Intel hybrid. + */ + continue; + } + if (perf_pmu__caps_parse(pmu) <= 0) continue; nr_pmu++; } @@ -1619,23 +1622,17 @@ static int write_pmu_caps(struct feat_fd *ff, return 0; /* - * Write hybrid pmu caps first to maintain compatibility with - * older perf tool. + * Note older perf tools assume core PMUs come first, this is a property + * of perf_pmus__scan. */ - if (perf_pmus__num_core_pmus() > 1) { - pmu = NULL; - while ((pmu = perf_pmus__scan_core(pmu))) { - ret = __write_pmu_caps(ff, pmu, true); - if (ret < 0) - return ret; - } - } - pmu = NULL; while ((pmu = perf_pmus__scan(pmu))) { - if (pmu->is_core || !pmu->nr_caps) + if (!strcmp(pmu->name, "cpu")) { + /* Skip as above. 
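The expr.c/expr.l/expr.y hunks above add two conveniences for metric expressions: a #num_cpus_online literal (the number of CPUs in the online map) and a strcmp_cpuid_str() function that compares an identifier against the running core PMU's cpuid string, evaluating to 1.0 on a match, 0.0 on a mismatch, and NAN when no cpuid can be obtained. A minimal sketch of that comparison logic, using plain strcmp() and a stubbed cpuid in place of perf's helpers (the names and the cpuid value are illustrative assumptions, not perf API; the real strcmp_cpuid_str() helper may apply arch-specific matching rules rather than an exact compare):

#include <math.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for perf_pmu__getcpuid(); a real build reads this from the PMU. */
static const char *stub_cpuid(void)
{
	return "0x00000000410fd490";	/* hypothetical MIDR-style identifier */
}

/* Mirrors expr__strcmp_cpuid_str(): 1.0 match, 0.0 mismatch, NAN if unknown. */
static double strcmp_cpuid_str_sketch(const char *test_id)
{
	const char *cpuid = stub_cpuid();

	if (!cpuid)
		return NAN;
	return strcmp(test_id, cpuid) == 0 ? 1.0 : 0.0;
}

int main(void)
{
	printf("%f\n", strcmp_cpuid_str_sketch("0x00000000410fd490"));	/* 1.0 */
	printf("%f\n", strcmp_cpuid_str_sketch("0x00000000410fd0c0"));	/* 0.0 */
	return 0;
}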
*/ + continue; + } + if (perf_pmu__caps_parse(pmu) <= 0) continue; - ret = __write_pmu_caps(ff, pmu, true); if (ret < 0) return ret; @@ -4381,7 +4378,8 @@ int perf_event__process_attr(struct perf_tool *tool __maybe_unused, union perf_event *event, struct evlist **pevlist) { - u32 i, ids, n_ids; + u32 i, n_ids; + u64 *ids; struct evsel *evsel; struct evlist *evlist = *pevlist; @@ -4397,9 +4395,8 @@ int perf_event__process_attr(struct perf_tool *tool __maybe_unused, evlist__add(evlist, evsel); - ids = event->header.size; - ids -= (void *)&event->attr.id - (void *)event; - n_ids = ids / sizeof(u64); + n_ids = event->header.size - sizeof(event->header) - event->attr.attr.size; + n_ids = n_ids / sizeof(u64); /* * We don't have the cpu and thread maps on the header, so * for allocating the perf_sample_id table we fake 1 cpu and @@ -4408,8 +4405,9 @@ int perf_event__process_attr(struct perf_tool *tool __maybe_unused, if (perf_evsel__alloc_id(&evsel->core, 1, n_ids)) return -ENOMEM; + ids = perf_record_header_attr_id(event); for (i = 0; i < n_ids; i++) { - perf_evlist__id_add(&evlist->core, &evsel->core, 0, i, event->attr.id[i]); + perf_evlist__id_add(&evlist->core, &evsel->core, 0, i, ids[i]); } return 0; diff --git a/tools/perf/util/libunwind/arm64.c b/tools/perf/util/libunwind/arm64.c index 014d82159656..37ecef0c53b9 100644 --- a/tools/perf/util/libunwind/arm64.c +++ b/tools/perf/util/libunwind/arm64.c @@ -18,8 +18,6 @@ * defined before including "unwind.h" */ #define LIBUNWIND__ARCH_REG_ID(regnum) libunwind__arm64_reg_id(regnum) -#define LIBUNWIND__ARCH_REG_IP PERF_REG_ARM64_PC -#define LIBUNWIND__ARCH_REG_SP PERF_REG_ARM64_SP #include "unwind.h" #include "libunwind-aarch64.h" diff --git a/tools/perf/util/libunwind/x86_32.c b/tools/perf/util/libunwind/x86_32.c index b2b92d030aef..1697dece1b74 100644 --- a/tools/perf/util/libunwind/x86_32.c +++ b/tools/perf/util/libunwind/x86_32.c @@ -18,8 +18,6 @@ * defined before including "unwind.h" */ #define LIBUNWIND__ARCH_REG_ID(regnum) libunwind__x86_reg_id(regnum) -#define LIBUNWIND__ARCH_REG_IP PERF_REG_X86_IP -#define LIBUNWIND__ARCH_REG_SP PERF_REG_X86_SP #include "unwind.h" #include "libunwind-x86.h" diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c deleted file mode 100644 index c6c9c2228578..000000000000 --- a/tools/perf/util/llvm-utils.c +++ /dev/null @@ -1,612 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Copyright (C) 2015, Wang Nan <wangnan0@huawei.com> - * Copyright (C) 2015, Huawei Inc. 
- */ - -#include <errno.h> -#include <limits.h> -#include <stdio.h> -#include <stdlib.h> -#include <unistd.h> -#include <linux/err.h> -#include <linux/string.h> -#include <linux/zalloc.h> -#include "debug.h" -#include "llvm-utils.h" -#include "config.h" -#include "util.h" -#include <sys/wait.h> -#include <subcmd/exec-cmd.h> - -#define CLANG_BPF_CMD_DEFAULT_TEMPLATE \ - "$CLANG_EXEC -D__KERNEL__ -D__NR_CPUS__=$NR_CPUS "\ - "-DLINUX_VERSION_CODE=$LINUX_VERSION_CODE " \ - "$CLANG_OPTIONS $PERF_BPF_INC_OPTIONS $KERNEL_INC_OPTIONS " \ - "-Wno-unused-value -Wno-pointer-sign " \ - "-working-directory $WORKING_DIR " \ - "-c \"$CLANG_SOURCE\" --target=bpf $CLANG_EMIT_LLVM -g -O2 -o - $LLVM_OPTIONS_PIPE" - -struct llvm_param llvm_param = { - .clang_path = "clang", - .llc_path = "llc", - .clang_bpf_cmd_template = CLANG_BPF_CMD_DEFAULT_TEMPLATE, - .clang_opt = NULL, - .opts = NULL, - .kbuild_dir = NULL, - .kbuild_opts = NULL, - .user_set_param = false, -}; - -static void version_notice(void); - -int perf_llvm_config(const char *var, const char *value) -{ - if (!strstarts(var, "llvm.")) - return 0; - var += sizeof("llvm.") - 1; - - if (!strcmp(var, "clang-path")) - llvm_param.clang_path = strdup(value); - else if (!strcmp(var, "clang-bpf-cmd-template")) - llvm_param.clang_bpf_cmd_template = strdup(value); - else if (!strcmp(var, "clang-opt")) - llvm_param.clang_opt = strdup(value); - else if (!strcmp(var, "kbuild-dir")) - llvm_param.kbuild_dir = strdup(value); - else if (!strcmp(var, "kbuild-opts")) - llvm_param.kbuild_opts = strdup(value); - else if (!strcmp(var, "dump-obj")) - llvm_param.dump_obj = !!perf_config_bool(var, value); - else if (!strcmp(var, "opts")) - llvm_param.opts = strdup(value); - else { - pr_debug("Invalid LLVM config option: %s\n", value); - return -1; - } - llvm_param.user_set_param = true; - return 0; -} - -static int -search_program(const char *def, const char *name, - char *output) -{ - char *env, *path, *tmp = NULL; - char buf[PATH_MAX]; - int ret; - - output[0] = '\0'; - if (def && def[0] != '\0') { - if (def[0] == '/') { - if (access(def, F_OK) == 0) { - strlcpy(output, def, PATH_MAX); - return 0; - } - } else if (def[0] != '\0') - name = def; - } - - env = getenv("PATH"); - if (!env) - return -1; - env = strdup(env); - if (!env) - return -1; - - ret = -ENOENT; - path = strtok_r(env, ":", &tmp); - while (path) { - scnprintf(buf, sizeof(buf), "%s/%s", path, name); - if (access(buf, F_OK) == 0) { - strlcpy(output, buf, PATH_MAX); - ret = 0; - break; - } - path = strtok_r(NULL, ":", &tmp); - } - - free(env); - return ret; -} - -static int search_program_and_warn(const char *def, const char *name, - char *output) -{ - int ret = search_program(def, name, output); - - if (ret) { - pr_err("ERROR:\tunable to find %s.\n" - "Hint:\tTry to install latest clang/llvm to support BPF. Check your $PATH\n" - " \tand '%s-path' option in [llvm] section of ~/.perfconfig.\n", - name, name); - version_notice(); - } - return ret; -} - -#define READ_SIZE 4096 -static int -read_from_pipe(const char *cmd, void **p_buf, size_t *p_read_sz) -{ - int err = 0; - void *buf = NULL; - FILE *file = NULL; - size_t read_sz = 0, buf_sz = 0; - char serr[STRERR_BUFSIZE]; - - file = popen(cmd, "r"); - if (!file) { - pr_err("ERROR: unable to popen cmd: %s\n", - str_error_r(errno, serr, sizeof(serr))); - return -EINVAL; - } - - while (!feof(file) && !ferror(file)) { - /* - * Make buf_sz always have obe byte extra space so we - * can put '\0' there. 
- */ - if (buf_sz - read_sz < READ_SIZE + 1) { - void *new_buf; - - buf_sz = read_sz + READ_SIZE + 1; - new_buf = realloc(buf, buf_sz); - - if (!new_buf) { - pr_err("ERROR: failed to realloc memory\n"); - err = -ENOMEM; - goto errout; - } - - buf = new_buf; - } - read_sz += fread(buf + read_sz, 1, READ_SIZE, file); - } - - if (buf_sz - read_sz < 1) { - pr_err("ERROR: internal error\n"); - err = -EINVAL; - goto errout; - } - - if (ferror(file)) { - pr_err("ERROR: error occurred when reading from pipe: %s\n", - str_error_r(errno, serr, sizeof(serr))); - err = -EIO; - goto errout; - } - - err = WEXITSTATUS(pclose(file)); - file = NULL; - if (err) { - err = -EINVAL; - goto errout; - } - - /* - * If buf is string, give it terminal '\0' to make our life - * easier. If buf is not string, that '\0' is out of space - * indicated by read_sz so caller won't even notice it. - */ - ((char *)buf)[read_sz] = '\0'; - - if (!p_buf) - free(buf); - else - *p_buf = buf; - - if (p_read_sz) - *p_read_sz = read_sz; - return 0; - -errout: - if (file) - pclose(file); - free(buf); - if (p_buf) - *p_buf = NULL; - if (p_read_sz) - *p_read_sz = 0; - return err; -} - -static inline void -force_set_env(const char *var, const char *value) -{ - if (value) { - setenv(var, value, 1); - pr_debug("set env: %s=%s\n", var, value); - } else { - unsetenv(var); - pr_debug("unset env: %s\n", var); - } -} - -static void -version_notice(void) -{ - pr_err( -" \tLLVM 3.7 or newer is required. Which can be found from http://llvm.org\n" -" \tYou may want to try git trunk:\n" -" \t\tgit clone http://llvm.org/git/llvm.git\n" -" \t\t and\n" -" \t\tgit clone http://llvm.org/git/clang.git\n\n" -" \tOr fetch the latest clang/llvm 3.7 from pre-built llvm packages for\n" -" \tdebian/ubuntu:\n" -" \t\thttps://apt.llvm.org/\n\n" -" \tIf you are using old version of clang, change 'clang-bpf-cmd-template'\n" -" \toption in [llvm] section of ~/.perfconfig to:\n\n" -" \t \"$CLANG_EXEC $CLANG_OPTIONS $KERNEL_INC_OPTIONS $PERF_BPF_INC_OPTIONS \\\n" -" \t -working-directory $WORKING_DIR -c $CLANG_SOURCE \\\n" -" \t -emit-llvm -o - | /path/to/llc -march=bpf -filetype=obj -o -\"\n" -" \t(Replace /path/to/llc with path to your llc)\n\n" -); -} - -static int detect_kbuild_dir(char **kbuild_dir) -{ - const char *test_dir = llvm_param.kbuild_dir; - const char *prefix_dir = ""; - const char *suffix_dir = ""; - - /* _UTSNAME_LENGTH is 65 */ - char release[128]; - - char *autoconf_path; - - int err; - - if (!test_dir) { - err = fetch_kernel_version(NULL, release, - sizeof(release)); - if (err) - return -EINVAL; - - test_dir = release; - prefix_dir = "/lib/modules/"; - suffix_dir = "/build"; - } - - err = asprintf(&autoconf_path, "%s%s%s/include/generated/autoconf.h", - prefix_dir, test_dir, suffix_dir); - if (err < 0) - return -ENOMEM; - - if (access(autoconf_path, R_OK) == 0) { - free(autoconf_path); - - err = asprintf(kbuild_dir, "%s%s%s", prefix_dir, test_dir, - suffix_dir); - if (err < 0) - return -ENOMEM; - return 0; - } - pr_debug("%s: Couldn't find \"%s\", missing kernel-devel package?.\n", - __func__, autoconf_path); - free(autoconf_path); - return -ENOENT; -} - -static const char *kinc_fetch_script = -"#!/usr/bin/env sh\n" -"if ! test -d \"$KBUILD_DIR\"\n" -"then\n" -" exit 1\n" -"fi\n" -"if ! 
test -f \"$KBUILD_DIR/include/generated/autoconf.h\"\n" -"then\n" -" exit 1\n" -"fi\n" -"TMPDIR=`mktemp -d`\n" -"if test -z \"$TMPDIR\"\n" -"then\n" -" exit 1\n" -"fi\n" -"cat << EOF > $TMPDIR/Makefile\n" -"obj-y := dummy.o\n" -"\\$(obj)/%.o: \\$(src)/%.c\n" -"\t@echo -n \"\\$(NOSTDINC_FLAGS) \\$(LINUXINCLUDE) \\$(EXTRA_CFLAGS)\"\n" -"\t\\$(CC) -c -o \\$@ \\$<\n" -"EOF\n" -"touch $TMPDIR/dummy.c\n" -"make -s -C $KBUILD_DIR M=$TMPDIR $KBUILD_OPTS dummy.o 2>/dev/null\n" -"RET=$?\n" -"rm -rf $TMPDIR\n" -"exit $RET\n"; - -void llvm__get_kbuild_opts(char **kbuild_dir, char **kbuild_include_opts) -{ - static char *saved_kbuild_dir; - static char *saved_kbuild_include_opts; - int err; - - if (!kbuild_dir || !kbuild_include_opts) - return; - - *kbuild_dir = NULL; - *kbuild_include_opts = NULL; - - if (saved_kbuild_dir && saved_kbuild_include_opts && - !IS_ERR(saved_kbuild_dir) && !IS_ERR(saved_kbuild_include_opts)) { - *kbuild_dir = strdup(saved_kbuild_dir); - *kbuild_include_opts = strdup(saved_kbuild_include_opts); - - if (*kbuild_dir && *kbuild_include_opts) - return; - - zfree(kbuild_dir); - zfree(kbuild_include_opts); - /* - * Don't fall through: it may breaks saved_kbuild_dir and - * saved_kbuild_include_opts if detect them again when - * memory is low. - */ - return; - } - - if (llvm_param.kbuild_dir && !llvm_param.kbuild_dir[0]) { - pr_debug("[llvm.kbuild-dir] is set to \"\" deliberately.\n"); - pr_debug("Skip kbuild options detection.\n"); - goto errout; - } - - err = detect_kbuild_dir(kbuild_dir); - if (err) { - pr_warning( -"WARNING:\tunable to get correct kernel building directory.\n" -"Hint:\tSet correct kbuild directory using 'kbuild-dir' option in [llvm]\n" -" \tsection of ~/.perfconfig or set it to \"\" to suppress kbuild\n" -" \tdetection.\n\n"); - goto errout; - } - - pr_debug("Kernel build dir is set to %s\n", *kbuild_dir); - force_set_env("KBUILD_DIR", *kbuild_dir); - force_set_env("KBUILD_OPTS", llvm_param.kbuild_opts); - err = read_from_pipe(kinc_fetch_script, - (void **)kbuild_include_opts, - NULL); - if (err) { - pr_warning( -"WARNING:\tunable to get kernel include directories from '%s'\n" -"Hint:\tTry set clang include options using 'clang-bpf-cmd-template'\n" -" \toption in [llvm] section of ~/.perfconfig and set 'kbuild-dir'\n" -" \toption in [llvm] to \"\" to suppress this detection.\n\n", - *kbuild_dir); - - zfree(kbuild_dir); - goto errout; - } - - pr_debug("include option is set to %s\n", *kbuild_include_opts); - - saved_kbuild_dir = strdup(*kbuild_dir); - saved_kbuild_include_opts = strdup(*kbuild_include_opts); - - if (!saved_kbuild_dir || !saved_kbuild_include_opts) { - zfree(&saved_kbuild_dir); - zfree(&saved_kbuild_include_opts); - } - return; -errout: - saved_kbuild_dir = ERR_PTR(-EINVAL); - saved_kbuild_include_opts = ERR_PTR(-EINVAL); -} - -int llvm__get_nr_cpus(void) -{ - static int nr_cpus_avail = 0; - char serr[STRERR_BUFSIZE]; - - if (nr_cpus_avail > 0) - return nr_cpus_avail; - - nr_cpus_avail = sysconf(_SC_NPROCESSORS_CONF); - if (nr_cpus_avail <= 0) { - pr_err( -"WARNING:\tunable to get available CPUs in this system: %s\n" -" \tUse 128 instead.\n", str_error_r(errno, serr, sizeof(serr))); - nr_cpus_avail = 128; - } - return nr_cpus_avail; -} - -void llvm__dump_obj(const char *path, void *obj_buf, size_t size) -{ - char *obj_path = strdup(path); - FILE *fp; - char *p; - - if (!obj_path) { - pr_warning("WARNING: Not enough memory, skip object dumping\n"); - return; - } - - p = strrchr(obj_path, '.'); - if (!p || (strcmp(p, ".c") != 0)) { - 
pr_warning("WARNING: invalid llvm source path: '%s', skip object dumping\n", - obj_path); - goto out; - } - - p[1] = 'o'; - fp = fopen(obj_path, "wb"); - if (!fp) { - pr_warning("WARNING: failed to open '%s': %s, skip object dumping\n", - obj_path, strerror(errno)); - goto out; - } - - pr_debug("LLVM: dumping %s\n", obj_path); - if (fwrite(obj_buf, size, 1, fp) != 1) - pr_debug("WARNING: failed to write to file '%s': %s, skip object dumping\n", obj_path, strerror(errno)); - fclose(fp); -out: - free(obj_path); -} - -int llvm__compile_bpf(const char *path, void **p_obj_buf, - size_t *p_obj_buf_sz) -{ - size_t obj_buf_sz; - void *obj_buf = NULL; - int err, nr_cpus_avail; - unsigned int kernel_version; - char linux_version_code_str[64]; - const char *clang_opt = llvm_param.clang_opt; - char clang_path[PATH_MAX], llc_path[PATH_MAX], abspath[PATH_MAX], nr_cpus_avail_str[64]; - char serr[STRERR_BUFSIZE]; - char *kbuild_dir = NULL, *kbuild_include_opts = NULL, - *perf_bpf_include_opts = NULL; - const char *template = llvm_param.clang_bpf_cmd_template; - char *pipe_template = NULL; - const char *opts = llvm_param.opts; - char *command_echo = NULL, *command_out; - char *libbpf_include_dir = system_path(LIBBPF_INCLUDE_DIR); - - if (path[0] != '-' && realpath(path, abspath) == NULL) { - err = errno; - pr_err("ERROR: problems with path %s: %s\n", - path, str_error_r(err, serr, sizeof(serr))); - return -err; - } - - if (!template) - template = CLANG_BPF_CMD_DEFAULT_TEMPLATE; - - err = search_program_and_warn(llvm_param.clang_path, - "clang", clang_path); - if (err) - return -ENOENT; - - /* - * This is an optional work. Even it fail we can continue our - * work. Needn't check error return. - */ - llvm__get_kbuild_opts(&kbuild_dir, &kbuild_include_opts); - - nr_cpus_avail = llvm__get_nr_cpus(); - snprintf(nr_cpus_avail_str, sizeof(nr_cpus_avail_str), "%d", - nr_cpus_avail); - - if (fetch_kernel_version(&kernel_version, NULL, 0)) - kernel_version = 0; - - snprintf(linux_version_code_str, sizeof(linux_version_code_str), - "0x%x", kernel_version); - if (asprintf(&perf_bpf_include_opts, "-I%s/", libbpf_include_dir) < 0) - goto errout; - force_set_env("NR_CPUS", nr_cpus_avail_str); - force_set_env("LINUX_VERSION_CODE", linux_version_code_str); - force_set_env("CLANG_EXEC", clang_path); - force_set_env("CLANG_OPTIONS", clang_opt); - force_set_env("KERNEL_INC_OPTIONS", kbuild_include_opts); - force_set_env("PERF_BPF_INC_OPTIONS", perf_bpf_include_opts); - force_set_env("WORKING_DIR", kbuild_dir ? : "."); - - if (opts) { - err = search_program_and_warn(llvm_param.llc_path, "llc", llc_path); - if (err) - goto errout; - - err = -ENOMEM; - if (asprintf(&pipe_template, "%s -emit-llvm | %s -march=bpf %s -filetype=obj -o -", - template, llc_path, opts) < 0) { - pr_err("ERROR:\tnot enough memory to setup command line\n"); - goto errout; - } - - template = pipe_template; - - } - - /* - * Since we may reset clang's working dir, path of source file - * should be transferred into absolute path, except we want - * stdin to be source file (testing). - */ - force_set_env("CLANG_SOURCE", - (path[0] == '-') ? path : abspath); - - pr_debug("llvm compiling command template: %s\n", template); - - /* - * Below, substitute control characters for values that can cause the - * echo to misbehave, then substitute the values back. 
- */ - err = -ENOMEM; - if (asprintf(&command_echo, "echo -n \a%s\a", template) < 0) - goto errout; - -#define SWAP_CHAR(a, b) do { if (*p == a) *p = b; } while (0) - for (char *p = command_echo; *p; p++) { - SWAP_CHAR('<', '\001'); - SWAP_CHAR('>', '\002'); - SWAP_CHAR('"', '\003'); - SWAP_CHAR('\'', '\004'); - SWAP_CHAR('|', '\005'); - SWAP_CHAR('&', '\006'); - SWAP_CHAR('\a', '"'); - } - err = read_from_pipe(command_echo, (void **) &command_out, NULL); - if (err) - goto errout; - - for (char *p = command_out; *p; p++) { - SWAP_CHAR('\001', '<'); - SWAP_CHAR('\002', '>'); - SWAP_CHAR('\003', '"'); - SWAP_CHAR('\004', '\''); - SWAP_CHAR('\005', '|'); - SWAP_CHAR('\006', '&'); - } -#undef SWAP_CHAR - pr_debug("llvm compiling command : %s\n", command_out); - - err = read_from_pipe(template, &obj_buf, &obj_buf_sz); - if (err) { - pr_err("ERROR:\tunable to compile %s\n", path); - pr_err("Hint:\tCheck error message shown above.\n"); - pr_err("Hint:\tYou can also pre-compile it into .o using:\n"); - pr_err(" \t\tclang --target=bpf -O2 -c %s\n", path); - pr_err(" \twith proper -I and -D options.\n"); - goto errout; - } - - free(command_echo); - free(command_out); - free(kbuild_dir); - free(kbuild_include_opts); - free(perf_bpf_include_opts); - free(libbpf_include_dir); - - if (!p_obj_buf) - free(obj_buf); - else - *p_obj_buf = obj_buf; - - if (p_obj_buf_sz) - *p_obj_buf_sz = obj_buf_sz; - return 0; -errout: - free(command_echo); - free(kbuild_dir); - free(kbuild_include_opts); - free(obj_buf); - free(perf_bpf_include_opts); - free(libbpf_include_dir); - free(pipe_template); - if (p_obj_buf) - *p_obj_buf = NULL; - if (p_obj_buf_sz) - *p_obj_buf_sz = 0; - return err; -} - -int llvm__search_clang(void) -{ - char clang_path[PATH_MAX]; - - return search_program_and_warn(llvm_param.clang_path, "clang", clang_path); -} diff --git a/tools/perf/util/llvm-utils.h b/tools/perf/util/llvm-utils.h deleted file mode 100644 index 7878a0e3fa98..000000000000 --- a/tools/perf/util/llvm-utils.h +++ /dev/null @@ -1,69 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * Copyright (C) 2015, Wang Nan <wangnan0@huawei.com> - * Copyright (C) 2015, Huawei Inc. - */ -#ifndef __LLVM_UTILS_H -#define __LLVM_UTILS_H - -#include <stdbool.h> - -struct llvm_param { - /* Path of clang executable */ - const char *clang_path; - /* Path of llc executable */ - const char *llc_path; - /* - * Template of clang bpf compiling. 5 env variables - * can be used: - * $CLANG_EXEC: Path to clang. - * $CLANG_OPTIONS: Extra options to clang. - * $KERNEL_INC_OPTIONS: Kernel include directories. - * $WORKING_DIR: Kernel source directory. - * $CLANG_SOURCE: Source file to be compiled. - */ - const char *clang_bpf_cmd_template; - /* Will be filled in $CLANG_OPTIONS */ - const char *clang_opt; - /* - * If present it'll add -emit-llvm to $CLANG_OPTIONS to pipe - * the clang output to llc, useful for new llvm options not - * yet selectable via 'clang -mllvm option', such as -mattr=dwarfris - * in clang 6.0/llvm 7 - */ - const char *opts; - /* Where to find kbuild system */ - const char *kbuild_dir; - /* - * Arguments passed to make, like 'ARCH=arm' if doing cross - * compiling. Should not be used for dynamic compiling. - */ - const char *kbuild_opts; - /* - * Default is false. If set to true, write compiling result - * to object file. - */ - bool dump_obj; - /* - * Default is false. If one of the above fields is set by user - * explicitly then user_set_llvm is set to true. This is used - * for perf test. 
If user doesn't set anything in .perfconfig - * and clang is not found, don't trigger llvm test. - */ - bool user_set_param; -}; - -extern struct llvm_param llvm_param; -int perf_llvm_config(const char *var, const char *value); - -int llvm__compile_bpf(const char *path, void **p_obj_buf, size_t *p_obj_buf_sz); - -/* This function is for test__llvm() use only */ -int llvm__search_clang(void); - -/* Following functions are reused by builtin clang support */ -void llvm__get_kbuild_opts(char **kbuild_dir, char **kbuild_include_opts); -int llvm__get_nr_cpus(void); - -void llvm__dump_obj(const char *path, void *obj_buf, size_t size); -#endif diff --git a/tools/perf/util/lzma.c b/tools/perf/util/lzma.c index 51424cdc3b68..af9a97612f9d 100644 --- a/tools/perf/util/lzma.c +++ b/tools/perf/util/lzma.c @@ -45,15 +45,13 @@ int lzma_decompress_to_file(const char *input, int output_fd) infile = fopen(input, "rb"); if (!infile) { - pr_err("lzma: fopen failed on %s: '%s'\n", - input, strerror(errno)); + pr_debug("lzma: fopen failed on %s: '%s'\n", input, strerror(errno)); return -1; } ret = lzma_stream_decoder(&strm, UINT64_MAX, LZMA_CONCATENATED); if (ret != LZMA_OK) { - pr_err("lzma: lzma_stream_decoder failed %s (%d)\n", - lzma_strerror(ret), ret); + pr_debug("lzma: lzma_stream_decoder failed %s (%d)\n", lzma_strerror(ret), ret); goto err_fclose; } @@ -68,7 +66,7 @@ int lzma_decompress_to_file(const char *input, int output_fd) strm.avail_in = fread(buf_in, 1, sizeof(buf_in), infile); if (ferror(infile)) { - pr_err("lzma: read error: %s\n", strerror(errno)); + pr_debug("lzma: read error: %s\n", strerror(errno)); goto err_lzma_end; } @@ -82,7 +80,7 @@ int lzma_decompress_to_file(const char *input, int output_fd) ssize_t write_size = sizeof(buf_out) - strm.avail_out; if (writen(output_fd, buf_out, write_size) != write_size) { - pr_err("lzma: write error: %s\n", strerror(errno)); + pr_debug("lzma: write error: %s\n", strerror(errno)); goto err_lzma_end; } @@ -94,7 +92,7 @@ int lzma_decompress_to_file(const char *input, int output_fd) if (ret == LZMA_STREAM_END) break; - pr_err("lzma: failed %s\n", lzma_strerror(ret)); + pr_debug("lzma: failed %s\n", lzma_strerror(ret)); goto err_lzma_end; } } diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index f4cb41ee23cd..88f31b3a63ac 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -1215,7 +1215,9 @@ static int machine__get_running_kernel_start(struct machine *machine, *start = addr; - err = kallsyms__get_function_start(filename, "_etext", &addr); + err = kallsyms__get_symbol_start(filename, "_edata", &addr); + if (err) + err = kallsyms__get_function_start(filename, "_etext", &addr); if (!err) *end = addr; diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c index c07fe3a90722..39ffe8ceb380 100644 --- a/tools/perf/util/mem-events.c +++ b/tools/perf/util/mem-events.c @@ -37,7 +37,7 @@ struct perf_mem_event * __weak perf_mem_events__ptr(int i) return &perf_mem_events[i]; } -char * __weak perf_mem_events__name(int i, char *pmu_name __maybe_unused) +const char * __weak perf_mem_events__name(int i, const char *pmu_name __maybe_unused) { struct perf_mem_event *e = perf_mem_events__ptr(i); @@ -53,7 +53,7 @@ char * __weak perf_mem_events__name(int i, char *pmu_name __maybe_unused) return mem_loads_name; } - return (char *)e->name; + return e->name; } __weak bool is_mem_loads_aux_event(struct evsel *leader __maybe_unused) @@ -186,7 +186,6 @@ int perf_mem_events__record_args(const char **rec_argv, int *argv_nr, int 
i = *argv_nr, k = 0; struct perf_mem_event *e; struct perf_pmu *pmu; - char *s; for (int j = 0; j < PERF_MEM_EVENTS__MAX; j++) { e = perf_mem_events__ptr(j); @@ -209,15 +208,16 @@ int perf_mem_events__record_args(const char **rec_argv, int *argv_nr, } while ((pmu = perf_pmus__scan(pmu)) != NULL) { + const char *s = perf_mem_events__name(j, pmu->name); + rec_argv[i++] = "-e"; - s = perf_mem_events__name(j, pmu->name); if (s) { - s = strdup(s); - if (!s) + char *copy = strdup(s); + if (!copy) return -1; - rec_argv[i++] = s; - rec_tmp[k++] = s; + rec_argv[i++] = copy; + rec_tmp[k++] = copy; } } } diff --git a/tools/perf/util/mem-events.h b/tools/perf/util/mem-events.h index 12372309d60e..b40ad6ea93fc 100644 --- a/tools/perf/util/mem-events.h +++ b/tools/perf/util/mem-events.h @@ -38,7 +38,7 @@ extern unsigned int perf_mem_events__loads_ldlat; int perf_mem_events__parse(const char *str); int perf_mem_events__init(void); -char *perf_mem_events__name(int i, char *pmu_name); +const char *perf_mem_events__name(int i, const char *pmu_name); struct perf_mem_event *perf_mem_events__ptr(int i); bool is_mem_loads_aux_event(struct evsel *leader); diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c index a6a5ed44a679..6231044a491e 100644 --- a/tools/perf/util/metricgroup.c +++ b/tools/perf/util/metricgroup.c @@ -527,7 +527,7 @@ void metricgroup__print(const struct print_callbacks *print_cb, void *print_stat groups.node_delete = mep_delete; table = pmu_metrics_table__find(); if (table) { - pmu_metrics_table_for_each_metric(table, + pmu_metrics_table__for_each_metric(table, metricgroup__add_to_mep_groups_callback, &groups); } @@ -1069,7 +1069,7 @@ static bool metricgroup__find_metric(const char *pmu, .pm = pm, }; - return pmu_metrics_table_for_each_metric(table, metricgroup__find_metric_callback, &data) + return pmu_metrics_table__for_each_metric(table, metricgroup__find_metric_callback, &data) ? true : false; } @@ -1255,7 +1255,7 @@ static int metricgroup__add_metric(const char *pmu, const char *metric_name, con * Iterate over all metrics seeing if metric matches either the * name or group. When it does add the metric to the list. */ - ret = pmu_metrics_table_for_each_metric(table, metricgroup__add_metric_callback, + ret = pmu_metrics_table__for_each_metric(table, metricgroup__add_metric_callback, &data); if (ret) goto out; @@ -1740,7 +1740,7 @@ bool metricgroup__has_metric(const char *pmu, const char *metric) if (!table) return false; - return pmu_metrics_table_for_each_metric(table, metricgroup__has_metric_callback, &data) + return pmu_metrics_table__for_each_metric(table, metricgroup__has_metric_callback, &data) ? 
true : false; } @@ -1770,7 +1770,7 @@ unsigned int metricgroups__topdown_max_level(void) if (!table) return false; - pmu_metrics_table_for_each_metric(table, metricgroup__topdown_max_level_callback, + pmu_metrics_table__for_each_metric(table, metricgroup__topdown_max_level_callback, &max_level); return max_level; } diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c index c9ec0cafb69d..65608a3cba81 100644 --- a/tools/perf/util/parse-events.c +++ b/tools/perf/util/parse-events.c @@ -13,13 +13,12 @@ #include <subcmd/parse-options.h> #include "parse-events.h" #include "string2.h" -#include "strlist.h" -#include "bpf-loader.h" +#include "strbuf.h" #include "debug.h" #include <api/fs/tracing_path.h> #include <perf/cpumap.h> -#include "parse-events-bison.h" -#include "parse-events-flex.h" +#include <util/parse-events-bison.h> +#include <util/parse-events-flex.h> #include "pmu.h" #include "pmus.h" #include "asm/bug.h" @@ -35,7 +34,6 @@ #ifdef PARSER_DEBUG extern int parse_events_debug; #endif -int parse_events_parse(void *parse_state, void *scanner); static int get_config_terms(struct list_head *head_config, struct list_head *head_terms __maybe_unused); @@ -155,7 +153,7 @@ const char *event_type(int type) return "unknown"; } -static char *get_config_str(struct list_head *head_terms, int type_term) +static char *get_config_str(struct list_head *head_terms, enum parse_events__term_type type_term) { struct parse_events_term *term; @@ -195,38 +193,31 @@ static void fix_raw(struct list_head *config_terms, struct perf_pmu *pmu) struct parse_events_term *term; list_for_each_entry(term, config_terms, list) { - struct perf_pmu_alias *alias; - bool matched = false; + u64 num; if (term->type_term != PARSE_EVENTS__TERM_TYPE_RAW) continue; - list_for_each_entry(alias, &pmu->aliases, list) { - if (!strcmp(alias->name, term->val.str)) { - free(term->config); - term->config = term->val.str; - term->type_val = PARSE_EVENTS__TERM_TYPE_NUM; - term->type_term = PARSE_EVENTS__TERM_TYPE_USER; - term->val.num = 1; - term->no_value = true; - matched = true; - break; - } - } - if (!matched) { - u64 num; - - free(term->config); - term->config = strdup("config"); - errno = 0; - num = strtoull(term->val.str + 1, NULL, 16); - assert(errno == 0); - free(term->val.str); + if (perf_pmu__have_event(pmu, term->val.str)) { + zfree(&term->config); + term->config = term->val.str; term->type_val = PARSE_EVENTS__TERM_TYPE_NUM; - term->type_term = PARSE_EVENTS__TERM_TYPE_CONFIG; - term->val.num = num; - term->no_value = false; + term->type_term = PARSE_EVENTS__TERM_TYPE_USER; + term->val.num = 1; + term->no_value = true; + continue; } + + zfree(&term->config); + term->config = strdup("config"); + errno = 0; + num = strtoull(term->val.str + 1, NULL, 16); + assert(errno == 0); + free(term->val.str); + term->type_val = PARSE_EVENTS__TERM_TYPE_NUM; + term->type_term = PARSE_EVENTS__TERM_TYPE_CONFIG; + term->val.num = num; + term->no_value = false; } } @@ -271,7 +262,7 @@ __add_event(struct list_head *list, int *idx, evsel->core.is_pmu_core = pmu ? pmu->is_core : false; evsel->auto_merge_stats = auto_merge_stats; evsel->pmu = pmu; - evsel->pmu_name = pmu && pmu->name ? strdup(pmu->name) : NULL; + evsel->pmu_name = pmu ? 
strdup(pmu->name) : NULL; if (name) evsel->name = strdup(name); @@ -446,9 +437,6 @@ bool parse_events__filter_pmu(const struct parse_events_state *parse_state, if (parse_state->pmu_filter == NULL) return false; - if (pmu->name == NULL) - return true; - return strcmp(parse_state->pmu_filter, pmu->name) != 0; } @@ -499,7 +487,7 @@ int parse_events_add_cache(struct list_head *list, int *idx, const char *name, #ifdef HAVE_LIBTRACEEVENT static void tracepoint_error(struct parse_events_error *e, int err, - const char *sys, const char *name) + const char *sys, const char *name, int column) { const char *str; char help[BUFSIZ]; @@ -526,18 +514,19 @@ static void tracepoint_error(struct parse_events_error *e, int err, } tracing_path__strerror_open_tp(err, help, sizeof(help), sys, name); - parse_events_error__handle(e, 0, strdup(str), strdup(help)); + parse_events_error__handle(e, column, strdup(str), strdup(help)); } static int add_tracepoint(struct list_head *list, int *idx, const char *sys_name, const char *evt_name, struct parse_events_error *err, - struct list_head *head_config) + struct list_head *head_config, void *loc_) { + YYLTYPE *loc = loc_; struct evsel *evsel = evsel__newtp_idx(sys_name, evt_name, (*idx)++); if (IS_ERR(evsel)) { - tracepoint_error(err, PTR_ERR(evsel), sys_name, evt_name); + tracepoint_error(err, PTR_ERR(evsel), sys_name, evt_name, loc->first_column); return PTR_ERR(evsel); } @@ -556,7 +545,7 @@ static int add_tracepoint(struct list_head *list, int *idx, static int add_tracepoint_multi_event(struct list_head *list, int *idx, const char *sys_name, const char *evt_name, struct parse_events_error *err, - struct list_head *head_config) + struct list_head *head_config, YYLTYPE *loc) { char *evt_path; struct dirent *evt_ent; @@ -565,13 +554,13 @@ static int add_tracepoint_multi_event(struct list_head *list, int *idx, evt_path = get_events_file(sys_name); if (!evt_path) { - tracepoint_error(err, errno, sys_name, evt_name); + tracepoint_error(err, errno, sys_name, evt_name, loc->first_column); return -1; } evt_dir = opendir(evt_path); if (!evt_dir) { put_events_file(evt_path); - tracepoint_error(err, errno, sys_name, evt_name); + tracepoint_error(err, errno, sys_name, evt_name, loc->first_column); return -1; } @@ -588,11 +577,11 @@ static int add_tracepoint_multi_event(struct list_head *list, int *idx, found++; ret = add_tracepoint(list, idx, sys_name, evt_ent->d_name, - err, head_config); + err, head_config, loc); } if (!found) { - tracepoint_error(err, ENOENT, sys_name, evt_name); + tracepoint_error(err, ENOENT, sys_name, evt_name, loc->first_column); ret = -1; } @@ -604,19 +593,19 @@ static int add_tracepoint_multi_event(struct list_head *list, int *idx, static int add_tracepoint_event(struct list_head *list, int *idx, const char *sys_name, const char *evt_name, struct parse_events_error *err, - struct list_head *head_config) + struct list_head *head_config, YYLTYPE *loc) { return strpbrk(evt_name, "*?") ? 
- add_tracepoint_multi_event(list, idx, sys_name, evt_name, - err, head_config) : - add_tracepoint(list, idx, sys_name, evt_name, - err, head_config); + add_tracepoint_multi_event(list, idx, sys_name, evt_name, + err, head_config, loc) : + add_tracepoint(list, idx, sys_name, evt_name, + err, head_config, loc); } static int add_tracepoint_multi_sys(struct list_head *list, int *idx, const char *sys_name, const char *evt_name, struct parse_events_error *err, - struct list_head *head_config) + struct list_head *head_config, YYLTYPE *loc) { struct dirent *events_ent; DIR *events_dir; @@ -624,7 +613,7 @@ static int add_tracepoint_multi_sys(struct list_head *list, int *idx, events_dir = tracing_events__opendir(); if (!events_dir) { - tracepoint_error(err, errno, sys_name, evt_name); + tracepoint_error(err, errno, sys_name, evt_name, loc->first_column); return -1; } @@ -640,7 +629,7 @@ static int add_tracepoint_multi_sys(struct list_head *list, int *idx, continue; ret = add_tracepoint_event(list, idx, events_ent->d_name, - evt_name, err, head_config); + evt_name, err, head_config, loc); } closedir(events_dir); @@ -648,264 +637,6 @@ static int add_tracepoint_multi_sys(struct list_head *list, int *idx, } #endif /* HAVE_LIBTRACEEVENT */ -#ifdef HAVE_LIBBPF_SUPPORT -struct __add_bpf_event_param { - struct parse_events_state *parse_state; - struct list_head *list; - struct list_head *head_config; -}; - -static int add_bpf_event(const char *group, const char *event, int fd, struct bpf_object *obj, - void *_param) -{ - LIST_HEAD(new_evsels); - struct __add_bpf_event_param *param = _param; - struct parse_events_state *parse_state = param->parse_state; - struct list_head *list = param->list; - struct evsel *pos; - int err; - /* - * Check if we should add the event, i.e. if it is a TP but starts with a '!', - * then don't add the tracepoint, this will be used for something else, like - * adding to a BPF_MAP_TYPE_PROG_ARRAY. - * - * See tools/perf/examples/bpf/augmented_raw_syscalls.c - */ - if (group[0] == '!') - return 0; - - pr_debug("add bpf event %s:%s and attach bpf program %d\n", - group, event, fd); - - err = parse_events_add_tracepoint(&new_evsels, &parse_state->idx, group, - event, parse_state->error, - param->head_config); - if (err) { - struct evsel *evsel, *tmp; - - pr_debug("Failed to add BPF event %s:%s\n", - group, event); - list_for_each_entry_safe(evsel, tmp, &new_evsels, core.node) { - list_del_init(&evsel->core.node); - evsel__delete(evsel); - } - return err; - } - pr_debug("adding %s:%s\n", group, event); - - list_for_each_entry(pos, &new_evsels, core.node) { - pr_debug("adding %s:%s to %p\n", - group, event, pos); - pos->bpf_fd = fd; - pos->bpf_obj = obj; - } - list_splice(&new_evsels, list); - return 0; -} - -int parse_events_load_bpf_obj(struct parse_events_state *parse_state, - struct list_head *list, - struct bpf_object *obj, - struct list_head *head_config) -{ - int err; - char errbuf[BUFSIZ]; - struct __add_bpf_event_param param = {parse_state, list, head_config}; - static bool registered_unprobe_atexit = false; - - if (IS_ERR(obj) || !obj) { - snprintf(errbuf, sizeof(errbuf), - "Internal error: load bpf obj with NULL"); - err = -EINVAL; - goto errout; - } - - /* - * Register atexit handler before calling bpf__probe() so - * bpf__probe() don't need to unprobe probe points its already - * created when failure. 
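With the YYLTYPE plumbing added above, tracepoint errors now carry the column of the offending token (loc->first_column) instead of a hard-coded 0, so the reported error can point into the event string the user typed. A rough, self-contained illustration of what having that column enables — the output format here is invented for the sketch, not perf's actual error rendering:

#include <stdio.h>

/* Print the offending event string with a caret under the column the
 * parser recorded; 21 == strlen("event syntax error: '"). */
static void report_parse_error(const char *event_str, int column, const char *msg)
{
	fprintf(stderr, "event syntax error: '%s'\n", event_str);
	fprintf(stderr, "%*s^-- %s\n", 21 + column, "", msg);
}

int main(void)
{
	/* hypothetical input: the event name after "sched:" starts at column 6 */
	report_parse_error("sched:not_a_real_event", 6, "unknown tracepoint");
	return 0;
}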
- */ - if (!registered_unprobe_atexit) { - atexit(bpf__clear); - registered_unprobe_atexit = true; - } - - err = bpf__probe(obj); - if (err) { - bpf__strerror_probe(obj, err, errbuf, sizeof(errbuf)); - goto errout; - } - - err = bpf__load(obj); - if (err) { - bpf__strerror_load(obj, err, errbuf, sizeof(errbuf)); - goto errout; - } - - err = bpf__foreach_event(obj, add_bpf_event, ¶m); - if (err) { - snprintf(errbuf, sizeof(errbuf), - "Attach events in BPF object failed"); - goto errout; - } - - return 0; -errout: - parse_events_error__handle(parse_state->error, 0, - strdup(errbuf), strdup("(add -v to see detail)")); - return err; -} - -static int -parse_events_config_bpf(struct parse_events_state *parse_state, - struct bpf_object *obj, - struct list_head *head_config) -{ - struct parse_events_term *term; - int error_pos; - - if (!head_config || list_empty(head_config)) - return 0; - - list_for_each_entry(term, head_config, list) { - int err; - - if (term->type_term != PARSE_EVENTS__TERM_TYPE_USER) { - parse_events_error__handle(parse_state->error, term->err_term, - strdup("Invalid config term for BPF object"), - NULL); - return -EINVAL; - } - - err = bpf__config_obj(obj, term, parse_state->evlist, &error_pos); - if (err) { - char errbuf[BUFSIZ]; - int idx; - - bpf__strerror_config_obj(obj, term, parse_state->evlist, - &error_pos, err, errbuf, - sizeof(errbuf)); - - if (err == -BPF_LOADER_ERRNO__OBJCONF_MAP_VALUE) - idx = term->err_val; - else - idx = term->err_term + error_pos; - - parse_events_error__handle(parse_state->error, idx, - strdup(errbuf), - strdup( -"Hint:\tValid config terms:\n" -" \tmap:[<arraymap>].value<indices>=[value]\n" -" \tmap:[<eventmap>].event<indices>=[event]\n" -"\n" -" \twhere <indices> is something like [0,3...5] or [all]\n" -" \t(add -v to see detail)")); - return err; - } - } - return 0; -} - -/* - * Split config terms: - * perf record -e bpf.c/call-graph=fp,map:array.value[0]=1/ ... - * 'call-graph=fp' is 'evt config', should be applied to each - * events in bpf.c. - * 'map:array.value[0]=1' is 'obj config', should be processed - * with parse_events_config_bpf. - * - * Move object config terms from the first list to obj_head_config. - */ -static void -split_bpf_config_terms(struct list_head *evt_head_config, - struct list_head *obj_head_config) -{ - struct parse_events_term *term, *temp; - - /* - * Currently, all possible user config term - * belong to bpf object. parse_events__is_hardcoded_term() - * happens to be a good flag. - * - * See parse_events_config_bpf() and - * config_term_tracepoint(). 
- */ - list_for_each_entry_safe(term, temp, evt_head_config, list) - if (!parse_events__is_hardcoded_term(term)) - list_move_tail(&term->list, obj_head_config); -} - -int parse_events_load_bpf(struct parse_events_state *parse_state, - struct list_head *list, - char *bpf_file_name, - bool source, - struct list_head *head_config) -{ - int err; - struct bpf_object *obj; - LIST_HEAD(obj_head_config); - - if (head_config) - split_bpf_config_terms(head_config, &obj_head_config); - - obj = bpf__prepare_load(bpf_file_name, source); - if (IS_ERR(obj)) { - char errbuf[BUFSIZ]; - - err = PTR_ERR(obj); - - if (err == -ENOTSUP) - snprintf(errbuf, sizeof(errbuf), - "BPF support is not compiled"); - else - bpf__strerror_prepare_load(bpf_file_name, - source, - -err, errbuf, - sizeof(errbuf)); - - parse_events_error__handle(parse_state->error, 0, - strdup(errbuf), strdup("(add -v to see detail)")); - return err; - } - - err = parse_events_load_bpf_obj(parse_state, list, obj, head_config); - if (err) - return err; - err = parse_events_config_bpf(parse_state, obj, &obj_head_config); - - /* - * Caller doesn't know anything about obj_head_config, - * so combine them together again before returning. - */ - if (head_config) - list_splice_tail(&obj_head_config, head_config); - return err; -} -#else // HAVE_LIBBPF_SUPPORT -int parse_events_load_bpf_obj(struct parse_events_state *parse_state, - struct list_head *list __maybe_unused, - struct bpf_object *obj __maybe_unused, - struct list_head *head_config __maybe_unused) -{ - parse_events_error__handle(parse_state->error, 0, - strdup("BPF support is not compiled"), - strdup("Make sure libbpf-devel is available at build time.")); - return -ENOTSUP; -} - -int parse_events_load_bpf(struct parse_events_state *parse_state, - struct list_head *list __maybe_unused, - char *bpf_file_name __maybe_unused, - bool source __maybe_unused, - struct list_head *head_config __maybe_unused) -{ - parse_events_error__handle(parse_state->error, 0, - strdup("BPF support is not compiled"), - strdup("Make sure libbpf-devel is available at build time.")); - return -ENOTSUP; -} -#endif // HAVE_LIBBPF_SUPPORT - static int parse_breakpoint_type(const char *type, struct perf_event_attr *attr) { @@ -991,7 +722,7 @@ int parse_events_add_breakpoint(struct parse_events_state *parse_state, static int check_type_val(struct parse_events_term *term, struct parse_events_error *err, - int type) + enum parse_events__term_val_type type) { if (type == term->type_val) return 0; @@ -1006,42 +737,49 @@ static int check_type_val(struct parse_events_term *term, return -EINVAL; } -/* - * Update according to parse-events.l - */ -static const char *config_term_names[__PARSE_EVENTS__TERM_TYPE_NR] = { - [PARSE_EVENTS__TERM_TYPE_USER] = "<sysfs term>", - [PARSE_EVENTS__TERM_TYPE_CONFIG] = "config", - [PARSE_EVENTS__TERM_TYPE_CONFIG1] = "config1", - [PARSE_EVENTS__TERM_TYPE_CONFIG2] = "config2", - [PARSE_EVENTS__TERM_TYPE_CONFIG3] = "config3", - [PARSE_EVENTS__TERM_TYPE_NAME] = "name", - [PARSE_EVENTS__TERM_TYPE_SAMPLE_PERIOD] = "period", - [PARSE_EVENTS__TERM_TYPE_SAMPLE_FREQ] = "freq", - [PARSE_EVENTS__TERM_TYPE_BRANCH_SAMPLE_TYPE] = "branch_type", - [PARSE_EVENTS__TERM_TYPE_TIME] = "time", - [PARSE_EVENTS__TERM_TYPE_CALLGRAPH] = "call-graph", - [PARSE_EVENTS__TERM_TYPE_STACKSIZE] = "stack-size", - [PARSE_EVENTS__TERM_TYPE_NOINHERIT] = "no-inherit", - [PARSE_EVENTS__TERM_TYPE_INHERIT] = "inherit", - [PARSE_EVENTS__TERM_TYPE_MAX_STACK] = "max-stack", - [PARSE_EVENTS__TERM_TYPE_MAX_EVENTS] = "nr", - 
[PARSE_EVENTS__TERM_TYPE_OVERWRITE] = "overwrite", - [PARSE_EVENTS__TERM_TYPE_NOOVERWRITE] = "no-overwrite", - [PARSE_EVENTS__TERM_TYPE_DRV_CFG] = "driver-config", - [PARSE_EVENTS__TERM_TYPE_PERCORE] = "percore", - [PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT] = "aux-output", - [PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE] = "aux-sample-size", - [PARSE_EVENTS__TERM_TYPE_METRIC_ID] = "metric-id", - [PARSE_EVENTS__TERM_TYPE_RAW] = "raw", - [PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE] = "legacy-cache", - [PARSE_EVENTS__TERM_TYPE_HARDWARE] = "hardware", -}; - static bool config_term_shrinked; +static const char *config_term_name(enum parse_events__term_type term_type) +{ + /* + * Update according to parse-events.l + */ + static const char *config_term_names[__PARSE_EVENTS__TERM_TYPE_NR] = { + [PARSE_EVENTS__TERM_TYPE_USER] = "<sysfs term>", + [PARSE_EVENTS__TERM_TYPE_CONFIG] = "config", + [PARSE_EVENTS__TERM_TYPE_CONFIG1] = "config1", + [PARSE_EVENTS__TERM_TYPE_CONFIG2] = "config2", + [PARSE_EVENTS__TERM_TYPE_CONFIG3] = "config3", + [PARSE_EVENTS__TERM_TYPE_NAME] = "name", + [PARSE_EVENTS__TERM_TYPE_SAMPLE_PERIOD] = "period", + [PARSE_EVENTS__TERM_TYPE_SAMPLE_FREQ] = "freq", + [PARSE_EVENTS__TERM_TYPE_BRANCH_SAMPLE_TYPE] = "branch_type", + [PARSE_EVENTS__TERM_TYPE_TIME] = "time", + [PARSE_EVENTS__TERM_TYPE_CALLGRAPH] = "call-graph", + [PARSE_EVENTS__TERM_TYPE_STACKSIZE] = "stack-size", + [PARSE_EVENTS__TERM_TYPE_NOINHERIT] = "no-inherit", + [PARSE_EVENTS__TERM_TYPE_INHERIT] = "inherit", + [PARSE_EVENTS__TERM_TYPE_MAX_STACK] = "max-stack", + [PARSE_EVENTS__TERM_TYPE_MAX_EVENTS] = "nr", + [PARSE_EVENTS__TERM_TYPE_OVERWRITE] = "overwrite", + [PARSE_EVENTS__TERM_TYPE_NOOVERWRITE] = "no-overwrite", + [PARSE_EVENTS__TERM_TYPE_DRV_CFG] = "driver-config", + [PARSE_EVENTS__TERM_TYPE_PERCORE] = "percore", + [PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT] = "aux-output", + [PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE] = "aux-sample-size", + [PARSE_EVENTS__TERM_TYPE_METRIC_ID] = "metric-id", + [PARSE_EVENTS__TERM_TYPE_RAW] = "raw", + [PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE] = "legacy-cache", + [PARSE_EVENTS__TERM_TYPE_HARDWARE] = "hardware", + }; + if ((unsigned int)term_type >= __PARSE_EVENTS__TERM_TYPE_NR) + return "unknown term"; + + return config_term_names[term_type]; +} + static bool -config_term_avail(int term_type, struct parse_events_error *err) +config_term_avail(enum parse_events__term_type term_type, struct parse_events_error *err) { char *err_str; @@ -1063,13 +801,31 @@ config_term_avail(int term_type, struct parse_events_error *err) case PARSE_EVENTS__TERM_TYPE_SAMPLE_PERIOD: case PARSE_EVENTS__TERM_TYPE_PERCORE: return true; + case PARSE_EVENTS__TERM_TYPE_USER: + case PARSE_EVENTS__TERM_TYPE_SAMPLE_FREQ: + case PARSE_EVENTS__TERM_TYPE_BRANCH_SAMPLE_TYPE: + case PARSE_EVENTS__TERM_TYPE_TIME: + case PARSE_EVENTS__TERM_TYPE_CALLGRAPH: + case PARSE_EVENTS__TERM_TYPE_STACKSIZE: + case PARSE_EVENTS__TERM_TYPE_NOINHERIT: + case PARSE_EVENTS__TERM_TYPE_INHERIT: + case PARSE_EVENTS__TERM_TYPE_MAX_STACK: + case PARSE_EVENTS__TERM_TYPE_MAX_EVENTS: + case PARSE_EVENTS__TERM_TYPE_NOOVERWRITE: + case PARSE_EVENTS__TERM_TYPE_OVERWRITE: + case PARSE_EVENTS__TERM_TYPE_DRV_CFG: + case PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT: + case PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE: + case PARSE_EVENTS__TERM_TYPE_RAW: + case PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE: + case PARSE_EVENTS__TERM_TYPE_HARDWARE: default: if (!err) return false; /* term_type is validated so indexing is safe */ if (asprintf(&err_str, "'%s' is not usable in 'perf stat'", - 
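config_term_name() above replaces direct indexing into config_term_names[] with a bounds-checked helper, so an unexpected term type prints as "unknown term" instead of reading past the table. The same pattern in a tiny standalone form — the enum and names below are invented for the demo, not perf's:

#include <stdio.h>

enum demo_term_type {
	DEMO_TERM_CONFIG,
	DEMO_TERM_PERIOD,
	DEMO_TERM_FREQ,
	__DEMO_TERM_NR,
};

/* Bounds-checked enum -> string lookup; out-of-range values fall back
 * to a readable placeholder instead of an out-of-bounds read. */
static const char *demo_term_name(enum demo_term_type t)
{
	static const char *names[__DEMO_TERM_NR] = {
		[DEMO_TERM_CONFIG] = "config",
		[DEMO_TERM_PERIOD] = "period",
		[DEMO_TERM_FREQ]   = "freq",
	};

	if ((unsigned int)t >= __DEMO_TERM_NR)
		return "unknown term";
	return names[t];
}

int main(void)
{
	printf("%s / %s\n", demo_term_name(DEMO_TERM_PERIOD),
	       demo_term_name((enum demo_term_type)42));
	return 0;
}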
config_term_names[term_type]) >= 0) + config_term_name(term_type)) >= 0) parse_events_error__handle(err, -1, err_str, NULL); return false; } @@ -1187,10 +943,14 @@ do { \ return -EINVAL; } break; + case PARSE_EVENTS__TERM_TYPE_DRV_CFG: + case PARSE_EVENTS__TERM_TYPE_USER: + case PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE: + case PARSE_EVENTS__TERM_TYPE_HARDWARE: default: parse_events_error__handle(err, term->err_term, - strdup("unknown term"), - parse_events_formats_error_string(NULL)); + strdup(config_term_name(term->type_term)), + parse_events_formats_error_string(NULL)); return -EINVAL; } @@ -1276,10 +1036,26 @@ static int config_term_tracepoint(struct perf_event_attr *attr, case PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT: case PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE: return config_term_common(attr, term, err); + case PARSE_EVENTS__TERM_TYPE_USER: + case PARSE_EVENTS__TERM_TYPE_CONFIG: + case PARSE_EVENTS__TERM_TYPE_CONFIG1: + case PARSE_EVENTS__TERM_TYPE_CONFIG2: + case PARSE_EVENTS__TERM_TYPE_CONFIG3: + case PARSE_EVENTS__TERM_TYPE_NAME: + case PARSE_EVENTS__TERM_TYPE_SAMPLE_PERIOD: + case PARSE_EVENTS__TERM_TYPE_SAMPLE_FREQ: + case PARSE_EVENTS__TERM_TYPE_BRANCH_SAMPLE_TYPE: + case PARSE_EVENTS__TERM_TYPE_TIME: + case PARSE_EVENTS__TERM_TYPE_DRV_CFG: + case PARSE_EVENTS__TERM_TYPE_PERCORE: + case PARSE_EVENTS__TERM_TYPE_METRIC_ID: + case PARSE_EVENTS__TERM_TYPE_RAW: + case PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE: + case PARSE_EVENTS__TERM_TYPE_HARDWARE: default: if (err) { parse_events_error__handle(err, term->err_term, - strdup("unknown term"), + strdup(config_term_name(term->type_term)), strdup("valid terms: call-graph,stack-size\n")); } return -EINVAL; @@ -1397,6 +1173,16 @@ do { \ ADD_CONFIG_TERM_VAL(AUX_SAMPLE_SIZE, aux_sample_size, term->val.num, term->weak); break; + case PARSE_EVENTS__TERM_TYPE_USER: + case PARSE_EVENTS__TERM_TYPE_CONFIG: + case PARSE_EVENTS__TERM_TYPE_CONFIG1: + case PARSE_EVENTS__TERM_TYPE_CONFIG2: + case PARSE_EVENTS__TERM_TYPE_CONFIG3: + case PARSE_EVENTS__TERM_TYPE_NAME: + case PARSE_EVENTS__TERM_TYPE_METRIC_ID: + case PARSE_EVENTS__TERM_TYPE_RAW: + case PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE: + case PARSE_EVENTS__TERM_TYPE_HARDWARE: default: break; } @@ -1418,14 +1204,38 @@ static int get_config_chgs(struct perf_pmu *pmu, struct list_head *head_config, list_for_each_entry(term, head_config, list) { switch (term->type_term) { case PARSE_EVENTS__TERM_TYPE_USER: - type = perf_pmu__format_type(&pmu->format, term->config); + type = perf_pmu__format_type(pmu, term->config); if (type != PERF_PMU_FORMAT_VALUE_CONFIG) continue; - bits |= perf_pmu__format_bits(&pmu->format, term->config); + bits |= perf_pmu__format_bits(pmu, term->config); break; case PARSE_EVENTS__TERM_TYPE_CONFIG: bits = ~(u64)0; break; + case PARSE_EVENTS__TERM_TYPE_CONFIG1: + case PARSE_EVENTS__TERM_TYPE_CONFIG2: + case PARSE_EVENTS__TERM_TYPE_CONFIG3: + case PARSE_EVENTS__TERM_TYPE_NAME: + case PARSE_EVENTS__TERM_TYPE_SAMPLE_PERIOD: + case PARSE_EVENTS__TERM_TYPE_SAMPLE_FREQ: + case PARSE_EVENTS__TERM_TYPE_BRANCH_SAMPLE_TYPE: + case PARSE_EVENTS__TERM_TYPE_TIME: + case PARSE_EVENTS__TERM_TYPE_CALLGRAPH: + case PARSE_EVENTS__TERM_TYPE_STACKSIZE: + case PARSE_EVENTS__TERM_TYPE_NOINHERIT: + case PARSE_EVENTS__TERM_TYPE_INHERIT: + case PARSE_EVENTS__TERM_TYPE_MAX_STACK: + case PARSE_EVENTS__TERM_TYPE_MAX_EVENTS: + case PARSE_EVENTS__TERM_TYPE_NOOVERWRITE: + case PARSE_EVENTS__TERM_TYPE_OVERWRITE: + case PARSE_EVENTS__TERM_TYPE_DRV_CFG: + case PARSE_EVENTS__TERM_TYPE_PERCORE: + case 
PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT: + case PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE: + case PARSE_EVENTS__TERM_TYPE_METRIC_ID: + case PARSE_EVENTS__TERM_TYPE_RAW: + case PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE: + case PARSE_EVENTS__TERM_TYPE_HARDWARE: default: break; } @@ -1441,8 +1251,9 @@ static int get_config_chgs(struct perf_pmu *pmu, struct list_head *head_config, int parse_events_add_tracepoint(struct list_head *list, int *idx, const char *sys, const char *event, struct parse_events_error *err, - struct list_head *head_config) + struct list_head *head_config, void *loc_) { + YYLTYPE *loc = loc_; #ifdef HAVE_LIBTRACEEVENT if (head_config) { struct perf_event_attr attr; @@ -1454,17 +1265,17 @@ int parse_events_add_tracepoint(struct list_head *list, int *idx, if (strpbrk(sys, "*?")) return add_tracepoint_multi_sys(list, idx, sys, event, - err, head_config); + err, head_config, loc); else return add_tracepoint_event(list, idx, sys, event, - err, head_config); + err, head_config, loc); #else (void)list; (void)idx; (void)sys; (void)event; (void)head_config; - parse_events_error__handle(err, 0, strdup("unsupported tracepoint"), + parse_events_error__handle(err, loc->first_column, strdup("unsupported tracepoint"), strdup("libtraceevent is necessary for tracepoint support")); return -1; #endif @@ -1557,41 +1368,44 @@ static bool config_term_percore(struct list_head *config_terms) } int parse_events_add_pmu(struct parse_events_state *parse_state, - struct list_head *list, char *name, + struct list_head *list, const char *name, struct list_head *head_config, - bool auto_merge_stats) + bool auto_merge_stats, void *loc_) { struct perf_event_attr attr; struct perf_pmu_info info; struct perf_pmu *pmu; struct evsel *evsel; struct parse_events_error *err = parse_state->error; + YYLTYPE *loc = loc_; LIST_HEAD(config_terms); pmu = parse_state->fake_pmu ?: perf_pmus__find(name); - if (verbose > 1 && !(pmu && pmu->selectable)) { - fprintf(stderr, "Attempting to add event pmu '%s' with '", - name); - if (head_config) { - struct parse_events_term *term; - - list_for_each_entry(term, head_config, list) { - fprintf(stderr, "%s,", term->config); - } - } - fprintf(stderr, "' that may result in non-fatal errors\n"); - } - if (!pmu) { char *err_str; if (asprintf(&err_str, "Cannot find PMU `%s'. Missing kernel support?", name) >= 0) - parse_events_error__handle(err, 0, err_str, NULL); + parse_events_error__handle(err, loc->first_column, err_str, NULL); return -EINVAL; } + + if (verbose > 1) { + struct strbuf sb; + + strbuf_init(&sb, /*hint=*/ 0); + if (pmu->selectable && !head_config) { + strbuf_addf(&sb, "%s//", name); + } else { + strbuf_addf(&sb, "%s/", name); + parse_events_term__to_strbuf(head_config, &sb); + strbuf_addch(&sb, '/'); + } + fprintf(stderr, "Attempt to add: %s\n", sb.buf); + strbuf_release(&sb); + } if (head_config) fix_raw(head_config, pmu); @@ -1612,20 +1426,16 @@ int parse_events_add_pmu(struct parse_events_state *parse_state, return evsel ? 
0 : -ENOMEM; } - if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, head_config, &info)) + if (!parse_state->fake_pmu && perf_pmu__check_alias(pmu, head_config, &info, err)) return -EINVAL; if (verbose > 1) { - fprintf(stderr, "After aliases, add event pmu '%s' with '", - name); - if (head_config) { - struct parse_events_term *term; + struct strbuf sb; - list_for_each_entry(term, head_config, list) { - fprintf(stderr, "%s,", term->config); - } - } - fprintf(stderr, "' that may result in non-fatal errors\n"); + strbuf_init(&sb, /*hint=*/ 0); + parse_events_term__to_strbuf(head_config, &sb); + fprintf(stderr, "..after resolving event: %s/%s/\n", name, sb.buf); + strbuf_release(&sb); } /* @@ -1675,14 +1485,15 @@ int parse_events_add_pmu(struct parse_events_state *parse_state, int parse_events_multi_pmu_add(struct parse_events_state *parse_state, char *str, struct list_head *head, - struct list_head **listp) + struct list_head **listp, void *loc_) { struct parse_events_term *term; struct list_head *list = NULL; struct list_head *orig_head = NULL; struct perf_pmu *pmu = NULL; + YYLTYPE *loc = loc_; int ok = 0; - char *config; + const char *config; *listp = NULL; @@ -1699,9 +1510,9 @@ int parse_events_multi_pmu_add(struct parse_events_state *parse_state, if (parse_events_term__num(&term, PARSE_EVENTS__TERM_TYPE_USER, - config, 1, false, NULL, - NULL) < 0) { - free(config); + config, /*num=*/1, /*novalue=*/true, + loc, /*loc_val=*/NULL) < 0) { + zfree(&config); goto out_err; } list_add_tail(&term->list, head); @@ -1714,33 +1525,38 @@ int parse_events_multi_pmu_add(struct parse_events_state *parse_state, INIT_LIST_HEAD(list); while ((pmu = perf_pmus__scan(pmu)) != NULL) { - struct perf_pmu_alias *alias; bool auto_merge_stats; if (parse_events__filter_pmu(parse_state, pmu)) continue; - auto_merge_stats = perf_pmu__auto_merge_stats(pmu); + if (!perf_pmu__have_event(pmu, str)) + continue; - list_for_each_entry(alias, &pmu->aliases, list) { - if (!strcasecmp(alias->name, str)) { - parse_events_copy_term_list(head, &orig_head); - if (!parse_events_add_pmu(parse_state, list, - pmu->name, orig_head, - auto_merge_stats)) { - pr_debug("%s -> %s/%s/\n", str, - pmu->name, alias->str); - ok++; - } - parse_events_terms__delete(orig_head); - } + auto_merge_stats = perf_pmu__auto_merge_stats(pmu); + parse_events_copy_term_list(head, &orig_head); + if (!parse_events_add_pmu(parse_state, list, pmu->name, + orig_head, auto_merge_stats, loc)) { + struct strbuf sb; + + strbuf_init(&sb, /*hint=*/ 0); + parse_events_term__to_strbuf(orig_head, &sb); + pr_debug("%s -> %s/%s/\n", str, pmu->name, sb.buf); + strbuf_release(&sb); + ok++; } + parse_events_terms__delete(orig_head); } if (parse_state->fake_pmu) { if (!parse_events_add_pmu(parse_state, list, str, head, - /*auto_merge_stats=*/true)) { - pr_debug("%s -> %s/%s/\n", str, "fake_pmu", str); + /*auto_merge_stats=*/true, loc)) { + struct strbuf sb; + + strbuf_init(&sb, /*hint=*/ 0); + parse_events_term__to_strbuf(head, &sb); + pr_debug("%s -> %s/%s/\n", str, "fake_pmu", sb.buf); + strbuf_release(&sb); ok++; } } @@ -1972,14 +1788,18 @@ int parse_events_name(struct list_head *list, const char *name) struct evsel *evsel; __evlist__for_each_entry(list, evsel) { - if (!evsel->name) + if (!evsel->name) { evsel->name = strdup(name); + if (!evsel->name) + return -ENOMEM; + } } return 0; } static int parse_events__scanner(const char *str, + FILE *input, struct parse_events_state *parse_state) { YY_BUFFER_STATE buffer; @@ -1990,7 +1810,10 @@ static int 
parse_events__scanner(const char *str, if (ret) return ret; - buffer = parse_events__scan_string(str, scanner); + if (str) + buffer = parse_events__scan_string(str, scanner); + else + parse_events_set_in(input, scanner); #ifdef PARSER_DEBUG parse_events_debug = 1; @@ -1998,8 +1821,10 @@ static int parse_events__scanner(const char *str, #endif ret = parse_events_parse(parse_state, scanner); - parse_events__flush_buffer(buffer, scanner); - parse_events__delete_buffer(buffer, scanner); + if (str) { + parse_events__flush_buffer(buffer, scanner); + parse_events__delete_buffer(buffer, scanner); + } parse_events_lex_destroy(scanner); return ret; } @@ -2007,7 +1832,7 @@ static int parse_events__scanner(const char *str, /* * parse event config string, return a list of event terms. */ -int parse_events_terms(struct list_head *terms, const char *str) +int parse_events_terms(struct list_head *terms, const char *str, FILE *input) { struct parse_events_state parse_state = { .terms = NULL, @@ -2015,7 +1840,7 @@ int parse_events_terms(struct list_head *terms, const char *str) }; int ret; - ret = parse_events__scanner(str, &parse_state); + ret = parse_events__scanner(str, input, &parse_state); if (!ret) { list_splice(parse_state.terms, terms); @@ -2259,7 +2084,6 @@ int __parse_events(struct evlist *evlist, const char *str, const char *pmu_filte .list = LIST_HEAD_INIT(parse_state.list), .idx = evlist->core.nr_entries, .error = err, - .evlist = evlist, .stoken = PE_START_EVENTS, .fake_pmu = fake_pmu, .pmu_filter = pmu_filter, @@ -2267,7 +2091,7 @@ int __parse_events(struct evlist *evlist, const char *str, const char *pmu_filte }; int ret, ret2; - ret = parse_events__scanner(str, &parse_state); + ret = parse_events__scanner(str, /*input=*/ NULL, &parse_state); if (!ret && list_empty(&parse_state.list)) { WARN_ONCE(true, "WARNING: event parser found nothing\n"); @@ -2348,7 +2172,7 @@ void parse_events_error__handle(struct parse_events_error *err, int idx, break; default: pr_debug("Multiple errors dropping message: %s (%s)\n", - err->str, err->help); + err->str, err->help ?: "<no help>"); free(err->str); err->str = str; free(err->help); @@ -2641,7 +2465,8 @@ static int new_term(struct parse_events_term **_term, } int parse_events_term__num(struct parse_events_term **term, - int type_term, char *config, u64 num, + enum parse_events__term_type type_term, + const char *config, u64 num, bool no_value, void *loc_term_, void *loc_val_) { @@ -2651,17 +2476,18 @@ int parse_events_term__num(struct parse_events_term **term, struct parse_events_term temp = { .type_val = PARSE_EVENTS__TERM_TYPE_NUM, .type_term = type_term, - .config = config ? : strdup(config_term_names[type_term]), + .config = config ? : strdup(config_term_name(type_term)), .no_value = no_value, .err_term = loc_term ? loc_term->first_column : 0, .err_val = loc_val ? loc_val->first_column : 0, }; - return new_term(term, &temp, NULL, num); + return new_term(term, &temp, /*str=*/NULL, num); } int parse_events_term__str(struct parse_events_term **term, - int type_term, char *config, char *str, + enum parse_events__term_type type_term, + char *config, char *str, void *loc_term_, void *loc_val_) { YYLTYPE *loc_term = loc_term_; @@ -2675,15 +2501,16 @@ int parse_events_term__str(struct parse_events_term **term, .err_val = loc_val ? 
loc_val->first_column : 0, }; - return new_term(term, &temp, str, 0); + return new_term(term, &temp, str, /*num=*/0); } int parse_events_term__term(struct parse_events_term **term, - int term_lhs, int term_rhs, + enum parse_events__term_type term_lhs, + enum parse_events__term_type term_rhs, void *loc_term, void *loc_val) { return parse_events_term__str(term, term_lhs, NULL, - strdup(config_term_names[term_rhs]), + strdup(config_term_name(term_rhs)), loc_term, loc_val); } @@ -2691,33 +2518,25 @@ int parse_events_term__clone(struct parse_events_term **new, struct parse_events_term *term) { char *str; - struct parse_events_term temp = { - .type_val = term->type_val, - .type_term = term->type_term, - .config = NULL, - .err_term = term->err_term, - .err_val = term->err_val, - }; + struct parse_events_term temp = *term; + temp.used = false; if (term->config) { temp.config = strdup(term->config); if (!temp.config) return -ENOMEM; } if (term->type_val == PARSE_EVENTS__TERM_TYPE_NUM) - return new_term(new, &temp, NULL, term->val.num); + return new_term(new, &temp, /*str=*/NULL, term->val.num); str = strdup(term->val.str); if (!str) return -ENOMEM; - return new_term(new, &temp, str, 0); + return new_term(new, &temp, str, /*num=*/0); } void parse_events_term__delete(struct parse_events_term *term) { - if (term->array.nr_ranges) - zfree(&term->array.ranges); - if (term->type_val != PARSE_EVENTS__TERM_TYPE_NUM) zfree(&term->val.str); @@ -2768,9 +2587,47 @@ void parse_events_terms__delete(struct list_head *terms) free(terms); } -void parse_events__clear_array(struct parse_events_array *a) +int parse_events_term__to_strbuf(struct list_head *term_list, struct strbuf *sb) { - zfree(&a->ranges); + struct parse_events_term *term; + bool first = true; + + if (!term_list) + return 0; + + list_for_each_entry(term, term_list, list) { + int ret; + + if (!first) { + ret = strbuf_addch(sb, ','); + if (ret < 0) + return ret; + } + first = false; + + if (term->type_val == PARSE_EVENTS__TERM_TYPE_NUM) + if (term->no_value) { + assert(term->val.num == 1); + ret = strbuf_addf(sb, "%s", term->config); + } else + ret = strbuf_addf(sb, "%s=%#"PRIx64, term->config, term->val.num); + else if (term->type_val == PARSE_EVENTS__TERM_TYPE_STR) { + if (term->config) { + ret = strbuf_addf(sb, "%s=", term->config); + if (ret < 0) + return ret; + } else if ((unsigned int)term->type_term < __PARSE_EVENTS__TERM_TYPE_NR) { + ret = strbuf_addf(sb, "%s=", config_term_name(term->type_term)); + if (ret < 0) + return ret; + } + assert(!term->no_value); + ret = strbuf_addf(sb, "%s", term->val.str); + } + if (ret < 0) + return ret; + } + return 0; } void parse_events_evlist_error(struct parse_events_state *parse_state, @@ -2789,7 +2646,7 @@ static void config_terms_list(char *buf, size_t buf_sz) buf[0] = '\0'; for (i = 0; i < __PARSE_EVENTS__TERM_TYPE_NR; i++) { - const char *name = config_term_names[i]; + const char *name = config_term_name(i); if (!config_term_avail(i, NULL)) continue; diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h index b0eb95f93e9c..594e5d2dc67f 100644 --- a/tools/perf/util/parse-events.h +++ b/tools/perf/util/parse-events.h @@ -9,6 +9,7 @@ #include <stdbool.h> #include <linux/types.h> #include <linux/perf_event.h> +#include <stdio.h> #include <string.h> struct evsel; @@ -17,6 +18,7 @@ struct parse_events_error; struct option; struct perf_pmu; +struct strbuf; const char *event_type(int type); @@ -42,16 +44,16 @@ static inline int parse_events(struct evlist *evlist, const char *str, int 
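parse_events_term__to_strbuf() above renders a term list back into the slash-delimited syntax used by the new debug output in parse_events_add_pmu(): numeric terms as name=0x<hex>, bare (no-value) terms as just the name, string terms as name=<string>, all comma separated. A small sketch of the same rendering rules without the strbuf dependency — the struct and the sample values are made up:

#include <inttypes.h>
#include <stdio.h>

struct demo_term {
	const char *config;
	const char *str;	/* non-NULL for string-valued terms */
	uint64_t num;
	int no_value;		/* bare term, e.g. an event name */
};

/* Join terms the same way as above: bare name, name=<string>, or
 * name=0x<hex>, separated by commas. */
static void demo_print_terms(const struct demo_term *terms, int n)
{
	for (int i = 0; i < n; i++) {
		const struct demo_term *t = &terms[i];

		if (i)
			putchar(',');
		if (t->no_value)
			printf("%s", t->config);
		else if (t->str)
			printf("%s=%s", t->config, t->str);
		else
			printf("%s=%#" PRIx64, t->config, t->num);
	}
	putchar('\n');
}

int main(void)
{
	struct demo_term terms[] = {
		{ .config = "cycles", .no_value = 1 },
		{ .config = "period", .num = 100000 },
		{ .config = "name",   .str = "my-event" },
	};

	demo_print_terms(terms, 3);	/* cycles,period=0x186a0,name=my-event */
	return 0;
}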
parse_event(struct evlist *evlist, const char *str); -int parse_events_terms(struct list_head *terms, const char *str); +int parse_events_terms(struct list_head *terms, const char *str, FILE *input); int parse_filter(const struct option *opt, const char *str, int unset); int exclude_perf(const struct option *opt, const char *arg, int unset); -enum { +enum parse_events__term_val_type { PARSE_EVENTS__TERM_TYPE_NUM, PARSE_EVENTS__TERM_TYPE_STR, }; -enum { +enum parse_events__term_type { PARSE_EVENTS__TERM_TYPE_USER, PARSE_EVENTS__TERM_TYPE_CONFIG, PARSE_EVENTS__TERM_TYPE_CONFIG1, @@ -78,36 +80,54 @@ enum { PARSE_EVENTS__TERM_TYPE_RAW, PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE, PARSE_EVENTS__TERM_TYPE_HARDWARE, - __PARSE_EVENTS__TERM_TYPE_NR, -}; - -struct parse_events_array { - size_t nr_ranges; - struct { - unsigned int start; - size_t length; - } *ranges; +#define __PARSE_EVENTS__TERM_TYPE_NR (PARSE_EVENTS__TERM_TYPE_HARDWARE + 1) }; struct parse_events_term { - char *config; - struct parse_events_array array; + /** @list: The term list the term is a part of. */ + struct list_head list; + /** + * @config: The left-hand side of a term assignment, so the term + * "event=8" would have the config be "event" + */ + const char *config; + /** + * @val: The right-hand side of a term assignment that can either be a + * string or a number depending on type_val. + */ union { char *str; u64 num; } val; - int type_val; - int type_term; - struct list_head list; - bool used; - bool no_value; - - /* error string indexes for within parsed string */ + /** @type_val: The union variable in val to be used for the term. */ + enum parse_events__term_val_type type_val; + /** + * @type_term: A predefined term type or PARSE_EVENTS__TERM_TYPE_USER + * when not inbuilt. + */ + enum parse_events__term_type type_term; + /** + * @err_term: The column index of the term from parsing, used during + * error output. + */ int err_term; + /** + * @err_val: The column index of the val from parsing, used during error + * output. + */ int err_val; - - /* Coming from implicit alias */ + /** @used: Was the term used during parameterized-eval. */ + bool used; + /** + * @weak: A term from the sysfs or json encoding of an event that + * shouldn't override terms coming from the command line. + */ bool weak; + /** + * @no_value: Is there no value. If a numeric term has no value then the + * value is assumed to be 1. An event name also has no value. + */ + bool no_value; }; struct parse_events_error { @@ -121,17 +141,23 @@ struct parse_events_error { }; struct parse_events_state { + /* The list parsed events are placed on. */ struct list_head list; + /* The updated index used by entries as they are added. */ int idx; + /* Error information. */ struct parse_events_error *error; - struct evlist *evlist; + /* Holds returned terms for term parsing. */ struct list_head *terms; + /* Start token. */ int stoken; + /* Special fake PMU marker for testing. */ struct perf_pmu *fake_pmu; /* If non-null, when wildcard matching only match the given PMU. */ const char *pmu_filter; /* Should PE_LEGACY_NAME tokens be generated for config terms? */ bool match_legacy_cache_terms; + /* Were multiple PMUs scanned to find events? 
*/ bool wild_card_pmus; }; @@ -140,39 +166,31 @@ bool parse_events__filter_pmu(const struct parse_events_state *parse_state, void parse_events__shrink_config_terms(void); int parse_events__is_hardcoded_term(struct parse_events_term *term); int parse_events_term__num(struct parse_events_term **term, - int type_term, char *config, u64 num, + enum parse_events__term_type type_term, + const char *config, u64 num, bool novalue, void *loc_term, void *loc_val); int parse_events_term__str(struct parse_events_term **term, - int type_term, char *config, char *str, + enum parse_events__term_type type_term, + char *config, char *str, void *loc_term, void *loc_val); int parse_events_term__term(struct parse_events_term **term, - int term_lhs, int term_rhs, + enum parse_events__term_type term_lhs, + enum parse_events__term_type term_rhs, void *loc_term, void *loc_val); int parse_events_term__clone(struct parse_events_term **new, struct parse_events_term *term); void parse_events_term__delete(struct parse_events_term *term); void parse_events_terms__delete(struct list_head *terms); void parse_events_terms__purge(struct list_head *terms); -void parse_events__clear_array(struct parse_events_array *a); +int parse_events_term__to_strbuf(struct list_head *term_list, struct strbuf *sb); int parse_events__modifier_event(struct list_head *list, char *str, bool add); int parse_events__modifier_group(struct list_head *list, char *event_mod); int parse_events_name(struct list_head *list, const char *name); int parse_events_add_tracepoint(struct list_head *list, int *idx, const char *sys, const char *event, struct parse_events_error *error, - struct list_head *head_config); -int parse_events_load_bpf(struct parse_events_state *parse_state, - struct list_head *list, - char *bpf_file_name, - bool source, - struct list_head *head_config); -/* Provide this function for perf test */ -struct bpf_object; -int parse_events_load_bpf_obj(struct parse_events_state *parse_state, - struct list_head *list, - struct bpf_object *obj, - struct list_head *head_config); + struct list_head *head_config, void *loc); int parse_events_add_numeric(struct parse_events_state *parse_state, struct list_head *list, u32 type, u64 config, @@ -190,9 +208,9 @@ int parse_events_add_breakpoint(struct parse_events_state *parse_state, u64 addr, char *type, u64 len, struct list_head *head_config); int parse_events_add_pmu(struct parse_events_state *parse_state, - struct list_head *list, char *name, + struct list_head *list, const char *name, struct list_head *head_config, - bool auto_merge_stats); + bool auto_merge_stats, void *loc); struct evsel *parse_events__add_event(int idx, struct perf_event_attr *attr, const char *name, const char *metric_id, @@ -201,7 +219,7 @@ struct evsel *parse_events__add_event(int idx, struct perf_event_attr *attr, int parse_events_multi_pmu_add(struct parse_events_state *parse_state, char *str, struct list_head *head_config, - struct list_head **listp); + struct list_head **listp, void *loc); int parse_events_copy_term_list(struct list_head *old, struct list_head **new); diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l index 99335ec586ae..4ef4b6f171a0 100644 --- a/tools/perf/util/parse-events.l +++ b/tools/perf/util/parse-events.l @@ -68,31 +68,6 @@ static int lc_str(yyscan_t scanner, const struct parse_events_state *state) return str(scanner, state->match_legacy_cache_terms ? 
PE_LEGACY_CACHE : PE_NAME); } -static bool isbpf_suffix(char *text) -{ - int len = strlen(text); - - if (len < 2) - return false; - if ((text[len - 1] == 'c' || text[len - 1] == 'o') && - text[len - 2] == '.') - return true; - if (len > 4 && !strcmp(text + len - 4, ".obj")) - return true; - return false; -} - -static bool isbpf(yyscan_t scanner) -{ - char *text = parse_events_get_text(scanner); - struct stat st; - - if (!isbpf_suffix(text)) - return false; - - return stat(text, &st) == 0; -} - /* * This function is called when the parser gets two kind of input: * @@ -141,7 +116,7 @@ static int tool(yyscan_t scanner, enum perf_tool_event event) return PE_VALUE_SYM_TOOL; } -static int term(yyscan_t scanner, int type) +static int term(yyscan_t scanner, enum parse_events__term_type type) { YYSTYPE *yylval = parse_events_get_lval(scanner); @@ -175,13 +150,10 @@ do { \ %x mem %s config %x event -%x array group [^,{}/]*[{][^}]*[}][^,{}/]* event_pmu [^,{}/]+[/][^/]*[/][^,{}/]* event [^,{}/]+ -bpf_object [^,{}]+\.(o|bpf)[a-zA-Z0-9._]* -bpf_source [^,{}]+\.c[a-zA-Z0-9._]* num_dec [0-9]+ num_hex 0x[a-fA-F0-9]+ @@ -234,8 +206,6 @@ non_digit [^0-9] } {event_pmu} | -{bpf_object} | -{bpf_source} | {event} { BEGIN(INITIAL); REWIND(1); @@ -251,14 +221,6 @@ non_digit [^0-9] } } -<array>{ -"]" { BEGIN(config); return ']'; } -{num_dec} { return value(yyscanner, 10); } -{num_hex} { return value(yyscanner, 16); } -, { return ','; } -"\.\.\." { return PE_ARRAY_RANGE; } -} - <config>{ /* * Please update config_term_names when new static term is added. @@ -302,8 +264,6 @@ r0x{num_raw_hex} { return str(yyscanner, PE_RAW); } {lc_type}-{lc_op_result} { return lc_str(yyscanner, _parse_state); } {lc_type}-{lc_op_result}-{lc_op_result} { return lc_str(yyscanner, _parse_state); } {name_minus} { return str(yyscanner, PE_NAME); } -\[all\] { return PE_ARRAY_ALL; } -"[" { BEGIN(array); return '['; } @{drv_cfg_term} { return drv_str(yyscanner, PE_DRV_CFG_TERM); } } @@ -374,8 +334,6 @@ r{num_raw_hex} { return str(yyscanner, PE_RAW); } {num_hex} { return value(yyscanner, 16); } {modifier_event} { return str(yyscanner, PE_MODIFIER_EVENT); } -{bpf_object} { if (!isbpf(yyscanner)) { USER_REJECT }; return str(yyscanner, PE_BPF_OBJECT); } -{bpf_source} { if (!isbpf(yyscanner)) { USER_REJECT }; return str(yyscanner, PE_BPF_SOURCE); } {name} { return str(yyscanner, PE_NAME); } {name_tag} { return str(yyscanner, PE_NAME); } "/" { BEGIN(config); return '/'; } diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y index 9f28d4b5502f..21bfe7e0d944 100644 --- a/tools/perf/util/parse-events.y +++ b/tools/perf/util/parse-events.y @@ -20,12 +20,14 @@ #include "parse-events.h" #include "parse-events-bison.h" +int parse_events_lex(YYSTYPE * yylval_param, YYLTYPE * yylloc_param , void *yyscanner); void parse_events_error(YYLTYPE *loc, void *parse_state, void *scanner, char const *msg); -#define ABORT_ON(val) \ +#define PE_ABORT(val) \ do { \ - if (val) \ - YYABORT; \ + if (val == -ENOMEM) \ + YYNOMEM; \ + YYABORT; \ } while (0) static struct list_head* alloc_list(void) @@ -58,13 +60,10 @@ static void free_list_evsel(struct list_head* list_evsel) %token PE_VALUE_SYM_TOOL %token PE_EVENT_NAME %token PE_RAW PE_NAME -%token PE_BPF_OBJECT PE_BPF_SOURCE %token PE_MODIFIER_EVENT PE_MODIFIER_BP PE_BP_COLON PE_BP_SLASH %token PE_LEGACY_CACHE -%token PE_PREFIX_MEM PE_PREFIX_RAW PE_PREFIX_GROUP +%token PE_PREFIX_MEM %token PE_ERROR -%token PE_KERNEL_PMU_EVENT PE_PMU_EVENT_FAKE -%token PE_ARRAY_ALL PE_ARRAY_RANGE %token PE_DRV_CFG_TERM 
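The new PE_ABORT() above separates allocation failure (-ENOMEM, turned into bison's YYNOMEM) from every other failure (plain YYABORT). That only works because the helpers the grammar actions call follow the usual negative-errno return convention; a minimal sketch of that convention outside of bison, with invented names:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return 0 on success, -ENOMEM when an allocation fails and other
 * negative errno values for ordinary parse problems, so the caller can
 * pick a different recovery path for each. */
static int demo_make_term(const char *config, char **out)
{
	if (config == NULL)
		return -EINVAL;
	*out = strdup(config);
	if (*out == NULL)
		return -ENOMEM;
	return 0;
}

int main(void)
{
	char *term = NULL;
	int err = demo_make_term("period", &term);

	if (err == -ENOMEM)
		fprintf(stderr, "out of memory, giving up\n");
	else if (err)
		fprintf(stderr, "parse error: %s\n", strerror(-err));
	else
		printf("term: %s\n", term);
	free(term);
	return 0;
}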
%token PE_TERM_HW %type <num> PE_VALUE @@ -75,13 +74,10 @@ static void free_list_evsel(struct list_head* list_evsel) %type <num> value_sym %type <str> PE_RAW %type <str> PE_NAME -%type <str> PE_BPF_OBJECT -%type <str> PE_BPF_SOURCE %type <str> PE_LEGACY_CACHE %type <str> PE_MODIFIER_EVENT %type <str> PE_MODIFIER_BP %type <str> PE_EVENT_NAME -%type <str> PE_KERNEL_PMU_EVENT PE_PMU_EVENT_FAKE %type <str> PE_DRV_CFG_TERM %type <str> name_or_raw name_or_legacy %destructor { free ($$); } <str> @@ -98,7 +94,6 @@ static void free_list_evsel(struct list_head* list_evsel) %type <list_evsel> event_legacy_tracepoint %type <list_evsel> event_legacy_numeric %type <list_evsel> event_legacy_raw -%type <list_evsel> event_bpf_file %type <list_evsel> event_def %type <list_evsel> event_mod %type <list_evsel> event_name @@ -109,11 +104,6 @@ static void free_list_evsel(struct list_head* list_evsel) %type <list_evsel> groups %destructor { free_list_evsel ($$); } <list_evsel> %type <tracepoint_name> tracepoint_name -%destructor { free ($$.sys); free ($$.event); } <tracepoint_name> -%type <array> array -%type <array> array_term -%type <array> array_terms -%destructor { free ($$.ranges); } <array> %type <hardware_term> PE_TERM_HW %destructor { free ($$.str); } <hardware_term> @@ -128,7 +118,6 @@ static void free_list_evsel(struct list_head* list_evsel) char *sys; char *event; } tracepoint_name; - struct parse_events_array array; struct hardware_term { char *str; u64 num; @@ -265,7 +254,7 @@ PE_EVENT_NAME event_def free($1); if (err) { free_list_evsel($2); - YYABORT; + YYNOMEM; } $$ = $2; } @@ -278,47 +267,47 @@ event_def: event_pmu | event_legacy_mem sep_dc | event_legacy_tracepoint sep_dc | event_legacy_numeric sep_dc | - event_legacy_raw sep_dc | - event_bpf_file + event_legacy_raw sep_dc event_pmu: PE_NAME opt_pmu_config { struct parse_events_state *parse_state = _parse_state; - struct parse_events_error *error = parse_state->error; struct list_head *list = NULL, *orig_terms = NULL, *terms= NULL; char *pattern = NULL; -#define CLEANUP_YYABORT \ +#define CLEANUP \ do { \ parse_events_terms__delete($2); \ parse_events_terms__delete(orig_terms); \ free(list); \ free($1); \ free(pattern); \ - YYABORT; \ } while(0) - if (parse_events_copy_term_list($2, &orig_terms)) - CLEANUP_YYABORT; - - if (error) - error->idx = @1.first_column; + if (parse_events_copy_term_list($2, &orig_terms)) { + CLEANUP; + YYNOMEM; + } list = alloc_list(); - if (!list) - CLEANUP_YYABORT; + if (!list) { + CLEANUP; + YYNOMEM; + } /* Attempt to add to list assuming $1 is a PMU name. */ - if (parse_events_add_pmu(parse_state, list, $1, $2, /*auto_merge_stats=*/false)) { + if (parse_events_add_pmu(parse_state, list, $1, $2, /*auto_merge_stats=*/false, &@1)) { struct perf_pmu *pmu = NULL; int ok = 0; /* Failure to add, try wildcard expansion of $1 as a PMU name. 
*/ - if (asprintf(&pattern, "%s*", $1) < 0) - CLEANUP_YYABORT; + if (asprintf(&pattern, "%s*", $1) < 0) { + CLEANUP; + YYNOMEM; + } while ((pmu = perf_pmus__scan(pmu)) != NULL) { - char *name = pmu->name; + const char *name = pmu->name; if (parse_events__filter_pmu(parse_state, pmu)) continue; @@ -330,10 +319,12 @@ PE_NAME opt_pmu_config !perf_pmu__match(pattern, pmu->alias_name, $1)) { bool auto_merge_stats = perf_pmu__auto_merge_stats(pmu); - if (parse_events_copy_term_list(orig_terms, &terms)) - CLEANUP_YYABORT; + if (parse_events_copy_term_list(orig_terms, &terms)) { + CLEANUP; + YYNOMEM; + } if (!parse_events_add_pmu(parse_state, list, pmu->name, terms, - auto_merge_stats)) { + auto_merge_stats, &@1)) { ok++; parse_state->wild_card_pmus = true; } @@ -344,30 +335,26 @@ PE_NAME opt_pmu_config if (!ok) { /* Failure to add, assume $1 is an event name. */ zfree(&list); - ok = !parse_events_multi_pmu_add(parse_state, $1, $2, &list); + ok = !parse_events_multi_pmu_add(parse_state, $1, $2, &list, &@1); $2 = NULL; } - if (!ok) - CLEANUP_YYABORT; + if (!ok) { + struct parse_events_error *error = parse_state->error; + char *help; + + if (asprintf(&help, "Unable to find PMU or event on a PMU of '%s'", $1) < 0) + help = NULL; + parse_events_error__handle(error, @1.first_column, + strdup("Bad event or PMU"), + help); + CLEANUP; + YYABORT; + } } - parse_events_terms__delete($2); - parse_events_terms__delete(orig_terms); - free(pattern); - free($1); - $$ = list; -#undef CLEANUP_YYABORT -} -| -PE_KERNEL_PMU_EVENT sep_dc -{ - struct list_head *list; - int err; - - err = parse_events_multi_pmu_add(_parse_state, $1, NULL, &list); - free($1); - if (err < 0) - YYABORT; $$ = list; + list = NULL; + CLEANUP; +#undef CLEANUP } | PE_NAME sep_dc @@ -375,61 +362,19 @@ PE_NAME sep_dc struct list_head *list; int err; - err = parse_events_multi_pmu_add(_parse_state, $1, NULL, &list); - free($1); - if (err < 0) - YYABORT; - $$ = list; -} -| -PE_KERNEL_PMU_EVENT opt_pmu_config -{ - struct list_head *list; - int err; - - /* frees $2 */ - err = parse_events_multi_pmu_add(_parse_state, $1, $2, &list); - free($1); - if (err < 0) - YYABORT; - $$ = list; -} -| -PE_PMU_EVENT_FAKE sep_dc -{ - struct list_head *list; - int err; - - list = alloc_list(); - if (!list) - YYABORT; - - err = parse_events_add_pmu(_parse_state, list, $1, /*head_config=*/NULL, - /*auto_merge_stats=*/false); - free($1); + err = parse_events_multi_pmu_add(_parse_state, $1, NULL, &list, &@1); if (err < 0) { - free(list); - YYABORT; - } - $$ = list; -} -| -PE_PMU_EVENT_FAKE opt_pmu_config -{ - struct list_head *list; - int err; - - list = alloc_list(); - if (!list) - YYABORT; + struct parse_events_state *parse_state = _parse_state; + struct parse_events_error *error = parse_state->error; + char *help; - err = parse_events_add_pmu(_parse_state, list, $1, $2, /*auto_merge_stats=*/false); - free($1); - parse_events_terms__delete($2); - if (err < 0) { - free(list); - YYABORT; + if (asprintf(&help, "Unable to find event on a PMU of '%s'", $1) < 0) + help = NULL; + parse_events_error__handle(error, @1.first_column, strdup("Bad event name"), help); + free($1); + PE_ABORT(err); } + free($1); $$ = list; } @@ -448,12 +393,13 @@ value_sym '/' event_config '/' bool wildcard = (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_HW_CACHE); list = alloc_list(); - ABORT_ON(!list); + if (!list) + YYNOMEM; err = parse_events_add_numeric(_parse_state, list, type, config, $3, wildcard); parse_events_terms__delete($3); if (err) { free_list_evsel(list); - YYABORT; + PE_ABORT(err); 
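When the literal PMU lookup above fails, the grammar retries with the pattern "<name>*" and perf_pmu__match(), so e.g. a prefix shared by several uncore PMUs still expands to all of them. A rough analogue of that fallback using plain fnmatch(3) — the PMU names below are hypothetical and this is only a sketch of the matching step, not of event creation:

#define _GNU_SOURCE		/* for asprintf() */
#include <fnmatch.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *typed = "uncore_imc";	/* what the user wrote */
	const char *pmus[] = { "cpu", "uncore_imc_0", "uncore_imc_1" };
	char *pattern;

	if (asprintf(&pattern, "%s*", typed) < 0)
		return 1;
	for (unsigned int i = 0; i < 3; i++) {
		if (fnmatch(pattern, pmus[i], 0) == 0)
			printf("would add events from PMU %s\n", pmus[i]);
	}
	free(pattern);
	return 0;
}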
} $$ = list; } @@ -464,21 +410,28 @@ value_sym sep_slash_slash_dc int type = $1 >> 16; int config = $1 & 255; bool wildcard = (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_HW_CACHE); + int err; list = alloc_list(); - ABORT_ON(!list); - ABORT_ON(parse_events_add_numeric(_parse_state, list, type, config, - /*head_config=*/NULL, wildcard)); + if (!list) + YYNOMEM; + err = parse_events_add_numeric(_parse_state, list, type, config, /*head_config=*/NULL, wildcard); + if (err) + PE_ABORT(err); $$ = list; } | PE_VALUE_SYM_TOOL sep_slash_slash_dc { struct list_head *list; + int err; list = alloc_list(); - ABORT_ON(!list); - ABORT_ON(parse_events_add_tool(_parse_state, list, $1)); + if (!list) + YYNOMEM; + err = parse_events_add_tool(_parse_state, list, $1); + if (err) + YYNOMEM; $$ = list; } @@ -490,14 +443,16 @@ PE_LEGACY_CACHE opt_event_config int err; list = alloc_list(); - ABORT_ON(!list); + if (!list) + YYNOMEM; + err = parse_events_add_cache(list, &parse_state->idx, $1, parse_state, $2); parse_events_terms__delete($2); free($1); if (err) { free_list_evsel(list); - YYABORT; + PE_ABORT(err); } $$ = list; } @@ -509,14 +464,16 @@ PE_PREFIX_MEM PE_VALUE PE_BP_SLASH PE_VALUE PE_BP_COLON PE_MODIFIER_BP opt_event int err; list = alloc_list(); - ABORT_ON(!list); + if (!list) + YYNOMEM; + err = parse_events_add_breakpoint(_parse_state, list, $2, $6, $4, $7); parse_events_terms__delete($7); free($6); if (err) { free(list); - YYABORT; + PE_ABORT(err); } $$ = list; } @@ -527,13 +484,15 @@ PE_PREFIX_MEM PE_VALUE PE_BP_SLASH PE_VALUE opt_event_config int err; list = alloc_list(); - ABORT_ON(!list); + if (!list) + YYNOMEM; + err = parse_events_add_breakpoint(_parse_state, list, $2, NULL, $4, $5); parse_events_terms__delete($5); if (err) { free(list); - YYABORT; + PE_ABORT(err); } $$ = list; } @@ -544,14 +503,16 @@ PE_PREFIX_MEM PE_VALUE PE_BP_COLON PE_MODIFIER_BP opt_event_config int err; list = alloc_list(); - ABORT_ON(!list); + if (!list) + YYNOMEM; + err = parse_events_add_breakpoint(_parse_state, list, $2, $4, 0, $5); parse_events_terms__delete($5); free($4); if (err) { free(list); - YYABORT; + PE_ABORT(err); } $$ = list; } @@ -562,13 +523,14 @@ PE_PREFIX_MEM PE_VALUE opt_event_config int err; list = alloc_list(); - ABORT_ON(!list); + if (!list) + YYNOMEM; err = parse_events_add_breakpoint(_parse_state, list, $2, NULL, 0, $3); parse_events_terms__delete($3); if (err) { free(list); - YYABORT; + PE_ABORT(err); } $$ = list; } @@ -582,19 +544,20 @@ tracepoint_name opt_event_config int err; list = alloc_list(); - ABORT_ON(!list); + if (!list) + YYNOMEM; if (error) error->idx = @1.first_column; err = parse_events_add_tracepoint(list, &parse_state->idx, $1.sys, $1.event, - error, $2); + error, $2, &@1); parse_events_terms__delete($2); free($1.sys); free($1.event); if (err) { free(list); - YYABORT; + PE_ABORT(err); } $$ = list; } @@ -614,13 +577,14 @@ PE_VALUE ':' PE_VALUE opt_event_config int err; list = alloc_list(); - ABORT_ON(!list); + if (!list) + YYNOMEM; err = parse_events_add_numeric(_parse_state, list, (u32)$1, $3, $4, /*wildcard=*/false); parse_events_terms__delete($4); if (err) { free(list); - YYABORT; + PE_ABORT(err); } $$ = list; } @@ -633,52 +597,20 @@ PE_RAW opt_event_config u64 num; list = alloc_list(); - ABORT_ON(!list); + if (!list) + YYNOMEM; errno = 0; num = strtoull($1 + 1, NULL, 16); - ABORT_ON(errno); + /* Given the lexer will only give [a-fA-F0-9]+ a failure here should be impossible. 
*/ + if (errno) + YYABORT; free($1); err = parse_events_add_numeric(_parse_state, list, PERF_TYPE_RAW, num, $2, /*wildcard=*/false); parse_events_terms__delete($2); if (err) { free(list); - YYABORT; - } - $$ = list; -} - -event_bpf_file: -PE_BPF_OBJECT opt_event_config -{ - struct parse_events_state *parse_state = _parse_state; - struct list_head *list; - int err; - - list = alloc_list(); - ABORT_ON(!list); - err = parse_events_load_bpf(parse_state, list, $1, false, $2); - parse_events_terms__delete($2); - free($1); - if (err) { - free(list); - YYABORT; - } - $$ = list; -} -| -PE_BPF_SOURCE opt_event_config -{ - struct list_head *list; - int err; - - list = alloc_list(); - ABORT_ON(!list); - err = parse_events_load_bpf(_parse_state, list, $1, true, $2); - parse_events_terms__delete($2); - if (err) { - free(list); - YYABORT; + PE_ABORT(err); } $$ = list; } @@ -738,7 +670,8 @@ event_term struct list_head *head = malloc(sizeof(*head)); struct parse_events_term *term = $1; - ABORT_ON(!head); + if (!head) + YYNOMEM; INIT_LIST_HEAD(head); list_add_tail(&term->list, head); $$ = head; @@ -752,11 +685,12 @@ event_term: PE_RAW { struct parse_events_term *term; + int err = parse_events_term__str(&term, PARSE_EVENTS__TERM_TYPE_RAW, + strdup("raw"), $1, &@1, &@1); - if (parse_events_term__str(&term, PARSE_EVENTS__TERM_TYPE_RAW, - strdup("raw"), $1, &@1, &@1)) { + if (err) { free($1); - YYABORT; + PE_ABORT(err); } $$ = term; } @@ -764,12 +698,12 @@ PE_RAW name_or_raw '=' name_or_legacy { struct parse_events_term *term; + int err = parse_events_term__str(&term, PARSE_EVENTS__TERM_TYPE_USER, $1, $3, &@1, &@3); - if (parse_events_term__str(&term, PARSE_EVENTS__TERM_TYPE_USER, - $1, $3, &@1, &@3)) { + if (err) { free($1); free($3); - YYABORT; + PE_ABORT(err); } $$ = term; } @@ -777,11 +711,12 @@ name_or_raw '=' name_or_legacy name_or_raw '=' PE_VALUE { struct parse_events_term *term; + int err = parse_events_term__num(&term, PARSE_EVENTS__TERM_TYPE_USER, + $1, $3, /*novalue=*/false, &@1, &@3); - if (parse_events_term__num(&term, PARSE_EVENTS__TERM_TYPE_USER, - $1, $3, false, &@1, &@3)) { + if (err) { free($1); - YYABORT; + PE_ABORT(err); } $$ = term; } @@ -789,12 +724,13 @@ name_or_raw '=' PE_VALUE name_or_raw '=' PE_TERM_HW { struct parse_events_term *term; + int err = parse_events_term__str(&term, PARSE_EVENTS__TERM_TYPE_USER, + $1, $3.str, &@1, &@3); - if (parse_events_term__str(&term, PARSE_EVENTS__TERM_TYPE_USER, - $1, $3.str, &@1, &@3)) { + if (err) { free($1); free($3.str); - YYABORT; + PE_ABORT(err); } $$ = term; } @@ -802,11 +738,12 @@ name_or_raw '=' PE_TERM_HW PE_LEGACY_CACHE { struct parse_events_term *term; + int err = parse_events_term__num(&term, PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE, + $1, /*num=*/1, /*novalue=*/true, &@1, /*loc_val=*/NULL); - if (parse_events_term__num(&term, PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE, - $1, 1, true, &@1, NULL)) { + if (err) { free($1); - YYABORT; + PE_ABORT(err); } $$ = term; } @@ -814,11 +751,12 @@ PE_LEGACY_CACHE PE_NAME { struct parse_events_term *term; + int err = parse_events_term__num(&term, PARSE_EVENTS__TERM_TYPE_USER, + $1, /*num=*/1, /*novalue=*/true, &@1, /*loc_val=*/NULL); - if (parse_events_term__num(&term, PARSE_EVENTS__TERM_TYPE_USER, - $1, 1, true, &@1, NULL)) { + if (err) { free($1); - YYABORT; + PE_ABORT(err); } $$ = term; } @@ -826,11 +764,13 @@ PE_NAME PE_TERM_HW { struct parse_events_term *term; + int err = parse_events_term__num(&term, PARSE_EVENTS__TERM_TYPE_HARDWARE, + $1.str, $1.num & 255, /*novalue=*/false, + &@1, /*loc_val=*/NULL); - if 
(parse_events_term__num(&term, PARSE_EVENTS__TERM_TYPE_HARDWARE, - $1.str, $1.num & 255, false, &@1, NULL)) { + if (err) { free($1.str); - YYABORT; + PE_ABORT(err); } $$ = term; } @@ -838,10 +778,12 @@ PE_TERM_HW PE_TERM '=' name_or_legacy { struct parse_events_term *term; + int err = parse_events_term__str(&term, (enum parse_events__term_type)$1, + /*config=*/NULL, $3, &@1, &@3); - if (parse_events_term__str(&term, (int)$1, NULL, $3, &@1, &@3)) { + if (err) { free($3); - YYABORT; + PE_ABORT(err); } $$ = term; } @@ -849,10 +791,12 @@ PE_TERM '=' name_or_legacy PE_TERM '=' PE_TERM_HW { struct parse_events_term *term; + int err = parse_events_term__str(&term, (enum parse_events__term_type)$1, + /*config=*/NULL, $3.str, &@1, &@3); - if (parse_events_term__str(&term, (int)$1, NULL, $3.str, &@1, &@3)) { + if (err) { free($3.str); - YYABORT; + PE_ABORT(err); } $$ = term; } @@ -860,53 +804,39 @@ PE_TERM '=' PE_TERM_HW PE_TERM '=' PE_TERM { struct parse_events_term *term; + int err = parse_events_term__term(&term, + (enum parse_events__term_type)$1, + (enum parse_events__term_type)$3, + &@1, &@3); + + if (err) + PE_ABORT(err); - ABORT_ON(parse_events_term__term(&term, (int)$1, (int)$3, &@1, &@3)); $$ = term; } | PE_TERM '=' PE_VALUE { struct parse_events_term *term; + int err = parse_events_term__num(&term, (enum parse_events__term_type)$1, + /*config=*/NULL, $3, /*novalue=*/false, &@1, &@3); + + if (err) + PE_ABORT(err); - ABORT_ON(parse_events_term__num(&term, (int)$1, NULL, $3, false, &@1, &@3)); $$ = term; } | PE_TERM { struct parse_events_term *term; + int err = parse_events_term__num(&term, (enum parse_events__term_type)$1, + /*config=*/NULL, /*num=*/1, /*novalue=*/true, + &@1, /*loc_val=*/NULL); - ABORT_ON(parse_events_term__num(&term, (int)$1, NULL, 1, true, &@1, NULL)); - $$ = term; -} -| -name_or_raw array '=' name_or_legacy -{ - struct parse_events_term *term; + if (err) + PE_ABORT(err); - if (parse_events_term__str(&term, PARSE_EVENTS__TERM_TYPE_USER, - $1, $4, &@1, &@4)) { - free($1); - free($4); - free($2.ranges); - YYABORT; - } - term->array = $2; - $$ = term; -} -| -name_or_raw array '=' PE_VALUE -{ - struct parse_events_term *term; - - if (parse_events_term__num(&term, PARSE_EVENTS__TERM_TYPE_USER, - $1, $4, false, &@1, &@4)) { - free($1); - free($2.ranges); - YYABORT; - } - term->array = $2; $$ = term; } | @@ -914,73 +844,19 @@ PE_DRV_CFG_TERM { struct parse_events_term *term; char *config = strdup($1); + int err; - ABORT_ON(!config); - if (parse_events_term__str(&term, PARSE_EVENTS__TERM_TYPE_DRV_CFG, - config, $1, &@1, NULL)) { + if (!config) + YYNOMEM; + err = parse_events_term__str(&term, PARSE_EVENTS__TERM_TYPE_DRV_CFG, config, $1, &@1, NULL); + if (err) { free($1); free(config); - YYABORT; + PE_ABORT(err); } $$ = term; } -array: -'[' array_terms ']' -{ - $$ = $2; -} -| -PE_ARRAY_ALL -{ - $$.nr_ranges = 0; - $$.ranges = NULL; -} - -array_terms: -array_terms ',' array_term -{ - struct parse_events_array new_array; - - new_array.nr_ranges = $1.nr_ranges + $3.nr_ranges; - new_array.ranges = realloc($1.ranges, - sizeof(new_array.ranges[0]) * - new_array.nr_ranges); - ABORT_ON(!new_array.ranges); - memcpy(&new_array.ranges[$1.nr_ranges], $3.ranges, - $3.nr_ranges * sizeof(new_array.ranges[0])); - free($3.ranges); - $$ = new_array; -} -| -array_term - -array_term: -PE_VALUE -{ - struct parse_events_array array; - - array.nr_ranges = 1; - array.ranges = malloc(sizeof(array.ranges[0])); - ABORT_ON(!array.ranges); - array.ranges[0].start = $1; - array.ranges[0].length = 1; - $$ = 
array; -} -| -PE_VALUE PE_ARRAY_RANGE PE_VALUE -{ - struct parse_events_array array; - - ABORT_ON($3 < $1); - array.nr_ranges = 1; - array.ranges = malloc(sizeof(array.ranges[0])); - ABORT_ON(!array.ranges); - array.ranges[0].start = $1; - array.ranges[0].length = $3 - $1 + 1; - $$ = array; -} - sep_dc: ':' | sep_slash_slash_dc: '/' '/' | ':' | diff --git a/tools/perf/util/perf-regs-arch/Build b/tools/perf/util/perf-regs-arch/Build new file mode 100644 index 000000000000..d9d596d330a7 --- /dev/null +++ b/tools/perf/util/perf-regs-arch/Build @@ -0,0 +1,9 @@ +perf-y += perf_regs_aarch64.o +perf-y += perf_regs_arm.o +perf-y += perf_regs_csky.o +perf-y += perf_regs_loongarch.o +perf-y += perf_regs_mips.o +perf-y += perf_regs_powerpc.o +perf-y += perf_regs_riscv.o +perf-y += perf_regs_s390.o +perf-y += perf_regs_x86.o diff --git a/tools/perf/util/perf-regs-arch/perf_regs_aarch64.c b/tools/perf/util/perf-regs-arch/perf_regs_aarch64.c new file mode 100644 index 000000000000..696566c54768 --- /dev/null +++ b/tools/perf/util/perf-regs-arch/perf_regs_aarch64.c @@ -0,0 +1,96 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifdef HAVE_PERF_REGS_SUPPORT + +#include "../perf_regs.h" +#include "../../../arch/arm64/include/uapi/asm/perf_regs.h" + +const char *__perf_reg_name_arm64(int id) +{ + switch (id) { + case PERF_REG_ARM64_X0: + return "x0"; + case PERF_REG_ARM64_X1: + return "x1"; + case PERF_REG_ARM64_X2: + return "x2"; + case PERF_REG_ARM64_X3: + return "x3"; + case PERF_REG_ARM64_X4: + return "x4"; + case PERF_REG_ARM64_X5: + return "x5"; + case PERF_REG_ARM64_X6: + return "x6"; + case PERF_REG_ARM64_X7: + return "x7"; + case PERF_REG_ARM64_X8: + return "x8"; + case PERF_REG_ARM64_X9: + return "x9"; + case PERF_REG_ARM64_X10: + return "x10"; + case PERF_REG_ARM64_X11: + return "x11"; + case PERF_REG_ARM64_X12: + return "x12"; + case PERF_REG_ARM64_X13: + return "x13"; + case PERF_REG_ARM64_X14: + return "x14"; + case PERF_REG_ARM64_X15: + return "x15"; + case PERF_REG_ARM64_X16: + return "x16"; + case PERF_REG_ARM64_X17: + return "x17"; + case PERF_REG_ARM64_X18: + return "x18"; + case PERF_REG_ARM64_X19: + return "x19"; + case PERF_REG_ARM64_X20: + return "x20"; + case PERF_REG_ARM64_X21: + return "x21"; + case PERF_REG_ARM64_X22: + return "x22"; + case PERF_REG_ARM64_X23: + return "x23"; + case PERF_REG_ARM64_X24: + return "x24"; + case PERF_REG_ARM64_X25: + return "x25"; + case PERF_REG_ARM64_X26: + return "x26"; + case PERF_REG_ARM64_X27: + return "x27"; + case PERF_REG_ARM64_X28: + return "x28"; + case PERF_REG_ARM64_X29: + return "x29"; + case PERF_REG_ARM64_SP: + return "sp"; + case PERF_REG_ARM64_LR: + return "lr"; + case PERF_REG_ARM64_PC: + return "pc"; + case PERF_REG_ARM64_VG: + return "vg"; + default: + return NULL; + } + + return NULL; +} + +uint64_t __perf_reg_ip_arm64(void) +{ + return PERF_REG_ARM64_PC; +} + +uint64_t __perf_reg_sp_arm64(void) +{ + return PERF_REG_ARM64_SP; +} + +#endif diff --git a/tools/perf/util/perf-regs-arch/perf_regs_arm.c b/tools/perf/util/perf-regs-arch/perf_regs_arm.c new file mode 100644 index 000000000000..700fd07cd2aa --- /dev/null +++ b/tools/perf/util/perf-regs-arch/perf_regs_arm.c @@ -0,0 +1,60 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifdef HAVE_PERF_REGS_SUPPORT + +#include "../perf_regs.h" +#include "../../../arch/arm/include/uapi/asm/perf_regs.h" + +const char *__perf_reg_name_arm(int id) +{ + switch (id) { + case PERF_REG_ARM_R0: + return "r0"; + case PERF_REG_ARM_R1: + return "r1"; + case PERF_REG_ARM_R2: + return "r2"; + case PERF_REG_ARM_R3: 
+ return "r3"; + case PERF_REG_ARM_R4: + return "r4"; + case PERF_REG_ARM_R5: + return "r5"; + case PERF_REG_ARM_R6: + return "r6"; + case PERF_REG_ARM_R7: + return "r7"; + case PERF_REG_ARM_R8: + return "r8"; + case PERF_REG_ARM_R9: + return "r9"; + case PERF_REG_ARM_R10: + return "r10"; + case PERF_REG_ARM_FP: + return "fp"; + case PERF_REG_ARM_IP: + return "ip"; + case PERF_REG_ARM_SP: + return "sp"; + case PERF_REG_ARM_LR: + return "lr"; + case PERF_REG_ARM_PC: + return "pc"; + default: + return NULL; + } + + return NULL; +} + +uint64_t __perf_reg_ip_arm(void) +{ + return PERF_REG_ARM_PC; +} + +uint64_t __perf_reg_sp_arm(void) +{ + return PERF_REG_ARM_SP; +} + +#endif diff --git a/tools/perf/util/perf-regs-arch/perf_regs_csky.c b/tools/perf/util/perf-regs-arch/perf_regs_csky.c new file mode 100644 index 000000000000..a2841094e096 --- /dev/null +++ b/tools/perf/util/perf-regs-arch/perf_regs_csky.c @@ -0,0 +1,100 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifdef HAVE_PERF_REGS_SUPPORT + +#include "../perf_regs.h" +#include "../../arch/csky/include/uapi/asm/perf_regs.h" + +const char *__perf_reg_name_csky(int id) +{ + switch (id) { + case PERF_REG_CSKY_A0: + return "a0"; + case PERF_REG_CSKY_A1: + return "a1"; + case PERF_REG_CSKY_A2: + return "a2"; + case PERF_REG_CSKY_A3: + return "a3"; + case PERF_REG_CSKY_REGS0: + return "regs0"; + case PERF_REG_CSKY_REGS1: + return "regs1"; + case PERF_REG_CSKY_REGS2: + return "regs2"; + case PERF_REG_CSKY_REGS3: + return "regs3"; + case PERF_REG_CSKY_REGS4: + return "regs4"; + case PERF_REG_CSKY_REGS5: + return "regs5"; + case PERF_REG_CSKY_REGS6: + return "regs6"; + case PERF_REG_CSKY_REGS7: + return "regs7"; + case PERF_REG_CSKY_REGS8: + return "regs8"; + case PERF_REG_CSKY_REGS9: + return "regs9"; + case PERF_REG_CSKY_SP: + return "sp"; + case PERF_REG_CSKY_LR: + return "lr"; + case PERF_REG_CSKY_PC: + return "pc"; +#if defined(__CSKYABIV2__) + case PERF_REG_CSKY_EXREGS0: + return "exregs0"; + case PERF_REG_CSKY_EXREGS1: + return "exregs1"; + case PERF_REG_CSKY_EXREGS2: + return "exregs2"; + case PERF_REG_CSKY_EXREGS3: + return "exregs3"; + case PERF_REG_CSKY_EXREGS4: + return "exregs4"; + case PERF_REG_CSKY_EXREGS5: + return "exregs5"; + case PERF_REG_CSKY_EXREGS6: + return "exregs6"; + case PERF_REG_CSKY_EXREGS7: + return "exregs7"; + case PERF_REG_CSKY_EXREGS8: + return "exregs8"; + case PERF_REG_CSKY_EXREGS9: + return "exregs9"; + case PERF_REG_CSKY_EXREGS10: + return "exregs10"; + case PERF_REG_CSKY_EXREGS11: + return "exregs11"; + case PERF_REG_CSKY_EXREGS12: + return "exregs12"; + case PERF_REG_CSKY_EXREGS13: + return "exregs13"; + case PERF_REG_CSKY_EXREGS14: + return "exregs14"; + case PERF_REG_CSKY_TLS: + return "tls"; + case PERF_REG_CSKY_HI: + return "hi"; + case PERF_REG_CSKY_LO: + return "lo"; +#endif + default: + return NULL; + } + + return NULL; +} + +uint64_t __perf_reg_ip_csky(void) +{ + return PERF_REG_CSKY_PC; +} + +uint64_t __perf_reg_sp_csky(void) +{ + return PERF_REG_CSKY_SP; +} + +#endif diff --git a/tools/perf/util/perf-regs-arch/perf_regs_loongarch.c b/tools/perf/util/perf-regs-arch/perf_regs_loongarch.c new file mode 100644 index 000000000000..a9ba0f934123 --- /dev/null +++ b/tools/perf/util/perf-regs-arch/perf_regs_loongarch.c @@ -0,0 +1,91 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifdef HAVE_PERF_REGS_SUPPORT + +#include "../perf_regs.h" +#include "../../../arch/loongarch/include/uapi/asm/perf_regs.h" + +const char *__perf_reg_name_loongarch(int id) +{ + switch (id) { + case PERF_REG_LOONGARCH_PC: + return "PC"; + 
case PERF_REG_LOONGARCH_R1: + return "%r1"; + case PERF_REG_LOONGARCH_R2: + return "%r2"; + case PERF_REG_LOONGARCH_R3: + return "%r3"; + case PERF_REG_LOONGARCH_R4: + return "%r4"; + case PERF_REG_LOONGARCH_R5: + return "%r5"; + case PERF_REG_LOONGARCH_R6: + return "%r6"; + case PERF_REG_LOONGARCH_R7: + return "%r7"; + case PERF_REG_LOONGARCH_R8: + return "%r8"; + case PERF_REG_LOONGARCH_R9: + return "%r9"; + case PERF_REG_LOONGARCH_R10: + return "%r10"; + case PERF_REG_LOONGARCH_R11: + return "%r11"; + case PERF_REG_LOONGARCH_R12: + return "%r12"; + case PERF_REG_LOONGARCH_R13: + return "%r13"; + case PERF_REG_LOONGARCH_R14: + return "%r14"; + case PERF_REG_LOONGARCH_R15: + return "%r15"; + case PERF_REG_LOONGARCH_R16: + return "%r16"; + case PERF_REG_LOONGARCH_R17: + return "%r17"; + case PERF_REG_LOONGARCH_R18: + return "%r18"; + case PERF_REG_LOONGARCH_R19: + return "%r19"; + case PERF_REG_LOONGARCH_R20: + return "%r20"; + case PERF_REG_LOONGARCH_R21: + return "%r21"; + case PERF_REG_LOONGARCH_R22: + return "%r22"; + case PERF_REG_LOONGARCH_R23: + return "%r23"; + case PERF_REG_LOONGARCH_R24: + return "%r24"; + case PERF_REG_LOONGARCH_R25: + return "%r25"; + case PERF_REG_LOONGARCH_R26: + return "%r26"; + case PERF_REG_LOONGARCH_R27: + return "%r27"; + case PERF_REG_LOONGARCH_R28: + return "%r28"; + case PERF_REG_LOONGARCH_R29: + return "%r29"; + case PERF_REG_LOONGARCH_R30: + return "%r30"; + case PERF_REG_LOONGARCH_R31: + return "%r31"; + default: + break; + } + return NULL; +} + +uint64_t __perf_reg_ip_loongarch(void) +{ + return PERF_REG_LOONGARCH_PC; +} + +uint64_t __perf_reg_sp_loongarch(void) +{ + return PERF_REG_LOONGARCH_R3; +} + +#endif diff --git a/tools/perf/util/perf-regs-arch/perf_regs_mips.c b/tools/perf/util/perf-regs-arch/perf_regs_mips.c new file mode 100644 index 000000000000..5a45830cfbf5 --- /dev/null +++ b/tools/perf/util/perf-regs-arch/perf_regs_mips.c @@ -0,0 +1,87 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifdef HAVE_PERF_REGS_SUPPORT + +#include "../perf_regs.h" +#include "../../../arch/mips/include/uapi/asm/perf_regs.h" + +const char *__perf_reg_name_mips(int id) +{ + switch (id) { + case PERF_REG_MIPS_PC: + return "PC"; + case PERF_REG_MIPS_R1: + return "$1"; + case PERF_REG_MIPS_R2: + return "$2"; + case PERF_REG_MIPS_R3: + return "$3"; + case PERF_REG_MIPS_R4: + return "$4"; + case PERF_REG_MIPS_R5: + return "$5"; + case PERF_REG_MIPS_R6: + return "$6"; + case PERF_REG_MIPS_R7: + return "$7"; + case PERF_REG_MIPS_R8: + return "$8"; + case PERF_REG_MIPS_R9: + return "$9"; + case PERF_REG_MIPS_R10: + return "$10"; + case PERF_REG_MIPS_R11: + return "$11"; + case PERF_REG_MIPS_R12: + return "$12"; + case PERF_REG_MIPS_R13: + return "$13"; + case PERF_REG_MIPS_R14: + return "$14"; + case PERF_REG_MIPS_R15: + return "$15"; + case PERF_REG_MIPS_R16: + return "$16"; + case PERF_REG_MIPS_R17: + return "$17"; + case PERF_REG_MIPS_R18: + return "$18"; + case PERF_REG_MIPS_R19: + return "$19"; + case PERF_REG_MIPS_R20: + return "$20"; + case PERF_REG_MIPS_R21: + return "$21"; + case PERF_REG_MIPS_R22: + return "$22"; + case PERF_REG_MIPS_R23: + return "$23"; + case PERF_REG_MIPS_R24: + return "$24"; + case PERF_REG_MIPS_R25: + return "$25"; + case PERF_REG_MIPS_R28: + return "$28"; + case PERF_REG_MIPS_R29: + return "$29"; + case PERF_REG_MIPS_R30: + return "$30"; + case PERF_REG_MIPS_R31: + return "$31"; + default: + break; + } + return NULL; +} + +uint64_t __perf_reg_ip_mips(void) +{ + return PERF_REG_MIPS_PC; +} + +uint64_t __perf_reg_sp_mips(void) +{ + return 
PERF_REG_MIPS_R29; +} + +#endif diff --git a/tools/perf/util/perf-regs-arch/perf_regs_powerpc.c b/tools/perf/util/perf-regs-arch/perf_regs_powerpc.c new file mode 100644 index 000000000000..1f0d682db74a --- /dev/null +++ b/tools/perf/util/perf-regs-arch/perf_regs_powerpc.c @@ -0,0 +1,145 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifdef HAVE_PERF_REGS_SUPPORT + +#include "../perf_regs.h" +#include "../../../arch/powerpc/include/uapi/asm/perf_regs.h" + +const char *__perf_reg_name_powerpc(int id) +{ + switch (id) { + case PERF_REG_POWERPC_R0: + return "r0"; + case PERF_REG_POWERPC_R1: + return "r1"; + case PERF_REG_POWERPC_R2: + return "r2"; + case PERF_REG_POWERPC_R3: + return "r3"; + case PERF_REG_POWERPC_R4: + return "r4"; + case PERF_REG_POWERPC_R5: + return "r5"; + case PERF_REG_POWERPC_R6: + return "r6"; + case PERF_REG_POWERPC_R7: + return "r7"; + case PERF_REG_POWERPC_R8: + return "r8"; + case PERF_REG_POWERPC_R9: + return "r9"; + case PERF_REG_POWERPC_R10: + return "r10"; + case PERF_REG_POWERPC_R11: + return "r11"; + case PERF_REG_POWERPC_R12: + return "r12"; + case PERF_REG_POWERPC_R13: + return "r13"; + case PERF_REG_POWERPC_R14: + return "r14"; + case PERF_REG_POWERPC_R15: + return "r15"; + case PERF_REG_POWERPC_R16: + return "r16"; + case PERF_REG_POWERPC_R17: + return "r17"; + case PERF_REG_POWERPC_R18: + return "r18"; + case PERF_REG_POWERPC_R19: + return "r19"; + case PERF_REG_POWERPC_R20: + return "r20"; + case PERF_REG_POWERPC_R21: + return "r21"; + case PERF_REG_POWERPC_R22: + return "r22"; + case PERF_REG_POWERPC_R23: + return "r23"; + case PERF_REG_POWERPC_R24: + return "r24"; + case PERF_REG_POWERPC_R25: + return "r25"; + case PERF_REG_POWERPC_R26: + return "r26"; + case PERF_REG_POWERPC_R27: + return "r27"; + case PERF_REG_POWERPC_R28: + return "r28"; + case PERF_REG_POWERPC_R29: + return "r29"; + case PERF_REG_POWERPC_R30: + return "r30"; + case PERF_REG_POWERPC_R31: + return "r31"; + case PERF_REG_POWERPC_NIP: + return "nip"; + case PERF_REG_POWERPC_MSR: + return "msr"; + case PERF_REG_POWERPC_ORIG_R3: + return "orig_r3"; + case PERF_REG_POWERPC_CTR: + return "ctr"; + case PERF_REG_POWERPC_LINK: + return "link"; + case PERF_REG_POWERPC_XER: + return "xer"; + case PERF_REG_POWERPC_CCR: + return "ccr"; + case PERF_REG_POWERPC_SOFTE: + return "softe"; + case PERF_REG_POWERPC_TRAP: + return "trap"; + case PERF_REG_POWERPC_DAR: + return "dar"; + case PERF_REG_POWERPC_DSISR: + return "dsisr"; + case PERF_REG_POWERPC_SIER: + return "sier"; + case PERF_REG_POWERPC_MMCRA: + return "mmcra"; + case PERF_REG_POWERPC_MMCR0: + return "mmcr0"; + case PERF_REG_POWERPC_MMCR1: + return "mmcr1"; + case PERF_REG_POWERPC_MMCR2: + return "mmcr2"; + case PERF_REG_POWERPC_MMCR3: + return "mmcr3"; + case PERF_REG_POWERPC_SIER2: + return "sier2"; + case PERF_REG_POWERPC_SIER3: + return "sier3"; + case PERF_REG_POWERPC_PMC1: + return "pmc1"; + case PERF_REG_POWERPC_PMC2: + return "pmc2"; + case PERF_REG_POWERPC_PMC3: + return "pmc3"; + case PERF_REG_POWERPC_PMC4: + return "pmc4"; + case PERF_REG_POWERPC_PMC5: + return "pmc5"; + case PERF_REG_POWERPC_PMC6: + return "pmc6"; + case PERF_REG_POWERPC_SDAR: + return "sdar"; + case PERF_REG_POWERPC_SIAR: + return "siar"; + default: + break; + } + return NULL; +} + +uint64_t __perf_reg_ip_powerpc(void) +{ + return PERF_REG_POWERPC_NIP; +} + +uint64_t __perf_reg_sp_powerpc(void) +{ + return PERF_REG_POWERPC_R1; +} + +#endif diff --git a/tools/perf/util/perf-regs-arch/perf_regs_riscv.c b/tools/perf/util/perf-regs-arch/perf_regs_riscv.c new file mode 
100644 index 000000000000..e432630be4c5 --- /dev/null +++ b/tools/perf/util/perf-regs-arch/perf_regs_riscv.c @@ -0,0 +1,92 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifdef HAVE_PERF_REGS_SUPPORT + +#include "../perf_regs.h" +#include "../../../arch/riscv/include/uapi/asm/perf_regs.h" + +const char *__perf_reg_name_riscv(int id) +{ + switch (id) { + case PERF_REG_RISCV_PC: + return "pc"; + case PERF_REG_RISCV_RA: + return "ra"; + case PERF_REG_RISCV_SP: + return "sp"; + case PERF_REG_RISCV_GP: + return "gp"; + case PERF_REG_RISCV_TP: + return "tp"; + case PERF_REG_RISCV_T0: + return "t0"; + case PERF_REG_RISCV_T1: + return "t1"; + case PERF_REG_RISCV_T2: + return "t2"; + case PERF_REG_RISCV_S0: + return "s0"; + case PERF_REG_RISCV_S1: + return "s1"; + case PERF_REG_RISCV_A0: + return "a0"; + case PERF_REG_RISCV_A1: + return "a1"; + case PERF_REG_RISCV_A2: + return "a2"; + case PERF_REG_RISCV_A3: + return "a3"; + case PERF_REG_RISCV_A4: + return "a4"; + case PERF_REG_RISCV_A5: + return "a5"; + case PERF_REG_RISCV_A6: + return "a6"; + case PERF_REG_RISCV_A7: + return "a7"; + case PERF_REG_RISCV_S2: + return "s2"; + case PERF_REG_RISCV_S3: + return "s3"; + case PERF_REG_RISCV_S4: + return "s4"; + case PERF_REG_RISCV_S5: + return "s5"; + case PERF_REG_RISCV_S6: + return "s6"; + case PERF_REG_RISCV_S7: + return "s7"; + case PERF_REG_RISCV_S8: + return "s8"; + case PERF_REG_RISCV_S9: + return "s9"; + case PERF_REG_RISCV_S10: + return "s10"; + case PERF_REG_RISCV_S11: + return "s11"; + case PERF_REG_RISCV_T3: + return "t3"; + case PERF_REG_RISCV_T4: + return "t4"; + case PERF_REG_RISCV_T5: + return "t5"; + case PERF_REG_RISCV_T6: + return "t6"; + default: + return NULL; + } + + return NULL; +} + +uint64_t __perf_reg_ip_riscv(void) +{ + return PERF_REG_RISCV_PC; +} + +uint64_t __perf_reg_sp_riscv(void) +{ + return PERF_REG_RISCV_SP; +} + +#endif diff --git a/tools/perf/util/perf-regs-arch/perf_regs_s390.c b/tools/perf/util/perf-regs-arch/perf_regs_s390.c new file mode 100644 index 000000000000..1c7a46db778c --- /dev/null +++ b/tools/perf/util/perf-regs-arch/perf_regs_s390.c @@ -0,0 +1,96 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifdef HAVE_PERF_REGS_SUPPORT + +#include "../perf_regs.h" +#include "../../../arch/s390/include/uapi/asm/perf_regs.h" + +const char *__perf_reg_name_s390(int id) +{ + switch (id) { + case PERF_REG_S390_R0: + return "R0"; + case PERF_REG_S390_R1: + return "R1"; + case PERF_REG_S390_R2: + return "R2"; + case PERF_REG_S390_R3: + return "R3"; + case PERF_REG_S390_R4: + return "R4"; + case PERF_REG_S390_R5: + return "R5"; + case PERF_REG_S390_R6: + return "R6"; + case PERF_REG_S390_R7: + return "R7"; + case PERF_REG_S390_R8: + return "R8"; + case PERF_REG_S390_R9: + return "R9"; + case PERF_REG_S390_R10: + return "R10"; + case PERF_REG_S390_R11: + return "R11"; + case PERF_REG_S390_R12: + return "R12"; + case PERF_REG_S390_R13: + return "R13"; + case PERF_REG_S390_R14: + return "R14"; + case PERF_REG_S390_R15: + return "R15"; + case PERF_REG_S390_FP0: + return "FP0"; + case PERF_REG_S390_FP1: + return "FP1"; + case PERF_REG_S390_FP2: + return "FP2"; + case PERF_REG_S390_FP3: + return "FP3"; + case PERF_REG_S390_FP4: + return "FP4"; + case PERF_REG_S390_FP5: + return "FP5"; + case PERF_REG_S390_FP6: + return "FP6"; + case PERF_REG_S390_FP7: + return "FP7"; + case PERF_REG_S390_FP8: + return "FP8"; + case PERF_REG_S390_FP9: + return "FP9"; + case PERF_REG_S390_FP10: + return "FP10"; + case PERF_REG_S390_FP11: + return "FP11"; + case PERF_REG_S390_FP12: + return "FP12"; + case 
PERF_REG_S390_FP13: + return "FP13"; + case PERF_REG_S390_FP14: + return "FP14"; + case PERF_REG_S390_FP15: + return "FP15"; + case PERF_REG_S390_MASK: + return "MASK"; + case PERF_REG_S390_PC: + return "PC"; + default: + return NULL; + } + + return NULL; +} + +uint64_t __perf_reg_ip_s390(void) +{ + return PERF_REG_S390_PC; +} + +uint64_t __perf_reg_sp_s390(void) +{ + return PERF_REG_S390_R15; +} + +#endif diff --git a/tools/perf/util/perf-regs-arch/perf_regs_x86.c b/tools/perf/util/perf-regs-arch/perf_regs_x86.c new file mode 100644 index 000000000000..873c620f0634 --- /dev/null +++ b/tools/perf/util/perf-regs-arch/perf_regs_x86.c @@ -0,0 +1,98 @@ +// SPDX-License-Identifier: GPL-2.0 + +#ifdef HAVE_PERF_REGS_SUPPORT + +#include "../perf_regs.h" +#include "../../../arch/x86/include/uapi/asm/perf_regs.h" + +const char *__perf_reg_name_x86(int id) +{ + switch (id) { + case PERF_REG_X86_AX: + return "AX"; + case PERF_REG_X86_BX: + return "BX"; + case PERF_REG_X86_CX: + return "CX"; + case PERF_REG_X86_DX: + return "DX"; + case PERF_REG_X86_SI: + return "SI"; + case PERF_REG_X86_DI: + return "DI"; + case PERF_REG_X86_BP: + return "BP"; + case PERF_REG_X86_SP: + return "SP"; + case PERF_REG_X86_IP: + return "IP"; + case PERF_REG_X86_FLAGS: + return "FLAGS"; + case PERF_REG_X86_CS: + return "CS"; + case PERF_REG_X86_SS: + return "SS"; + case PERF_REG_X86_DS: + return "DS"; + case PERF_REG_X86_ES: + return "ES"; + case PERF_REG_X86_FS: + return "FS"; + case PERF_REG_X86_GS: + return "GS"; + case PERF_REG_X86_R8: + return "R8"; + case PERF_REG_X86_R9: + return "R9"; + case PERF_REG_X86_R10: + return "R10"; + case PERF_REG_X86_R11: + return "R11"; + case PERF_REG_X86_R12: + return "R12"; + case PERF_REG_X86_R13: + return "R13"; + case PERF_REG_X86_R14: + return "R14"; + case PERF_REG_X86_R15: + return "R15"; + +#define XMM(x) \ + case PERF_REG_X86_XMM ## x: \ + case PERF_REG_X86_XMM ## x + 1: \ + return "XMM" #x; + XMM(0) + XMM(1) + XMM(2) + XMM(3) + XMM(4) + XMM(5) + XMM(6) + XMM(7) + XMM(8) + XMM(9) + XMM(10) + XMM(11) + XMM(12) + XMM(13) + XMM(14) + XMM(15) +#undef XMM + default: + return NULL; + } + + return NULL; +} + +uint64_t __perf_reg_ip_x86(void) +{ + return PERF_REG_X86_IP; +} + +uint64_t __perf_reg_sp_x86(void) +{ + return PERF_REG_X86_SP; +} + +#endif diff --git a/tools/perf/util/perf_regs.c b/tools/perf/util/perf_regs.c index 9bdbaa37f813..e2275856b570 100644 --- a/tools/perf/util/perf_regs.c +++ b/tools/perf/util/perf_regs.c @@ -3,6 +3,7 @@ #include <string.h> #include "perf_regs.h" #include "util/sample.h" +#include "debug.h" int __weak arch_sdt_arg_parse_op(char *old_op __maybe_unused, char **new_op __maybe_unused) @@ -12,732 +13,16 @@ int __weak arch_sdt_arg_parse_op(char *old_op __maybe_unused, uint64_t __weak arch__intr_reg_mask(void) { - return PERF_REGS_MASK; + return 0; } uint64_t __weak arch__user_reg_mask(void) { - return PERF_REGS_MASK; + return 0; } #ifdef HAVE_PERF_REGS_SUPPORT -#define perf_event_arm_regs perf_event_arm64_regs -#include "../../arch/arm64/include/uapi/asm/perf_regs.h" -#undef perf_event_arm_regs - -#include "../../arch/arm/include/uapi/asm/perf_regs.h" -#include "../../arch/csky/include/uapi/asm/perf_regs.h" -#include "../../arch/loongarch/include/uapi/asm/perf_regs.h" -#include "../../arch/mips/include/uapi/asm/perf_regs.h" -#include "../../arch/powerpc/include/uapi/asm/perf_regs.h" -#include "../../arch/riscv/include/uapi/asm/perf_regs.h" -#include "../../arch/s390/include/uapi/asm/perf_regs.h" -#include "../../arch/x86/include/uapi/asm/perf_regs.h" - 
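
The per-architecture files above are always compiled (guarded only by HAVE_PERF_REGS_SUPPORT), so register names, IP and SP indices for any target architecture can be resolved at runtime from a single perf binary. A minimal usage sketch, assuming it is built inside tools/perf so the declarations from util/perf_regs.h shown further down in this patch are visible; the function name show_unwind_regs is illustrative only:

#include <stdio.h>
#include "util/perf_regs.h"	/* assumption: compiled within tools/perf */

static void show_unwind_regs(const char *arch)
{
	int ip = (int)perf_arch_reg_ip(arch);	/* e.g. PERF_REG_ARM64_PC for "arm64" */
	int sp = (int)perf_arch_reg_sp(arch);	/* e.g. PERF_REG_ARM64_SP for "arm64" */

	/* prints e.g. "arm64: ip=pc sp=sp" or "x86: ip=IP sp=SP" */
	printf("%s: ip=%s sp=%s\n", arch,
	       perf_reg_name(ip, arch) ?: "unknown",
	       perf_reg_name(sp, arch) ?: "unknown");
}
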
-static const char *__perf_reg_name_arm64(int id) -{ - switch (id) { - case PERF_REG_ARM64_X0: - return "x0"; - case PERF_REG_ARM64_X1: - return "x1"; - case PERF_REG_ARM64_X2: - return "x2"; - case PERF_REG_ARM64_X3: - return "x3"; - case PERF_REG_ARM64_X4: - return "x4"; - case PERF_REG_ARM64_X5: - return "x5"; - case PERF_REG_ARM64_X6: - return "x6"; - case PERF_REG_ARM64_X7: - return "x7"; - case PERF_REG_ARM64_X8: - return "x8"; - case PERF_REG_ARM64_X9: - return "x9"; - case PERF_REG_ARM64_X10: - return "x10"; - case PERF_REG_ARM64_X11: - return "x11"; - case PERF_REG_ARM64_X12: - return "x12"; - case PERF_REG_ARM64_X13: - return "x13"; - case PERF_REG_ARM64_X14: - return "x14"; - case PERF_REG_ARM64_X15: - return "x15"; - case PERF_REG_ARM64_X16: - return "x16"; - case PERF_REG_ARM64_X17: - return "x17"; - case PERF_REG_ARM64_X18: - return "x18"; - case PERF_REG_ARM64_X19: - return "x19"; - case PERF_REG_ARM64_X20: - return "x20"; - case PERF_REG_ARM64_X21: - return "x21"; - case PERF_REG_ARM64_X22: - return "x22"; - case PERF_REG_ARM64_X23: - return "x23"; - case PERF_REG_ARM64_X24: - return "x24"; - case PERF_REG_ARM64_X25: - return "x25"; - case PERF_REG_ARM64_X26: - return "x26"; - case PERF_REG_ARM64_X27: - return "x27"; - case PERF_REG_ARM64_X28: - return "x28"; - case PERF_REG_ARM64_X29: - return "x29"; - case PERF_REG_ARM64_SP: - return "sp"; - case PERF_REG_ARM64_LR: - return "lr"; - case PERF_REG_ARM64_PC: - return "pc"; - case PERF_REG_ARM64_VG: - return "vg"; - default: - return NULL; - } - - return NULL; -} - -static const char *__perf_reg_name_arm(int id) -{ - switch (id) { - case PERF_REG_ARM_R0: - return "r0"; - case PERF_REG_ARM_R1: - return "r1"; - case PERF_REG_ARM_R2: - return "r2"; - case PERF_REG_ARM_R3: - return "r3"; - case PERF_REG_ARM_R4: - return "r4"; - case PERF_REG_ARM_R5: - return "r5"; - case PERF_REG_ARM_R6: - return "r6"; - case PERF_REG_ARM_R7: - return "r7"; - case PERF_REG_ARM_R8: - return "r8"; - case PERF_REG_ARM_R9: - return "r9"; - case PERF_REG_ARM_R10: - return "r10"; - case PERF_REG_ARM_FP: - return "fp"; - case PERF_REG_ARM_IP: - return "ip"; - case PERF_REG_ARM_SP: - return "sp"; - case PERF_REG_ARM_LR: - return "lr"; - case PERF_REG_ARM_PC: - return "pc"; - default: - return NULL; - } - - return NULL; -} - -static const char *__perf_reg_name_csky(int id) -{ - switch (id) { - case PERF_REG_CSKY_A0: - return "a0"; - case PERF_REG_CSKY_A1: - return "a1"; - case PERF_REG_CSKY_A2: - return "a2"; - case PERF_REG_CSKY_A3: - return "a3"; - case PERF_REG_CSKY_REGS0: - return "regs0"; - case PERF_REG_CSKY_REGS1: - return "regs1"; - case PERF_REG_CSKY_REGS2: - return "regs2"; - case PERF_REG_CSKY_REGS3: - return "regs3"; - case PERF_REG_CSKY_REGS4: - return "regs4"; - case PERF_REG_CSKY_REGS5: - return "regs5"; - case PERF_REG_CSKY_REGS6: - return "regs6"; - case PERF_REG_CSKY_REGS7: - return "regs7"; - case PERF_REG_CSKY_REGS8: - return "regs8"; - case PERF_REG_CSKY_REGS9: - return "regs9"; - case PERF_REG_CSKY_SP: - return "sp"; - case PERF_REG_CSKY_LR: - return "lr"; - case PERF_REG_CSKY_PC: - return "pc"; -#if defined(__CSKYABIV2__) - case PERF_REG_CSKY_EXREGS0: - return "exregs0"; - case PERF_REG_CSKY_EXREGS1: - return "exregs1"; - case PERF_REG_CSKY_EXREGS2: - return "exregs2"; - case PERF_REG_CSKY_EXREGS3: - return "exregs3"; - case PERF_REG_CSKY_EXREGS4: - return "exregs4"; - case PERF_REG_CSKY_EXREGS5: - return "exregs5"; - case PERF_REG_CSKY_EXREGS6: - return "exregs6"; - case PERF_REG_CSKY_EXREGS7: - return "exregs7"; - case 
PERF_REG_CSKY_EXREGS8: - return "exregs8"; - case PERF_REG_CSKY_EXREGS9: - return "exregs9"; - case PERF_REG_CSKY_EXREGS10: - return "exregs10"; - case PERF_REG_CSKY_EXREGS11: - return "exregs11"; - case PERF_REG_CSKY_EXREGS12: - return "exregs12"; - case PERF_REG_CSKY_EXREGS13: - return "exregs13"; - case PERF_REG_CSKY_EXREGS14: - return "exregs14"; - case PERF_REG_CSKY_TLS: - return "tls"; - case PERF_REG_CSKY_HI: - return "hi"; - case PERF_REG_CSKY_LO: - return "lo"; -#endif - default: - return NULL; - } - - return NULL; -} - -static inline const char *__perf_reg_name_loongarch(int id) -{ - switch (id) { - case PERF_REG_LOONGARCH_PC: - return "PC"; - case PERF_REG_LOONGARCH_R1: - return "%r1"; - case PERF_REG_LOONGARCH_R2: - return "%r2"; - case PERF_REG_LOONGARCH_R3: - return "%r3"; - case PERF_REG_LOONGARCH_R4: - return "%r4"; - case PERF_REG_LOONGARCH_R5: - return "%r5"; - case PERF_REG_LOONGARCH_R6: - return "%r6"; - case PERF_REG_LOONGARCH_R7: - return "%r7"; - case PERF_REG_LOONGARCH_R8: - return "%r8"; - case PERF_REG_LOONGARCH_R9: - return "%r9"; - case PERF_REG_LOONGARCH_R10: - return "%r10"; - case PERF_REG_LOONGARCH_R11: - return "%r11"; - case PERF_REG_LOONGARCH_R12: - return "%r12"; - case PERF_REG_LOONGARCH_R13: - return "%r13"; - case PERF_REG_LOONGARCH_R14: - return "%r14"; - case PERF_REG_LOONGARCH_R15: - return "%r15"; - case PERF_REG_LOONGARCH_R16: - return "%r16"; - case PERF_REG_LOONGARCH_R17: - return "%r17"; - case PERF_REG_LOONGARCH_R18: - return "%r18"; - case PERF_REG_LOONGARCH_R19: - return "%r19"; - case PERF_REG_LOONGARCH_R20: - return "%r20"; - case PERF_REG_LOONGARCH_R21: - return "%r21"; - case PERF_REG_LOONGARCH_R22: - return "%r22"; - case PERF_REG_LOONGARCH_R23: - return "%r23"; - case PERF_REG_LOONGARCH_R24: - return "%r24"; - case PERF_REG_LOONGARCH_R25: - return "%r25"; - case PERF_REG_LOONGARCH_R26: - return "%r26"; - case PERF_REG_LOONGARCH_R27: - return "%r27"; - case PERF_REG_LOONGARCH_R28: - return "%r28"; - case PERF_REG_LOONGARCH_R29: - return "%r29"; - case PERF_REG_LOONGARCH_R30: - return "%r30"; - case PERF_REG_LOONGARCH_R31: - return "%r31"; - default: - break; - } - return NULL; -} - -static const char *__perf_reg_name_mips(int id) -{ - switch (id) { - case PERF_REG_MIPS_PC: - return "PC"; - case PERF_REG_MIPS_R1: - return "$1"; - case PERF_REG_MIPS_R2: - return "$2"; - case PERF_REG_MIPS_R3: - return "$3"; - case PERF_REG_MIPS_R4: - return "$4"; - case PERF_REG_MIPS_R5: - return "$5"; - case PERF_REG_MIPS_R6: - return "$6"; - case PERF_REG_MIPS_R7: - return "$7"; - case PERF_REG_MIPS_R8: - return "$8"; - case PERF_REG_MIPS_R9: - return "$9"; - case PERF_REG_MIPS_R10: - return "$10"; - case PERF_REG_MIPS_R11: - return "$11"; - case PERF_REG_MIPS_R12: - return "$12"; - case PERF_REG_MIPS_R13: - return "$13"; - case PERF_REG_MIPS_R14: - return "$14"; - case PERF_REG_MIPS_R15: - return "$15"; - case PERF_REG_MIPS_R16: - return "$16"; - case PERF_REG_MIPS_R17: - return "$17"; - case PERF_REG_MIPS_R18: - return "$18"; - case PERF_REG_MIPS_R19: - return "$19"; - case PERF_REG_MIPS_R20: - return "$20"; - case PERF_REG_MIPS_R21: - return "$21"; - case PERF_REG_MIPS_R22: - return "$22"; - case PERF_REG_MIPS_R23: - return "$23"; - case PERF_REG_MIPS_R24: - return "$24"; - case PERF_REG_MIPS_R25: - return "$25"; - case PERF_REG_MIPS_R28: - return "$28"; - case PERF_REG_MIPS_R29: - return "$29"; - case PERF_REG_MIPS_R30: - return "$30"; - case PERF_REG_MIPS_R31: - return "$31"; - default: - break; - } - return NULL; -} - -static const char 
*__perf_reg_name_powerpc(int id) -{ - switch (id) { - case PERF_REG_POWERPC_R0: - return "r0"; - case PERF_REG_POWERPC_R1: - return "r1"; - case PERF_REG_POWERPC_R2: - return "r2"; - case PERF_REG_POWERPC_R3: - return "r3"; - case PERF_REG_POWERPC_R4: - return "r4"; - case PERF_REG_POWERPC_R5: - return "r5"; - case PERF_REG_POWERPC_R6: - return "r6"; - case PERF_REG_POWERPC_R7: - return "r7"; - case PERF_REG_POWERPC_R8: - return "r8"; - case PERF_REG_POWERPC_R9: - return "r9"; - case PERF_REG_POWERPC_R10: - return "r10"; - case PERF_REG_POWERPC_R11: - return "r11"; - case PERF_REG_POWERPC_R12: - return "r12"; - case PERF_REG_POWERPC_R13: - return "r13"; - case PERF_REG_POWERPC_R14: - return "r14"; - case PERF_REG_POWERPC_R15: - return "r15"; - case PERF_REG_POWERPC_R16: - return "r16"; - case PERF_REG_POWERPC_R17: - return "r17"; - case PERF_REG_POWERPC_R18: - return "r18"; - case PERF_REG_POWERPC_R19: - return "r19"; - case PERF_REG_POWERPC_R20: - return "r20"; - case PERF_REG_POWERPC_R21: - return "r21"; - case PERF_REG_POWERPC_R22: - return "r22"; - case PERF_REG_POWERPC_R23: - return "r23"; - case PERF_REG_POWERPC_R24: - return "r24"; - case PERF_REG_POWERPC_R25: - return "r25"; - case PERF_REG_POWERPC_R26: - return "r26"; - case PERF_REG_POWERPC_R27: - return "r27"; - case PERF_REG_POWERPC_R28: - return "r28"; - case PERF_REG_POWERPC_R29: - return "r29"; - case PERF_REG_POWERPC_R30: - return "r30"; - case PERF_REG_POWERPC_R31: - return "r31"; - case PERF_REG_POWERPC_NIP: - return "nip"; - case PERF_REG_POWERPC_MSR: - return "msr"; - case PERF_REG_POWERPC_ORIG_R3: - return "orig_r3"; - case PERF_REG_POWERPC_CTR: - return "ctr"; - case PERF_REG_POWERPC_LINK: - return "link"; - case PERF_REG_POWERPC_XER: - return "xer"; - case PERF_REG_POWERPC_CCR: - return "ccr"; - case PERF_REG_POWERPC_SOFTE: - return "softe"; - case PERF_REG_POWERPC_TRAP: - return "trap"; - case PERF_REG_POWERPC_DAR: - return "dar"; - case PERF_REG_POWERPC_DSISR: - return "dsisr"; - case PERF_REG_POWERPC_SIER: - return "sier"; - case PERF_REG_POWERPC_MMCRA: - return "mmcra"; - case PERF_REG_POWERPC_MMCR0: - return "mmcr0"; - case PERF_REG_POWERPC_MMCR1: - return "mmcr1"; - case PERF_REG_POWERPC_MMCR2: - return "mmcr2"; - case PERF_REG_POWERPC_MMCR3: - return "mmcr3"; - case PERF_REG_POWERPC_SIER2: - return "sier2"; - case PERF_REG_POWERPC_SIER3: - return "sier3"; - case PERF_REG_POWERPC_PMC1: - return "pmc1"; - case PERF_REG_POWERPC_PMC2: - return "pmc2"; - case PERF_REG_POWERPC_PMC3: - return "pmc3"; - case PERF_REG_POWERPC_PMC4: - return "pmc4"; - case PERF_REG_POWERPC_PMC5: - return "pmc5"; - case PERF_REG_POWERPC_PMC6: - return "pmc6"; - case PERF_REG_POWERPC_SDAR: - return "sdar"; - case PERF_REG_POWERPC_SIAR: - return "siar"; - default: - break; - } - return NULL; -} - -static const char *__perf_reg_name_riscv(int id) -{ - switch (id) { - case PERF_REG_RISCV_PC: - return "pc"; - case PERF_REG_RISCV_RA: - return "ra"; - case PERF_REG_RISCV_SP: - return "sp"; - case PERF_REG_RISCV_GP: - return "gp"; - case PERF_REG_RISCV_TP: - return "tp"; - case PERF_REG_RISCV_T0: - return "t0"; - case PERF_REG_RISCV_T1: - return "t1"; - case PERF_REG_RISCV_T2: - return "t2"; - case PERF_REG_RISCV_S0: - return "s0"; - case PERF_REG_RISCV_S1: - return "s1"; - case PERF_REG_RISCV_A0: - return "a0"; - case PERF_REG_RISCV_A1: - return "a1"; - case PERF_REG_RISCV_A2: - return "a2"; - case PERF_REG_RISCV_A3: - return "a3"; - case PERF_REG_RISCV_A4: - return "a4"; - case PERF_REG_RISCV_A5: - return "a5"; - case PERF_REG_RISCV_A6: - 
return "a6"; - case PERF_REG_RISCV_A7: - return "a7"; - case PERF_REG_RISCV_S2: - return "s2"; - case PERF_REG_RISCV_S3: - return "s3"; - case PERF_REG_RISCV_S4: - return "s4"; - case PERF_REG_RISCV_S5: - return "s5"; - case PERF_REG_RISCV_S6: - return "s6"; - case PERF_REG_RISCV_S7: - return "s7"; - case PERF_REG_RISCV_S8: - return "s8"; - case PERF_REG_RISCV_S9: - return "s9"; - case PERF_REG_RISCV_S10: - return "s10"; - case PERF_REG_RISCV_S11: - return "s11"; - case PERF_REG_RISCV_T3: - return "t3"; - case PERF_REG_RISCV_T4: - return "t4"; - case PERF_REG_RISCV_T5: - return "t5"; - case PERF_REG_RISCV_T6: - return "t6"; - default: - return NULL; - } - - return NULL; -} - -static const char *__perf_reg_name_s390(int id) -{ - switch (id) { - case PERF_REG_S390_R0: - return "R0"; - case PERF_REG_S390_R1: - return "R1"; - case PERF_REG_S390_R2: - return "R2"; - case PERF_REG_S390_R3: - return "R3"; - case PERF_REG_S390_R4: - return "R4"; - case PERF_REG_S390_R5: - return "R5"; - case PERF_REG_S390_R6: - return "R6"; - case PERF_REG_S390_R7: - return "R7"; - case PERF_REG_S390_R8: - return "R8"; - case PERF_REG_S390_R9: - return "R9"; - case PERF_REG_S390_R10: - return "R10"; - case PERF_REG_S390_R11: - return "R11"; - case PERF_REG_S390_R12: - return "R12"; - case PERF_REG_S390_R13: - return "R13"; - case PERF_REG_S390_R14: - return "R14"; - case PERF_REG_S390_R15: - return "R15"; - case PERF_REG_S390_FP0: - return "FP0"; - case PERF_REG_S390_FP1: - return "FP1"; - case PERF_REG_S390_FP2: - return "FP2"; - case PERF_REG_S390_FP3: - return "FP3"; - case PERF_REG_S390_FP4: - return "FP4"; - case PERF_REG_S390_FP5: - return "FP5"; - case PERF_REG_S390_FP6: - return "FP6"; - case PERF_REG_S390_FP7: - return "FP7"; - case PERF_REG_S390_FP8: - return "FP8"; - case PERF_REG_S390_FP9: - return "FP9"; - case PERF_REG_S390_FP10: - return "FP10"; - case PERF_REG_S390_FP11: - return "FP11"; - case PERF_REG_S390_FP12: - return "FP12"; - case PERF_REG_S390_FP13: - return "FP13"; - case PERF_REG_S390_FP14: - return "FP14"; - case PERF_REG_S390_FP15: - return "FP15"; - case PERF_REG_S390_MASK: - return "MASK"; - case PERF_REG_S390_PC: - return "PC"; - default: - return NULL; - } - - return NULL; -} - -static const char *__perf_reg_name_x86(int id) -{ - switch (id) { - case PERF_REG_X86_AX: - return "AX"; - case PERF_REG_X86_BX: - return "BX"; - case PERF_REG_X86_CX: - return "CX"; - case PERF_REG_X86_DX: - return "DX"; - case PERF_REG_X86_SI: - return "SI"; - case PERF_REG_X86_DI: - return "DI"; - case PERF_REG_X86_BP: - return "BP"; - case PERF_REG_X86_SP: - return "SP"; - case PERF_REG_X86_IP: - return "IP"; - case PERF_REG_X86_FLAGS: - return "FLAGS"; - case PERF_REG_X86_CS: - return "CS"; - case PERF_REG_X86_SS: - return "SS"; - case PERF_REG_X86_DS: - return "DS"; - case PERF_REG_X86_ES: - return "ES"; - case PERF_REG_X86_FS: - return "FS"; - case PERF_REG_X86_GS: - return "GS"; - case PERF_REG_X86_R8: - return "R8"; - case PERF_REG_X86_R9: - return "R9"; - case PERF_REG_X86_R10: - return "R10"; - case PERF_REG_X86_R11: - return "R11"; - case PERF_REG_X86_R12: - return "R12"; - case PERF_REG_X86_R13: - return "R13"; - case PERF_REG_X86_R14: - return "R14"; - case PERF_REG_X86_R15: - return "R15"; - -#define XMM(x) \ - case PERF_REG_X86_XMM ## x: \ - case PERF_REG_X86_XMM ## x + 1: \ - return "XMM" #x; - XMM(0) - XMM(1) - XMM(2) - XMM(3) - XMM(4) - XMM(5) - XMM(6) - XMM(7) - XMM(8) - XMM(9) - XMM(10) - XMM(11) - XMM(12) - XMM(13) - XMM(14) - XMM(15) -#undef XMM - default: - return NULL; - } - - return 
NULL; -} - const char *perf_reg_name(int id, const char *arch) { const char *reg_name = NULL; @@ -790,4 +75,55 @@ out: *valp = regs->cache_regs[id]; return 0; } + +uint64_t perf_arch_reg_ip(const char *arch) +{ + if (!strcmp(arch, "arm")) + return __perf_reg_ip_arm(); + else if (!strcmp(arch, "arm64")) + return __perf_reg_ip_arm64(); + else if (!strcmp(arch, "csky")) + return __perf_reg_ip_csky(); + else if (!strcmp(arch, "loongarch")) + return __perf_reg_ip_loongarch(); + else if (!strcmp(arch, "mips")) + return __perf_reg_ip_mips(); + else if (!strcmp(arch, "powerpc")) + return __perf_reg_ip_powerpc(); + else if (!strcmp(arch, "riscv")) + return __perf_reg_ip_riscv(); + else if (!strcmp(arch, "s390")) + return __perf_reg_ip_s390(); + else if (!strcmp(arch, "x86")) + return __perf_reg_ip_x86(); + + pr_err("Fail to find IP register for arch %s, returns 0\n", arch); + return 0; +} + +uint64_t perf_arch_reg_sp(const char *arch) +{ + if (!strcmp(arch, "arm")) + return __perf_reg_sp_arm(); + else if (!strcmp(arch, "arm64")) + return __perf_reg_sp_arm64(); + else if (!strcmp(arch, "csky")) + return __perf_reg_sp_csky(); + else if (!strcmp(arch, "loongarch")) + return __perf_reg_sp_loongarch(); + else if (!strcmp(arch, "mips")) + return __perf_reg_sp_mips(); + else if (!strcmp(arch, "powerpc")) + return __perf_reg_sp_powerpc(); + else if (!strcmp(arch, "riscv")) + return __perf_reg_sp_riscv(); + else if (!strcmp(arch, "s390")) + return __perf_reg_sp_s390(); + else if (!strcmp(arch, "x86")) + return __perf_reg_sp_x86(); + + pr_err("Fail to find SP register for arch %s, returns 0\n", arch); + return 0; +} + #endif diff --git a/tools/perf/util/perf_regs.h b/tools/perf/util/perf_regs.h index ce1127af05e4..ecd2a5362042 100644 --- a/tools/perf/util/perf_regs.h +++ b/tools/perf/util/perf_regs.h @@ -30,18 +30,49 @@ uint64_t arch__user_reg_mask(void); #ifdef HAVE_PERF_REGS_SUPPORT extern const struct sample_reg sample_reg_masks[]; -#include <perf_regs.h> - -#define DWARF_MINIMAL_REGS ((1ULL << PERF_REG_IP) | (1ULL << PERF_REG_SP)) - const char *perf_reg_name(int id, const char *arch); int perf_reg_value(u64 *valp, struct regs_dump *regs, int id); +uint64_t perf_arch_reg_ip(const char *arch); +uint64_t perf_arch_reg_sp(const char *arch); +const char *__perf_reg_name_arm64(int id); +uint64_t __perf_reg_ip_arm64(void); +uint64_t __perf_reg_sp_arm64(void); +const char *__perf_reg_name_arm(int id); +uint64_t __perf_reg_ip_arm(void); +uint64_t __perf_reg_sp_arm(void); +const char *__perf_reg_name_csky(int id); +uint64_t __perf_reg_ip_csky(void); +uint64_t __perf_reg_sp_csky(void); +const char *__perf_reg_name_loongarch(int id); +uint64_t __perf_reg_ip_loongarch(void); +uint64_t __perf_reg_sp_loongarch(void); +const char *__perf_reg_name_mips(int id); +uint64_t __perf_reg_ip_mips(void); +uint64_t __perf_reg_sp_mips(void); +const char *__perf_reg_name_powerpc(int id); +uint64_t __perf_reg_ip_powerpc(void); +uint64_t __perf_reg_sp_powerpc(void); +const char *__perf_reg_name_riscv(int id); +uint64_t __perf_reg_ip_riscv(void); +uint64_t __perf_reg_sp_riscv(void); +const char *__perf_reg_name_s390(int id); +uint64_t __perf_reg_ip_s390(void); +uint64_t __perf_reg_sp_s390(void); +const char *__perf_reg_name_x86(int id); +uint64_t __perf_reg_ip_x86(void); +uint64_t __perf_reg_sp_x86(void); + +static inline uint64_t DWARF_MINIMAL_REGS(const char *arch) +{ + return (1ULL << perf_arch_reg_ip(arch)) | (1ULL << perf_arch_reg_sp(arch)); +} #else -#define PERF_REGS_MASK 0 -#define PERF_REGS_MAX 0 -#define DWARF_MINIMAL_REGS 
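
With the compile-time PERF_REG_IP/PERF_REG_SP macros gone, the minimal register set needed to start a DWARF unwind is computed per architecture at runtime. A rough sketch of what a caller sees, assuming the sample's arch string is available (dwarf_unwind_user_regs is a hypothetical name, not a function from this patch); for an unknown arch the helpers print an error and return index 0, so the mask degrades to bit 0 rather than crashing:

static uint64_t dwarf_unwind_user_regs(const char *arch)
{
	uint64_t ip = perf_arch_reg_ip(arch);	/* e.g. PERF_REG_X86_IP for "x86" */
	uint64_t sp = perf_arch_reg_sp(arch);	/* e.g. PERF_REG_X86_SP for "x86" */

	/* equivalent to DWARF_MINIMAL_REGS(arch) from util/perf_regs.h */
	return (1ULL << ip) | (1ULL << sp);
}
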
PERF_REGS_MASK +static inline uint64_t DWARF_MINIMAL_REGS(const char *arch __maybe_unused) +{ + return 0; +} static inline const char *perf_reg_name(int id __maybe_unused, const char *arch __maybe_unused) { @@ -54,5 +85,16 @@ static inline int perf_reg_value(u64 *valp __maybe_unused, { return 0; } + +static inline uint64_t perf_arch_reg_ip(const char *arch __maybe_unused) +{ + return 0; +} + +static inline uint64_t perf_arch_reg_sp(const char *arch __maybe_unused) +{ + return 0; +} + #endif /* HAVE_PERF_REGS_SUPPORT */ #endif /* __PERF_REGS_H */ diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c index 28380e7aa8d0..d85602aa4b9f 100644 --- a/tools/perf/util/pmu.c +++ b/tools/perf/util/pmu.c @@ -19,8 +19,8 @@ #include "evsel.h" #include "pmu.h" #include "pmus.h" -#include "pmu-bison.h" -#include "pmu-flex.h" +#include <util/pmu-bison.h> +#include <util/pmu-flex.h> #include "parse-events.h" #include "print-events.h" #include "header.h" @@ -29,7 +29,63 @@ #include "fncache.h" #include "util/evsel_config.h" -struct perf_pmu perf_pmu__fake; +struct perf_pmu perf_pmu__fake = { + .name = "fake", +}; + +#define UNIT_MAX_LEN 31 /* max length for event unit name */ + +/** + * struct perf_pmu_alias - An event either read from sysfs or builtin in + * pmu-events.c, created by parsing the pmu-events json files. + */ +struct perf_pmu_alias { + /** @name: Name of the event like "mem-loads". */ + char *name; + /** @desc: Optional short description of the event. */ + char *desc; + /** @long_desc: Optional long description. */ + char *long_desc; + /** + * @topic: Optional topic such as cache or pipeline, particularly for + * json events. + */ + char *topic; + /** @terms: Owned list of the original parsed parameters. */ + struct list_head terms; + /** @list: List element of struct perf_pmu aliases. */ + struct list_head list; + /** + * @pmu_name: The name copied from the json struct pmu_event. This can + * differ from the PMU name as it won't have suffixes. + */ + char *pmu_name; + /** @unit: Units for the event, such as bytes or cache lines. */ + char unit[UNIT_MAX_LEN+1]; + /** @scale: Value to scale read counter values by. */ + double scale; + /** + * @per_pkg: Does the file + * <sysfs>/bus/event_source/devices/<pmu_name>/events/<name>.per-pkg or + * equivalent json value exist and have the value 1. + */ + bool per_pkg; + /** + * @snapshot: Does the file + * <sysfs>/bus/event_source/devices/<pmu_name>/events/<name>.snapshot + * exist and have the value 1. + */ + bool snapshot; + /** + * @deprecated: Is the event hidden and so not shown in perf list by + * default. + */ + bool deprecated; + /** @from_sysfs: Was the alias from sysfs or a json event? */ + bool from_sysfs; + /** @info_loaded: Have the scale, unit and other values been read from disk? */ + bool info_loaded; +}; /** * struct perf_pmu_format - Values from a format file read from @@ -40,6 +96,10 @@ struct perf_pmu perf_pmu__fake; * value=PERF_PMU_FORMAT_VALUE_CONFIG and bits 0 to 7 will be set. */ struct perf_pmu_format { + /** @list: Element on list within struct perf_pmu. */ + struct list_head list; + /** @bits: Which config bits are set by this format value. */ + DECLARE_BITMAP(bits, PERF_PMU_FORMAT_BITS); /** @name: The modifier/file name. */ char *name; /** @@ -47,18 +107,81 @@ struct perf_pmu_format { * are from PERF_PMU_FORMAT_VALUE_CONFIG to * PERF_PMU_FORMAT_VALUE_CONFIG_END. */ - int value; - /** @bits: Which config bits are set by this format value. 
*/ - DECLARE_BITMAP(bits, PERF_PMU_FORMAT_BITS); - /** @list: Element on list within struct perf_pmu. */ - struct list_head list; + u16 value; + /** @loaded: Has the contents been loaded/parsed. */ + bool loaded; }; +static int pmu_aliases_parse(struct perf_pmu *pmu); + +static struct perf_pmu_format *perf_pmu__new_format(struct list_head *list, char *name) +{ + struct perf_pmu_format *format; + + format = zalloc(sizeof(*format)); + if (!format) + return NULL; + + format->name = strdup(name); + if (!format->name) { + free(format); + return NULL; + } + list_add_tail(&format->list, list); + return format; +} + +/* Called at the end of parsing a format. */ +void perf_pmu_format__set_value(void *vformat, int config, unsigned long *bits) +{ + struct perf_pmu_format *format = vformat; + + format->value = config; + memcpy(format->bits, bits, sizeof(format->bits)); +} + +static void __perf_pmu_format__load(struct perf_pmu_format *format, FILE *file) +{ + void *scanner; + int ret; + + ret = perf_pmu_lex_init(&scanner); + if (ret) + return; + + perf_pmu_set_in(file, scanner); + ret = perf_pmu_parse(format, scanner); + perf_pmu_lex_destroy(scanner); + format->loaded = true; +} + +static void perf_pmu_format__load(struct perf_pmu *pmu, struct perf_pmu_format *format) +{ + char path[PATH_MAX]; + FILE *file = NULL; + + if (format->loaded) + return; + + if (!perf_pmu__pathname_scnprintf(path, sizeof(path), pmu->name, "format")) + return; + + assert(strlen(path) + strlen(format->name) + 2 < sizeof(path)); + strcat(path, "/"); + strcat(path, format->name); + + file = fopen(path, "r"); + if (!file) + return; + __perf_pmu_format__load(format, file); + fclose(file); +} + /* * Parse & process all the sysfs attributes located under * the directory specified in 'dir' parameter. */ -int perf_pmu__format_parse(int dirfd, struct list_head *head) +int perf_pmu__format_parse(struct perf_pmu *pmu, int dirfd, bool eager_load) { struct dirent *evt_ent; DIR *format_dir; @@ -68,37 +191,35 @@ int perf_pmu__format_parse(int dirfd, struct list_head *head) if (!format_dir) return -EINVAL; - while (!ret && (evt_ent = readdir(format_dir))) { + while ((evt_ent = readdir(format_dir)) != NULL) { + struct perf_pmu_format *format; char *name = evt_ent->d_name; - int fd; - void *scanner; - FILE *file; if (!strcmp(name, ".") || !strcmp(name, "..")) continue; - - ret = -EINVAL; - fd = openat(dirfd, name, O_RDONLY); - if (fd < 0) - break; - - file = fdopen(fd, "r"); - if (!file) { - close(fd); + format = perf_pmu__new_format(&pmu->format, name); + if (!format) { + ret = -ENOMEM; break; } - ret = perf_pmu_lex_init(&scanner); - if (ret) { + if (eager_load) { + FILE *file; + int fd = openat(dirfd, name, O_RDONLY); + + if (fd < 0) { + ret = -errno; + break; + } + file = fdopen(fd, "r"); + if (!file) { + close(fd); + break; + } + __perf_pmu_format__load(format, file); fclose(file); - break; } - - perf_pmu_set_in(file, scanner); - ret = perf_pmu_parse(head, name, scanner); - perf_pmu_lex_destroy(scanner); - fclose(file); } closedir(format_dir); @@ -110,7 +231,7 @@ int perf_pmu__format_parse(int dirfd, struct list_head *head) * located at: * /sys/bus/event_source/devices/<dev>/format as sysfs group attributes. 
*/ -static int pmu_format(int dirfd, const char *name, struct list_head *format) +static int pmu_format(struct perf_pmu *pmu, int dirfd, const char *name) { int fd; @@ -119,7 +240,7 @@ static int pmu_format(int dirfd, const char *name, struct list_head *format) return 0; /* it'll close the fd */ - if (perf_pmu__format_parse(fd, format)) + if (perf_pmu__format_parse(pmu, fd, /*eager_load=*/false)) return -1; return 0; @@ -162,17 +283,21 @@ out: return ret; } -static int perf_pmu__parse_scale(struct perf_pmu_alias *alias, int dirfd, char *name) +static int perf_pmu__parse_scale(struct perf_pmu *pmu, struct perf_pmu_alias *alias) { struct stat st; ssize_t sret; + size_t len; char scale[128]; int fd, ret = -1; char path[PATH_MAX]; - scnprintf(path, PATH_MAX, "%s.scale", name); + len = perf_pmu__event_source_devices_scnprintf(path, sizeof(path)); + if (!len) + return 0; + scnprintf(path + len, sizeof(path) - len, "%s/%s.scale", pmu->name, alias->name); - fd = openat(dirfd, path, O_RDONLY); + fd = open(path, O_RDONLY); if (fd == -1) return -1; @@ -194,15 +319,20 @@ error: return ret; } -static int perf_pmu__parse_unit(struct perf_pmu_alias *alias, int dirfd, char *name) +static int perf_pmu__parse_unit(struct perf_pmu *pmu, struct perf_pmu_alias *alias) { char path[PATH_MAX]; + size_t len; ssize_t sret; int fd; - scnprintf(path, PATH_MAX, "%s.unit", name); - fd = openat(dirfd, path, O_RDONLY); + len = perf_pmu__event_source_devices_scnprintf(path, sizeof(path)); + if (!len) + return 0; + scnprintf(path + len, sizeof(path) - len, "%s/%s.unit", pmu->name, alias->name); + + fd = open(path, O_RDONLY); if (fd == -1) return -1; @@ -225,14 +355,18 @@ error: } static int -perf_pmu__parse_per_pkg(struct perf_pmu_alias *alias, int dirfd, char *name) +perf_pmu__parse_per_pkg(struct perf_pmu *pmu, struct perf_pmu_alias *alias) { char path[PATH_MAX]; + size_t len; int fd; - scnprintf(path, PATH_MAX, "%s.per-pkg", name); + len = perf_pmu__event_source_devices_scnprintf(path, sizeof(path)); + if (!len) + return 0; + scnprintf(path + len, sizeof(path) - len, "%s/%s.per-pkg", pmu->name, alias->name); - fd = openat(dirfd, path, O_RDONLY); + fd = open(path, O_RDONLY); if (fd == -1) return -1; @@ -242,15 +376,18 @@ perf_pmu__parse_per_pkg(struct perf_pmu_alias *alias, int dirfd, char *name) return 0; } -static int perf_pmu__parse_snapshot(struct perf_pmu_alias *alias, - int dirfd, char *name) +static int perf_pmu__parse_snapshot(struct perf_pmu *pmu, struct perf_pmu_alias *alias) { char path[PATH_MAX]; + size_t len; int fd; - scnprintf(path, PATH_MAX, "%s.snapshot", name); + len = perf_pmu__event_source_devices_scnprintf(path, sizeof(path)); + if (!len) + return 0; + scnprintf(path + len, sizeof(path) - len, "%s/%s.snapshot", pmu->name, alias->name); - fd = openat(dirfd, path, O_RDONLY); + fd = open(path, O_RDONLY); if (fd == -1) return -1; @@ -259,46 +396,13 @@ static int perf_pmu__parse_snapshot(struct perf_pmu_alias *alias, return 0; } -static void perf_pmu_assign_str(char *name, const char *field, char **old_str, - char **new_str) -{ - if (!*old_str) - goto set_new; - - if (*new_str) { /* Have new string, check with old */ - if (strcasecmp(*old_str, *new_str)) - pr_debug("alias %s differs in field '%s'\n", - name, field); - zfree(old_str); - } else /* Nothing new --> keep old string */ - return; -set_new: - *old_str = *new_str; - *new_str = NULL; -} - -static void perf_pmu_update_alias(struct perf_pmu_alias *old, - struct perf_pmu_alias *newalias) -{ - perf_pmu_assign_str(old->name, "desc", &old->desc, 
&newalias->desc); - perf_pmu_assign_str(old->name, "long_desc", &old->long_desc, - &newalias->long_desc); - perf_pmu_assign_str(old->name, "topic", &old->topic, &newalias->topic); - perf_pmu_assign_str(old->name, "value", &old->str, &newalias->str); - old->scale = newalias->scale; - old->per_pkg = newalias->per_pkg; - old->snapshot = newalias->snapshot; - memcpy(old->unit, newalias->unit, sizeof(old->unit)); -} - /* Delete an alias entry. */ -void perf_pmu_free_alias(struct perf_pmu_alias *newalias) +static void perf_pmu_free_alias(struct perf_pmu_alias *newalias) { zfree(&newalias->name); zfree(&newalias->desc); zfree(&newalias->long_desc); zfree(&newalias->topic); - zfree(&newalias->str); zfree(&newalias->pmu_name); parse_events_terms__purge(&newalias->terms); free(newalias); @@ -314,38 +418,99 @@ static void perf_pmu__del_aliases(struct perf_pmu *pmu) } } -/* Merge an alias, search in alias list. If this name is already - * present merge both of them to combine all information. - */ -static bool perf_pmu_merge_alias(struct perf_pmu_alias *newalias, - struct list_head *alist) +static struct perf_pmu_alias *perf_pmu__find_alias(struct perf_pmu *pmu, + const char *name, + bool load) { - struct perf_pmu_alias *a; + struct perf_pmu_alias *alias; - list_for_each_entry(a, alist, list) { - if (!strcasecmp(newalias->name, a->name)) { - if (newalias->pmu_name && a->pmu_name && - !strcasecmp(newalias->pmu_name, a->pmu_name)) { - continue; - } - perf_pmu_update_alias(a, newalias); - perf_pmu_free_alias(newalias); - return true; - } + if (load && !pmu->sysfs_aliases_loaded) + pmu_aliases_parse(pmu); + + list_for_each_entry(alias, &pmu->aliases, list) { + if (!strcasecmp(alias->name, name)) + return alias; } - return false; + return NULL; } -static int __perf_pmu__new_alias(struct list_head *list, int dirfd, char *name, - char *desc, char *val, const struct pmu_event *pe) +static bool assign_str(const char *name, const char *field, char **old_str, + const char *new_str) +{ + if (!*old_str && new_str) { + *old_str = strdup(new_str); + return true; + } + + if (!new_str || !strcasecmp(*old_str, new_str)) + return false; /* Nothing to update. 
*/ + + pr_debug("alias %s differs in field '%s' ('%s' != '%s')\n", + name, field, *old_str, new_str); + zfree(old_str); + *old_str = strdup(new_str); + return true; +} + +static void read_alias_info(struct perf_pmu *pmu, struct perf_pmu_alias *alias) +{ + if (!alias->from_sysfs || alias->info_loaded) + return; + + /* + * load unit name and scale if available + */ + perf_pmu__parse_unit(pmu, alias); + perf_pmu__parse_scale(pmu, alias); + perf_pmu__parse_per_pkg(pmu, alias); + perf_pmu__parse_snapshot(pmu, alias); +} + +struct update_alias_data { + struct perf_pmu *pmu; + struct perf_pmu_alias *alias; +}; + +static int update_alias(const struct pmu_event *pe, + const struct pmu_events_table *table __maybe_unused, + void *vdata) +{ + struct update_alias_data *data = vdata; + int ret = 0; + + read_alias_info(data->pmu, data->alias); + assign_str(pe->name, "desc", &data->alias->desc, pe->desc); + assign_str(pe->name, "long_desc", &data->alias->long_desc, pe->long_desc); + assign_str(pe->name, "topic", &data->alias->topic, pe->topic); + data->alias->per_pkg = pe->perpkg; + if (pe->event) { + parse_events_terms__purge(&data->alias->terms); + ret = parse_events_terms(&data->alias->terms, pe->event, /*input=*/NULL); + } + if (!ret && pe->unit) { + char *unit; + + ret = perf_pmu__convert_scale(pe->unit, &unit, &data->alias->scale); + if (!ret) + snprintf(data->alias->unit, sizeof(data->alias->unit), "%s", unit); + } + return ret; +} + +static int perf_pmu__new_alias(struct perf_pmu *pmu, const char *name, + const char *desc, const char *val, FILE *val_fd, + const struct pmu_event *pe) { - struct parse_events_term *term; struct perf_pmu_alias *alias; int ret; - char newval[256]; const char *long_desc = NULL, *topic = NULL, *unit = NULL, *pmu_name = NULL; bool deprecated = false, perpkg = false; + if (perf_pmu__find_alias(pmu, name, /*load=*/ false)) { + /* Alias was already created/loaded. */ + return 0; + } + if (pe) { long_desc = pe->long_desc; topic = pe->topic; @@ -366,80 +531,49 @@ static int __perf_pmu__new_alias(struct list_head *list, int dirfd, char *name, alias->snapshot = false; alias->deprecated = deprecated; - ret = parse_events_terms(&alias->terms, val); + ret = parse_events_terms(&alias->terms, val, val_fd); if (ret) { pr_err("Cannot parse alias %s: %d\n", val, ret); free(alias); return ret; } - /* Scan event and remove leading zeroes, spaces, newlines, some - * platforms have terms specified as - * event=0x0091 (read from files ../<PMU>/events/<FILE> - * and terms specified as event=0x91 (read from JSON files). - * - * Rebuild string to make alias->str member comparable. - */ - memset(newval, 0, sizeof(newval)); - ret = 0; - list_for_each_entry(term, &alias->terms, list) { - if (ret) - ret += scnprintf(newval + ret, sizeof(newval) - ret, - ","); - if (term->type_val == PARSE_EVENTS__TERM_TYPE_NUM) - ret += scnprintf(newval + ret, sizeof(newval) - ret, - "%s=%#x", term->config, term->val.num); - else if (term->type_val == PARSE_EVENTS__TERM_TYPE_STR) - ret += scnprintf(newval + ret, sizeof(newval) - ret, - "%s=%s", term->config, term->val.str); - } - alias->name = strdup(name); - if (dirfd >= 0) { - /* - * load unit name and scale if available - */ - perf_pmu__parse_unit(alias, dirfd, name); - perf_pmu__parse_scale(alias, dirfd, name); - perf_pmu__parse_per_pkg(alias, dirfd, name); - perf_pmu__parse_snapshot(alias, dirfd, name); - } - alias->desc = desc ? strdup(desc) : NULL; alias->long_desc = long_desc ? strdup(long_desc) : desc ? strdup(desc) : NULL; alias->topic = topic ? 
strdup(topic) : NULL; + alias->pmu_name = pmu_name ? strdup(pmu_name) : NULL; if (unit) { - if (perf_pmu__convert_scale(unit, (char **)&unit, &alias->scale) < 0) + if (perf_pmu__convert_scale(unit, (char **)&unit, &alias->scale) < 0) { + perf_pmu_free_alias(alias); return -1; + } snprintf(alias->unit, sizeof(alias->unit), "%s", unit); } - alias->str = strdup(newval); - alias->pmu_name = pmu_name ? strdup(pmu_name) : NULL; - - if (!perf_pmu_merge_alias(alias, list)) - list_add_tail(&alias->list, list); + if (!pe) { + /* Update an event from sysfs with json data. */ + struct update_alias_data data = { + .pmu = pmu, + .alias = alias, + }; + + alias->from_sysfs = true; + if (pmu->events_table) { + if (pmu_events_table__find_event(pmu->events_table, pmu, name, + update_alias, &data) == 0) + pmu->loaded_json_aliases++; + } + } + if (!pe) + pmu->sysfs_aliases++; + else + pmu->loaded_json_aliases++; + list_add_tail(&alias->list, &pmu->aliases); return 0; } -static int perf_pmu__new_alias(struct list_head *list, int dirfd, char *name, FILE *file) -{ - char buf[256]; - int ret; - - ret = fread(buf, 1, sizeof(buf), file); - if (ret == 0) - return -EINVAL; - - buf[ret] = 0; - - /* Remove trailing newline from sysfs file */ - strim(buf); - - return __perf_pmu__new_alias(list, dirfd, name, NULL, buf, NULL); -} - static inline bool pmu_alias_info_file(char *name) { size_t len; @@ -458,18 +592,33 @@ static inline bool pmu_alias_info_file(char *name) } /* - * Process all the sysfs attributes located under the directory - * specified in 'dir' parameter. + * Reading the pmu event aliases definition, which should be located at: + * /sys/bus/event_source/devices/<dev>/events as sysfs group attributes. */ -static int pmu_aliases_parse(int dirfd, struct list_head *head) +static int pmu_aliases_parse(struct perf_pmu *pmu) { + char path[PATH_MAX]; struct dirent *evt_ent; DIR *event_dir; - int fd; + size_t len; + int fd, dir_fd; - event_dir = fdopendir(dirfd); - if (!event_dir) + len = perf_pmu__event_source_devices_scnprintf(path, sizeof(path)); + if (!len) + return 0; + scnprintf(path + len, sizeof(path) - len, "%s/events", pmu->name); + + dir_fd = open(path, O_DIRECTORY); + if (dir_fd == -1) { + pmu->sysfs_aliases_loaded = true; + return 0; + } + + event_dir = fdopendir(dir_fd); + if (!event_dir){ + close (dir_fd); return -EINVAL; + } while ((evt_ent = readdir(event_dir))) { char *name = evt_ent->d_name; @@ -484,7 +633,7 @@ static int pmu_aliases_parse(int dirfd, struct list_head *head) if (pmu_alias_info_file(name)) continue; - fd = openat(dirfd, name, O_RDONLY); + fd = openat(dir_fd, name, O_RDONLY); if (fd == -1) { pr_debug("Cannot open %s\n", name); continue; @@ -495,31 +644,15 @@ static int pmu_aliases_parse(int dirfd, struct list_head *head) continue; } - if (perf_pmu__new_alias(head, dirfd, name, file) < 0) + if (perf_pmu__new_alias(pmu, name, /*desc=*/ NULL, + /*val=*/ NULL, file, /*pe=*/ NULL) < 0) pr_debug("Cannot set up %s\n", name); fclose(file); } closedir(event_dir); - return 0; -} - -/* - * Reading the pmu event aliases definition, which should be located at: - * /sys/bus/event_source/devices/<dev>/events as sysfs group attributes. 
- */ -static int pmu_aliases(int dirfd, const char *name, struct list_head *head) -{ - int fd; - - fd = perf_pmu__pathname_fd(dirfd, name, "events", O_DIRECTORY); - if (fd < 0) - return 0; - - /* it'll close the fd */ - if (pmu_aliases_parse(fd, head)) - return -1; - + close (dir_fd); + pmu->sysfs_aliases_loaded = true; return 0; } @@ -741,28 +874,13 @@ out: return res; } -struct pmu_add_cpu_aliases_map_data { - /* List being added to. */ - struct list_head *head; - /* If a pmu_event lacks a given PMU the default used. */ - char *default_pmu_name; - /* The PMU that we're searching for events for. */ - struct perf_pmu *pmu; -}; - static int pmu_add_cpu_aliases_map_callback(const struct pmu_event *pe, const struct pmu_events_table *table __maybe_unused, void *vdata) { - struct pmu_add_cpu_aliases_map_data *data = vdata; - const char *pname = pe->pmu ?: data->default_pmu_name; + struct perf_pmu *pmu = vdata; - if (!strcmp(pname, data->pmu->name) || - (data->pmu->is_uncore && pmu_uncore_alias_match(pname, data->pmu->name))) { - /* need type casts to override 'const' */ - __perf_pmu__new_alias(data->head, -1, (char *)pe->name, (char *)pe->desc, - (char *)pe->event, pe); - } + perf_pmu__new_alias(pmu, pe->name, pe->desc, pe->event, /*val_fd=*/ NULL, pe); return 0; } @@ -770,68 +888,51 @@ static int pmu_add_cpu_aliases_map_callback(const struct pmu_event *pe, * From the pmu_events_table, find the events that correspond to the given * PMU and add them to the list 'head'. */ -void pmu_add_cpu_aliases_table(struct list_head *head, struct perf_pmu *pmu, - const struct pmu_events_table *table) +void pmu_add_cpu_aliases_table(struct perf_pmu *pmu, const struct pmu_events_table *table) { - struct pmu_add_cpu_aliases_map_data data = { - .head = head, - .default_pmu_name = perf_pmus__default_pmu_name(), - .pmu = pmu, - }; - - pmu_events_table_for_each_event(table, pmu_add_cpu_aliases_map_callback, &data); - free(data.default_pmu_name); + pmu_events_table__for_each_event(table, pmu, pmu_add_cpu_aliases_map_callback, pmu); } -static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu) +static void pmu_add_cpu_aliases(struct perf_pmu *pmu) { - const struct pmu_events_table *table; + if (!pmu->events_table) + return; - table = perf_pmu__find_events_table(pmu); - if (!table) + if (pmu->cpu_aliases_added) return; - pmu_add_cpu_aliases_table(head, pmu, table); + pmu_add_cpu_aliases_table(pmu, pmu->events_table); + pmu->cpu_aliases_added = true; } -struct pmu_sys_event_iter_data { - struct list_head *head; - struct perf_pmu *pmu; -}; - static int pmu_add_sys_aliases_iter_fn(const struct pmu_event *pe, const struct pmu_events_table *table __maybe_unused, - void *data) + void *vdata) { - struct pmu_sys_event_iter_data *idata = data; - struct perf_pmu *pmu = idata->pmu; + struct perf_pmu *pmu = vdata; if (!pe->compat || !pe->pmu) return 0; if (!strcmp(pmu->id, pe->compat) && pmu_uncore_alias_match(pe->pmu, pmu->name)) { - __perf_pmu__new_alias(idata->head, -1, - (char *)pe->name, - (char *)pe->desc, - (char *)pe->event, - pe); + perf_pmu__new_alias(pmu, + pe->name, + pe->desc, + pe->event, + /*val_fd=*/ NULL, + pe); } return 0; } -void pmu_add_sys_aliases(struct list_head *head, struct perf_pmu *pmu) +void pmu_add_sys_aliases(struct perf_pmu *pmu) { - struct pmu_sys_event_iter_data idata = { - .head = head, - .pmu = pmu, - }; - if (!pmu->id) return; - pmu_for_each_sys_event(pmu_add_sys_aliases_iter_fn, &idata); + pmu_for_each_sys_event(pmu_add_sys_aliases_iter_fn, pmu); } struct perf_event_attr * 
__weak @@ -840,13 +941,13 @@ perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused) return NULL; } -char * __weak +const char * __weak pmu_find_real_name(const char *name) { - return (char *)name; + return name; } -char * __weak +const char * __weak pmu_find_alias_name(const char *name __maybe_unused) { return NULL; @@ -863,40 +964,41 @@ static int pmu_max_precise(int dirfd, struct perf_pmu *pmu) struct perf_pmu *perf_pmu__lookup(struct list_head *pmus, int dirfd, const char *lookup_name) { struct perf_pmu *pmu; - LIST_HEAD(format); - LIST_HEAD(aliases); __u32 type; - char *name = pmu_find_real_name(lookup_name); - char *alias_name; - - /* - * The pmu data we store & need consists of the pmu - * type value and format definitions. Load both right - * now. - */ - if (pmu_format(dirfd, name, &format)) - return NULL; - - /* - * Check the aliases first to avoid unnecessary work. - */ - if (pmu_aliases(dirfd, name, &aliases)) - return NULL; + const char *name = pmu_find_real_name(lookup_name); + const char *alias_name; pmu = zalloc(sizeof(*pmu)); if (!pmu) return NULL; - pmu->is_core = is_pmu_core(name); - pmu->cpus = pmu_cpumask(dirfd, name, pmu->is_core); pmu->name = strdup(name); if (!pmu->name) goto err; - /* Read type, and ensure that type value is successfully assigned (return 1) */ + /* + * Read type early to fail fast if a lookup name isn't a PMU. Ensure + * that type value is successfully assigned (return 1). + */ if (perf_pmu__scan_file_at(pmu, dirfd, "type", "%u", &type) != 1) goto err; + INIT_LIST_HEAD(&pmu->format); + INIT_LIST_HEAD(&pmu->aliases); + INIT_LIST_HEAD(&pmu->caps); + + /* + * The pmu data we store & need consists of the pmu + * type value and format definitions. Load both right + * now. + */ + if (pmu_format(pmu, dirfd, name)) { + free(pmu); + return NULL; + } + pmu->is_core = is_pmu_core(name); + pmu->cpus = pmu_cpumask(dirfd, name, pmu->is_core); + alias_name = pmu_find_alias_name(name); if (alias_name) { pmu->alias_name = strdup(alias_name); @@ -909,14 +1011,8 @@ struct perf_pmu *perf_pmu__lookup(struct list_head *pmus, int dirfd, const char if (pmu->is_uncore) pmu->id = pmu_id(name); pmu->max_precise = pmu_max_precise(dirfd, pmu); - pmu_add_cpu_aliases(&aliases, pmu); - pmu_add_sys_aliases(&aliases, pmu); - - INIT_LIST_HEAD(&pmu->format); - INIT_LIST_HEAD(&pmu->aliases); - INIT_LIST_HEAD(&pmu->caps); - list_splice(&format, &pmu->format); - list_splice(&aliases, &pmu->aliases); + pmu->events_table = perf_pmu__find_events_table(pmu); + pmu_add_sys_aliases(pmu); list_add_tail(&pmu->list, pmus); pmu->default_config = perf_pmu__get_default_config(pmu); @@ -966,13 +1062,15 @@ void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu) if (pmu == &perf_pmu__fake) return; - list_for_each_entry(format, &pmu->format, list) + list_for_each_entry(format, &pmu->format, list) { + perf_pmu_format__load(pmu, format); if (format->value >= PERF_PMU_FORMAT_VALUE_CONFIG_END) { pr_warning("WARNING: '%s' format '%s' requires 'perf_event_attr::config%d'" "which is not supported by this version of perf!\n", pmu->name, format->name, format->value); return; } + } } bool evsel__is_aux_event(const struct evsel *evsel) @@ -1000,7 +1098,7 @@ void evsel__set_config_if_unset(struct perf_pmu *pmu, struct evsel *evsel, if (term) user_bits = term->val.cfg_chg; - bits = perf_pmu__format_bits(&pmu->format, config_name); + bits = perf_pmu__format_bits(pmu, config_name); /* Do nothing if the user changed the value */ if (bits & user_bits) @@ -1023,9 +1121,9 @@ pmu_find_format(struct list_head 
*formats, const char *name) return NULL; } -__u64 perf_pmu__format_bits(struct list_head *formats, const char *name) +__u64 perf_pmu__format_bits(struct perf_pmu *pmu, const char *name) { - struct perf_pmu_format *format = pmu_find_format(formats, name); + struct perf_pmu_format *format = pmu_find_format(&pmu->format, name); __u64 bits = 0; int fbit; @@ -1038,13 +1136,14 @@ __u64 perf_pmu__format_bits(struct list_head *formats, const char *name) return bits; } -int perf_pmu__format_type(struct list_head *formats, const char *name) +int perf_pmu__format_type(struct perf_pmu *pmu, const char *name) { - struct perf_pmu_format *format = pmu_find_format(formats, name); + struct perf_pmu_format *format = pmu_find_format(&pmu->format, name); if (!format) return -1; + perf_pmu_format__load(pmu, format); return format->value; } @@ -1135,8 +1234,7 @@ error: * Setup one of config[12] attr members based on the * user input data - term parameter. */ -static int pmu_config_term(const char *pmu_name, - struct list_head *formats, +static int pmu_config_term(struct perf_pmu *pmu, struct perf_event_attr *attr, struct parse_events_term *term, struct list_head *head_terms, @@ -1160,15 +1258,15 @@ static int pmu_config_term(const char *pmu_name, if (parse_events__is_hardcoded_term(term)) return 0; - format = pmu_find_format(formats, term->config); + format = pmu_find_format(&pmu->format, term->config); if (!format) { - char *pmu_term = pmu_formats_string(formats); + char *pmu_term = pmu_formats_string(&pmu->format); char *unknown_term; char *help_msg; if (asprintf(&unknown_term, "unknown term '%s' for pmu '%s'", - term->config, pmu_name) < 0) + term->config, pmu->name) < 0) unknown_term = NULL; help_msg = parse_events_formats_error_string(pmu_term); if (err) { @@ -1182,7 +1280,7 @@ static int pmu_config_term(const char *pmu_name, free(pmu_term); return -EINVAL; } - + perf_pmu_format__load(pmu, format); switch (format->value) { case PERF_PMU_FORMAT_VALUE_CONFIG: vp = &attr->config; @@ -1259,7 +1357,7 @@ static int pmu_config_term(const char *pmu_name, return 0; } -int perf_pmu__config_terms(const char *pmu_name, struct list_head *formats, +int perf_pmu__config_terms(struct perf_pmu *pmu, struct perf_event_attr *attr, struct list_head *head_terms, bool zero, struct parse_events_error *err) @@ -1267,8 +1365,7 @@ int perf_pmu__config_terms(const char *pmu_name, struct list_head *formats, struct parse_events_term *term; list_for_each_entry(term, head_terms, list) { - if (pmu_config_term(pmu_name, formats, attr, term, head_terms, - zero, err)) + if (pmu_config_term(pmu, attr, term, head_terms, zero, err)) return -EINVAL; } @@ -1286,25 +1383,25 @@ int perf_pmu__config(struct perf_pmu *pmu, struct perf_event_attr *attr, { bool zero = !!pmu->default_config; - return perf_pmu__config_terms(pmu->name, &pmu->format, attr, - head_terms, zero, err); + return perf_pmu__config_terms(pmu, attr, head_terms, zero, err); } static struct perf_pmu_alias *pmu_find_alias(struct perf_pmu *pmu, struct parse_events_term *term) { struct perf_pmu_alias *alias; - char *name; + const char *name; if (parse_events__is_hardcoded_term(term)) return NULL; if (term->type_val == PARSE_EVENTS__TERM_TYPE_NUM) { - if (term->val.num != 1) + if (!term->no_value) return NULL; if (pmu_find_format(&pmu->format, term->config)) return NULL; name = term->config; + } else if (term->type_val == PARSE_EVENTS__TERM_TYPE_STR) { if (strcasecmp(term->config, "event")) return NULL; @@ -1313,26 +1410,51 @@ static struct perf_pmu_alias *pmu_find_alias(struct perf_pmu 
*pmu, return NULL; } - list_for_each_entry(alias, &pmu->aliases, list) { - if (!strcasecmp(alias->name, name)) - return alias; + alias = perf_pmu__find_alias(pmu, name, /*load=*/ true); + if (alias || pmu->cpu_aliases_added) + return alias; + + /* Alias doesn't exist, try to get it from the json events. */ + if (pmu->events_table && + pmu_events_table__find_event(pmu->events_table, pmu, name, + pmu_add_cpu_aliases_map_callback, + pmu) == 0) { + alias = perf_pmu__find_alias(pmu, name, /*load=*/ false); } - return NULL; + return alias; } -static int check_info_data(struct perf_pmu_alias *alias, - struct perf_pmu_info *info) +static int check_info_data(struct perf_pmu *pmu, + struct perf_pmu_alias *alias, + struct perf_pmu_info *info, + struct parse_events_error *err, + int column) { + read_alias_info(pmu, alias); /* * Only one term in event definition can * define unit, scale and snapshot, fail * if there's more than one. */ - if ((info->unit && alias->unit[0]) || - (info->scale && alias->scale) || - (info->snapshot && alias->snapshot)) + if (info->unit && alias->unit[0]) { + parse_events_error__handle(err, column, + strdup("Attempt to set event's unit twice"), + NULL); return -EINVAL; + } + if (info->scale && alias->scale) { + parse_events_error__handle(err, column, + strdup("Attempt to set event's scale twice"), + NULL); + return -EINVAL; + } + if (info->snapshot && alias->snapshot) { + parse_events_error__handle(err, column, + strdup("Attempt to set event snapshot twice"), + NULL); + return -EINVAL; + } if (alias->unit[0]) info->unit = alias->unit; @@ -1351,7 +1473,7 @@ static int check_info_data(struct perf_pmu_alias *alias, * defined for the alias */ int perf_pmu__check_alias(struct perf_pmu *pmu, struct list_head *head_terms, - struct perf_pmu_info *info) + struct perf_pmu_info *info, struct parse_events_error *err) { struct parse_events_term *term, *h; struct perf_pmu_alias *alias; @@ -1372,10 +1494,14 @@ int perf_pmu__check_alias(struct perf_pmu *pmu, struct list_head *head_terms, if (!alias) continue; ret = pmu_alias_terms(alias, &term->list); - if (ret) + if (ret) { + parse_events_error__handle(err, term->err_term, + strdup("Failure to duplicate terms"), + NULL); return ret; + } - ret = check_info_data(alias, info); + ret = check_info_data(pmu, alias, info, err, term->err_term); if (ret) return ret; @@ -1400,36 +1526,36 @@ int perf_pmu__check_alias(struct perf_pmu *pmu, struct list_head *head_terms, return 0; } -int perf_pmu__new_format(struct list_head *list, char *name, - int config, unsigned long *bits) -{ - struct perf_pmu_format *format; +struct find_event_args { + const char *event; + void *state; + pmu_event_callback cb; +}; - format = zalloc(sizeof(*format)); - if (!format) - return -ENOMEM; +static int find_event_callback(void *state, struct pmu_event_info *info) +{ + struct find_event_args *args = state; - format->name = strdup(name); - format->value = config; - memcpy(format->bits, bits, sizeof(format->bits)); + if (!strcmp(args->event, info->name)) + return args->cb(args->state, info); - list_add_tail(&format->list, list); return 0; } -void perf_pmu__set_format(unsigned long *bits, long from, long to) +int perf_pmu__find_event(struct perf_pmu *pmu, const char *event, void *state, pmu_event_callback cb) { - long b; - - if (!to) - to = from; + struct find_event_args args = { + .event = event, + .state = state, + .cb = cb, + }; - memset(bits, 0, BITS_TO_BYTES(PERF_PMU_FORMAT_BITS)); - for (b = from; b <= to; b++) - __set_bit(b, bits); + /* Sub-optimal, but function is only 
used by tests. */ + return perf_pmu__for_each_event(pmu, /*skip_duplicate_pmus=*/ false, + &args, find_event_callback); } -void perf_pmu__del_formats(struct list_head *formats) +static void perf_pmu__del_formats(struct list_head *formats) { struct perf_pmu_format *fmt, *tmp; @@ -1466,15 +1592,145 @@ bool perf_pmu__auto_merge_stats(const struct perf_pmu *pmu) return !pmu->is_core || perf_pmus__num_core_pmus() == 1; } -bool perf_pmu__have_event(const struct perf_pmu *pmu, const char *name) +bool perf_pmu__have_event(struct perf_pmu *pmu, const char *name) { - struct perf_pmu_alias *alias; + if (perf_pmu__find_alias(pmu, name, /*load=*/ true) != NULL) + return true; + if (pmu->cpu_aliases_added || !pmu->events_table) + return false; + return pmu_events_table__find_event(pmu->events_table, pmu, name, NULL, NULL) == 0; +} - list_for_each_entry(alias, &pmu->aliases, list) { - if (!strcmp(alias->name, name)) - return true; +size_t perf_pmu__num_events(struct perf_pmu *pmu) +{ + size_t nr; + + if (!pmu->sysfs_aliases_loaded) + pmu_aliases_parse(pmu); + + nr = pmu->sysfs_aliases; + + if (pmu->cpu_aliases_added) + nr += pmu->loaded_json_aliases; + else if (pmu->events_table) + nr += pmu_events_table__num_events(pmu->events_table, pmu) - pmu->loaded_json_aliases; + + return pmu->selectable ? nr + 1 : nr; +} + +static int sub_non_neg(int a, int b) +{ + if (b > a) + return 0; + return a - b; +} + +static char *format_alias(char *buf, int len, const struct perf_pmu *pmu, + const struct perf_pmu_alias *alias, bool skip_duplicate_pmus) +{ + struct parse_events_term *term; + int pmu_name_len = skip_duplicate_pmus + ? pmu_name_len_no_suffix(pmu->name, /*num=*/NULL) + : (int)strlen(pmu->name); + int used = snprintf(buf, len, "%.*s/%s", pmu_name_len, pmu->name, alias->name); + + list_for_each_entry(term, &alias->terms, list) { + if (term->type_val == PARSE_EVENTS__TERM_TYPE_STR) + used += snprintf(buf + used, sub_non_neg(len, used), + ",%s=%s", term->config, + term->val.str); } - return false; + + if (sub_non_neg(len, used) > 0) { + buf[used] = '/'; + used++; + } + if (sub_non_neg(len, used) > 0) { + buf[used] = '\0'; + used++; + } else + buf[len - 1] = '\0'; + + return buf; +} + +int perf_pmu__for_each_event(struct perf_pmu *pmu, bool skip_duplicate_pmus, + void *state, pmu_event_callback cb) +{ + char buf[1024]; + struct perf_pmu_alias *event; + struct pmu_event_info info = { + .pmu = pmu, + }; + int ret = 0; + struct strbuf sb; + + strbuf_init(&sb, /*hint=*/ 0); + pmu_add_cpu_aliases(pmu); + list_for_each_entry(event, &pmu->aliases, list) { + size_t buf_used; + + info.pmu_name = event->pmu_name ?: pmu->name; + info.alias = NULL; + if (event->desc) { + info.name = event->name; + buf_used = 0; + } else { + info.name = format_alias(buf, sizeof(buf), pmu, event, + skip_duplicate_pmus); + if (pmu->is_core) { + info.alias = info.name; + info.name = event->name; + } + buf_used = strlen(buf) + 1; + } + info.scale_unit = NULL; + if (strlen(event->unit) || event->scale != 1.0) { + info.scale_unit = buf + buf_used; + buf_used += snprintf(buf + buf_used, sizeof(buf) - buf_used, + "%G%s", event->scale, event->unit) + 1; + } + info.desc = event->desc; + info.long_desc = event->long_desc; + info.encoding_desc = buf + buf_used; + parse_events_term__to_strbuf(&event->terms, &sb); + buf_used += snprintf(buf + buf_used, sizeof(buf) - buf_used, + "%s/%s/", info.pmu_name, sb.buf) + 1; + info.topic = event->topic; + info.str = sb.buf; + info.deprecated = event->deprecated; + ret = cb(state, &info); + if (ret) + goto out; + 
strbuf_setlen(&sb, /*len=*/ 0); + } + if (pmu->selectable) { + info.name = buf; + snprintf(buf, sizeof(buf), "%s//", pmu->name); + info.alias = NULL; + info.scale_unit = NULL; + info.desc = NULL; + info.long_desc = NULL; + info.encoding_desc = NULL; + info.topic = NULL; + info.pmu_name = pmu->name; + info.deprecated = false; + ret = cb(state, &info); + } +out: + strbuf_release(&sb); + return ret; +} + +bool pmu__name_match(const struct perf_pmu *pmu, const char *pmu_name) +{ + return !strcmp(pmu->name, pmu_name) || + (pmu->is_uncore && pmu_uncore_alias_match(pmu_name, pmu->name)) || + /* + * jevents and tests use default_core as a marker for any core + * PMU as the PMU name varies across architectures. + */ + (pmu->is_core && !strcmp(pmu_name, "default_core")); } bool perf_pmu__is_software(const struct perf_pmu *pmu) @@ -1710,7 +1966,7 @@ void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config, name ?: "N/A", buf, config_name, config); } -int perf_pmu__match(char *pattern, char *name, char *tok) +int perf_pmu__match(const char *pattern, const char *name, const char *tok) { if (!name) return -1; @@ -1756,17 +2012,19 @@ int perf_pmu__event_source_devices_fd(void) * then pathname will be filled with * "/sys/bus/event_source/devices/cs_etm/format" * - * Return 0 if the sysfs mountpoint couldn't be found or if no - * characters were written. + * Return 0 if the sysfs mountpoint couldn't be found, if no characters were + * written or if the buffer size is exceeded. */ int perf_pmu__pathname_scnprintf(char *buf, size_t size, const char *pmu_name, const char *filename) { - char base_path[PATH_MAX]; + size_t len; - if (!perf_pmu__event_source_devices_scnprintf(base_path, sizeof(base_path))) + len = perf_pmu__event_source_devices_scnprintf(buf, size); + if (!len || (len + strlen(pmu_name) + strlen(filename) + 1) >= size) return 0; - return scnprintf(buf, size, "%s%s/%s", base_path, pmu_name, filename); + + return scnprintf(buf + len, size - len, "%s/%s", pmu_name, filename); } int perf_pmu__pathname_fd(int dirfd, const char *pmu_name, const char *filename, int flags) @@ -1788,5 +2046,23 @@ void perf_pmu__delete(struct perf_pmu *pmu) zfree(&pmu->default_config); zfree(&pmu->name); zfree(&pmu->alias_name); + zfree(&pmu->id); free(pmu); } + +struct perf_pmu *pmu__find_core_pmu(void) +{ + struct perf_pmu *pmu = NULL; + + while ((pmu = perf_pmus__scan_core(pmu))) { + /* + * The cpumap should cover all CPUs. Otherwise, some CPUs may + * not support some events or have different event IDs. + */ + if (RC_CHK_ACCESS(pmu->cpus)->nr != cpu__max_cpu().cpu) + return NULL; + + return pmu; + } + return NULL; +} diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h index 6b414cecbad2..6a4e170c61d6 100644 --- a/tools/perf/util/pmu.h +++ b/tools/perf/util/pmu.h @@ -39,7 +39,7 @@ struct perf_pmu_caps { */ struct perf_pmu { /** @name: The name of the PMU such as "cpu". */ - char *name; + const char *name; /** * @alias_name: Optional alternate name for the PMU determined in * architecture specific code. @@ -49,7 +49,7 @@ struct perf_pmu { * @id: Optional PMU identifier read from * <sysfs>/bus/event_source/devices/<name>/identifier. */ - char *id; + const char *id; /** * @type: Perf event attributed type value, read from * <sysfs>/bus/event_source/devices/<name>/type. @@ -114,6 +114,21 @@ struct perf_pmu { * from json events in pmu-events.c. */ struct list_head aliases; + /** + * @events_table: The events table for json events in pmu-events.c. 
+ */ + const struct pmu_events_table *events_table; + /** @sysfs_aliases: Number of sysfs aliases loaded. */ + uint32_t sysfs_aliases; + /** @loaded_json_aliases: Number of json event aliases loaded. */ + uint32_t loaded_json_aliases; + /** @sysfs_aliases_loaded: Are sysfs aliases loaded from disk? */ + bool sysfs_aliases_loaded; + /** + * @cpu_aliases_added: Have all json events table entries for the PMU + * been added? + */ + bool cpu_aliases_added; /** @caps_initialized: Has the list caps been initialized? */ bool caps_initialized; /** @nr_caps: The length of the list caps. */ @@ -158,88 +173,49 @@ struct perf_pmu_info { bool snapshot; }; -#define UNIT_MAX_LEN 31 /* max length for event unit name */ - -/** - * struct perf_pmu_alias - An event either read from sysfs or builtin in - * pmu-events.c, created by parsing the pmu-events json files. - */ -struct perf_pmu_alias { - /** @name: Name of the event like "mem-loads". */ - char *name; - /** @desc: Optional short description of the event. */ - char *desc; - /** @long_desc: Optional long description. */ - char *long_desc; - /** - * @topic: Optional topic such as cache or pipeline, particularly for - * json events. - */ - char *topic; - /** - * @str: Comma separated parameter list like - * "event=0xcd,umask=0x1,ldlat=0x3". - */ - char *str; - /** @terms: Owned list of the original parsed parameters. */ - struct list_head terms; - /** @list: List element of struct perf_pmu aliases. */ - struct list_head list; - /** @unit: Units for the event, such as bytes or cache lines. */ - char unit[UNIT_MAX_LEN+1]; - /** @scale: Value to scale read counter values by. */ - double scale; - /** - * @per_pkg: Does the file - * <sysfs>/bus/event_source/devices/<pmu_name>/events/<name>.per-pkg or - * equivalent json value exist and have the value 1. - */ - bool per_pkg; - /** - * @snapshot: Does the file - * <sysfs>/bus/event_source/devices/<pmu_name>/events/<name>.snapshot - * exist and have the value 1. - */ - bool snapshot; - /** - * @deprecated: Is the event hidden and so not shown in perf list by - * default. - */ +struct pmu_event_info { + const struct perf_pmu *pmu; + const char *name; + const char* alias; + const char *scale_unit; + const char *desc; + const char *long_desc; + const char *encoding_desc; + const char *topic; + const char *pmu_name; + const char *str; bool deprecated; - /** - * @pmu_name: The name copied from the json struct pmu_event. This can - * differ from the PMU name as it won't have suffixes.
- */ - char *pmu_name; }; -void pmu_add_sys_aliases(struct list_head *head, struct perf_pmu *pmu); +typedef int (*pmu_event_callback)(void *state, struct pmu_event_info *info); + +void pmu_add_sys_aliases(struct perf_pmu *pmu); int perf_pmu__config(struct perf_pmu *pmu, struct perf_event_attr *attr, struct list_head *head_terms, struct parse_events_error *error); -int perf_pmu__config_terms(const char *pmu_name, struct list_head *formats, +int perf_pmu__config_terms(struct perf_pmu *pmu, struct perf_event_attr *attr, struct list_head *head_terms, bool zero, struct parse_events_error *error); -__u64 perf_pmu__format_bits(struct list_head *formats, const char *name); -int perf_pmu__format_type(struct list_head *formats, const char *name); +__u64 perf_pmu__format_bits(struct perf_pmu *pmu, const char *name); +int perf_pmu__format_type(struct perf_pmu *pmu, const char *name); int perf_pmu__check_alias(struct perf_pmu *pmu, struct list_head *head_terms, - struct perf_pmu_info *info); -struct list_head *perf_pmu__alias(struct perf_pmu *pmu, - struct list_head *head_terms); -void perf_pmu_error(struct list_head *list, char *name, void *scanner, char const *msg); + struct perf_pmu_info *info, struct parse_events_error *err); +int perf_pmu__find_event(struct perf_pmu *pmu, const char *event, void *state, pmu_event_callback cb); -int perf_pmu__new_format(struct list_head *list, char *name, - int config, unsigned long *bits); -void perf_pmu__set_format(unsigned long *bits, long from, long to); -int perf_pmu__format_parse(int dirfd, struct list_head *head); -void perf_pmu__del_formats(struct list_head *formats); +int perf_pmu__format_parse(struct perf_pmu *pmu, int dirfd, bool eager_load); +void perf_pmu_format__set_value(void *format, int config, unsigned long *bits); bool perf_pmu__has_format(const struct perf_pmu *pmu, const char *name); bool is_pmu_core(const char *name); bool perf_pmu__supports_legacy_cache(const struct perf_pmu *pmu); bool perf_pmu__auto_merge_stats(const struct perf_pmu *pmu); -bool perf_pmu__have_event(const struct perf_pmu *pmu, const char *name); +bool perf_pmu__have_event(struct perf_pmu *pmu, const char *name); +size_t perf_pmu__num_events(struct perf_pmu *pmu); +int perf_pmu__for_each_event(struct perf_pmu *pmu, bool skip_duplicate_pmus, + void *state, pmu_event_callback cb); +bool pmu__name_match(const struct perf_pmu *pmu, const char *pmu_name); + /** * perf_pmu_is_software - is the PMU a software PMU as in it uses the * perf_sw_context in the kernel? 
@@ -258,13 +234,12 @@ bool perf_pmu__file_exists(struct perf_pmu *pmu, const char *name); int perf_pmu__test(void); struct perf_event_attr *perf_pmu__get_default_config(struct perf_pmu *pmu); -void pmu_add_cpu_aliases_table(struct list_head *head, struct perf_pmu *pmu, +void pmu_add_cpu_aliases_table(struct perf_pmu *pmu, const struct pmu_events_table *table); char *perf_pmu__getcpuid(struct perf_pmu *pmu); const struct pmu_events_table *pmu_events_table__find(void); const struct pmu_metrics_table *pmu_metrics_table__find(void); -void perf_pmu_free_alias(struct perf_pmu_alias *alias); int perf_pmu__convert_scale(const char *scale, char **end, double *sval); @@ -275,10 +250,10 @@ void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config, const char *config_name); void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu); -int perf_pmu__match(char *pattern, char *name, char *tok); +int perf_pmu__match(const char *pattern, const char *name, const char *tok); -char *pmu_find_real_name(const char *name); -char *pmu_find_alias_name(const char *name); +const char *pmu_find_real_name(const char *name); +const char *pmu_find_alias_name(const char *name); double perf_pmu__cpu_slots_per_cycle(void); int perf_pmu__event_source_devices_scnprintf(char *pathname, size_t size); int perf_pmu__pathname_scnprintf(char *buf, size_t size, @@ -289,5 +264,6 @@ int perf_pmu__pathname_fd(int dirfd, const char *pmu_name, const char *filename, struct perf_pmu *perf_pmu__lookup(struct list_head *pmus, int dirfd, const char *lookup_name); struct perf_pmu *perf_pmu__create_placeholder_core_pmu(struct list_head *core_pmus); void perf_pmu__delete(struct perf_pmu *pmu); +struct perf_pmu *pmu__find_core_pmu(void); #endif /* __PMU_H */ diff --git a/tools/perf/util/pmu.y b/tools/perf/util/pmu.y index dff4e892ac4d..600c8c158c8e 100644 --- a/tools/perf/util/pmu.y +++ b/tools/perf/util/pmu.y @@ -1,6 +1,5 @@ %define api.pure full -%parse-param {struct list_head *format} -%parse-param {char *name} +%parse-param {void *format} %parse-param {void *scanner} %lex-param {void* scanner} @@ -11,6 +10,9 @@ #include <linux/bitmap.h> #include <string.h> #include "pmu.h" +#include "pmu-bison.h" + +int perf_pmu_lex(YYSTYPE * yylval_param , void *yyscanner); #define ABORT_ON(val) \ do { \ @@ -18,6 +20,20 @@ do { \ YYABORT; \ } while (0) +static void perf_pmu_error(void *format, void *scanner, const char *msg); + +static void perf_pmu__set_format(unsigned long *bits, long from, long to) +{ + long b; + + if (!to) + to = from; + + memset(bits, 0, BITS_TO_BYTES(PERF_PMU_FORMAT_BITS)); + for (b = from; b <= to; b++) + __set_bit(b, bits); +} + %} %token PP_CONFIG @@ -42,16 +58,12 @@ format_term format_term: PP_CONFIG ':' bits { - ABORT_ON(perf_pmu__new_format(format, name, - PERF_PMU_FORMAT_VALUE_CONFIG, - $3)); + perf_pmu_format__set_value(format, PERF_PMU_FORMAT_VALUE_CONFIG, $3); } | PP_CONFIG PP_VALUE ':' bits { - ABORT_ON(perf_pmu__new_format(format, name, - $2, - $4)); + perf_pmu_format__set_value(format, $2, $4); } bits: @@ -78,9 +90,8 @@ PP_VALUE %% -void perf_pmu_error(struct list_head *list __maybe_unused, - char *name __maybe_unused, - void *scanner __maybe_unused, - char const *msg __maybe_unused) +static void perf_pmu_error(void *format __maybe_unused, + void *scanner __maybe_unused, + const char *msg __maybe_unused) { } diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c index c58ba9fb6a36..6631367c756f 100644 --- a/tools/perf/util/pmus.c +++ b/tools/perf/util/pmus.c @@ -1,8 +1,11 @@ // SPDX-License-Identifier: 
GPL-2.0 #include <linux/list.h> +#include <linux/list_sort.h> +#include <linux/string.h> #include <linux/zalloc.h> #include <subcmd/pager.h> #include <sys/types.h> +#include <ctype.h> #include <dirent.h> #include <pthread.h> #include <string.h> @@ -33,6 +36,31 @@ static LIST_HEAD(other_pmus); static bool read_sysfs_core_pmus; static bool read_sysfs_all_pmus; +int pmu_name_len_no_suffix(const char *str, unsigned long *num) +{ + int orig_len, len; + + orig_len = len = strlen(str); + + /* Non-uncore PMUs have their full length, for example, i915. */ + if (!strstarts(str, "uncore_")) + return len; + + /* + * Count trailing digits and '_', if '_{num}' suffix isn't present use + * the full length. + */ + while (len > 0 && isdigit(str[len - 1])) + len--; + + if (len > 0 && len != orig_len && str[len - 1] == '_') { + if (num) + *num = strtoul(&str[len], NULL, 10); + return len - 1; + } + return orig_len; +} + void perf_pmus__destroy(void) { struct perf_pmu *pmu, *tmp; @@ -122,6 +150,25 @@ static struct perf_pmu *perf_pmu__find2(int dirfd, const char *name) return perf_pmu__lookup(core_pmu ? &core_pmus : &other_pmus, dirfd, name); } +static int pmus_cmp(void *priv __maybe_unused, + const struct list_head *lhs, const struct list_head *rhs) +{ + unsigned long lhs_num = 0, rhs_num = 0; + struct perf_pmu *lhs_pmu = container_of(lhs, struct perf_pmu, list); + struct perf_pmu *rhs_pmu = container_of(rhs, struct perf_pmu, list); + const char *lhs_pmu_name = lhs_pmu->name ?: ""; + const char *rhs_pmu_name = rhs_pmu->name ?: ""; + int lhs_pmu_name_len = pmu_name_len_no_suffix(lhs_pmu_name, &lhs_num); + int rhs_pmu_name_len = pmu_name_len_no_suffix(rhs_pmu_name, &rhs_num); + int ret = strncmp(lhs_pmu_name, rhs_pmu_name, + lhs_pmu_name_len < rhs_pmu_name_len ? lhs_pmu_name_len : rhs_pmu_name_len); + + if (lhs_pmu_name_len != rhs_pmu_name_len || ret != 0 || lhs_pmu_name_len == 0) + return ret; + + return lhs_num < rhs_num ? -1 : (lhs_num > rhs_num ? 1 : 0); +} + /* Add all pmus in sysfs to pmu list: */ static void pmu_read_sysfs(bool core_only) { @@ -156,6 +203,8 @@ static void pmu_read_sysfs(bool core_only) if (!perf_pmu__create_placeholder_core_pmu(&core_pmus)) pr_err("Failure to set up any core PMUs\n"); } + list_sort(NULL, &core_pmus, pmus_cmp); + list_sort(NULL, &other_pmus, pmus_cmp); if (!list_empty(&core_pmus)) { read_sysfs_core_pmus = true; if (!core_only) @@ -227,6 +276,43 @@ struct perf_pmu *perf_pmus__scan_core(struct perf_pmu *pmu) return NULL; } +static struct perf_pmu *perf_pmus__scan_skip_duplicates(struct perf_pmu *pmu) +{ + bool use_core_pmus = !pmu || pmu->is_core; + int last_pmu_name_len = 0; + const char *last_pmu_name = (pmu && pmu->name) ? 
pmu->name : ""; + + if (!pmu) { + pmu_read_sysfs(/*core_only=*/false); + pmu = list_prepare_entry(pmu, &core_pmus, list); + } else + last_pmu_name_len = pmu_name_len_no_suffix(pmu->name ?: "", NULL); + + if (use_core_pmus) { + list_for_each_entry_continue(pmu, &core_pmus, list) { + int pmu_name_len = pmu_name_len_no_suffix(pmu->name ?: "", /*num=*/NULL); + + if (last_pmu_name_len == pmu_name_len && + !strncmp(last_pmu_name, pmu->name ?: "", pmu_name_len)) + continue; + + return pmu; + } + pmu = NULL; + pmu = list_prepare_entry(pmu, &other_pmus, list); + } + list_for_each_entry_continue(pmu, &other_pmus, list) { + int pmu_name_len = pmu_name_len_no_suffix(pmu->name ?: "", /*num=*/NULL); + + if (last_pmu_name_len == pmu_name_len && + !strncmp(last_pmu_name, pmu->name ?: "", pmu_name_len)) + continue; + + return pmu; + } + return NULL; +} + const struct perf_pmu *perf_pmus__pmu_for_pmu_filter(const char *str) { struct perf_pmu *pmu = NULL; @@ -258,219 +344,153 @@ int __weak perf_pmus__num_mem_pmus(void) struct sevent { /** PMU for event. */ const struct perf_pmu *pmu; - /** - * Optional event for name, desc, etc. If not present then this is a - * selectable PMU and the event name is shown as "//". - */ - const struct perf_pmu_alias *event; - /** Is the PMU for the CPU? */ - bool is_cpu; + const char *name; + const char* alias; + const char *scale_unit; + const char *desc; + const char *long_desc; + const char *encoding_desc; + const char *topic; + const char *pmu_name; + bool deprecated; }; static int cmp_sevent(const void *a, const void *b) { const struct sevent *as = a; const struct sevent *bs = b; - const char *a_pmu_name = NULL, *b_pmu_name = NULL; - const char *a_name = "//", *a_desc = NULL, *a_topic = ""; - const char *b_name = "//", *b_desc = NULL, *b_topic = ""; + bool a_iscpu, b_iscpu; int ret; - if (as->event) { - a_name = as->event->name; - a_desc = as->event->desc; - a_topic = as->event->topic ?: ""; - a_pmu_name = as->event->pmu_name; - } - if (bs->event) { - b_name = bs->event->name; - b_desc = bs->event->desc; - b_topic = bs->event->topic ?: ""; - b_pmu_name = bs->event->pmu_name; - } /* Put extra events last. */ - if (!!a_desc != !!b_desc) - return !!a_desc - !!b_desc; + if (!!as->desc != !!bs->desc) + return !!as->desc - !!bs->desc; /* Order by topics. */ - ret = strcmp(a_topic, b_topic); + ret = strcmp(as->topic ?: "", bs->topic ?: ""); if (ret) return ret; /* Order CPU core events to be first */ - if (as->is_cpu != bs->is_cpu) - return as->is_cpu ? -1 : 1; + a_iscpu = as->pmu ? as->pmu->is_core : true; + b_iscpu = bs->pmu ? bs->pmu->is_core : true; + if (a_iscpu != b_iscpu) + return a_iscpu ? -1 : 1; /* Order by PMU name. */ if (as->pmu != bs->pmu) { - a_pmu_name = a_pmu_name ?: (as->pmu->name ?: ""); - b_pmu_name = b_pmu_name ?: (bs->pmu->name ?: ""); - ret = strcmp(a_pmu_name, b_pmu_name); + ret = strcmp(as->pmu_name ?: "", bs->pmu_name ?: ""); if (ret) return ret; } /* Order by event name. 
*/ - return strcmp(a_name, b_name); + return strcmp(as->name, bs->name); } -static bool pmu_alias_is_duplicate(struct sevent *alias_a, - struct sevent *alias_b) +static bool pmu_alias_is_duplicate(struct sevent *a, struct sevent *b) { - const char *a_pmu_name = NULL, *b_pmu_name = NULL; - const char *a_name = "//", *b_name = "//"; - - - if (alias_a->event) { - a_name = alias_a->event->name; - a_pmu_name = alias_a->event->pmu_name; - } - if (alias_b->event) { - b_name = alias_b->event->name; - b_pmu_name = alias_b->event->pmu_name; - } - /* Different names -> never duplicates */ - if (strcmp(a_name, b_name)) + if (strcmp(a->name ?: "//", b->name ?: "//")) return false; /* Don't remove duplicates for different PMUs */ - a_pmu_name = a_pmu_name ?: (alias_a->pmu->name ?: ""); - b_pmu_name = b_pmu_name ?: (alias_b->pmu->name ?: ""); - return strcmp(a_pmu_name, b_pmu_name) == 0; + return strcmp(a->pmu_name, b->pmu_name) == 0; } -static int sub_non_neg(int a, int b) -{ - if (b > a) - return 0; - return a - b; -} +struct events_callback_state { + struct sevent *aliases; + size_t aliases_len; + size_t index; +}; -static char *format_alias(char *buf, int len, const struct perf_pmu *pmu, - const struct perf_pmu_alias *alias) +static int perf_pmus__print_pmu_events__callback(void *vstate, + struct pmu_event_info *info) { - struct parse_events_term *term; - int used = snprintf(buf, len, "%s/%s", pmu->name, alias->name); - - list_for_each_entry(term, &alias->terms, list) { - if (term->type_val == PARSE_EVENTS__TERM_TYPE_STR) - used += snprintf(buf + used, sub_non_neg(len, used), - ",%s=%s", term->config, - term->val.str); - } + struct events_callback_state *state = vstate; + struct sevent *s; - if (sub_non_neg(len, used) > 0) { - buf[used] = '/'; - used++; + if (state->index >= state->aliases_len) { + pr_err("Unexpected event %s/%s/\n", info->pmu->name, info->name); + return 1; } - if (sub_non_neg(len, used) > 0) { - buf[used] = '\0'; - used++; - } else - buf[len - 1] = '\0'; - - return buf; + s = &state->aliases[state->index]; + s->pmu = info->pmu; +#define COPY_STR(str) s->str = info->str ? 
strdup(info->str) : NULL + COPY_STR(name); + COPY_STR(alias); + COPY_STR(scale_unit); + COPY_STR(desc); + COPY_STR(long_desc); + COPY_STR(encoding_desc); + COPY_STR(topic); + COPY_STR(pmu_name); +#undef COPY_STR + s->deprecated = info->deprecated; + state->index++; + return 0; } void perf_pmus__print_pmu_events(const struct print_callbacks *print_cb, void *print_state) { struct perf_pmu *pmu; - struct perf_pmu_alias *event; - char buf[1024]; int printed = 0; - int len, j; + int len; struct sevent *aliases; + struct events_callback_state state; + bool skip_duplicate_pmus = print_cb->skip_duplicate_pmus(print_state); + struct perf_pmu *(*scan_fn)(struct perf_pmu *); + + if (skip_duplicate_pmus) + scan_fn = perf_pmus__scan_skip_duplicates; + else + scan_fn = perf_pmus__scan; pmu = NULL; len = 0; - while ((pmu = perf_pmus__scan(pmu)) != NULL) { - list_for_each_entry(event, &pmu->aliases, list) - len++; - if (pmu->selectable) - len++; - } + while ((pmu = scan_fn(pmu)) != NULL) + len += perf_pmu__num_events(pmu); + aliases = zalloc(sizeof(struct sevent) * len); if (!aliases) { pr_err("FATAL: not enough memory to print PMU events\n"); return; } pmu = NULL; - j = 0; - while ((pmu = perf_pmus__scan(pmu)) != NULL) { - bool is_cpu = pmu->is_core; - - list_for_each_entry(event, &pmu->aliases, list) { - aliases[j].event = event; - aliases[j].pmu = pmu; - aliases[j].is_cpu = is_cpu; - j++; - } - if (pmu->selectable) { - aliases[j].event = NULL; - aliases[j].pmu = pmu; - aliases[j].is_cpu = is_cpu; - j++; - } + state = (struct events_callback_state) { + .aliases = aliases, + .aliases_len = len, + .index = 0, + }; + while ((pmu = scan_fn(pmu)) != NULL) { + perf_pmu__for_each_event(pmu, skip_duplicate_pmus, &state, + perf_pmus__print_pmu_events__callback); } - len = j; qsort(aliases, len, sizeof(struct sevent), cmp_sevent); - for (j = 0; j < len; j++) { - const char *name, *alias = NULL, *scale_unit = NULL, - *desc = NULL, *long_desc = NULL, - *encoding_desc = NULL, *topic = NULL, - *pmu_name = NULL; - bool deprecated = false; - size_t buf_used; - + for (int j = 0; j < len; j++) { /* Skip duplicates */ if (j > 0 && pmu_alias_is_duplicate(&aliases[j], &aliases[j - 1])) continue; - if (!aliases[j].event) { - /* A selectable event. 
*/ - pmu_name = aliases[j].pmu->name; - buf_used = snprintf(buf, sizeof(buf), "%s//", pmu_name) + 1; - name = buf; - } else { - if (aliases[j].event->desc) { - name = aliases[j].event->name; - buf_used = 0; - } else { - name = format_alias(buf, sizeof(buf), aliases[j].pmu, - aliases[j].event); - if (aliases[j].is_cpu) { - alias = name; - name = aliases[j].event->name; - } - buf_used = strlen(buf) + 1; - } - pmu_name = aliases[j].event->pmu_name ?: (aliases[j].pmu->name ?: ""); - if (strlen(aliases[j].event->unit) || aliases[j].event->scale != 1.0) { - scale_unit = buf + buf_used; - buf_used += snprintf(buf + buf_used, sizeof(buf) - buf_used, - "%G%s", aliases[j].event->scale, - aliases[j].event->unit) + 1; - } - desc = aliases[j].event->desc; - long_desc = aliases[j].event->long_desc; - topic = aliases[j].event->topic; - encoding_desc = buf + buf_used; - buf_used += snprintf(buf + buf_used, sizeof(buf) - buf_used, - "%s/%s/", pmu_name, aliases[j].event->str) + 1; - deprecated = aliases[j].event->deprecated; - } print_cb->print_event(print_state, - pmu_name, - topic, - name, - alias, - scale_unit, - deprecated, + aliases[j].pmu_name, + aliases[j].topic, + aliases[j].name, + aliases[j].alias, + aliases[j].scale_unit, + aliases[j].deprecated, "Kernel PMU event", - desc, - long_desc, - encoding_desc); + aliases[j].desc, + aliases[j].long_desc, + aliases[j].encoding_desc); + zfree(&aliases[j].name); + zfree(&aliases[j].alias); + zfree(&aliases[j].scale_unit); + zfree(&aliases[j].desc); + zfree(&aliases[j].long_desc); + zfree(&aliases[j].encoding_desc); + zfree(&aliases[j].topic); + zfree(&aliases[j].pmu_name); } if (printed && pager_in_use()) printf("\n"); diff --git a/tools/perf/util/pmus.h b/tools/perf/util/pmus.h index a21464432d0f..4c67153ac257 100644 --- a/tools/perf/util/pmus.h +++ b/tools/perf/util/pmus.h @@ -5,6 +5,8 @@ struct perf_pmu; struct print_callbacks; +int pmu_name_len_no_suffix(const char *str, unsigned long *num); + void perf_pmus__destroy(void); struct perf_pmu *perf_pmus__find(const char *name); diff --git a/tools/perf/util/print-events.h b/tools/perf/util/print-events.h index d7fab411e75c..bf4290bef0cd 100644 --- a/tools/perf/util/print-events.h +++ b/tools/perf/util/print-events.h @@ -26,6 +26,7 @@ struct print_callbacks { const char *expr, const char *threshold, const char *unit); + bool (*skip_duplicate_pmus)(void *print_state); }; /** Print all events, the default when no options are specified. */ diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c index 16822a8a540f..1a5b7fa459b2 100644 --- a/tools/perf/util/probe-event.c +++ b/tools/perf/util/probe-event.c @@ -53,6 +53,8 @@ bool probe_event_dry_run; /* Dry run flag */ struct probe_conf probe_conf = { .magic_num = DEFAULT_PROBE_MAGIC_NUM }; +static char *synthesize_perf_probe_point(struct perf_probe_point *pp); + #define semantic_error(msg ...) pr_err("Semantic error :" msg) int e_snprintf(char *str, size_t size, const char *format, ...) @@ -961,8 +963,9 @@ static int try_to_find_probe_trace_events(struct perf_probe_event *pev, debuginfo__delete(dinfo); if (ntevs == 0) { /* No error but failed to find probe point. 
*/ - pr_warning("Probe point '%s' not found.\n", - synthesize_perf_probe_point(&pev->point)); + char *probe_point = synthesize_perf_probe_point(&pev->point); + pr_warning("Probe point '%s' not found.\n", probe_point); + free(probe_point); return -ENODEV; } else if (ntevs < 0) { /* Error path : ntevs < 0 */ @@ -2009,7 +2012,7 @@ out: } /* Compose only probe point (not argument) */ -char *synthesize_perf_probe_point(struct perf_probe_point *pp) +static char *synthesize_perf_probe_point(struct perf_probe_point *pp) { struct strbuf buf; char *tmp, *ret = NULL; @@ -2062,14 +2065,18 @@ char *synthesize_perf_probe_command(struct perf_probe_event *pev) goto out; tmp = synthesize_perf_probe_point(&pev->point); - if (!tmp || strbuf_addstr(&buf, tmp) < 0) + if (!tmp || strbuf_addstr(&buf, tmp) < 0) { + free(tmp); goto out; + } free(tmp); for (i = 0; i < pev->nargs; i++) { tmp = synthesize_perf_probe_arg(pev->args + i); - if (!tmp || strbuf_addf(&buf, " %s", tmp) < 0) + if (!tmp || strbuf_addf(&buf, " %s", tmp) < 0) { + free(tmp); goto out; + } free(tmp); } @@ -2800,13 +2807,18 @@ static void warn_uprobe_event_compat(struct probe_trace_event *tev) if (!tev->uprobes || tev->nargs == 0 || !buf) goto out; - for (i = 0; i < tev->nargs; i++) - if (strglobmatch(tev->args[i].value, "[$@+-]*")) { - pr_warning("Please upgrade your kernel to at least " - "3.14 to have access to feature %s\n", + for (i = 0; i < tev->nargs; i++) { + if (strchr(tev->args[i].value, '@')) { + pr_warning("%s accesses a variable by symbol name, but that is not supported for user application probe.\n", tev->args[i].value); break; } + if (strglobmatch(tev->args[i].value, "[$+-]*")) { + pr_warning("Please upgrade your kernel to at least 3.14 to have access to feature %s\n", + tev->args[i].value); + break; + } + } out: free(buf); } diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h index 8ad5b1579f1d..7e3b6c3d1f74 100644 --- a/tools/perf/util/probe-event.h +++ b/tools/perf/util/probe-event.h @@ -137,7 +137,6 @@ int parse_probe_trace_command(const char *cmd, struct probe_trace_event *tev); char *synthesize_perf_probe_command(struct perf_probe_event *pev); char *synthesize_probe_trace_command(struct probe_trace_event *tev); char *synthesize_perf_probe_arg(struct perf_probe_arg *pa); -char *synthesize_perf_probe_point(struct perf_probe_point *pp); int perf_probe_event__copy(struct perf_probe_event *dst, struct perf_probe_event *src); diff --git a/tools/perf/util/python-ext-sources b/tools/perf/util/python-ext-sources index d4c9b4cd35ef..26e1c8d973ea 100644 --- a/tools/perf/util/python-ext-sources +++ b/tools/perf/util/python-ext-sources @@ -40,3 +40,12 @@ util/rwsem.c util/hashmap.c util/perf_regs.c util/fncache.c +util/perf-regs-arch/perf_regs_aarch64.c +util/perf-regs-arch/perf_regs_arm.c +util/perf-regs-arch/perf_regs_csky.c +util/perf-regs-arch/perf_regs_loongarch.c +util/perf-regs-arch/perf_regs_mips.c +util/perf-regs-arch/perf_regs_powerpc.c +util/perf-regs-arch/perf_regs_riscv.c +util/perf-regs-arch/perf_regs_s390.c +util/perf-regs-arch/perf_regs_x86.c diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c index 4eed8ec23994..c29f5f0bb552 100644 --- a/tools/perf/util/python.c +++ b/tools/perf/util/python.c @@ -113,6 +113,11 @@ bool evsel__is_aux_event(const struct evsel *evsel __maybe_unused) return false; } +bool perf_pmus__supports_extended_type(void) +{ + return false; +} + /* * Add this one here not to drag util/metricgroup.c */ diff --git a/tools/perf/util/s390-sample-raw.c 
b/tools/perf/util/s390-sample-raw.c index c10b891dbad6..115b16edb451 100644 --- a/tools/perf/util/s390-sample-raw.c +++ b/tools/perf/util/s390-sample-raw.c @@ -27,7 +27,7 @@ #include "color.h" #include "sample-raw.h" #include "s390-cpumcf-kernel.h" -#include "pmu-events/pmu-events.h" +#include "util/pmu.h" #include "util/sample.h" static size_t ctrset_size(struct cf_ctrset_entry *set) @@ -132,56 +132,58 @@ static int get_counterset_start(int setnr) struct get_counter_name_data { int wanted; - const char *result; + char *result; }; -static int get_counter_name_callback(const struct pmu_event *evp, - const struct pmu_events_table *table __maybe_unused, - void *vdata) +static int get_counter_name_callback(void *vdata, struct pmu_event_info *info) { struct get_counter_name_data *data = vdata; int rc, event_nr; + const char *event_str; - if (evp->name == NULL || evp->event == NULL) + if (info->str == NULL) return 0; - rc = sscanf(evp->event, "event=%x", &event_nr); + + event_str = strstr(info->str, "event="); + if (!event_str) + return 0; + + rc = sscanf(event_str, "event=%x", &event_nr); if (rc == 1 && event_nr == data->wanted) { - data->result = evp->name; + data->result = strdup(info->name); return 1; /* Terminate the search. */ } return 0; } -/* Scan the PMU table and extract the logical name of a counter from the - * PMU events table. Input is the counter set and counter number with in the - * set. Construct the event number and use this as key. If they match return - * the name of this counter. +/* Scan the PMU and extract the logical name of a counter from the event. Input + * is the counter set and counter number with in the set. Construct the event + * number and use this as key. If they match return the name of this counter. * If no match is found a NULL pointer is returned. 
*/ -static const char *get_counter_name(int set, int nr, const struct pmu_events_table *table) +static char *get_counter_name(int set, int nr, struct perf_pmu *pmu) { struct get_counter_name_data data = { .wanted = get_counterset_start(set) + nr, .result = NULL, }; - if (!table) + if (!pmu) return NULL; - pmu_events_table_for_each_event(table, get_counter_name_callback, &data); + perf_pmu__for_each_event(pmu, /*skip_duplicate_pmus=*/ true, + &data, get_counter_name_callback); return data.result; } -static void s390_cpumcfdg_dump(struct perf_sample *sample) +static void s390_cpumcfdg_dump(struct perf_pmu *pmu, struct perf_sample *sample) { size_t i, len = sample->raw_size, offset = 0; unsigned char *buf = sample->raw_data; const char *color = PERF_COLOR_BLUE; struct cf_ctrset_entry *cep, ce; - const struct pmu_events_table *table; u64 *p; - table = pmu_events_table__find(); while (offset < len) { cep = (struct cf_ctrset_entry *)(buf + offset); @@ -199,11 +201,12 @@ static void s390_cpumcfdg_dump(struct perf_sample *sample) color_fprintf(stdout, color, " [%#08zx] Counterset:%d" " Counters:%d\n", offset, ce.set, ce.ctr); for (i = 0, p = (u64 *)(cep + 1); i < ce.ctr; ++i, ++p) { - const char *ev_name = get_counter_name(ce.set, i, table); + char *ev_name = get_counter_name(ce.set, i, pmu); color_fprintf(stdout, color, "\tCounter:%03d %s Value:%#018lx\n", i, ev_name ?: "<unknown>", be64_to_cpu(*p)); + free(ev_name); } offset += ctrset_size(&ce); } @@ -216,14 +219,14 @@ static void s390_cpumcfdg_dump(struct perf_sample *sample) */ void evlist__s390_sample_raw(struct evlist *evlist, union perf_event *event, struct perf_sample *sample) { - struct evsel *ev_bc000; + struct evsel *evsel; if (event->header.type != PERF_RECORD_SAMPLE) return; - ev_bc000 = evlist__event2evsel(evlist, event); - if (ev_bc000 == NULL || - ev_bc000->core.attr.config != PERF_EVENT_CPUM_CF_DIAG) + evsel = evlist__event2evsel(evlist, event); + if (evsel == NULL || + evsel->core.attr.config != PERF_EVENT_CPUM_CF_DIAG) return; /* Display raw data on screen */ @@ -231,5 +234,5 @@ void evlist__s390_sample_raw(struct evlist *evlist, union perf_event *event, str pr_err("Invalid counter set data encountered\n"); return; } - s390_cpumcfdg_dump(sample); + s390_cpumcfdg_dump(evsel->pmu, sample); } diff --git a/tools/perf/util/scripting-engines/Build b/tools/perf/util/scripting-engines/Build index c220fec97032..586b94e90f4e 100644 --- a/tools/perf/util/scripting-engines/Build +++ b/tools/perf/util/scripting-engines/Build @@ -5,4 +5,5 @@ perf-$(CONFIG_LIBPYTHON) += trace-event-python.o CFLAGS_trace-event-perl.o += $(PERL_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-shadow -Wno-nested-externs -Wno-undef -Wno-switch-default -Wno-bad-function-cast -Wno-declaration-after-statement -Wno-switch-enum -CFLAGS_trace-event-python.o += $(PYTHON_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-shadow -Wno-deprecated-declarations -Wno-switch-enum +# -Wno-declaration-after-statement: The python headers have mixed code with declarations (decls after asserts, for instance) +CFLAGS_trace-event-python.o += $(PYTHON_EMBED_CCOPTS) -Wno-redundant-decls -Wno-strict-prototypes -Wno-unused-parameter -Wno-shadow -Wno-deprecated-declarations -Wno-switch-enum -Wno-declaration-after-statement diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c index 00d18c74c090..1e9aa8ed15b6 100644 --- a/tools/perf/util/session.c +++ b/tools/perf/util/session.c @@ -833,8 +833,8 @@ static void 
perf_event__hdr_attr_swap(union perf_event *event, perf_event__attr_swap(&event->attr.attr); size = event->header.size; - size -= (void *)&event->attr.id - (void *)event; - mem_bswap_64(event->attr.id, size); + size -= perf_record_header_attr_id(event) - (void *)event; + mem_bswap_64(perf_record_header_attr_id(event), size); } static void perf_event__event_update_swap(union perf_event *event, diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py index 869738fc06c3..79d5e2955f85 100644 --- a/tools/perf/util/setup.py +++ b/tools/perf/util/setup.py @@ -66,6 +66,9 @@ if cc_is_clang: else: cflags += ['-Wno-cast-function-type' ] +# The python headers have mixed code with declarations (decls after asserts, for instance) +cflags += [ "-Wno-declaration-after-statement" ] + src_perf = getenv('srctree') + '/tools/perf' build_lib = getenv('PYTHON_EXTBUILD_LIB') build_tmp = getenv('PYTHON_EXTBUILD_TMP') diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c index d45d5dcb0e2b..afe6db8e7bf4 100644 --- a/tools/perf/util/stat-display.c +++ b/tools/perf/util/stat-display.c @@ -578,7 +578,7 @@ static void print_metric_only_csv(struct perf_stat_config *config __maybe_unused if (!valid_only_metric(unit)) return; unit = fixunit(tbuf, os->evsel, unit); - snprintf(buf, sizeof buf, fmt, val); + snprintf(buf, sizeof(buf), fmt ?: "", val); ends = vals = skip_spaces(buf); while (isdigit(*ends) || *ends == '.') ends++; @@ -600,7 +600,7 @@ static void print_metric_only_json(struct perf_stat_config *config __maybe_unuse if (!valid_only_metric(unit)) return; unit = fixunit(tbuf, os->evsel, unit); - snprintf(buf, sizeof(buf), fmt, val); + snprintf(buf, sizeof(buf), fmt ?: "", val); ends = vals = skip_spaces(buf); while (isdigit(*ends) || *ends == '.') ends++; diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c index 967e583392c7..ec3506042217 100644 --- a/tools/perf/util/stat.c +++ b/tools/perf/util/stat.c @@ -729,7 +729,7 @@ size_t perf_event__fprintf_stat_round(union perf_event *event, FILE *fp) size_t perf_event__fprintf_stat_config(union perf_event *event, FILE *fp) { - struct perf_stat_config sc; + struct perf_stat_config sc = {}; size_t ret; perf_event__read_stat_config(&sc, &event->stat_config); diff --git a/tools/perf/util/svghelper.c b/tools/perf/util/svghelper.c index 5c62d3118c41..0e4dc31c6c9c 100644 --- a/tools/perf/util/svghelper.c +++ b/tools/perf/util/svghelper.c @@ -331,7 +331,7 @@ static char *cpu_model(void) file = fopen("/proc/cpuinfo", "r"); if (file) { while (fgets(buf, 255, file)) { - if (strstr(buf, "model name")) { + if (strcasestr(buf, "model name")) { strlcpy(cpu_m, &buf[13], 255); break; } diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c index 8bd466d1c2bd..95e99c332d7e 100644 --- a/tools/perf/util/symbol-elf.c +++ b/tools/perf/util/symbol-elf.c @@ -1440,6 +1440,8 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map, curr_dso->kernel = dso->kernel; curr_dso->long_name = dso->long_name; curr_dso->long_name_len = dso->long_name_len; + curr_dso->binary_type = dso->binary_type; + curr_dso->adjust_symbols = dso->adjust_symbols; curr_map = map__new2(start, curr_dso); dso__put(curr_dso); if (curr_map == NULL) diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c index f849f9ef68e6..3f36675b7c8f 100644 --- a/tools/perf/util/symbol.c +++ b/tools/perf/util/symbol.c @@ -2204,15 +2204,20 @@ int dso__load_vmlinux(struct dso *dso, struct map *map, if (symsrc__init(&ss, dso, symfs_vmlinux, symtab_type)) return 
-1; + /* + * dso__load_sym() may copy 'dso' which will result in the copies having + * an incorrect long name unless we set it here first. + */ + dso__set_long_name(dso, vmlinux, vmlinux_allocated); + if (dso->kernel == DSO_SPACE__KERNEL_GUEST) + dso->binary_type = DSO_BINARY_TYPE__GUEST_VMLINUX; + else + dso->binary_type = DSO_BINARY_TYPE__VMLINUX; + err = dso__load_sym(dso, map, &ss, &ss, 0); symsrc__destroy(&ss); if (err > 0) { - if (dso->kernel == DSO_SPACE__KERNEL_GUEST) - dso->binary_type = DSO_BINARY_TYPE__GUEST_VMLINUX; - else - dso->binary_type = DSO_BINARY_TYPE__VMLINUX; - dso__set_long_name(dso, vmlinux, vmlinux_allocated); dso__set_loaded(dso); pr_debug("Using %s for symbols\n", symfs_vmlinux); } diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c index 45714a2785fd..a0579c7d7b9e 100644 --- a/tools/perf/util/synthetic-events.c +++ b/tools/perf/util/synthetic-events.c @@ -2145,7 +2145,7 @@ int perf_event__synthesize_attr(struct perf_tool *tool, struct perf_event_attr * return -ENOMEM; ev->attr.attr = *attr; - memcpy(ev->attr.id, id, ids * sizeof(u64)); + memcpy(perf_record_header_attr_id(ev), id, ids * sizeof(u64)); ev->attr.header.type = PERF_RECORD_HEADER_ATTR; ev->attr.header.size = (u16)size; diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c index 0b166404c5c3..fe5e6991ae4b 100644 --- a/tools/perf/util/thread.c +++ b/tools/perf/util/thread.c @@ -80,6 +80,15 @@ err_thread: return NULL; } +static void (*thread__priv_destructor)(void *priv); + +void thread__set_priv_destructor(void (*destructor)(void *priv)) +{ + assert(thread__priv_destructor == NULL); + + thread__priv_destructor = destructor; +} + void thread__delete(struct thread *thread) { struct namespaces *namespaces, *tmp_namespaces; @@ -112,6 +121,10 @@ void thread__delete(struct thread *thread) exit_rwsem(thread__namespaces_lock(thread)); exit_rwsem(thread__comm_lock(thread)); thread__free_stitch_list(thread); + + if (thread__priv_destructor) + thread__priv_destructor(thread__priv(thread)); + RC_CHK_FREE(thread); } diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h index 9068a21ce0fa..e79225a0ea46 100644 --- a/tools/perf/util/thread.h +++ b/tools/perf/util/thread.h @@ -71,6 +71,8 @@ struct thread *thread__new(pid_t pid, pid_t tid); int thread__init_maps(struct thread *thread, struct machine *machine); void thread__delete(struct thread *thread); +void thread__set_priv_destructor(void (*destructor)(void *priv)); + struct thread *thread__get(struct thread *thread); void thread__put(struct thread *thread); diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c index 2a96df4c8d42..8554db3fc0d7 100644 --- a/tools/perf/util/unwind-libdw.c +++ b/tools/perf/util/unwind-libdw.c @@ -17,6 +17,7 @@ #include "event.h" #include "perf_regs.h" #include "callchain.h" +#include "util/env.h" static char *debuginfo_path; @@ -170,12 +171,14 @@ static bool memory_read(Dwfl *dwfl __maybe_unused, Dwarf_Addr addr, Dwarf_Word * void *arg) { struct unwind_info *ui = arg; + const char *arch = perf_env__arch(ui->machine->env); struct stack_dump *stack = &ui->sample->user_stack; u64 start, end; int offset; int ret; - ret = perf_reg_value(&start, &ui->sample->user_regs, PERF_REG_SP); + ret = perf_reg_value(&start, &ui->sample->user_regs, + perf_arch_reg_sp(arch)); if (ret) return false; @@ -253,6 +256,7 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg, .max_stack = max_stack, .best_effort = best_effort }; + const char *arch = 
perf_env__arch(ui_buf.machine->env); Dwarf_Word ip; int err = -EINVAL, i; @@ -269,7 +273,7 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg, if (!ui->dwfl) goto out; - err = perf_reg_value(&ip, &data->user_regs, PERF_REG_IP); + err = perf_reg_value(&ip, &data->user_regs, perf_arch_reg_ip(arch)); if (err) goto out; diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c index ebfde537b99b..c0641882fd2f 100644 --- a/tools/perf/util/unwind-libunwind-local.c +++ b/tools/perf/util/unwind-libunwind-local.c @@ -553,6 +553,7 @@ static int access_mem(unw_addr_space_t __maybe_unused as, int __write, void *arg) { struct unwind_info *ui = arg; + const char *arch = perf_env__arch(ui->machine->env); struct stack_dump *stack = &ui->sample->user_stack; u64 start, end; int offset; @@ -565,7 +566,7 @@ static int access_mem(unw_addr_space_t __maybe_unused as, } ret = perf_reg_value(&start, &ui->sample->user_regs, - LIBUNWIND__ARCH_REG_SP); + perf_arch_reg_sp(arch)); if (ret) return ret; @@ -714,6 +715,7 @@ static void _unwind__finish_access(struct maps *maps) static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb, void *arg, int max_stack) { + const char *arch = perf_env__arch(ui->machine->env); u64 val; unw_word_t ips[max_stack]; unw_addr_space_t addr_space; @@ -721,7 +723,7 @@ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb, int ret, i = 0; ret = perf_reg_value(&val, &ui->sample->user_regs, - LIBUNWIND__ARCH_REG_IP); + perf_arch_reg_ip(arch)); if (ret) return ret; diff --git a/tools/perf/util/unwind.h b/tools/perf/util/unwind.h index b2a03fa5289b..9f7164c6d9aa 100644 --- a/tools/perf/util/unwind.h +++ b/tools/perf/util/unwind.h @@ -42,14 +42,6 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg, #define LIBUNWIND__ARCH_REG_ID(regnum) libunwind__arch_reg_id(regnum) #endif -#ifndef LIBUNWIND__ARCH_REG_SP -#define LIBUNWIND__ARCH_REG_SP PERF_REG_SP -#endif - -#ifndef LIBUNWIND__ARCH_REG_IP -#define LIBUNWIND__ARCH_REG_IP PERF_REG_IP -#endif - int LIBUNWIND__ARCH_REG_ID(int regnum); int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized); void unwind__flush_access(struct maps *maps); diff --git a/tools/scripts/utilities.mak b/tools/scripts/utilities.mak index 172e47273b5d..d69d0345cc23 100644 --- a/tools/scripts/utilities.mak +++ b/tools/scripts/utilities.mak @@ -177,3 +177,23 @@ $(if $($(1)),$(call _ge_attempt,$($(1)),$(1)),$(call _ge_attempt,$(2))) endef _ge_attempt = $(or $(get-executable),$(call _gea_err,$(2))) _gea_err = $(if $(1),$(error Please set '$(1)' appropriately)) + +# version-ge3 +# +# Usage $(call version-ge3,2.6.4,$(FLEX_VERSION)) +# +# To compare if a 3 component version is greater than or equal to another; its first use +# was to check the flex version to see if we can use compiler warnings as +# errors for one of the cases where flex generates code that C compilers complain about. + +version-ge3 = $(shell echo "$(1).$(2)" | awk -F'.' '{ printf("%d\n", (10000000 * $$1 + 10000 * $$2 + $$3) >= (10000000 * $$4 + 10000 * $$5 + $$6)) }') + +# version-lt3 +# +# Usage $(call version-lt3,2.6.2,$(FLEX_VERSION)) +# +# To compare if a 3 component version is less than another; its first use was to +# check the flex version to see if we can use compiler warnings as errors for +# one of the cases where flex generates code that C compilers complain about. + +version-lt3 = $(shell echo "$(1).$(2)" | awk -F'.' '{ printf("%d\n", (10000000 * $$1 + 10000 * $$2 + $$3) < (10000000 * $$4 + 10000 * $$5 + $$6)) }')
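The pmu.h changes in this diff replace direct walks of pmu->aliases with a callback-based iteration API: struct pmu_event_info, the pmu_event_callback typedef and perf_pmu__for_each_event(), which is what perf_pmu__find_event(), perf_pmus__print_pmu_events() and the s390 raw-sample decoder now use. Below is a minimal sketch of a caller, based only on the declarations visible in this diff; the print_event_cb and list_all_pmu_events names and the include paths are illustrative assumptions, not code from the tree.

/* Sketch only: drives the pmu_event_callback iteration API added in pmu.h. */
#include <stdio.h>
#include "util/pmu.h"	/* struct perf_pmu, struct pmu_event_info, perf_pmu__for_each_event() */
#include "util/pmus.h"	/* perf_pmus__scan() */

/* Illustrative callback: print one line per event and count the events seen. */
static int print_event_cb(void *state, struct pmu_event_info *info)
{
	int *nr_events = state;

	printf("%s/%s/%s\n", info->pmu_name ?: "", info->name,
	       info->deprecated ? " (deprecated)" : "");
	(*nr_events)++;
	return 0;	/* a non-zero return stops the iteration */
}

/* Walk every PMU and list its events, skipping duplicate uncore PMU instances. */
static void list_all_pmu_events(void)
{
	struct perf_pmu *pmu = NULL;
	int nr_events = 0;

	while ((pmu = perf_pmus__scan(pmu)) != NULL)
		perf_pmu__for_each_event(pmu, /*skip_duplicate_pmus=*/true,
					 &nr_events, print_event_cb);
	printf("%d events\n", nr_events);
}

Returning non-zero from the callback ends the walk early; that is how find_event_callback() and the s390 get_counter_name_callback() in this diff stop as soon as they have a match.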