path: root/kernel/trace
* Merge tag 'trace-v4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace  [Linus Torvalds, 2015-09-08; 8 files, -416/+516]

    Pull tracing update from Steven Rostedt:
    "Mostly this is just clean ups and micro optimizations. The changes with more meat are:

     - Allowing the trace event filters to filter on CPU number and process ids
     - Two new markers for trace output latency were added (10 and 100 msec latencies)
     - Have tracing_thresh filter function profiling time

    I also worked on modifying the ring buffer code for some future work, and moved the adding of the timestamp around. One of my changes caused a regression, and since other changes were built on top of it and already tested, I had to revert that change. Instead of rebasing, this change set has the code that caused a regression as well as the code to revert that change without touching the other changes that were made on top of it"

    * tag 'trace-v4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
      ring-buffer: Revert "ring-buffer: Get timestamp after event is allocated"
      tracing: Don't make assumptions about length of string on task rename
      tracing: Allow triggers to filter for CPU ids and process names
      ftrace: Format MCOUNT_ADDR address as type unsigned long
      tracing: Introduce two additional marks for delay
      ftrace: Fix function_graph duration spacing with 7-digits
      ftrace: add tracing_thresh to function profile
      tracing: Clean up stack tracing and fix fentry updates
      ring-buffer: Reorganize function locations
      ring-buffer: Make sure event has enough room for extend and padding
      ring-buffer: Get timestamp after event is allocated
      ring-buffer: Move the adding of the extended timestamp out of line
      ring-buffer: Add event descriptor to simplify passing data
      ftrace: correct the counter increment for trace_buffer data
      tracing: Fix for non-continuous cpu ids
      tracing: Prefer kcalloc over kzalloc with multiply
| * ring-buffer: Revert "ring-buffer: Get timestamp after event is allocated"  [Steven Rostedt (Red Hat), 2015-09-03; 1 file, -106/+50]

    The commit a4543a2fa9ef31 "ring-buffer: Get timestamp after event is allocated" is needed for some future work. But after adding it, there is a race somewhere that causes the saved timestamp to have a slight shift, and get ahead of the actual timestamp and make it look like time goes backwards. I'm still looking into why this happens, but in the meantime, this is holding up other work to get in. I'm reverting the change for now (which makes the problem go away), and will add it back after I know what is wrong and fix it.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Allow triggers to filter for CPU ids and process names  [Daniel Wagner, 2015-08-12; 2 files, -2/+77]

    By extending the filter rules with more generic fields we can write trigger filters like

      echo 'stacktrace if cpu == 1' > \
        /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger

    or

      echo 'stacktrace if comm == sshd' > \
        /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger

    CPU and COMM are not part of struct trace_entry. We could add the two new fields to the ftrace_common_field list and fix up all depending sides. But that looks pretty ugly. Another thing I would like to avoid is that the 'format' file contents change. All this can be avoided by introducing another list which contains non-field members of struct trace_entry.

    Link: http://lkml.kernel.org/r/1439210146-24707-1-git-send-email-daniel.wagner@bmw-carit.de
    Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Introduce two additional marks for delay  [Jungseok Lee, 2015-07-21; 1 file, -1/+3]

    Fine-granularity support for delay would be very useful when profiling VM logic, such as page allocation including page reclaim and memory compaction with function graph. Thus, this patch adds two additional marks, with two changes.

    - An equal sign in the mark selection function is removed to align code behavior with comments and documentation.
    - The function graph example related to delay in ftrace.txt is updated to cover all supported marks.

    Link: http://lkml.kernel.org/r/1436626300-1679-3-git-send-email-jungseoklee85@gmail.com
    Cc: Byungchul Park <byungchul.park@lge.com>
    Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ftrace: Fix function_graph duration spacing with 7-digits  [Steven Rostedt (Red Hat), 2015-07-21; 1 file, -2/+2]

    Jungseok Lee noticed the following: currently, the row width of 7-digit duration numbers is not aligned with the other cases, as in this example.

      3) $ 3999884 us |  }
      3)              |  finish_task_switch() {
      3)   0.365 us   |    _raw_spin_unlock_irq();
      3)   3.333 us   |  }
      3) $ 3999976 us |  }
      3) $ 3999979 us |  } /* schedule */

    By adding a single white space in the 7-digit case, the format can be unified easily, as follows.

      3) $ 2237472 us  |  }
      3)               |  finish_task_switch() {
      3)   0.364 us    |    _raw_spin_unlock_irq();
      3)   3.125 us    |  }
      3) $ 2237556 us  |  }
      3) $ 2237559 us  |  } /* schedule */

    Instead of making a special case for 7-digit numbers, the logic of the len and the space loop is slightly modified to make the two cases have the same format.

    Link: http://lkml.kernel.org/r/1436626300-1679-2-git-send-email-jungseoklee85@gmail.com
    Reported-by: Jungseok Lee <jungseoklee85@gmail.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ftrace: add tracing_thresh to function profile  [Umesh Tiwari, 2015-07-21; 1 file, -2/+7]

    This patch extends the tracing_thresh functionality to the function profile tracer. If tracing_thresh is set, print only those entries whose average is greater than tracing_thresh.

    Link: http://lkml.kernel.org/r/1434972488-8571-1-git-send-email-umesh.t@samsung.com
    Signed-off-by: Umesh Tiwari <umesh.t@samsung.com>
    [ Removed unnecessary 'moved' comment ]
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
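
    As a rough illustration of the rule described above (a sketch, not the patch itself; the record layout and names here are hypothetical, the real code lives in kernel/trace/ftrace.c), the per-entry filter amounts to skipping records whose average time per hit is below the threshold:

      struct prof_entry {
              unsigned long long counter;   /* times the function was hit */
              unsigned long long time_us;   /* total time spent in it, in microseconds */
      };

      static int entry_visible(const struct prof_entry *e, unsigned long long thresh_us)
      {
              if (!thresh_us || !e->counter)
                      return 1;    /* no threshold set, or nothing to average: show it */
              /* show only entries whose average duration exceeds the threshold */
              return e->time_us / e->counter > thresh_us;
      }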
| * tracing: Clean up stack tracing and fix fentry updates  [Steven Rostedt (Red Hat), 2015-07-21; 1 file, -45/+23]

    Akashi Takahiro was porting the stack tracer to arm64 and found some issues with it. One was that it repeats the top function, due to the stack frame added by the mcount caller and added by itself. This was added when fentry came in, and before fentry created its own stack frame. But x86's fentry now creates its own stack frame, and there's no need to insert the function again.

    This also cleans up the code a bit, where it doesn't need to do something special for fentry, and doesn't include insertion of a duplicate entry for the called function being traced.

    Link: http://lkml.kernel.org/r/55A646EE.6030402@linaro.org
    Some-suggestions-by: Jungseok Lee <jungseoklee85@gmail.com>
    Some-suggestions-by: Mark Rutland <mark.rutland@arm.com>
    Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer: Reorganize function locations  [Steven Rostedt (Red Hat), 2015-07-21; 1 file, -411/+403]

    Functions in ring-buffer.c have gotten interleaved between different use cases. Move the functions around to get like functions closer together. This may or may not help gcc keep cache locality, but it makes it a little easier to work with the code.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer: Make sure event has enough room for extend and padding  [Steven Rostedt (Red Hat), 2015-07-21; 1 file, -0/+15]

    Now that events only add time extends after they are committed, in case an event comes in before it can discard the allocated event, the time extend needs to be stored within the event. If the event is bigger than the size needed for the time extend, padding must be added. The minimum padding size is 8 bytes. Thus if the event is 12 bytes (size of time extend + 4), there will not be enough room to add both the time extend and padding. Make sure all events are either 8 bytes or 16 or more bytes.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer: Get timestamp after event is allocated  [Steven Rostedt (Red Hat), 2015-07-21; 1 file, -47/+114]

    Move the capturing of the timestamp to after an event is allocated. If the event is not a commit (where it is an event that preempted another event), then no timestamp is needed, because the delta of nested events is always zero.

    If the event starts on a new page, no delta needs to be calculated as the full timestamp will be added to the page header, and the event will have a delta of zero.

    Now if the event requires a time extend (the delta does not fit in the 27 bit delta slot in the header), then the event is discarded, the length is extended to hold the TIME_EXTEND event that allows for a 59 bit delta, and the commit is tried again. If the event can't be discarded (another event came in after it), then the TIME_EXTEND is added directly to the allocated event and the rest of the event is given padding.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer: Move the adding of the extended timestamp out of line  [Steven Rostedt (Red Hat), 2015-07-21; 1 file, -16/+25]

    Requiring an extended time stamp is an uncommon occurrence, and it is best to do it out of line when needed. Add a noinline function that handles the extended timestamp and have it called with an unlikely to completely move it out of the fast path.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
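
    The pattern being described is generic; a minimal sketch of it (the type and function names below are made up, not the ones in ring_buffer.c) looks like:

      #include <linux/compiler.h>    /* noinline, unlikely() */
      #include <linux/types.h>       /* u64, bool */

      struct example_event;                          /* hypothetical event type */
      static bool delta_needs_extend(u64 delta);     /* hypothetical predicate */

      /* Slow path: kept out of line so the common case stays small and hot. */
      static noinline void handle_time_extend(struct example_event *event, u64 delta)
      {
              /* build and attach the extended timestamp here */
      }

      static void reserve_event(struct example_event *event, u64 delta)
      {
              if (unlikely(delta_needs_extend(delta)))   /* rare case, branch-hinted */
                      handle_time_extend(event, delta);
              /* fast path continues without the slow-path code inlined */
      }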
| * ring-buffer: Add event descriptor to simplify passing data  [Steven Rostedt (Red Hat), 2015-07-21; 1 file, -39/+52]

    Add rb_event_info descriptor to pass event info to functions a bit easier than using a bunch of parameters. This will also allow for changing the code around a bit to find better fast paths.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ftrace: correct the counter increment for trace_buffer data  [Umesh Tiwari, 2015-07-21; 1 file, -1/+1]

    In ftrace_dump, iter.tr->trace_buffer.data is used for disabling the buffer, but iter.trace_buffer->data is used for enabling it. Even though both point to the same buffer, the same convention should be used for readability.

    Link: http://lkml.kernel.org/r/1434972306-20043-1-git-send-email-umesh.t@samsung.com
    Signed-off-by: Umesh Tiwari <umesh.t@samsung.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Fix for non-continuous cpu ids  [Gil Fruchter, 2015-07-21; 1 file, -1/+1]

    Currently an exception occurs due to access beyond the buffer_iter range while using an index of a cpu bigger than num_possible_cpus(). Below is an example of such an exception when we use cpus 0,1,16,17.

    In order to fix the buffer allocation size for non-continuous cpu ids we allocate according to the max cpu id and not according to the amount of possible cpus.

    Example:
      $ cat /sys/kernel/debug/tracing/per_cpu/cpu1/trace
      Path: /bin/busybox CPU: 0 PID: 82 Comm: cat Not tainted 4.0.0 #29
      task: 80734c80 ti: 80012000 task.ti: 80012000
      [ECR   ]: 0x00220100 => Invalid Read @ 0x00000000 by insn @ 0x800abafc
      [EFA   ]: 0x00000000
      [BLINK ]: ring_buffer_read_finish+0x24/0x64
      [ERET  ]: rb_check_pages+0x20/0x188
      [STAT32]: 0x00001a00 :
      BTA: 0x800abafc  SP: 0x80013f0c  FP: 0x57719cf8
      LPS: 0x200036b4  LPE: 0x200036b8  LPC: 0x00000000
      r00: 0x8002aca0  r01: 0x00001606  r02: 0x00000000
      r03: 0x00000001  r04: 0x00000000  r05: 0x804b4954
      r06: 0x00030003  r07: 0x8002a260  r08: 0x00000286
      r09: 0x00080002  r10: 0x00001006  r11: 0x807351a4
      r12: 0x00000001

      Stack Trace:
        rb_check_pages+0x20/0x188
        ring_buffer_read_finish+0x24/0x64
        tracing_release+0x4e/0x170
        __fput+0x62/0x158
        task_work_run+0xa2/0xd4
        do_notify_resume+0x52/0x7c
        resume_user_mode_begin+0xdc/0xe0

    Link: http://lkml.kernel.org/r/1433835155-6894-3-git-send-email-gilf@ezchip.com
    Signed-off-by: Noam Camus <noamc@ezchip.com>
    Signed-off-by: Gil Fruchter <gilf@ezchip.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
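
    To illustrate the idea the fix relies on (a sketch, not the actual diff; the function name below is invented), sizing a per-CPU array by the highest possible CPU id keeps sparse id maps in range, while sizing it by the count does not:

      /* nr_cpu_ids is one more than the highest possible CPU id; with possible
       * CPUs 0,1,16,17 it is 18, whereas num_possible_cpus() is only 4 and an
       * array of that size would be indexed out of bounds for cpu 16 or 17. */
      static struct ring_buffer_iter **alloc_buffer_iters(void)
      {
              return kcalloc(nr_cpu_ids, sizeof(struct ring_buffer_iter *), GFP_KERNEL);
      }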
| * tracing: Prefer kcalloc over kzalloc with multiply  [Gil Fruchter, 2015-07-21; 1 file, -1/+1]

    Use kcalloc for allocating an array instead of kzalloc with multiply, as that is what kcalloc is used for. Found with checkpatch.

    Link: http://lkml.kernel.org/r/1433835155-6894-2-git-send-email-gilf@ezchip.com
    Signed-off-by: Gil Fruchter <gilf@ezchip.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
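
    The change is the usual checkpatch-style conversion; as a generic illustration (the variable names are invented, not taken from the patch):

      /* Before: open-coded multiply that kcalloc exists to replace. */
      ptr = kzalloc(nr_items * sizeof(*ptr), GFP_KERNEL);

      /* After: same zeroed allocation, plus overflow checking on the
       * nr_items * size computation. */
      ptr = kcalloc(nr_items, sizeof(*ptr), GFP_KERNEL);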
* | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next  [Linus Torvalds, 2015-09-03; 2 files, -18/+65]

    Pull networking updates from David Miller:
    "Another merge window, another set of networking changes. I've heard rumblings that the lightweight tunnels infrastructure has been voted networking change of the year. But what do I know?

     1) Add conntrack support to openvswitch, from Joe Stringer.
     2) Initial support for VRF (Virtual Routing and Forwarding), which allows the segmentation of routing paths without using multiple devices. There are some semantic kinks to work out still, but this is a reasonably strong foundation. From David Ahern.
     3) Remove spinlock from act_bpf fast path, from Alexei Starovoitov.
     4) Ignore route nexthops with a link down state in ipv6, just like ipv4. From Andy Gospodarek.
     5) Remove spinlock from fast path of act_gact and act_mirred, from Eric Dumazet.
     6) Document the DSA layer, from Florian Fainelli.
     7) Add netconsole support to bcmgenet, systemport, and DSA. Also from Florian Fainelli.
     8) Add Mellanox Switch Driver and core infrastructure, from Jiri Pirko.
     9) Add support for "light weight tunnels", which allow for encapsulation and decapsulation without bearing the overhead of a full blown netdevice. From Thomas Graf, Jiri Benc, and a cast of others.
    10) Add Identifier Locator Addressing support for ipv6, from Tom Herbert.
    11) Support fragmented SKBs in iwlwifi, from Johannes Berg.
    12) Allow perf PMUs to be accessed from eBPF programs, from Kaixu Xia.
    13) Add BQL support to 3c59x driver, from Loganaden Velvindron.
    14) Stop using a zero TX queue length to mean that a device shouldn't have a qdisc attached, use an explicit flag instead. From Phil Sutter.
    15) Use generic geneve netdevice infrastructure in openvswitch, from Pravin B Shelar.
    16) Add infrastructure to avoid re-forwarding a packet in software that was already forwarded by a hardware switch. From Scott Feldman.
    17) Allow AF_PACKET fanout function to be implemented in a bpf program, from Willem de Bruijn"

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1458 commits)
      netfilter: nf_conntrack: make nf_ct_zone_dflt built-in
      netfilter: nf_dup{4, 6}: fix build error when nf_conntrack disabled
      net: fec: clear receive interrupts before processing a packet
      ipv6: fix exthdrs offload registration in out_rt path
      xen-netback: add support for multicast control
      bgmac: Update fixed_phy_register()
      sock, diag: fix panic in sock_diag_put_filterinfo
      flow_dissector: Use 'const' where possible.
      flow_dissector: Fix function argument ordering dependency
      ixgbe: Resolve "initialized field overwritten" warnings
      ixgbe: Remove bimodal SR-IOV disabling
      ixgbe: Add support for reporting 2.5G link speed
      ixgbe: fix bounds checking in ixgbe_setup_tc for 82598
      ixgbe: support for ethtool set_rxfh
      ixgbe: Avoid needless PHY access on copper phys
      ixgbe: cleanup to use cached mask value
      ixgbe: Remove second instance of lan_id variable
      ixgbe: use kzalloc for allocating one thing
      flow: Move __get_hash_from_flowi{4,6} into flow_dissector.c
      ixgbe: Remove unused PCI bus types
      ...
| * | bpf: add support for %s specifier to bpf_trace_printk()  [Alexei Starovoitov, 2015-08-29; 1 file, -2/+30]

    The %s specifier makes bpf program and kernel debugging easier. To make sure that trace_printk won't crash, the unsafe string is copied into the stack and the unsafe pointer is substituted.

    The following C program:
      #include <linux/fs.h>
      int foo(struct pt_regs *ctx, struct filename *filename)
      {
              void *name = 0;

              bpf_probe_read(&name, sizeof(name), &filename->name);
              bpf_trace_printk("executed %s\n", name);
              return 0;
      }

    when attached to a kprobe on do_execve() will produce output in /sys/kernel/debug/tracing/trace_pipe :
      make-13492     [002] d..1  3250.997277: : executed /bin/sh
      sh-13493       [004] d..1  3250.998716: : executed /usr/bin/gcc
      gcc-13494      [002] d..1  3250.999822: : executed /usr/lib/gcc/x86_64-linux-gnu/4.7/cc1
      gcc-13495      [002] d..1  3251.006731: : executed /usr/bin/as
      gcc-13496      [002] d..1  3251.011831: : executed /usr/lib/gcc/x86_64-linux-gnu/4.7/collect2
      collect2-13497 [000] d..1  3251.012941: : executed /usr/bin/ld

    Suggested-by: Brendan Gregg <brendan.d.gregg@gmail.com>
    Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | lib: introduce strncpy_from_unsafe()  [Alexei Starovoitov, 2015-08-29; 1 file, -16/+4]

    Generalize FETCH_FUNC_NAME(memory, string) into strncpy_from_unsafe() and fix sparse warnings that were present in the original implementation.

    Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
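
    For context, the helper ends up with roughly this shape (prototype and return convention as I recall them from that era's include/linux/uaccess.h, so treat the exact signature as an assumption; 'name_ptr' is a hypothetical probed pointer):

      /* Copy a NUL-terminated string from an address that may fault (e.g. a
       * pointer fished out of kernel memory by a probe), up to count bytes.
       * Returns the number of bytes copied on success, or -EFAULT on a fault. */
      long strncpy_from_unsafe(char *dst, const void *unsafe_addr, long count);

      /* Typical probe-side use: pull a string into a local buffer safely. */
      char buf[128];
      long len = strncpy_from_unsafe(buf, name_ptr, sizeof(buf));
      if (len < 0)
              return 0;    /* the source address faulted; skip this record */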
| * | bpf: Implement function bpf_perf_event_read() that get the selected hardware PMU conuter  [Kaixu Xia, 2015-08-10; 1 file, -0/+31]

    According to the perf_event_map_fd and index, the function bpf_perf_event_read() can convert the corresponding map value to the pointer to struct perf_event and return the Hardware PMU counter value.

    Signed-off-by: Kaixu Xia <xiakaixu@huawei.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
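
    A rough sketch of how a BPF program would use such a helper, written in the samples/bpf style of that period (the map definition macros, headers and the exact helper prototype shown here are assumptions, not taken from this commit):

      #include <uapi/linux/bpf.h>
      #include <uapi/linux/ptrace.h>
      #include "bpf_helpers.h"    /* SEC(), map and helper glue used by samples/bpf */

      struct bpf_map_def SEC("maps") my_perf_map = {
              .type        = BPF_MAP_TYPE_PERF_EVENT_ARRAY,   /* filled with perf fds from user space */
              .key_size    = sizeof(int),
              .value_size  = sizeof(u32),
              .max_entries = 64,
      };

      SEC("kprobe/sys_write")
      int count_on_write(struct pt_regs *ctx)
      {
              u32 cpu = bpf_get_smp_processor_id();
              u64 counter = bpf_perf_event_read(&my_perf_map, cpu);   /* selected PMU counter value */

              /* use 'counter', e.g. compute a delta and store it in another map */
              return 0;
      }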
* | | Merge branch 'for-4.3/core' of git://git.kernel.dk/linux-block  [Linus Torvalds, 2015-09-02; 1 file, -7/+3]

    Pull core block updates from Jens Axboe:
    "This first core part of the block IO changes contains:

     - Cleanup of the bio IO error signaling from Christoph. We used to rely on the uptodate bit and passing around of an error, now we store the error in the bio itself.
     - Improvement of the above from myself, by shrinking the bio size down again to fit in two cachelines on x86-64.
     - Revert of the max_hw_sectors cap removal from a revision again, from Jeff Moyer. This caused performance regressions in various tests. Reinstate the limit, bump it to a more reasonable size instead.
     - Make /sys/block/<dev>/queue/discard_max_bytes writeable, by me. Most devices have huge trim limits, which can cause nasty latencies when deleting files. Enable the admin to configure the size down. We will look into having a more sane default instead of UINT_MAX sectors.
     - Improvement of the SG gaps logic from Keith Busch.
     - Enable the block core to handle arbitrarily sized bios, which enables a nice simplification of bio_add_page() (which is an IO hot path). From Kent.
     - Improvements to the partition io stats accounting, making it faster. From Ming Lei.
     - Also from Ming Lei, a basic fixup for overflow of the sysfs pending file in blk-mq, as well as a fix for a blk-mq timeout race condition.
     - Ming Lin has been carrying Kent's above mentioned patches forward for a while, and testing them. Ming also did a few fixes around that.
     - Sasha Levin found and fixed a use-after-free problem introduced by the bio->bi_error changes from Christoph.
     - Small blk cgroup cleanup from Viresh Kumar"

    * 'for-4.3/core' of git://git.kernel.dk/linux-block: (26 commits)
      blk: Fix bio_io_vec index when checking bvec gaps
      block: Replace SG_GAPS with new queue limits mask
      block: bump BLK_DEF_MAX_SECTORS to 2560
      Revert "block: remove artifical max_hw_sectors cap"
      blk-mq: fix race between timeout and freeing request
      blk-mq: fix buffer overflow when reading sysfs file of 'pending'
      Documentation: update notes in biovecs about arbitrarily sized bios
      block: remove bio_get_nr_vecs()
      fs: use helper bio_add_page() instead of open coding on bi_io_vec
      block: kill merge_bvec_fn() completely
      md/raid5: get rid of bio_fits_rdev()
      md/raid5: split bio for chunk_aligned_read
      block: remove split code in blkdev_issue_{discard,write_same}
      btrfs: remove bio splitting and merge_bvec_fn() calls
      bcache: remove driver private bio splitting code
      block: simplify bio_add_page()
      block: make generic_make_request handle arbitrarily sized bios
      blk-cgroup: Drop unlikely before IS_ERR(_OR_NULL)
      block: don't access bio->bi_error after bio_put()
      block: shrink struct bio down to 2 cache lines again
      ...
| * | | block: add a bi_error field to struct bio  [Christoph Hellwig, 2015-07-29; 1 file, -7/+3]

    Currently we have two different ways to signal an I/O error on a BIO:

     (1) by clearing the BIO_UPTODATE flag
     (2) by returning a Linux errno value to the bi_end_io callback

    The first one has the drawback of only communicating a single possible error (-EIO), and the second one has the drawback of not being persistent when bios are queued up, and are not passed along from child to parent bio in the ever more popular chaining scenario. Having both mechanisms available has the additional drawback of utterly confusing driver authors and introducing bugs where various I/O submitters only deal with one of them, and the others have to add boilerplate code to deal with both kinds of error returns. So add a new bi_error field to store an errno value directly in struct bio and remove the existing mechanisms to clean all this up.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Hannes Reinecke <hare@suse.de>
    Reviewed-by: NeilBrown <neilb@suse.com>
    Signed-off-by: Jens Axboe <axboe@fb.com>
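
    After this change a completion path signals an error by filling the new field before ending the bio. A minimal driver-side sketch (the function name is invented and the surrounding request handling is elided; this reflects the 4.3-era interface, which was later reworked again):

      /* Record the errno in the bio itself, then complete it; with this scheme
       * bio_endio() no longer takes an error argument. */
      static void my_complete_bio(struct bio *bio, int err)
      {
              bio->bi_error = err;   /* 0 on success, negative errno (e.g. -EIO) on failure */
              bio_endio(bio);
      }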
* | | Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds, 2015-09-01; 2 files, -2/+2]

    Pull scheduler updates from Ingo Molnar:
    "The biggest change in this cycle is the rewrite of the main SMP load balancing metric: the CPU load/utilization. The main goal was to make the metric more precise and more representative - see the changelog of this commit for the gory details:

      9d89c257dfb9 ("sched/fair: Rewrite runnable load and utilization average tracking")

    It is done in a way that significantly reduces complexity of the code:

      5 files changed, 249 insertions(+), 494 deletions(-)

    and the performance testing results are encouraging. Nevertheless we need to keep an eye on potential regressions, since this potentially affects every SMP workload in existence. This work comes from Yuyang Du.

    Other changes:

     - SCHED_DL updates. (Andrea Parri)
     - Simplify architecture callbacks by removing finish_arch_switch(). (Peter Zijlstra et al)
     - cputime accounting: guarantee stime + utime == rtime. (Peter Zijlstra)
     - optimize idle CPU wakeups some more - inspired by Facebook server loads. (Mike Galbraith)
     - stop_machine fixes and updates. (Oleg Nesterov)
     - Introduce the 'trace_sched_waking' tracepoint. (Peter Zijlstra)
     - sched/numa tweaks. (Srikar Dronamraju)
     - misc fixes and small cleanups"

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (44 commits)
      sched/deadline: Fix comment in enqueue_task_dl()
      sched/deadline: Fix comment in push_dl_tasks()
      sched: Change the sched_class::set_cpus_allowed() calling context
      sched: Make sched_class::set_cpus_allowed() unconditional
      sched: Fix a race between __kthread_bind() and sched_setaffinity()
      sched: Ensure a task has a non-normalized vruntime when returning back to CFS
      sched/numa: Fix NUMA_DIRECT topology identification
      tile: Reorganize _switch_to()
      sched, sparc32: Update scheduler comments in copy_thread()
      sched: Remove finish_arch_switch()
      sched, tile: Remove finish_arch_switch
      sched, sh: Fold finish_arch_switch() into switch_to()
      sched, score: Remove finish_arch_switch()
      sched, avr32: Remove finish_arch_switch()
      sched, MIPS: Get rid of finish_arch_switch()
      sched, arm: Remove finish_arch_switch()
      sched/fair: Clean up load average references
      sched/fair: Provide runnable_load_avg back to cfs_rq
      sched/fair: Remove task and group entity load when they are dead
      sched/fair: Init cfs_rq's sched_entity load average
      ...
| * | | sched: Introduce the 'trace_sched_waking' tracepoint  [Peter Zijlstra, 2015-08-03; 2 files, -2/+2]

    Mathieu reported that since 317f394160e9 ("sched: Move the second half of ttwu() to the remote cpu") trace_sched_wakeup() can happen out of context of the waker. This is a problem when you want to analyse wakeup paths because it is now very hard to correlate the wakeup event to whoever issued the wakeup.

    OTOH trace_sched_wakeup() is issued at the point where we set p->state = TASK_RUNNING, which is right where we hand the task off to the scheduler, so this is an important point when looking at scheduling behaviour: up to here it's been the wakeup path, everything hereafter is due to scheduler policy.

    To bridge this gap, introduce a second tracepoint: trace_sched_waking. It is guaranteed to be called in the waker context.

    Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Francis Giraldeau <francis.giraldeau@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/20150609091336.GQ3644@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
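
    Conceptually the two tracepoints bracket the wakeup like this (a placement sketch of what is described above, not the literal kernel/sched/core.c code; argument lists simplified):

      /* waker's context, before the task is handed over: reliably attributable
       * to whoever issued the wakeup */
      trace_sched_waking(p);

      /* ... wakeup machinery runs, possibly completing on another CPU ... */

      p->state = TASK_RUNNING;
      trace_sched_wakeup(p);   /* hand-off to the scheduler; may fire outside the waker's context */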
* | | | tracing/uprobes: Do not print '0x (null)' when offset is 0  [Wang Nan, 2015-08-26; 1 file, -1/+16]

    When a uprobe point is manually added with a zero address, 'uprobe_events' outputs '(null)' instead of 0x00000000:

      # echo p:probe_libc/abs_0 /path/to/lib.bin:0x0 arg1=%ax > \
          /sys/kernel/debug/tracing/uprobe_events
      # cat /sys/kernel/debug/tracing/uprobe_events
      p:probe_libc/abs_0 /path/to/lib.bin:0x (null) arg1=%ax

    This patch fixes this behavior:

      # cat /sys/kernel/debug/tracing/uprobe_events
      p:probe_libc/abs_0 /path/to/lib.bin:0x0000000000000000

    Signed-off-by: Wang Nan <wangnan0@huawei.com>
    Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
    Cc: Namhyung Kim <namhyung@kernel.org>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Zefan Li <lizefan@huawei.com>
    Cc: pi3orama@163.com
    Link: http://lkml.kernel.org/r/1440586666-235233-8-git-send-email-wangnan0@huawei.com
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* | | | tracing, perf: Implement BPF programs attached to uprobes  [Wang Nan, 2015-08-06; 2 files, -1/+6]

    By copying the BPF related operations to the uprobe processing path, this patch allows users to attach BPF programs to uprobes like what they are already doing on kprobes. After this patch, users are allowed to use PERF_EVENT_IOC_SET_BPF on a uprobe perf event, which makes it possible to profile user space programs and kernel events together using BPF.

    Because of this patch, CONFIG_BPF_EVENTS should be selected by CONFIG_UPROBE_EVENT to ensure trace_call_bpf() is compiled even if KPROBE_EVENT is not set.

    Signed-off-by: Wang Nan <wangnan0@huawei.com>
    Acked-by: Alexei Starovoitov <ast@plumgrid.com>
    Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
    Cc: Daniel Borkmann <daniel@iogearbox.net>
    Cc: David Ahern <dsahern@gmail.com>
    Cc: He Kuang <hekuang@huawei.com>
    Cc: Jiri Olsa <jolsa@kernel.org>
    Cc: Kaixu Xia <xiakaixu@huawei.com>
    Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
    Cc: Namhyung Kim <namhyung@kernel.org>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Zefan Li <lizefan@huawei.com>
    Cc: pi3orama@163.com
    Link: http://lkml.kernel.org/r/1435716878-189507-3-git-send-email-wangnan0@huawei.com
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
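
    From user space, the attachment flow mirrors the existing kprobe case; a sketch, assuming a perf event has already been opened over a uprobe and a BPF_PROG_TYPE_KPROBE program has already been loaded (error handling trimmed, function name invented):

      #include <sys/ioctl.h>
      #include <linux/perf_event.h>

      /* perf_fd: fd of a perf event backed by a uprobe; prog_fd: fd of a loaded
       * BPF program. Both are assumed to exist already. */
      int attach_bpf_to_uprobe(int perf_fd, int prog_fd)
      {
              if (ioctl(perf_fd, PERF_EVENT_IOC_SET_BPF, prog_fd) < 0)
                      return -1;    /* attach failed */
              if (ioctl(perf_fd, PERF_EVENT_IOC_ENABLE, 0) < 0)
                      return -1;    /* could not enable the event */
              return 0;
      }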
* | | ftrace: Fix breakage of set_ftrace_pid  [Steven Rostedt (Red Hat), 2015-07-24; 1 file, -18/+34]

    Commit 4104d326b670 ("ftrace: Remove global function list and call function directly") simplified the ftrace code by removing the global_ops list with a new design. But this cleanup also broke the filtering of PIDs that are added to the set_ftrace_pid file.

    Add back the proper hooks to have pid filtering working once again.

    Cc: stable@vger.kernel.org # 3.16+
    Reported-by: Matt Fleming <matt@console-pimps.org>
    Reported-by: Richard Weinberger <richard.weinberger@gmail.com>
    Tested-by: Matt Fleming <matt@console-pimps.org>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* | tracing: Have branch tracer use recursive field of task struct  [Steven Rostedt (Red Hat), 2015-07-08; 2 files, -7/+11]

    Fengguang Wu's tests triggered a bug in the branch tracer's start up test when CONFIG_DEBUG_PREEMPT is set. This was because that config adds some debug logic in the per cpu field, which calls back into the branch tracer.

    The branch tracer has its own recursive checks, but uses a per cpu variable to implement it. If retrieving the per cpu variable calls back into the branch tracer, you can see how things will break.

    Instead of using a per cpu variable, use the trace_recursion field of the current task struct. Simply set a bit when entering the branch tracing and clear it when leaving. If the bit is set on entry, just don't do the tracing.

    There's also the case with lockdep, as the local_irq_save() called before the recursion check can also trigger code that can call back into the function. Changing that to a raw_local_irq_save() will protect that as well.

    This prevents the recursion and the inevitable crash that follows.

    Link: http://lkml.kernel.org/r/20150630141803.GA28071@wfg-t540p.sh.intel.com
    Cc: stable@vger.kernel.org # 3.10+
    Reported-by: Fengguang Wu <fengguang.wu@intel.com>
    Tested-by: Fengguang Wu <fengguang.wu@intel.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
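
    The guard described above is the usual set/test/clear pattern on a per-task flag; sketched generically below (the real code uses the trace_recursion helper macros in kernel/trace/trace.h rather than raw bitops, and the bit number here is illustrative, not the kernel's):

      #include <linux/sched.h>     /* current, struct task_struct */
      #include <linux/bitops.h>

      #define BRANCH_TRACE_BIT 4   /* illustrative bit number only */

      static void branch_trace_hit(void)
      {
              /* already inside the branch tracer on this task: do nothing */
              if (test_and_set_bit(BRANCH_TRACE_BIT, &current->trace_recursion))
                      return;

              /* ... record the branch event ... */

              clear_bit(BRANCH_TRACE_BIT, &current->trace_recursion);
      }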
* Merge tag 'trace-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace  [Linus Torvalds, 2015-06-26; 21 files, -568/+534]

    Pull tracing updates from Steven Rostedt:
    "This patch series contains several clean ups and even a new trace clock "monotonic raw". Also some enhancements to make the ring buffer even faster. But the biggest and most noticeable change is the renaming of the ftrace* files, structures and variables that have to deal with trace events.

    Over the years I've had several developers tell me about their confusion with what ftrace is compared to events. Technically, "ftrace" is the infrastructure to do the function hooks, which include tracing and also helps with live kernel patching. But the trace events are a separate entity altogether, and the files that affect the trace events should not be named "ftrace". These include:

      include/trace/ftrace.h         -> include/trace/trace_events.h
      include/linux/ftrace_event.h   -> include/linux/trace_events.h

    Also, functions that are specific for trace events have also been renamed:

      ftrace_print_*()               -> trace_print_*()
      (un)register_ftrace_event()    -> (un)register_trace_event()
      ftrace_event_name()            -> trace_event_name()
      ftrace_trigger_soft_disabled() -> trace_trigger_soft_disabled()
      ftrace_define_fields_##call()  -> trace_define_fields_##call()
      ftrace_get_offsets_##call()    -> trace_get_offsets_##call()

    Structures have been renamed:

      ftrace_event_file              -> trace_event_file
      ftrace_event_{call,class}      -> trace_event_{call,class}
      ftrace_event_buffer            -> trace_event_buffer
      ftrace_subsystem_dir           -> trace_subsystem_dir
      ftrace_event_raw_##call        -> trace_event_raw_##call
      ftrace_event_data_offset_##call-> trace_event_data_offset_##call
      ftrace_event_type_funcs_##call -> trace_event_type_funcs_##call

    And a few various variables and flags have also been updated.

    This has been sitting in linux-next for some time, and I have not heard a single complaint about this rename breaking anything. Mostly because these functions, variables and structures are mostly internal to the tracing system and are seldom (if ever) used by anything external to that"

    * tag 'trace-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (33 commits)
      ring_buffer: Allow to exit the ring buffer benchmark immediately
      ring-buffer-benchmark: Fix the wrong type
      ring-buffer-benchmark: Fix the wrong param in module_param
      ring-buffer: Add enum names for the context levels
      ring-buffer: Remove useless unused tracing_off_permanent()
      ring-buffer: Give NMIs a chance to lock the reader_lock
      ring-buffer: Add trace_recursive checks to ring_buffer_write()
      ring-buffer: Allways do the trace_recursive checks
      ring-buffer: Move recursive check to per_cpu descriptor
      ring-buffer: Add unlikelys to make fast path the default
      tracing: Rename ftrace_get_offsets_##call() to trace_event_get_offsets_##call()
      tracing: Rename ftrace_define_fields_##call() to trace_event_define_fields_##call()
      tracing: Rename ftrace_event_type_funcs_##call to trace_event_type_funcs_##call
      tracing: Rename ftrace_data_offset_##call to trace_event_data_offset_##call
      tracing: Rename ftrace_raw_##call event structures to trace_event_raw_##call
      tracing: Rename ftrace_trigger_soft_disabled() to trace_trigger_soft_disabled()
      tracing: Rename FTRACE_EVENT_FL_* flags to EVENT_FILE_FL_*
      tracing: Rename struct ftrace_subsystem_dir to trace_subsystem_dir
      tracing: Rename ftrace_event_name() to trace_event_name()
      tracing: Rename FTRACE_MAX_EVENT to TRACE_EVENT_TYPE_MAX
      ...
| * ring_buffer: Allow to exit the ring buffer benchmark immediately  [Petr Mladek, 2015-06-15; 1 file, -3/+8]

    It takes a while until the ring_buffer_benchmark module is removed when the ring buffer hammer is running. It is because it takes a few seconds and kthread_should_stop() is not being checked.

    This patch adds the check for kthread termination into the producer. It uses the existing @kill_test flag to finish the kthreads as cleanly as possible. It disables printing the "ERROR" message when the kthread is being stopped. It makes sure that the producer does not go into the 10sec sleep when it is being killed. Finally, it does not call wait_to_die() when kthread_should_stop() already returns true.

    Link: http://lkml.kernel.org/r/20150615155428.GD3135@pathway.suse.cz
    Signed-off-by: Petr Mladek <pmladek@suse.cz>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
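
    The general shape of a promptly-stoppable kthread is the standard stop-check loop; a generic sketch (not the module's actual producer loop, names invented):

      #include <linux/kthread.h>
      #include <linux/delay.h>

      /* A kthread main loop that can be stopped promptly: every iteration checks
       * kthread_should_stop() and sleeps only briefly, so module removal (which
       * calls kthread_stop()) does not have to wait out a long delay. */
      static int producer_thread(void *arg)
      {
              while (!kthread_should_stop()) {
                      /* ... produce one batch of ring buffer events ... */
                      msleep(100);   /* short sleep so a pending stop is noticed soon */
              }
              return 0;
      }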
| * ring-buffer-benchmark: Fix the wrong type  [Wang Long, 2015-06-10; 1 file, -2/+2]

    The macro 'module_param' shows that the type of the variables disable_reader and write_iteration is unsigned integer, so we change their type from int to unsigned int.

    Link: http://lkml.kernel.org/r/1433923927-67782-1-git-send-email-long.wanglong@huawei.com
    Signed-off-by: Wang Long <long.wanglong@huawei.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer-benchmark: Fix the wrong param in module_param  [Wang Long, 2015-06-10; 1 file, -4/+4]

    The {producer|consumer}_{nice|fifo} parameters are integer type, so we should use 'int' as the second param in module_param.

    For example (consumer_fifo): the default value of consumer_fifo is -1.

    Without this patch:
      # cat /sys/module/ring_buffer_benchmark/parameters/consumer_fifo
      4294967295

    With this patch:
      # cat /sys/module/ring_buffer_benchmark/parameters/consumer_fifo
      -1

    Link: http://lkml.kernel.org/r/1433923873-67712-1-git-send-email-long.wanglong@huawei.com
    Signed-off-by: Wang Long <long.wanglong@huawei.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
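
    The mismatch is easy to reproduce in any module: the type token passed to module_param() controls how the value is rendered in sysfs, so a signed variable declared as 'uint' shows -1 as a huge unsigned number. An illustrative snippet (not the benchmark's code):

      #include <linux/module.h>
      #include <linux/moduleparam.h>

      static int consumer_fifo = -1;

      /* Wrong: sysfs prints -1 as 4294967295 because the param is declared uint. */
      /* module_param(consumer_fifo, uint, 0444); */

      /* Right: declare the param with the same signedness as the variable. */
      module_param(consumer_fifo, int, 0444);
      MODULE_PARM_DESC(consumer_fifo, "use fifo priority for the consumer (-1 = off)");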
| * ring-buffer: Add enum names for the context levels  [Steven Rostedt (Red Hat), 2015-05-29; 1 file, -5/+22]

    Instead of having hard coded numbers for the context levels, use enums to describe them more clearly.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer: Remove useless unused tracing_off_permanent()  [Steven Rostedt (Red Hat), 2015-05-28; 1 file, -61/+0]

    The tracing_off_permanent() call is a way to disable all ring_buffers. Nothing uses it and nothing should use it, as tracing_off() and friends are better, since they disable the ring buffers related to tracing. The tracing_off_permanent() even disabled non-tracing ring buffers. This is a bit drastic, and was added to handle NMIs doing outputs that could corrupt the ring buffer when only tracing used them. It is now obsolete and adds a little overhead; it should be removed.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer: Give NMIs a chance to lock the reader_lock  [Steven Rostedt (Red Hat), 2015-05-28; 1 file, -35/+37]

    Currently, if an NMI does a dump of a ring buffer, it disables all ring buffers from ever doing any writes again. This is because it won't take the locks for the cpu_buffer, and this can cause corruption if it preempted a read, or a read happens on another CPU for the current cpu buffer. This is a bit overkill.

    First, it should at least try to take the lock, and if it fails then disable it. Also, there's no need to disable all ring buffers, even those that are unrelated to what is being read. Only disable the per cpu ring buffer that is being read if it can not get the lock for it.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer: Add trace_recursive checks to ring_buffer_write()  [Steven Rostedt (Red Hat), 2015-05-27; 1 file, -1/+8]

    The ring_buffer_write() function isn't protected by the trace recursive writes. Luckily, this function is not used as much and is unlikely to ever recurse. But it should still have the protection, because even a call to ring_buffer_lock_reserve() could cause ring buffer corruption if called when ring_buffer_write() is being used.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer: Allways do the trace_recursive checks  [Steven Rostedt (Red Hat), 2015-05-27; 1 file, -9/+0]

    Currently the trace_recursive checks are only done if CONFIG_TRACING is enabled. That was because there used to be a dependency with tracing for the recursive checks (it used the task_struct trace recursive variable). But now it uses its own variable and there is no dependency.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer: Move recursive check to per_cpu descriptor  [Steven Rostedt (Red Hat), 2015-05-27; 1 file, -18/+19]

    Instead of using a global per_cpu variable to perform the recursive checks into the ring buffer, use the already existing per_cpu descriptor that is part of the ring buffer itself.

    Not only does this simplify the code, it also allows for one ring buffer to be used within the guts of the use of another ring buffer. For example trace_printk() can now be used within the ring buffer to record changes done by an instance into the main ring buffer. The recursion checks will prevent the trace_printk() itself from causing recursive issues with the main ring buffer (it is just ignored), but the recursive checks won't prevent the trace_printk() from recording other ring buffers.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * ring-buffer: Add unlikelys to make fast path the default  [Steven Rostedt (Red Hat), 2015-05-21; 1 file, -5/+5]

    I was running the trace_event benchmark and noticed that the times to record a trace_event were all over the place. I looked at the assembly of ring_buffer_lock_reserve() and saw this (disassembly abridged here; the arrow marks the fast path, which has to jump over the early-exit block):

      <ring_buffer_lock_reserve>:
             31 c0                      xor    %eax,%eax
             48 83 3d 76 47 bd 00 01    cmpq   $0x1,0xbd4776(%rip)   # <ring_buffer_flags>
             ...
             8b 47 08                   mov    0x8(%rdi),%eax
             85 c0                      test   %eax,%eax
       +---- 74 12                      je     <ring_buffer_lock_reserve+0x30>
       |     65 ff 0d 5b e3 ec 7e       decl   %gs:0x7eece35b(%rip)  # <__preempt_count>
       |     ...
       |     c3                         retq
       +---> 65 44 8b 05 48 e3 ec 7e    mov    %gs:0x7eece348(%rip),%r8d  # <__preempt_count>
             ...

    After adding the unlikely's, the fast path looks a bit better: the early-exit tests are hinted out of the straight-line path, so the common case falls through without the extra jumps (second listing likewise abridged):

      <ring_buffer_lock_reserve>:
             31 c0                      xor    %eax,%eax
             48 83 3d 76 47 bd 00 01    cmpq   $0x1,0xbd4776(%rip)   # <ring_buffer_flags>
             ...
             85 c0                      test   %eax,%eax
             0f 85 9f 00 00 00          jne    <ring_buffer_lock_reserve+0xc1>
             65 8b 0d 57 e3 ec 7e       mov    %gs:0x7eece357(%rip),%ecx  # <__preempt_count>
             ...

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename ftrace_raw_##call event structures to trace_event_raw_##call  [Steven Rostedt (Red Hat), 2015-05-14; 2 files, -2/+2]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The ftrace_raw_##call structures are built by macros for trace events. They have nothing to do with function tracing. Rename them.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename ftrace_trigger_soft_disabled() to trace_trigger_soft_disabled()  [Steven Rostedt (Red Hat), 2015-05-13; 3 files, -5/+5]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The ftrace_trigger_soft_disabled() tests if a trace_event is soft disabled (called but not traced), and returns true if it is. It has nothing to do with function tracing and should be renamed.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename FTRACE_EVENT_FL_* flags to EVENT_FILE_FL_*  [Steven Rostedt (Red Hat), 2015-05-13; 4 files, -45/+45]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The FTRACE_EVENT_FL_* flags are flags to do with the trace_event files in the tracefs directory. They are not related to function tracing. Rename them to a more descriptive name.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename struct ftrace_subsystem_dir to trace_subsystem_dir  [Steven Rostedt (Red Hat), 2015-05-13; 3 files, -20/+20]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The structure ftrace_subsystem_dir holds the information about trace event subsystems. It should not be named ftrace; rename it to trace_subsystem_dir.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename ftrace_event_name() to trace_event_name()  [Steven Rostedt (Red Hat), 2015-05-13; 5 files, -30/+30]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. ftrace_event_name() returns the name of an event tracepoint and has nothing to do with function tracing. Rename it to trace_event_name().

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename FTRACE_MAX_EVENT to TRACE_EVENT_TYPE_MAX  [Steven Rostedt (Red Hat), 2015-05-13; 1 file, -2/+2]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. Rename the max trace_event type size to something more descriptive and appropriate.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename ftrace_output functions to trace_output  [Steven Rostedt (Red Hat), 2015-05-13; 1 file, -8/+8]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The ftrace_output_*() and ftrace_raw_output_*() functions represent the trace_event code. Rename them to just trace_output or trace_raw_output.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename ftrace_event_buffer to trace_event_buffer  [Steven Rostedt (Red Hat), 2015-05-13; 1 file, -7/+7]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The ftrace_event_buffer functions and data structures are for trace_events and not for function hooks. Rename them to trace_event_buffer*.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename ftrace_event_{call,class} to trace_event_{call,class}  [Steven Rostedt (Red Hat), 2015-05-13; 15 files, -129/+129]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The structures ftrace_event_call and ftrace_event_class have nothing to do with the function hooks, and are really trace_event structures. Rename ftrace_event_* to trace_event_*.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename ftrace_event_file to trace_event_file  [Steven Rostedt (Red Hat), 2015-05-13; 9 files, -145/+145]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The structure ftrace_event_file is really about trace events and not "ftrace". Rename it to trace_event_file.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename (un)register_ftrace_event() to (un)register_trace_event()  [Steven Rostedt (Red Hat), 2015-05-13; 8 files, -24/+24]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The functions (un)register_ftrace_event() are really about trace_events, and the name should be register_trace_event() instead. Also renamed ftrace_event_reg() to trace_event_reg() for the same reason.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| * tracing: Rename ftrace_print_*() functions ta trace_print_*()  [Steven Rostedt (Red Hat), 2015-05-13; 1 file, -17/+17]

    The name "ftrace" really refers to the function hook infrastructure. It is not about the trace_events. The functions ftrace_print_*() are not part of the function infrastructure, and the names can be confusing. Rename them to be trace_print_*().

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>